this post was submitted on 07 Jan 2024
22 points (86.7% liked)


The Biden administration doesn't seem quite certain how to do it, but it would clearly like to see AI watermarking implemented as soon as possible, despite the many misgivings surrounding the idea.

And that is despite what some reports admit is a lack of consensus on what a "digital watermark" even is. Standards and enforcement regulation are also missing. As has become customary, where the government is constrained or insufficiently competent, it effectively enlists private companies.

On the standards problem, those companies seem to be none other than tech dinosaur Adobe and China's TikTok.

It's hardly a conspiracy theory to think the push mostly has to do with the US presidential election later this year, since watermarking of this kind can be "converted" from its original stated purpose into a speech-suppression tool.

The publicly presented argument in favor is obviously not quite that, although one can read between the lines. Namely – AI watermarking is promoted as a “key component” in combating misinformation, deepfakes included.

And this is where perfectly legal and legitimate genres like parody and memes could suffer from AI watermarking-facilitated censorship.

Spearheading the drive, such as it is, is Biden's National Artificial Intelligence Advisory Committee. Now one of its members, Carnegie Mellon University's Ramayya Krishnan, admits there are "enforcement issues" but remains enthusiastic about the possibility of using technology that "labels how content was made."

From the Committee’s point of view, a companion AI tool would be a cherry on top.

However, there’s still no actual cake. Different companies are developing watermarking which can be put in three categories: visible, invisible (i.e., visible only to algorithms), and based on cryptographic metadata.
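To make the third category concrete, here is a toy Python sketch of metadata-based provenance labeling. Every name, key, and field here is invented for the demo and does not reflect any vendor's or standards body's actual scheme; the idea is simply that a "how this was made" record is bound to the content's bytes with a keyed hash.

```python
# Toy sketch of the "cryptographic metadata" category of watermarking.
# All names, the key, and the record format are illustrative assumptions.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-image-generator"  # hypothetical key

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a sidecar record claiming how the content was made."""
    return {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "tag": hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record matches the bytes and was made with the key."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("tag", ""))

image = b"\x89PNG...stand-in bytes for a generated image"
record = attach_provenance(image, generator="some-image-model")
print(json.dumps(record, indent=2))
print("label verifies:", verify_provenance(image, record))  # True
```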

And while supporters continue to tout watermarking as a great way to detect and remove "misinformation," experts point out that "bad actors," who are their own brand of experts, can easily strip watermarks out, or, adding another layer of complication to the fight against "misinformation" windmills, create watermarks of their own.
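Continuing the toy sketch above, and purely to illustrate the point rather than describe an attack on any real system, both of those moves take only a few lines:

```python
# Continuing the toy sketch: two moves a "bad actor" can make.

# 1. Removal: republish the same bytes without the sidecar record.
#    With metadata-based schemes there is then nothing left to detect;
#    simpler pixel-level invisible marks can often be destroyed by
#    cropping, added noise, or re-encoding.
republished = bytes(image)  # identical content, label simply not forwarded

# 2. Forgery: write a record that *claims* a trusted origin.
forged = {
    "generator": "handheld-camera",
    "content_sha256": hashlib.sha256(republished).hexdigest(),
    "tag": "0" * 64,  # junk tag
}
# The junk tag is caught here only because this demo checks a shared key;
# doing that at scale is exactly the standards and enforcement gap above.
print("forged label verifies:", verify_provenance(republished, forged))  # False
```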

At the same time, insisting that manipulated content is somehow a new phenomenon that needs to be tackled with special tools is a fallacy. Photoshopped images, visual effects, and parody, to name but a few, have been around for a long time.

[email protected] 4 points 11 months ago (1 children)

How did JFK not see that thing coming at him?

[email protected] 3 points 11 months ago* (last edited 11 months ago)

It was AI in the grassy knoll