Reuters reported on an announcement from President Joe Biden that “AI companies including OpenAI, Alphabet (GOOGL.O) and Meta Platforms (META.O) have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer.”
The White House held a meeting with representatives from seven companies in total, including “Anthropic, Inflection, Amazon.com (AMZN.O) and OpenAI partner Microsoft (MSFT.O),” all of which voiced their commitment to “developing a system to ‘watermark’ all forms of content, from text, images, audios, to videos generated by AI so that users will know when the technology has been used.”
Citing concerns about the disruptions AI is capable of, the companies pledged to conduct comprehensive testing of their systems before release, to share information on risk reduction, and to invest in cybersecurity measures.
With ChatGPT’s popularity surging since its public release in November 2022, lawmakers moved quickly to assemble a set of regulations that would help mitigate any potential dangers AI could pose to the general public, the economy, and national security.
For example, “Congress is considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.” This precaution is meant to prevent political parties from creating AI-generated propaganda about their opponents that might lead the public to believe things that aren’t necessarily true. Beyond that, other political entities and movements could use AI to create harmful or hateful material.
China has already moved quickly, with the Cyberspace Administration of China (CAC) requiring all AI-generated outputs to contain a watermark, according to the Georgetown Journal of International Affairs (GJIA). The CAC has also gone so far as to make it illegal to delete, alter, or hide any AI watermarks. Additionally, Reuters commented on how the European Union is likewise moving full speed ahead with regulating artificial intelligence.
But in the United States, this type of regulation is easier said than done. The GJIA discussed some specific concerns, such as that “watermarking content and providing imperfect tools to detect AI-generated content can encourage institutions to create real harms when detection tools falsely flag human-generated content as AI-generated.”
Another noted concern is the possibility of nefarious use when “adversaries purposefully mimic a watermark,” such as to “block a company’s product in its country, to accuse a company of election interference, etc. It could mimic a watermark and then use it for its disinformation campaigns, creating a false trail.”
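To see why mimicry worries researchers, consider a deliberately naive sketch (this is not how any company actually watermarks its outputs; the marker sequence and function names here are illustrative assumptions): if a watermark is just a known pattern attached to content, anyone who learns the pattern can strip it from AI-generated text or forge it onto human-written text.

```python
# Toy illustration only: a naive "watermark" implemented as a hidden
# zero-width character sequence, plus a detector that looks for it.
# Real schemes are statistical and far more robust, but the failure
# modes sketched below are the ones critics describe.

ZW_MARK = "\u200b\u200c\u200b"  # arbitrary invisible marker (assumption)

def watermark(text: str) -> str:
    """Tag supposedly AI-generated text with the invisible marker."""
    return text + ZW_MARK

def is_watermarked(text: str) -> bool:
    """Naive detector: report True if the marker is present."""
    return ZW_MARK in text

stamped = watermark("This sentence came from a model.")
assert is_watermarked(stamped)

# Once the marker is known, stripping it is trivial...
stripped = stamped.replace(ZW_MARK, "")
assert not is_watermarked(stripped)

# ...and so is forging it onto human-written text, creating the
# "false trail" described above.
forged = "A human wrote this." + ZW_MARK
assert is_watermarked(forged)
```

The same asymmetry drives the false-positive concern: a detector that keys on any fixed, copyable signal will happily flag human content that an adversary has stamped.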
Other critics suggest forgoing watermarks and instead “labeling AI-generated content prominently. This might be viewed like Article 17 of the CAC regulations––akin to food labels or health warnings on tobacco products.”
Ultimately, there is cause for concern because AI technology is developing rapidly and will look very different in two, five, or eight years. AI-detection tools and watermarks will therefore need constant updating, leaving them prone to bugs, glitches, and other unforeseen issues.
The companies involved in the meeting with the White House also emphasized that they plan to protect users’ privacy as AI develops and ensure that AI technology remains free of bias and is not used in any way to discriminate against targeted groups.