WASHINGTON/NEW YORK, July 21 (Reuters) – Top AI companies including OpenAI, Alphabet (GOOGL.O) and Meta Platforms (META.O) have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, the Biden administration said.
The companies – which also include Anthropic, Inflection, Amazon.com (AMZN.O) and OpenAI partner Microsoft (MSFT.O) – have pledged to thoroughly test systems before release, to share information on how to reduce risks and to invest in cybersecurity.
The move is seen as a victory for the Biden administration’s effort to control the technology that is fueling investment and consumer demand.
Since generative AI, which uses data to create new content such as ChatGPT's human-sounding prose, soared in popularity this year, lawmakers around the world have begun considering how to mitigate the emerging technology's dangers to national security and the economy.
US Senate Majority Leader Chuck Schumer called in June for "comprehensive legislation" to promote and ensure safeguards on artificial intelligence.
Congress is considering a bill that would require political ads to disclose whether AI was used to create images or other content.
President Joe Biden, who will host executives from the seven companies at the White House on Friday, is also working on developing an executive order and bipartisan legislation on AI technology.
As part of the effort, the seven companies committed to developing a system to "watermark" all forms of content, from text, images and audio to videos generated by AI, so that users know when the technology has been used.
This watermark, embedded in the content in a technical manner, should make it easier for users to spot deepfake images or audio that may, for example, depict violence that has not occurred, enable a more convincing scam, or distort a photo of a politician to put the person in an unflattering light.
It is not clear how the watermark will appear when the information is shared.
The companies also pledged to focus on protecting user privacy as AI develops and ensuring the technology is free from bias and not used to discriminate against vulnerable groups. Other commitments include developing AI solutions to scientific problems such as medical research and mitigating climate change.
Reporting by Diane Bartz in Washington and Krystal Hu in New York; Editing by Matthew Lewis