The regulation of artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House announcing voluntary AI safety commitments by seven tech companies on Friday.
But a closer look at the activity raises questions about how meaningful the actions are in setting policies around the rapidly developing technology.
The answer is: not very, so far. The United States is only at the beginning of what will likely be a long and difficult path toward creating AI rules, lawmakers and policy experts said. Although there have been hearings, meetings with top tech executives at the White House and speeches introducing AI bills, it is too early to predict even the roughest sketches of regulations to protect consumers and contain the risks the technology poses to jobs, the spread of disinformation and security.
“It’s still early days, and nobody knows what a law will look like yet,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other technology companies.
The United States remains far behind Europe, where lawmakers are preparing to enact an AI law later this year that would impose new restrictions on what are seen as the technology's riskiest uses. In contrast, there is still much disagreement in the United States over how best to handle a technology that many American lawmakers are still trying to understand.
That suits many of the technology companies just fine, policy experts said. While some of the companies have said they welcome AI rules, they have also argued against strict regulations like those being created in Europe.
Here’s a summary of the state of AI regulation in the United States.
At the White House
The Biden administration has been on a quick listening tour with AI companies, academics and civil society groups. The effort began in May with Vice President Kamala Harris meeting at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic, where she pushed the tech industry to take safety more seriously.
On Friday, representatives from seven tech companies appeared at the White House to announce a set of principles to make their AI technologies safer, including third-party security checks and watermarking AI-generated content to help prevent the spread of misinformation.
Many of the practices announced were already in place at OpenAI, Google and Microsoft, or on track to be implemented. They are not enforceable by law, and the pledges of self-regulation fell short of what consumer groups had hoped for.
“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put meaningful, enforceable safeguards in place to ensure that the use of AI is fair, transparent and protects the privacy and civil rights of individuals.”
Last fall, the White House introduced the Blueprint for an AI Bill of Rights, a set of guidelines on consumer protections with the technology. The guidelines are also not regulations and are not enforceable. This week, White House officials said they were working on an executive order on AI but did not disclose details or a timeline.
In Congress
The loudest drumbeat for regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee AI, liability for AI technologies that spread disinformation and a licensing requirement for new AI tools.
Lawmakers have also held hearings about AI, including one in May with Sam Altman, chief executive of OpenAI, which makes the chatbot ChatGPT. Some lawmakers have tossed around ideas for other regulations during the hearings, including nutrition labels to inform consumers about the risks of AI.
The bills are in their early stages and so far lack the support needed to move forward. Last month, the Senate leader, Chuck Schumer, Democrat of New York, announced a monthslong process to create AI legislation that includes educational sessions for members in the fall.
“In many ways we are starting from scratch, but I believe Congress is up to the challenge,” he said during a speech at the Center for Strategic and International Studies.
At federal agencies
Regulatory agencies are beginning to act by policing some issues arising from AI.
Last week, the Federal Trade Commission opened an investigation into OpenAI's ChatGPT, requesting information about how the company secures its systems and how the chatbot could harm consumers through the creation of false information. The FTC chair, Lina Khan, has said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by AI companies.
“Waiting for Congress to act is not ideal given the normal timeline of congressional action,” said Andres Sawicki, a law professor at the University of Miami.