June 24, 2024

Making Sense of Tech Companies’ AI Commitments To The White House

On Friday, July 21, 2023, the White House announced that seven prominent technology companies had agreed to voluntary commitments on artificial intelligence (AI). The seven are a mix of major tech companies, including Amazon, Google, Meta, and Microsoft, and well-known startups at the forefront of AI, including Anthropic, Inflection, and OpenAI. Notably absent are other big tech companies such as Apple, as well as other notable startups in this space such as Elon Musk’s X.AI.

The commitments are grouped into three categories: (1) safety, (2) security, and (3) trust. Each of these commitments has trade-offs, as described in detail below. An important point is that these commitments are voluntary and non-binding, so there is no enforcement mechanism if the companies do not comply.

Some of these commitments are quite sensible and are probably already being fulfilled, at least in part, by the companies. For example, the first commitment is that the companies “commit to internal and external security testing of their AI systems before release.” All of these companies surely already test their products before releasing them. What is new is the use of external parties to carry out some of the testing. However, it is not clear how third-party testing would be implemented. Among the open questions: Which external parties will do the testing? Will the government “certify” the external testers? What criteria will the testers use to decide whether a new AI product is safe? How long will the testing take?

Some of these questions will be resolved over time, but the last one deserves extra attention: how long will the testing take? Keep in mind that many significant companies have not signed the pledge and will be able to race their products to market without waiting for third-party testing to be completed. If the seven companies that did sign feel they are at a competitive disadvantage while waiting for third-party test results, they may abandon the pledge. Again, remember that all of these commitments are voluntary.

The fifth commitment is that the companies “commit to developing strong technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system.” Some companies already have this capability. Google, for example, announced in March 2023 that it would embed watermarks in images created by its AI models. Watermarks may also be useful in reducing misleading advertising: US Representative Yvette Clarke recently introduced legislation that would require disclosure of the use of AI in political ads. Therefore, the fifth commitment is something the companies are already able to do, and it is aligned with the proposed legislation.
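To make the idea of a watermarking system a bit more concrete, here is a minimal, purely illustrative Python sketch that hides a short provenance tag in the least significant bits of an image’s pixel values. This is a toy example of the general concept only; the function names are invented for illustration, and real systems (such as Google’s watermarking for AI-generated images) use far more robust, tamper-resistant techniques.

```python
# Toy illustration of invisible watermarking: hide a short provenance tag
# in the least significant bits (LSBs) of grayscale pixel values.
# A conceptual sketch only; not how any production system works.

def embed_tag(pixels: list[int], tag: str) -> list[int]:
    """Overwrite the LSB of each pixel with one bit of the tag (MSB first)."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    marked = pixels[:]
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear the LSB, then set it to the tag bit
    return marked

def extract_tag(pixels: list[int], num_bytes: int) -> str:
    """Read num_bytes of tag data back out of the pixels' LSBs."""
    out = bytearray()
    for b in range(num_bytes):
        value = 0
        for k in range(8):
            value = (value << 1) | (pixels[b * 8 + k] & 1)
        out.append(value)
    return out.decode()

if __name__ == "__main__":
    image = [120, 43, 200, 17, 89, 250, 33, 74] * 4  # stand-in for pixel data
    marked = embed_tag(image, "AI")
    print(extract_tag(marked, 2))  # prints "AI"; pixel values change imperceptibly
```

A real watermark would also have to survive resizing, compression, and cropping, which is exactly why the commitment calls for “strong technical mechanisms” rather than a simple tag like this one.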

Other commitments seem ambitious, at best. The eighth pledge states: “The companies pledge to develop and deploy advanced AI systems to help tackle society’s greatest challenges. From preventing cancer to mitigating climate change to so much in between, AI — if managed properly — can make a huge contribution to the prosperity, equality and security of all.” These are noble goals, and success would generate a tremendous amount of value for the companies. But nothing prevents the companies from pursuing other, less noble goals as well, such as AI to better target online ads or AI for faster stock trading.

While the White House’s announcement of these commitments raises many questions, including some of those raised above, it at least demonstrates the Biden Administration’s interest in working with companies on AI-related policy. This collaborative approach may seem toothless to those concerned about societal harm from AI, but it is a step in the right direction. In addition, a careful and measured approach helps avoid policy mistakes that could hinder innovation.
