June 17, 2024

“AI Safety” is not Safe

Leaders of America’s largest AI-focused companies – including Meta, Amazon, Google, Microsoft, Anthropic, Inflection AI, and OpenAI – met with President Biden on Friday, pledging to continue doing what they were already doing to make AI safe, according to their own skewed definition of safety.

The meeting can be read as a symptom of the dominance of AI executives in the conversation about the dangers of their own products. That dominance has produced a proliferation of doomsday scenarios focused on the technology’s hypothetical long-term outcomes.

Nitasha Tiku described the phenomenon in a recent article in The Washington Post: “In recent years, Silicon Valley has been driven by a particular vision of how superintelligence might go awry,” she writes. “In these scenarios, AI does not need to be sentient. Instead, it becomes fixated on a goal – even a mundane one, like making paper clips – and triggers human destruction while optimizing for its task.”

There is a connection between the tech billionaires’ reliance on these doomsday stories and their belief in longtermism. The idea can be understood as philosophy’s answer to procrastination: it elevates the art of making short-term trade-offs in order to secure a vague, noble purpose, the long-term benefit of humanity.

Conveniently, this concept allows them to continue developing profitable technological applications while claiming paternalistic, karmic superiority: not only are they bringing life-enhancing technology to the masses, they are single-handedly ensuring that the technology they make doesn’t go rogue and kill us all.

The White House may just be buying this story; President Biden said Friday that the gathered executives are critical to ensuring that AI develops “with responsibility and safety by design.”

Adhering to these ideals allows the creators of mass-market AI applications to control the narrative of “AI risk” and its inverse, “AI safety.” The retention and proliferation of terms such as “alignment,” which refers to the degree to which artificial intelligence conforms to the intentions of its human handlers, crowds out less sexy terms like “copyright infringement” and “sexist and racist algorithms” that attach to AI’s current practices, not just its wildest future destinations.

Meanwhile, in a country where there are no national regulations on data privacy, let alone artificial intelligence, the most immediate challenges of AI are not “risks”. They are facts.

By focusing on big, hypothetical threats, AI executives can continue to scrape user data, ignore the dangers of AI-generated misinformation, and engage in other heedless behaviors – all the while solemnly pledging “AI safety” alongside the President of the United States.

“AI safety” has a great ring to it. And it’s vague enough that everyone can use the same words and mean different things. “Safety” could mean that people feel their jobs are secure despite the ability of large language models to perform some of their functions. It could mean a person of color feeling confident that algorithms aren’t being used to screen them out of job or mortgage applicant pools. It could mean public trust that facial recognition will not be used as evidence in arrests – or it could simply mean that users trust the information they receive from ChatGPT.

But the discussion about creating “safety” in AI ignores the fact that the stuff it’s made of – data – remains completely unregulated in the United States. At a congressional hearing in May, OpenAI CEO Sam Altman urged lawmakers to regulate the technology that is enriching him, implicitly suggesting that they ask him how they should go about doing so.

Conveniently, he can claim that his company, like those of his colleagues who attended Friday’s photo op, already has teams in charge of AI risk management (though OpenAI’s head of trust and safety announced his departure on Friday) that pursue safety standards that can be conveniently accounted for in their business models.

Because AI companies were founded in a country that protects innovation and innovators above all else, they took it upon themselves to design theories and standards of safety consistent with their own moral and financial interests. The underlying problem is that such unregulated waters were there waiting for them to enter in the first place.

The “threat” of China’s homegrown AI development likely fuels Washington, D.C.’s choice not to hamper perceived innovation. The argument goes that winning the U.S.-China tech competition depends on it. That story omits the fact that China has clamped down on its own AI companies, sometimes to the detriment of profitability or “innovation.”

While Beijing controls technology largely (though not entirely) out of its interest in controlling popular opinion and behavior, the fact remains that the average U.S. citizen is more vulnerable to an unlabeled deepfake than their Chinese counterpart. Ironically, it could be argued that synthetic images and text do more damage in democratic countries, where information shapes public opinion and public opinion shapes election outcomes.

It’s better than nothing that both the White House and tech CEOs are invested enough in the promise (and optics) of safe artificial intelligence that they’re willing to convene and come up with voluntary, albeit still unspecified, generally positive-sounding measures. It remains to be seen whether the companies will offer more concrete guidelines in the coming days and weeks; Anthropic announced it would soon share “more specific plans for cybersecurity, red teaming, and responsible scaling.”

But trying to make AI “safe” without data privacy regulations is like trying to regulate the serving of wine in restaurants – but not the commercial process of turning grapes into wine. You could make sure the glass is the right shape, but you would have no way of knowing whether there was any alcohol in the contents. They could even be poisonous.
