June 17, 2024

This week in AI: Companies voluntarily commit to AI guidelines — for now

Keeping up with an industry as fast-paced as AI is a tall order. So, until AI can do it for you, here’s a handy roundup of new stories in the world of machine learning, along with notable research and experiments we didn’t cover ourselves.

This week in AI, we saw OpenAI, Anthropic, Google, Inflection, Microsoft, Meta and Amazon voluntarily commit to pursuing shared AI safety and transparency goals ahead of a planned Executive Order from the Biden administration.

As my colleague Devin Coldewey wrote, there is no rule or enforcement being proposed here — the practices agreed upon are purely voluntary. But the commitments broadly signal the AI regulatory approaches and policies that the vendors may be willing to adopt, both in the United States and abroad.

Among other commitments, the companies pledged to conduct security testing of AI systems before release, to share information on AI mitigation techniques and to develop watermarking techniques that make AI-generated content easier to identify. They also said they would invest in cybersecurity to protect private AI data and facilitate the reporting of vulnerabilities, as well as prioritize research into societal risks such as systemic bias and privacy issues.
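For a sense of what watermarking can mean for text, here is a toy sketch in the spirit of the "green list" scheme proposed by academic researchers (Kirchenbauer et al., 2023). None of the signatory companies have said this is their approach, and the vocabulary size, bias rule and detection statistic below are all invented for illustration:

```python
# Toy "green list" text watermark: at each step, the previous token seeds a
# pseudo-random split of the vocabulary, and the generator favors the "green"
# half. A detector can re-derive the split and count green tokens.
import random

VOCAB_SIZE = 1_000      # toy vocabulary; real models use ~50k-100k tokens
GREEN_FRACTION = 0.5    # half the vocabulary is "green" at each step

def green_list(prev_token: int) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous
    token, so a detector can recompute the partition without the model."""
    rng = random.Random(prev_token)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def green_fraction(tokens: list) -> float:
    """Fraction of tokens in their green list: ~0.5 for ordinary text,
    close to 1.0 for text generated with the green-list bias."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

# Simulate a watermarking generator (always samples green) vs. plain tokens.
rng = random.Random(0)
marked = [rng.randrange(VOCAB_SIZE)]
for _ in range(300):
    marked.append(rng.choice(sorted(green_list(marked[-1]))))
plain = [rng.randrange(VOCAB_SIZE) for _ in range(300)]
print(f"watermarked: {green_fraction(marked):.2f}, plain: {green_fraction(plain):.2f}")
```

In a real deployment the bias is soft (green logits get a small boost rather than an outright guarantee), and detection uses a z-test on the green-token count rather than a raw fraction.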

The commitments are an important step, to be sure — even if they aren't enforceable. But one wonders whether the undersigned have ulterior motives.

OpenAI has reportedly drafted an internal policy memo showing that the company supports the idea of requiring government licenses from anyone who wants to develop advanced AI systems. CEO Sam Altman first raised the idea at a U.S. Senate hearing in May, when he backed the creation of an agency that could issue licenses for AI products — and revoke them should anyone violate set rules.

In a recent press interview, Anna Makanju, OpenAI’s VP of global affairs, insisted that OpenAI isn’t “pushing” for licenses and that the company only supports licensing regimes for AI models more powerful than its current GPT-4. But government-issued licenses, if implemented the way OpenAI proposes, would set the stage for a potential clash with startups and open source developers, who may see them as an attempt to make it harder for others to break into the space.

I think Devin said it best when he described it as akin to “throwing nails on the track behind them in a race.” At the very least, it illustrates the two-faced nature of AI companies that seek to placate regulators while shaping policy in their favor (in this case, putting small competitors at a disadvantage) behind the scenes.

It is a worrying situation. But if policymakers step up to the plate, there is still hope for adequate safeguards without undue interference from the private sector.

Here are other top AI stories from the past few days:

  • OpenAI’s trust and safety head steps down: Dave Willner, an industry veteran who was OpenAI’s head of trust and safety, announced in a post on LinkedIn that he has left the job and transitioned to an advisory role. OpenAI said in a statement that it is looking for a replacement and that CTO Mira Murati will manage the team on an interim basis.
  • Custom instructions for ChatGPT: In more OpenAI news, the company has launched custom instructions for ChatGPT users so they don’t have to write the same instruction prompts to the chatbot every time they interact with it (see the API sketch after this list for how developers can approximate the feature).
  • Google tests an AI news writer: Google is testing a tool that uses AI to write news stories and has started demoing it to publications, according to a new report from The New York Times. The tech giant has pitched the AI system to The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp.
  • Apple tests a chatbot like ChatGPT: Apple is developing AI to challenge OpenAI, Google and others, according to a new report from Bloomberg’s Mark Gurman. Specifically, the tech giant created a chatbot that some engineers are referring to internally as “Apple GPT.”
  • Meta releases Llama 2: Meta unveiled a new family of AI models, Llama 2, designed to power apps along the lines of OpenAI’s ChatGPT, Bing Chat and other modern chatbots. Trained on a mix of publicly available data, Llama 2’s performance has improved significantly over the previous generation of Llama models, Meta claims.
  • Authors protest against generative AI: Generative AI systems like ChatGPT are trained on publicly available data, including books – and not all content creators are pleased with the arrangement. In an open letter signed by more than 8,500 authors of fiction, nonfiction and poetry, the tech companies behind large language models like ChatGPT, Bard and LLaMa are taken to task for using their writing without permission or compensation.
  • Microsoft brings Bing Chat to the enterprise: At its annual Inspire conference, Microsoft announced Bing Chat Enterprise, a version of its AI-powered Bing Chat chatbot with business-focused privacy and data governance controls. With Bing Chat Enterprise, chat data is not saved, Microsoft cannot see a customer’s employee or business data and customer data is not used to train the underlying AI models.
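On the custom instructions item above: the feature is ChatGPT-only, but developers have long been able to approximate it over the API by pinning the same system message to every conversation. A minimal sketch using the openai Python package’s pre-1.0 ChatCompletion interface (the instruction text and model choice are just placeholders):

```python
# Approximating ChatGPT-style custom instructions via the API: reuse one
# system message for every conversation. Assumes `pip install "openai<1"`
# and an OPENAI_API_KEY environment variable.
import openai

CUSTOM_INSTRUCTIONS = (
    "I'm a data journalist. Keep answers brief, cite units, "
    "and prefer tables over prose for numbers."
)

def chat(user_message: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The pinned "custom instructions" go first, every single time...
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            # ...so the user never has to retype them.
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat("Summarize this week's AI policy news in three bullets."))
```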

More machine learning

Technically this was also a news item, but it’s worth mentioning here in the research section. Fable Studios, which has previously made CG and 3D short films for VR and other media, demonstrated an AI model it calls Showrunner that can (it claims) write, direct, act in and edit an entire TV show – in its demo, the show was South Park.

I’m of two minds on this. On one hand, I think it’s in pretty bad taste to do this at all, not least in the middle of a massive Hollywood strike centered on compensation and AI issues. Though CEO Edward Saatchi said he believes the tool puts power in the hands of creators, the opposite can just as easily be argued. In any case, people in the industry did not take it seriously.

On the other hand, if someone on the creative side (which Saatchi is) doesn’t explore and demonstrate these capabilities, then others with fewer scruples will. Even if Fable’s claims are a bit expansive relative to what it actually showed (which has serious limitations), it’s similar to the original DALL-E in that it prompted discussion and indeed concern, even though it was no replacement for a real artist. AI will have a place in media production one way or another, but for a number of reasons it should be approached with caution.

On the policy side, the National Defense Authorization Act recently went through with (as usual) some really odd policy riders that have nothing to do with defense. But among them was an addition saying that the government must host an event where researchers and companies can do their best to detect AI-generated content. This kind of thing is definitely approaching “national crisis” levels, so it’s probably good that it slipped in there.

Over at Disney Research, they’re always trying to find a way to reconcile the digital and the real — presumably for park purposes. In this case they have developed a way to map the virtual movements of a character or motion capture (say, for a CG dog in a film) onto an actual robot, even if that robot is a different shape or size. It relies on two optimization systems, each informing the other about what is ideal and what is possible, sort of like a little ego and super-ego. This should make it much easier to get robot dogs to act like real dogs, but of course it generalizes to other things as well.
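To make the “ideal versus possible” framing concrete, here is a toy sketch of the idea: one update pulls the joint trajectory toward the reference motion while keeping it smooth, and a projection step enforces the robot’s own joint limits. Disney Research’s actual system is far more sophisticated; all names and numbers here are invented for illustration:

```python
# Toy motion retargeting: alternate between an "ideal" gradient step
# (track the mocap reference, stay smooth) and a "possible" projection
# step (stay inside the robot's joint limits).
import numpy as np

def retarget(reference: np.ndarray, lo: np.ndarray, hi: np.ndarray,
             steps: int = 200, lr: float = 0.1) -> np.ndarray:
    """reference: (T, J) joint angles from mocap; lo/hi: (J,) robot limits."""
    motion = np.clip(reference, lo, hi)  # start from a feasible guess
    for _ in range(steps):
        # "Ideal": gradient step on tracking error plus a first-difference
        # smoothness penalty (interior timesteps only).
        track_grad = 2.0 * (motion - reference)
        smooth_grad = np.zeros_like(motion)
        smooth_grad[1:-1] = 2.0 * (2.0 * motion[1:-1] - motion[:-2] - motion[2:])
        motion = motion - lr * (track_grad + 0.5 * smooth_grad)
        # "Possible": project back inside the robot's joint limits.
        motion = np.clip(motion, lo, hi)
    return motion

T, J = 120, 8
mocap = 0.8 * np.sin(np.linspace(0, 4 * np.pi, T))[:, None] * np.ones(J)  # fake gait
robot_motion = retarget(mocap, lo=np.full(J, -0.5), hi=np.full(J, 0.5))
print(robot_motion.min(), robot_motion.max())  # stays within the robot's limits
```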

And I hope AI can help us steer the world away from mining the bottom of the sea for minerals, because that’s definitely a bad idea. A multi-institutional study put AI’s ability to filter signal from noise to the task of predicting the location of valuable minerals around the globe. As they write in the abstract:

In this work, we embrace the complexity and underlying “message” of our planet’s interconnected geological, chemical, and biological systems by using machine learning to define patterns embedded in the multidimensionality of minerals and assemblages.

The study predicted and verified the locations of uranium, lithium and other valuable minerals. And how about this for a final line: the system will “improve our understanding of mineralization and mineralization environments on Earth, throughout our solar system, and through deep time.” Great.
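The core intuition — minerals that co-occur at known localities hint at what else might be found at a new one — can be sketched in a few lines. The data and scoring below are invented for illustration and are far simpler than the study’s actual methods:

```python
# Toy mineral co-occurrence scoring: from which minerals have been recorded
# together at known sites, score how plausible an unobserved mineral is at a
# new site. All locality data here is made up.
from collections import Counter
from itertools import combinations

localities = [
    {"quartz", "muscovite", "spodumene"},
    {"quartz", "muscovite", "lepidolite"},
    {"quartz", "feldspar", "uraninite"},
    {"quartz", "feldspar", "calcite"},
]

pair_counts = Counter()     # how often two minerals appear together
mineral_counts = Counter()  # how often each mineral appears at all
for site in localities:
    mineral_counts.update(site)
    pair_counts.update(frozenset(p) for p in combinations(sorted(site), 2))

def score(candidate: str, observed: set) -> float:
    """Mean conditional frequency P(candidate | each observed mineral)."""
    probs = [pair_counts[frozenset((candidate, m))] / mineral_counts[m]
             for m in observed]
    return sum(probs) / len(probs)

# Which mineral is more plausible at a new quartz + muscovite site?
print(score("spodumene", {"quartz", "muscovite"}))  # higher
print(score("calcite", {"quartz", "muscovite"}))    # lower
```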
