April 20, 2024

OpenAI’s head of trust and safety, Dave Willner, resigns

A significant personnel change is underway at OpenAI, the artificial intelligence juggernaut that almost single-handedly introduced the concept of generative AI into global public discourse with the launch of ChatGPT. Dave Willner, an industry veteran who served as the startup’s head of trust and safety, announced in a LinkedIn post last night that he has left the job and moved into an advisory role. He plans to spend more time with his young family, he said. He had been in the role for a year and a half.

His departure comes at a critical time for the AI world.

In addition to all the excitement about the capabilities of generative AI platforms – which are built on large language models and make it remarkably fast and easy to produce text, images, music and more from simple user prompts – there is a growing list of questions. How best to regulate activity and companies in this brave new world? How best to mitigate any harmful impacts across a whole spectrum of issues? Trust and safety are fundamental parts of those conversations.

Just today, OpenAI president Greg Brockman is scheduled to appear at the White House along with executives from Anthropic, Google, Inflection, Microsoft, Meta, and Amazon to endorse voluntary commitments to pursue shared safety and transparency goals ahead of a forthcoming AI Executive Order. That comes in the wake of a lot of noise in Europe about AI regulation, as well as shifting sentiment among some others.

The importance of all this is not lost on OpenAI, which has sought to position itself as a knowledgeable and responsible player in the field.

Willner makes no reference to any of that specifically in his LinkedIn post. Instead, he keeps things high-level, noting that the demands of his OpenAI job shifted into a “high-intensity stage” after the launch of ChatGPT.

“I’m proud of everything our team has accomplished during my time at OpenAI, and while my job was one of the most innovative and interesting jobs available today, it has grown dramatically in scope and scale since I first started,” he wrote. Although he and his wife – Charlotte Willner, who is also a trust and safety specialist – had committed to always putting family first, he said, “in the months after ChatGPT launched, it was harder for me to hold up my end of the bargain.”

Willner was in the OpenAI position for only a year and a half, but he arrived with a long career in the field that included leading trust and safety teams at Facebook and Airbnb.

The Facebook work is particularly interesting. There, he was an early employee who helped spell out the company’s first community standards, which are still used as the foundation of the company’s approach today.

That was a very formative period for the company – and, given the impact Facebook has had on the development of social media around the world, arguably for the internet and society in general. Some of those years included very outspoken positions on freedom of speech, and on how Facebook needed to resist calls to shut down controversial groups and controversial posts.

One case in 2009, which played out very publicly, involved a lot of controversy over how Facebook handled accounts and posts from Holocaust deniers. Some employees and outside observers felt that Facebook had an obligation to take a stand and ban those posts. Others believed that doing so was tantamount to censorship and sent the wrong message about free discourse.

Willner was in the latter camp, believing that “hate speech” was not the same as “direct harm” and should not be moderated in the same way. “I don’t believe that Holocaust Denial, as an idea on it’s [sic] own, inherently represents a threat to the safety of others,” he wrote at the time. (For a blast from TechCrunch’s past, see the full post on this here.)

In hindsight, given how so much else has played out, it was a pretty short-sighted position. But it seems that at least some of those ideas evolved. By 2019, no longer employed by the social network, he was speaking out against the company’s willingness to grant politicians and public figures weaker content moderation exemptions.

But if the need to lay the right groundwork at Facebook was bigger than people anticipated at the time, it could be argued that the need is even greater now for the new wave of technology. According to a New York Times story from less than a month ago, OpenAI originally brought Willner on to help it figure out how to keep DALL-E, the startup’s image generator, from being misused for things like creating AI-generated child pornography.

But as the saying goes, OpenAI (and the industry) needs that policy yesterday. “Within a year, we’ll be reaching a critical point in this area,” David Thiel, chief technologist at Stanford’s Internet Observatory, told the NYT.

Now, without Willner, who will lead OpenAI’s charge to address that?

(We’ve reached out to OpenAI for comment and will update this post with any responses.)
