Terry H. Schwadron

May 24, 2024

If we want to know the current boundaries on artificial intelligence, apparently we need to track threats of legal challenge rather than look to government regulators.

The current case involves actress Scarlett Johansson, whose voice OpenAI seems to have copied nearly perfectly for a “personal assistant” in its ChatGPT service, much to her annoyance, prompting letters from her lawyers asking the company to detail how it developed the voice.

I had to read, rather than just know, that Johansson voiced a fictional AI character in the romantic sci-fi film “Her,” and that Sam Altman, the OpenAI CEO, says the 2013 film is his favorite movie.

Just for the legal record, OpenAI executives have denied any connection between Johansson and the new voice assistant — and the company has now dropped the voice — but Altman had been pursuing the actress for months to grant her paid permission. By various accounts, Altman thought her voice would be “comforting to people” who are uneasy with AI technology.

Johansson consistently said no, finding the mimicked voice a personal and legal affront.

“I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” she said in her own voice. She found the creation of an aped voice alarming at a moment when the internet is filled with disinformation.

A business column in The New York Times called the clash “another sign of eroding trust in OpenAI, which has taken fire from creative industries and former employees.” The Times is among several news organizations that have challenged OpenAI for using their material from the internet without payment.

AI and Creative Work

Whatever else we can deduce from the Johansson incident, we can all agree that AI is digging fast into the tools we use in our daily lives. Every business is considering whether there are advantages worth pursuing, whether in cost-cutting or in speed.

A friend recently shared songs she had ordered up from AI engines to set poems she had written years ago to music, in the various styles she thought appropriate. It was a task that took AI a few seconds, with results remarkably like the requested settings. She prefaced her story by insisting that she has absolutely no talent for creating music, but appreciated an easy button that provides a passable product. Since I have an adult son who composes and performs music, this was a personal head-scratcher about how exactly he is supposed to be compensated in a world that can turn to machines for instant product.

Though the lure of AI was its ability to speed through “routine” work and research, we’re hearing a lot more these days about AI entering the worlds of creative output.

AI-produced work was a central issue in the Screen Actors Guild and Hollywood writers’ work stoppages, for example.

“Generative AI can already do a lot,” notes the Harvard Business Review. “It’s able to produce text and images, spanning blog posts, program code, poetry, and artwork (and even winning competitions, controversially).”

This week, Will Lewis, the CEO and publisher of The Washington Post, announced that the news outlet will be pivoting to artificial intelligence — not in an attempt to improve its work, but to turn around its dismal financial situation. As Semafor media industry editor Max Tani tweeted, Lewis told Post staffers that the newspaper will be looking to have “AI everywhere in our newsroom” as it seeks to recoup some of the $77 million it lost last year.

Given the early record of AI products producing unverified research and wrong interpretations of recorded events, this is a distressing note to someone who spent a lifetime in newsrooms precisely because they called for rigor in sifting information.

OpenAI has struck a deal with News Corp, parent of The Wall Street Journal and the New York Post, that will bring content from its media outlets to ChatGPT and other OpenAI products.

We can be happy that OpenAI is now open to paying for the content that feeds its models, but still worried about whether the results will be used without question or further checking.

There is concern in Washington — but no congressional action — over the easy production of disinformation, which is accelerating with the help of machine-generated but potentially false information.

What there does not seem to be is any real questioning about perfectly legal replacement of information workers and creative artists with very good robots.

##
