Since the launch of OpenAI’s ChatGPT in November 2022, talk about conversational and generative AI has been frequent, loud and filled with predictions about the opportunities and challenges ahead.
No matter how you feel about AI, there’s no doubt it’s here to stay and will keep evolving, given how profoundly it’s already changing the way we live, work, collaborate, think and create.
Over the past three months, I’ve been digging into all things conversational AI: the opportunities and risks, the companies and players working on new tools and policies, and some of the issues around this new technological frontier. Each week I’ll share a few of the more notable things happening in the world of artificial intelligence that I think are worth paying attention to.
Since this is my first In the Loop on AI recap, I’m rounding up some of the highlights from the past month or so, with links to the source material so you can dive in.
AI could spell a bad end for humanity. Or not. In March, prominent AI researchers and tech executives, including Apple co-founder Steve Wozniak and Twitter owner Elon Musk, signed an open letter calling for a six-month pause in AI development to give the industry time to establish safety standards around the design and training of these powerful and potentially dangerous systems.
“We’ve reached the point where these systems are intelligent enough that they can be used in ways that are dangerous to society,” artificial intelligence pioneer Yoshua Bengio, director of the Montreal Institute for Learning Algorithms at the University of Montreal, told The Wall Street Journal in an interview at the time. “And we still don’t understand them.”
The past two months have seen dueling posts about the potential threats and joys of AI. In a blunt, one-sentence open letter signed by prominent figures including OpenAI CEO Sam Altman and Geoffrey Hinton, known as a godfather of AI, experts said AI could pose a “risk of extinction” on a par with pandemics and nuclear war. In contrast, venture capitalist and internet pioneer Marc Andreessen, whose firm has backed numerous AI startups, wrote a nearly 7,000-word post on “Why AI Will Save the World.”
Which brings us to this week’s entry, courtesy of 119 CEOs from a variety of industries who responded to a survey for the Yale CEO Summit. Forty-two percent said AI could potentially destroy humanity (34% gave it 10 years, 8% gave it just five), while the remaining 58% said that could never happen and that they’re “not worried,” according to CNN’s summary of the results. In a separate question, Yale found that 42% of respondents think the potential for AI catastrophe is overstated, while 58% say it’s not.
I’m glad that’s all cleared up.
AI doesn’t always paint pretty pictures. What does a CEO look like? Or a drug dealer? Those are the questions Bloomberg answered in its story about how text-to-image generators produce a deeply distorted view of the world, one that’s even more biased than reality. After analyzing more than 5,000 images generated by Stable Diffusion (a competitor to OpenAI’s DALL-E), Bloomberg found that “The world according to Stable Diffusion is run by white male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.”
“We’re essentially projecting a single worldview into the world, rather than representing different kinds of cultures or visual identities,” Sasha Luccioni, a researcher at AI startup Hugging Face who co-authored a study of bias in text-to-image generative AI models, told Bloomberg. “The question is, who is responsible? The dataset providers? The model trainers? Or the creators?”
All good questions.
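For a sense of how an audit like Bloomberg’s works in practice, here’s a minimal sketch using the open-source diffusers library. The model checkpoint, prompts and sample size are illustrative assumptions on my part, not Bloomberg’s actual methodology.

```python
# A minimal sketch of a text-to-image bias audit: generate a batch of
# images per occupation prompt, then tally what the model depicts.
# Model ID, prompts and counts are illustrative, not Bloomberg's setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompts = ["a color portrait of a CEO", "a color portrait of a judge"]
for prompt in prompts:
    for i in range(10):  # Bloomberg generated hundreds per prompt
        image = pipe(prompt).images[0]
        image.save(f"{prompt.replace(' ', '_')}_{i:02d}.png")

# Classifying each output's perceived gender and skin tone (the step that
# produces the actual statistics) is a separate, human-validated task
# and isn't shown here.
```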
The Beatles return for one final song: A new “last” Beatles song featuring the original Fab Four will be released this year, thanks to AI. Paul McCartney told the BBC in June that AI was used to isolate John Lennon’s vocal track from the demo of an unreleased song (rumored to be a 1978 Lennon composition called “Now and Then”).
We already know it’s possible to isolate vocal tracks from existing recordings (see the infamous isolated takes of Linda McCartney’s ear-splitting vocals on “Hey Jude” and Yoko Ono’s “painful” contribution to Lennon’s work).
From the BBC: “Sir Paul had received the demo a year earlier from Lennon’s widow, Yoko Ono. It was one of several songs on a tape labeled ‘For Paul’ that Lennon had made shortly before his death in 1980. Lo-fi and embryonic, the songs were largely recorded on a boombox while the musician sat at a piano in his apartment in New York.”
The news generated so much buzz that McCartney posted a tweet on June 22 reiterating that it’s actually the Fab Four singing on the track and that AI wasn’t used to generate any new vocals.
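McCartney didn’t say which tool was used on the Lennon demo, but open-source “stem separation” models give a feel for the general technique. Here’s a minimal sketch using Meta’s open-source Demucs separator (installed with pip install demucs); it illustrates vocal isolation in general, not the proprietary tool used on the Beatles track.

```python
# A minimal sketch of AI vocal isolation with the open-source Demucs
# separator. This shows the general technique only; it is not the tool
# actually used on the Lennon demo.
import demucs.separate

# Split demo.mp3 into two stems: vocals and everything else. With the
# default htdemucs model, results land in ./separated/htdemucs/demo/
# as vocals.wav and no_vocals.wav.
demucs.separate.main(["--two-stems", "vocals", "demo.mp3"])
```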
Will the new Beatles song be good or bad? I don’t know, but what I do know is that it might not qualify for a Grammy. CNET reporter Nina Raemont noted that the Grammy Awards will consider only music composed by humans to be eligible for the 2024 awards show. “Only human creators are eligible to be submitted for consideration,” the new Grammy rules read. “A work that contains no human authorship is not eligible in any category.” Artists can still use AI tools to create music, but the human contribution to a submitted work must be “meaningful and more than de minimis.”
The $5,000 hallucination: In case you don’t know, some AI chatbots can “hallucinate,” a polite way of saying they make things up that sound like the truth but aren’t. Well, two New York lawyers found out the hard way that hallucinating, at least when it comes to filing legal briefs in federal court, is definitely not OK.
The two lawyers, who used ChatGPT to help write a legal brief, were sanctioned by the court after the chatbot was found to have invented nonexistent cases that the brief then cited as precedents. They were fined $5,000.
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” US District Judge P. Kevin Castel wrote in his sanctions order. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
Cats, dogs, work: AI engines like ChatGPT don’t have human-level intelligence and aren’t even as smart as a cat or a dog, Meta’s chief AI scientist Yann LeCun said at the Viva Tech conference in June. That’s because most generative AI engines built on large language models, or LLMs, aren’t very intelligent: they’re trained only on language, not on images or video.
“These systems are still very limited, they don’t have any understanding of the underlying reality of the real world because they’re trained entirely on text, massive amounts of text,” LeCun said. “Most human knowledge has nothing to do with language… so that part of human experience is not captured by AI.”
As an example, he noted that while an AI system can pass the bar exam for lawyers, it can’t load a dishwasher, something a 10-year-old can learn in 10 minutes.
“What that tells you [is that] we’re missing something really big … to reach not just human-level intelligence, but even dog intelligence,” LeCun said. He added that Meta is working on training its AI on video, which he said is much more complicated than text. We have “no idea how to reproduce this capacity with machines today. Until we can do that, we are not going to have human-level intelligence, we are not going to have dog-level or cat-level [intelligence].”
Meanwhile, Airbnb CEO Brian Chesky says he’s not worried about AI taking jobs. He thinks AI will help create more startups, thanks to all the time and money it will save on coding tasks and because you won’t need to be a computer scientist to code. Here’s an excerpt of what Chesky said, according to CNBC:
AI is making Airbnb’s software engineers more efficient, Chesky said, with 30% of their daily tasks likely to be handled by ChatGPT-like tools within the next six months. That doesn’t mean those engineers’ jobs are necessarily at risk, he said, arguing that the time saved could let them focus on harder, more customized projects.
Computer scientists aren’t the only potential beneficiaries, either. As artificial intelligence advances, you’ll be able to tell chatbots in plain English what you want in a website and the technology will build it for you, no coding languages needed, Airbnb’s CEO said.
“I think it will create millions of startups … entrepreneurship will be a boon,” Chesky said. “Anyone can essentially do the equivalent of what software engineering allowed you to do just five years ago.”
The flip side for all those software engineers comes from Elon Musk, who warned in May that things might get tough “if AI can do your job better than you.”