Artificial intelligence – Digital Tech Blog

Google criticized as AI Overview makes obvious errors, such as saying former President Obama is Muslim (May 24, 2024)

It’s been less than two weeks since Google debuted “AI Overview” in Google Search, and public criticism has mounted after queries have returned nonsensical or inaccurate results within the AI feature — without any way to opt out.

AI Overview shows a quick summary of answers to search questions at the very top of Google Search. For example, if a user searches for the best way to clean leather boots, the results page may display an “AI Overview” at the top with a multistep cleaning process, gleaned from information it synthesized from around the web.

But social media users have shared a wide range of screenshots showing the AI tool giving incorrect and controversial responses.

Google, Microsoft, OpenAI and other companies are at the helm of a generative AI arms race as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors. The market is predicted to top $1 trillion in revenue within a decade.

Here are some examples of errors produced by AI Overview, according to screenshots shared by users.

When asked how many Muslim presidents the U.S. has had, AI Overview responded, “The United States has had one Muslim president, Barack Hussein Obama.”

When a user searched for “cheese not sticking to pizza,” the feature suggested adding “about 1/8 cup of nontoxic glue to the sauce.” Social media users found an 11-year-old Reddit comment that seemed to be the source.

Attribution can also be a problem for AI Overview, especially in attributing inaccurate information to medical professionals or scientists.

For instance, when asked, “How long can I stare at the sun for best health,” the tool said, “According to WebMD, scientists say that staring at the sun for 5-15 minutes, or up to 30 minutes if you have darker skin, is generally safe and provides the most health benefits.”

When asked, “How many rocks should I eat each day,” the tool said, “According to UC Berkeley geologists, people should eat at least one small rock a day,” going on to list the vitamins and digestive benefits.

The tool also can respond inaccurately to simple queries, such as making up a list of fruits that end with “um,” or saying the year 1919 was 20 years ago.

When asked whether or not Google Search violates antitrust law, AI Overview said, “Yes, the U.S. Justice Department and 11 states are suing Google for antitrust violations.”

The day Google rolled out AI Overview at its annual Google I/O event, the company said it also plans to introduce assistant-like planning capabilities directly within search. It explained that users will be able to search for something like, “Create a 3-day meal plan for a group that’s easy to prepare,” and they’d get a starting point with a wide range of recipes from across the web.

“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” a Google spokesperson told CNBC in a statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

The spokesperson said AI Overview underwent extensive testing before launch and that the company is taking “swift action where appropriate under our content policies.”

The news follows Google’s high-profile rollout of Gemini’s image-generation tool in February, and a pause that same month after comparable issues.

The tool allowed users to enter prompts to create an image, but almost immediately, users discovered historical inaccuracies and questionable responses, which circulated widely on social media.

For instance, when one user asked Gemini to show a German soldier in 1943, the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots on social media platform X.

When asked for a “historically accurate depiction of a medieval British king,” the model generated another racially diverse set of images, including one of a woman ruler, screenshots showed. Users reported similar outcomes when they asked for images of the U.S. founding fathers, an 18th-century king of France, a German couple in the 1800s and more. The model showed an image of Asian men in response to a query about Google’s own founders, users reported.

Google said in a statement at the time that it was working to fix Gemini’s image-generation issues, acknowledging that the tool was “missing the mark.” Soon after, the company announced it would immediately “pause the image generation of people” and “re-release an improved version soon.”

In February, Google DeepMind CEO Demis Hassabis said Google planned to relaunch its image-generation AI tool in the next “few weeks,” but it has not yet rolled out again.

The problems with Gemini’s image-generation outputs reignited a debate within the AI industry, with some groups calling Gemini too “woke,” or left-leaning, and others saying that the company didn’t sufficiently invest in the right forms of AI ethics. Google came under fire in 2020 and 2021 for ousting the co-leads of its AI ethics group after they published a research paper critical of certain risks of such AI models and then later reorganizing the group’s structure.

In 2023, Sundar Pichai, CEO of Google’s parent company, Alphabet, was criticized by some employees for the company’s botched and “rushed” rollout of Bard, which followed the viral spread of ChatGPT.

Correction: This article has been updated to reflect the correct name of Google’s AI Overview. Also, an earlier version of this article included a link to a screenshot that Google later confirmed was doctored.



OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it (May 18, 2024)

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

OpenAI’s Superalignment team, announced last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do. On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to both himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

News of the team’s dissolution was first reported by Wired.

Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike did not immediately respond to a request for comment.

The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.

In November, OpenAI’s board ousted Altman, saying in a statement that Altman had not been “consistently candid in his communications with the board.”

The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations or threats of resignations, including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but no longer in his capacity as a board member. Adam D’Angelo, who had also voted to oust Altman, remained on the board.

When Altman was asked about Sutskever’s status on a Zoom call with reporters in March, he said there were no updates to share. “I love Ilya … I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

On Tuesday, Altman shared his thoughts on Sutskever’s departure.

“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.” Altman said research director Jakub Pachocki, who has been at OpenAI since 2017, will replace Sutskever as chief scientist.

News of Sutskever’s and Leike’s departures, and the dissolution of the superalignment team, come days after OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface, the company’s latest effort to expand the use of its popular chatbot.

The update brings the GPT-4 model to everyone, including OpenAI’s free users, technology chief Mira Murati said Monday in a livestreamed event. She added that the new model, GPT-4o, is “much faster,” with improved capabilities in text, video and audio.

OpenAI said it eventually plans to allow users to video chat with ChatGPT. “This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said.



How Spotify AI plans to know what’s going on inside your head, and find the right track for it (April 14, 2024)

With about 100 million tracks available and over 600 million subscribers, helping listeners find the music they will love has become a navigational challenge for Spotify. The promise of personalization and meaningful recommendations is what makes that vast catalog navigable, and it is central to Spotify’s mission.

The streaming audio giant’s suite of recommendation tools has grown over the years: Spotify Home feed, Discover Weekly, Blend, Daylist, and Made for You Mixes. And in recent years, there have been signs that it is working. According to data released by Spotify at its 2022 Investor Day, artist discoveries every month on Spotify had reached 22 billion, up from 10 billion in 2018, “and we’re nowhere near done,” the company stated at that time.

Over the past decade or more, Spotify has been investing in AI and, in particular, in machine learning. Its recently launched AI DJ may be its biggest bet yet that technology will allow subscribers to better personalize listening sessions and discover new music. The AI DJ mimics the feel of radio by announcing song names and providing lead-ins to tracks, aimed in part at easing listeners into stretching beyond their comfort zones. An existing pain point for AI algorithms, which can be excellent at giving listeners what they know they already like, is anticipating when a listener wants to break out of that comfort zone.

The AI DJ combines personalization technology, generative AI and a dynamic AI voice, and listeners can tap the DJ button when they want to hear something new, something less directly derived from their established likes. Behind the dulcet tones of the AI DJ are people, both tech experts and music experts, who aim to improve the recommendation capacity of Spotify’s tools. The company has hundreds of music editors and experts across the globe. A Spotify spokesperson said the generative AI tool allows the human experts to “scale their innate knowledge in ways never before possible.”

The data on a particular song or artist captures a few kinds of attributes: the track’s musical features, and which songs or artists it is typically paired with across the millions of listening sessions the AI algorithm can access. Gathering information about the song itself is fairly easy: release year, genre and mood, from happy to danceable or melancholic. Various musical attributes, such as tempo, key and instrumentation, are also identified. Combining this data with millions of listening sessions and other users’ preferences helps generate new recommendations, and makes the leap possible from aggregated data to assumptions about an individual listener.
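To make the attributes described above concrete, here is a minimal, purely illustrative sketch of what a per-track attribute record could look like in Python. Spotify’s actual feature set and schema are not public, so every field name here is an assumption.

```python
# Hypothetical per-track attribute record; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class TrackFeatures:
    title: str
    release_year: int
    genre: str
    mood: str                 # e.g. "happy", "danceable", "melancholic"
    tempo_bpm: float
    key: str
    instrumentation: list[str]

track = TrackFeatures(
    title="Example Song",
    release_year=2021,
    genre="indie pop",
    mood="melancholic",
    tempo_bpm=96.0,
    key="A minor",
    instrumentation=["guitar", "synth", "drums"],
)
print(track.mood, track.tempo_bpm)
```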

In its simplest formulation, “Users who liked Y also liked Z. We know you like Y, so you might like Z,” is how an AI finds matches. And Spotify says it’s working. “Since launching DJ, we’ve found that when DJ listeners hear commentary alongside personal music recommendations, they’re more willing to try something new (or listen to a song they may have otherwise skipped),” the spokesperson said. 
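As a rough illustration of that “users who liked Y also liked Z” idea, the toy Python sketch below counts how often tracks co-occur in the same listening session and recommends the most frequent co-occurrences. It is a deliberately simplified stand-in, not Spotify’s actual recommender, and the session data is made up.

```python
# Toy "users who liked Y also liked Z" recommender built on session co-occurrence.
from collections import Counter
from itertools import combinations

# Hypothetical listening sessions: each is a set of track IDs a user liked.
sessions = [
    {"track_a", "track_b", "track_c"},
    {"track_a", "track_b"},
    {"track_b", "track_d"},
    {"track_a", "track_c", "track_d"},
]

# Count how often each ordered pair of tracks appears in the same session.
co_counts = Counter()
for session in sessions:
    for x, y in combinations(sorted(session), 2):
        co_counts[(x, y)] += 1
        co_counts[(y, x)] += 1

def recommend(liked_track, k=3):
    """Return up to k tracks most often co-liked with `liked_track`."""
    scores = Counter({other: n for (a, other), n in co_counts.items() if a == liked_track})
    return [track for track, _ in scores.most_common(k)]

print(recommend("track_a"))  # e.g. ['track_b', 'track_c', 'track_d']
```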

If successful, it’s not just listeners that get relief from a pain point. A great discovery tool is as beneficial to the artists seeking to build connections with new fans.

Julie Knibbe, founder and CEO of Music Tomorrow — which aims to help artists connect with more listeners by understanding how algorithms work and how to better work with them — says everyone is trying to figure out how to balance familiarity and novelty in a meaningful way, and everyone is leaning on AI algorithms to help make this possible. But she says the balance between discovering new music and staying with established patterns is a central unresolved issue for everyone involved, from Spotify to listeners to the artists.

“Any AI is only good at what you tell them to do,” Knibbe said. “These recommender systems have been around for over a decade and they’ve become very good at predicting what you will like. What they can’t do is know what’s in your head, specifically when you want to venture out into a new musical terrain or category.” 

Spotify’s Daylist is an attempt to use generative AI to take into account established tastes, but also the varying contexts that can shape and reshape a listener’s tastes over the course of a day, and to make new recommendations that fit various moods, activities and vibes. Knibbe says it’s possible that improvements like these continue and the AI gets better at finding the formula for how much novelty a listener wants, but she added, “the assumption that people want to discover new music all the time is not true.”

Most people still return, fairly happily, to familiar musical terrain and listening patterns. 

“You have various profiles of listeners, curators, experts … people put different demands on the AI,” Knibbe said. “Experts are more difficult to surprise, but they aren’t the majority of listeners, who tend to be more casual,” and whose Spotify usage, she says, often amounts to creating a “comfortable background” to daily life.

Technology optimists often speak in terms of an era of “abundance.” With 100 million songs available, but many listeners preferring the same 100 songs a million times over, it’s easy to understand why a new balance is being sought. But Ben Ratliff, a music critic and author of “Every Song Ever: Twenty Ways to Listen in an Age of Musical Plenty,” says algorithms are less a solution to this problem than a further entrenchment of it.

“Spotify is good at catching onto popular sensibilities and creating a soundtrack for them,” Ratliff said. “Its Sadgirl Starter Pack playlist, for instance, has a great name and about a million and a half likes. Unfortunately, under the banner of a gift, the SSP simplifies the oceanic complexity of young-adult depression into a small collection of dependably ‘yearny’ music acts, and makes hard clichés of music and sensibility form more quickly.” 

Works of curation that are clearly made by actual people with actual preferences remain Ratliff’s preference. Even a good playlist, he says, might have been made without much intention and conscience, but just a developed sense of pattern recognition, “whether it’s patterns of obscurity or patterns of the broadly known,” he said.

Depending on the individual, AI may have equal chances of becoming either a utopian or dystopian solution within the 100-million-track universe. Ratliff says most users should keep things simpler in their streaming music journeys. “As long as you realize that the app will never know you in the way you want to be known, and as long as you know what you’re looking for, or have some good prompts at the ready, you can find lots of great music on Spotify.”

Deepfakes in an election year — is Asia ready to handle misinformation campaigns? (March 14, 2024)

2024 is set up to be the biggest global election year in history, and it coincides with a rapid rise in deepfakes. In APAC alone, deepfakes surged 1,530% from 2022 to 2023, according to a Sumsub report.


Ahead of the Indonesian elections on Feb. 14, a video of late Indonesian president Suharto advocating for the political party he once presided over went viral. 

The AI-generated deepfake video that cloned his face and voice racked up 4.7 million views on X alone. 

This was not a one-off incident. 

In Pakistan, a deepfake of former prime minister Imran Khan emerged around the national elections, announcing his party was boycotting them. Meanwhile, in the U.S., New Hampshire voters heard a deepfake of President Joe Biden’s voice asking them not to vote in the presidential primary.

Deepfakes of politicians are becoming increasingly common, especially with 2024 set up to be the biggest global election year in history. 

Reportedly, at least 60 countries and more than four billion people will be voting for their leaders and representatives this year, which makes deepfakes a matter of serious concern.

Rise of election deepfake risks

According to a Sumsub report in November, the number of deepfakes across the world rose by 10 times from 2022 to 2023. In APAC alone, deepfakes surged by 1,530% during the same period.

Online media, including social platforms and digital advertising, saw the biggest rise in identity fraud rates, at 274%, between 2021 and 2023. Professional services, healthcare, transportation and video gaming were also among the industries affected by identity fraud.

Asia is not ready to tackle deepfakes in elections in terms of regulation, technology, and education, said Simon Chesterman, senior director of AI governance at AI Singapore. 

In its 2024 Global Threat Report, cybersecurity firm Crowdstrike reported that with the number of elections scheduled this year, nation-state actors including from China, Russia and Iran are highly likely to conduct misinformation or disinformation campaigns to sow disruption. 

“The more serious interventions would be if a major power decides they want to disrupt a country’s election — that’s probably going to be more impactful than political parties playing around on the margins,” said Chesterman. 


However, most deepfakes will still be generated by actors within the respective countries, he said. 

Carol Soon, principal research fellow and head of the society and culture department at the Institute of Policy Studies in Singapore, said domestic actors may include opposition parties and political opponents or extreme right wingers and left wingers.

Deepfake dangers

At a minimum, deepfakes pollute the information ecosystem and make it harder for people to find accurate information or form informed opinions about a party or candidate, said Soon.

Voters may also be put off by a particular candidate if they see content about a scandalous issue that goes viral before it’s debunked as fake, Chesterman said. “Although several governments have tools (to prevent online falsehoods), the concern is the genie will be out of the bottle before there’s time to push it back in.”   

“We saw how quickly X could be taken over by the deep fake pornography involving Taylor Swift — these things can spread incredibly quickly,” he said, adding that regulation is often not enough and incredibly hard to enforce. “It’s often too little too late.”


Adam Meyers, head of counter adversary operations at CrowdStrike, said that deepfakes may also invoke confirmation bias in people: “Even if they know in their heart it’s not true, if it’s the message they want and something they want to believe in they’re not going to let that go.”  

Chesterman also said that fake footage showing election misconduct, such as ballot stuffing, could cause people to lose faith in the validity of an election.

On the flip side, candidates may deny the truth about themselves that may be negative or unflattering and attribute that to deepfakes instead, Soon said. 


Who should be responsible?

As deepfakes grow, Facebook, Twitter and Google are working to detect and prevent them.

“We should not just be relying on the good intentions of these companies,” Chesterman added. “That’s why regulations need to be established and expectations need to be set for these companies.”

Toward this end, the Coalition for Content Provenance and Authenticity (C2PA), a nonprofit, has introduced digital credentials for content, which show viewers verified information such as who created the material, where and when it was created, and whether generative AI was used to make it.

C2PA member companies include Adobe, Microsoft, Google and Intel.

OpenAI has announced it will add C2PA content credentials to images created with its DALL·E 3 offering early this year.
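The credential fields described above (who created the material, where and when, and whether generative AI was used) can be pictured roughly as a structured record attached to the asset. The Python sketch below is only an approximation for illustration; it does not follow the real C2PA manifest format or any real library API, and every field name here is hypothetical.

```python
# Illustrative, hypothetical approximation of the kind of provenance fields a
# C2PA-style content credential records; not the real C2PA data model or API.
credential = {
    "creator": "Example News Photo Desk",   # hypothetical creator name
    "created": "2024-02-01T09:30:00Z",      # when the asset was made
    "location": "Jakarta, Indonesia",       # where, if disclosed
    "generative_ai_used": True,             # whether gen AI produced or edited it
    "tool": "DALL-E 3",                     # hypothetical generating tool
}

def provenance_summary(cred: dict) -> str:
    """Render the credential the way a viewer might surface it to a reader."""
    ai_note = "generated or edited with AI" if cred.get("generative_ai_used") else "no AI editing declared"
    return f'{cred["creator"]}, {cred["created"]} ({ai_note})'

print(provenance_summary(credential))
```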

In a Bloomberg House interview at the World Economic Forum in January, OpenAI founder and CEO Sam Altman said the company was “quite focused” on ensuring its technology wasn’t being used to manipulate elections.

“I think it’d be terrible if I said, ‘Oh yeah, I’m not worried. I feel great.’ Like, we’re gonna have to watch this relatively closely this year [with] super tight monitoring [and] super tight feedback.”

“I think our role is very different than the role of a distribution platform” like a social media site or news publisher, he said. “We have to work with them, so it’s like you generate here and you distribute here. And there needs to be a good conversation between them.”

Meyers suggested creating a bipartisan, non-profit technical entity with the sole mission of analyzing and identifying deepfakes.

“The public can then send them content they suspect is manipulated,” he said. “It’s not foolproof but at least there’s some sort of mechanism people can rely on.”

But ultimately, while technology is part of the solution, a large part of it comes down to consumers, who are still not ready, said Chesterman. 

Soon also highlighted the importance of educating the public. 

“We need to continue outreach and engagement efforts to heighten the sense of vigilance and consciousness when the public comes across information,” she said. 

The public needs to be more vigilant; besides fact-checking when something is highly suspicious, users also need to fact-check critical pieces of information, especially before sharing them with others, she said.

“There’s something for everyone to do,” Soon said. “It’s all hands on deck.”

— CNBC’s MacKenzie Sigalos and Ryan Browne contributed to this report.



Sam Altman rejoins OpenAI board of directors as investigation into his ouster comes to a close (March 8, 2024)

OpenAI on Friday announced its new board and the wrap-up of an internal investigation by U.S. law firm WilmerHale into the events leading up to OpenAI CEO Sam Altman’s ouster.

Sam Altman will also rejoin OpenAI’s board.

The new board members are:

  • Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, who is also on the Board of Directors at Pfizer and on the President’s Council of Advisors on Science and Technology.
  • Nicole Seligman, former EVP and Global General Counsel of Sony and President of Sony Entertainment, who is also on the Board of Directors at Paramount Global, Meira GTx and Intuitive Machines, Inc.
  • Fidji Simo, CEO and Chair of Instacart, who is also on the Board of Directors at Shopify.

The three new members will “work closely with current board members Adam D’Angelo, Larry Summers and Bret Taylor as well as Greg, Sam, and OpenAI’s senior management,” according to a release.

OpenAI will continue to expand the board moving forward, according to a Zoom call with reporters.

OpenAI did not publish the investigation report but provided a summary of the findings.

“The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg,” Taylor said, adding that the review also “concluded the board acted in good faith… [and] did not anticipate some of the instability that led afterwards.”

Taylor also said the board’s concerns did not arise over product safety and security, OpenAI’s finances or statements to customers or business partners, and that it was “simply a breakdown in trust between the board and Mr. Altman.”

WilmerHale’s investigation began in December, and the firm submitted its report Friday. The investigation included dozens of interviews with OpenAI’s prior board members, advisors, current executives and other witnesses, and involved reviewing more than 30,000 documents, according to a release.

“We have unanimously concluded that Sam and Greg are the right leaders for OpenAI,” Bret Taylor, chair of OpenAI’s board, said in a release.

“I am very grateful to Bret and Larry and WilmerHale,” Altman said on the Zoom call with reporters. He added, speaking of CTO Mira Murati, “Mira in particular is instrumental to OpenAI all the time … but through that period in November, she has done an amazing job helping to lead the company.”

He added that he is “excited to be moving forward here” and for the situation to be “over.” He also mentioned he wished he had acted differently regarding differences in opinion with the board.

In November, OpenAI’s board ousted Altman, prompting resignations – or threats of resignations – including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Adam D’Angelo, who had also voted to oust Altman, stayed on the board.

When Altman was asked about Sutskever’s status on the Zoom call with reporters, he said there were no updates to share.

“I love Ilya… I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

Since then, OpenAI has announced new board members, including former Salesforce co-CEO Bret Taylor and former Treasury Secretary Larry Summers. Microsoft obtained a nonvoting board observer position.

After ChatGPT’s launch in November 2022, it broke records at the time as the fastest-growing consumer app in history, and now has about 100 million weekly active users, along with more than 92% of Fortune 500 companies using the platform, according to OpenAI. Last year, Microsoft invested an additional $10 billion in the company, making it the biggest AI investment of the year, according to PitchBook, and OpenAI has reportedly closed a deal that will allow employees to sell shares at an $86 billion valuation, though the deal reportedly took longer to close than expected due to the events surrounding Altman’s ouster.

The rollercoaster couple of weeks at the company are still affecting it months later.

This month, billionaire tech magnate Elon Musk sued OpenAI co-founders Sam Altman and Greg Brockman for breach of contract and breach of fiduciary duty, court filings revealed on Thursday.

In his complaint, Musk and his attorneys allege that the ChatGPT maker “has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.” They also argue that this arrangement goes against a founding agreement and 2015 certificate of incorporation that OpenAI established with Musk, who was a pivotal donor to, and co-founder of, OpenAI in its early years.

As part of Microsoft’s contract with OpenAI, the tech giant only has rights to OpenAI’s “pre-AGI” technology, and it is up to OpenAI’s board to determine whether the company has reached that milestone. Musk argued in his filing that since the OpenAI board shuffle in November – when Toner, McCauley and Sutskever were removed – the new board is “ill-equipped” to independently determine whether OpenAI has reached AGI and therefore whether its technology is outside the scope of the exclusivity deal with Microsoft.

Lawyers told CNBC that they had doubts about the legal viability of Musk’s case, and OpenAI has said it plans to file a motion to dismiss all of Musk’s claims.

In response to the high-profile lawsuit, OpenAI reproduced old emails from Musk in which the Tesla and SpaceX CEO encouraged the rising startup to raise at least $1 billion in funding, and agreed that it should “start being less open” over time and “not share” the company’s science with the public.

Musk’s lawsuit also follows some controversy over Altman’s previous chip endeavors and investments.

Just before Altman’s brief ouster, he was reportedly seeking billions for a new and not-yet-formed chip venture code-named “Tigris” to eventually compete with Nvidia, traveling to the Middle East to raise money from investors.

In 2018, Altman personally invested in an AI chip startup called Rain Neuromorphics, based near OpenAI’s San Francisco headquarters, and in 2019, OpenAI signed a letter of intent to spend $51 million on Rain’s chips. In December, the U.S. compelled a Saudi Aramco-backed venture capital firm to sell its shares in Rain.

College AI degree programs are booming. Are they worth the cost? (March 2, 2024)


Computer science is not a new major at top schools, but with AI jobs in high demand, there’s a growing list of colleges and universities offering a four-year “AI” degree specifically.

These programs generally move beyond the foundations of computer science to home in on topics such as machine learning, computing algorithms, data analytics and advanced robotics. The University of Pennsylvania recently announced that its B.S.E. in Artificial Intelligence program will begin in fall 2024. Carnegie Mellon introduced a program well before gen AI was a buzzword, in fall 2018, and MIT’s program began in fall 2022. Purdue University offers an AI undergraduate major, while many colleges and universities offer AI classes within their computer science department, even if there’s not a dedicated major.

The rise of AI-specific degree programs comes as companies are short on talent for this fast-developing field. Half of the highest-paid skills in technology are AI-specific, according to the employment website Indeed.com. Even so, there’s some degree of skepticism about the applicability of an AI-specific four-year degree given how quickly the technology is changing. But proponents say that as long as a program is steeped in computer science and other fundamentals, a focus on AI could provide a resume-building boon.

Here’s what students and their parents, as well as anyone thinking about going back to school for a new career, needs to know about a four-year AI degree:

STEM fundamentals remain critical

Students who want to pursue a degree in AI should look for a program that teaches fundamentals such as computer science concepts, statistics, mathematics and engineering, which lay the foundation for a career in an AI-related field, said Kerem Koca, chief executive of BlueCloud, a cloud service provider. The technology itself is changing, but these core underpinnings do not, and they can prepare students to be successful even as the underlying technology changes, he said.

“It’s important that AI degrees and other education training programs not only focus on specific skill development, but that the focus is on helping students learn how to learn, which includes developing an intellectual curiosity, and skills like leadership, communication and critical thinking,” said Maria Flynn, president and chief executive of Jobs for the Future, an organization that focuses on worker opportunity and education, in an email.

AI degree spike since 2011

There are a number of different programs that focus on AI at the undergraduate and graduate level, and there has been an increase in offerings and degrees being awarded for over a decade now.

According to the Georgetown University Center for Security and Emerging Technology, AI degrees have bucked the general trend in education since 2011, with positive degree conferral growth versus negative growth across all degree areas. AI-related degree awards, in particular, grew even faster than STEM degrees as a general category at the bachelor’s, master’s and PhD levels. Its review of government data and other sources on the higher education market described the growth of AI degree conferrals as “dramatic,” increasing 120% since 2011 at both the bachelor’s and master’s levels.

Some students might also be interested in pursuing AI as an associate’s degree, which several schools, including Miami Dade College, offer.

Education relevance in fast-changing tech market

Some students may wonder if they even need a degree at all, given how fast the market is changing and the fact that more employers have expressed a willingness to hire workers without degrees if they have the appropriate, job-required skills.

Recent research suggests, however, that the practice of hiring people without degrees has fallen short, and research from the Ladders career site shows that a degree is still required for the highest-paying jobs, a list that includes software engineers.

A four-year degree is still a big step up for most entering the job market for the first time, said Celeste Grupman, chief executive of Dataquest, which supplies AI-related educational materials and labs to universities. “It’s still one of the first things an employer is going to look at. It’s not going to get you disqualified, whereas not having one might.” 

Even so, several providers including Dataquest and Coursera, offer certificate programs for learners to build skills quickly. These programs may be appropriate for students who lack the time and resources to complete a four-year program, or already have a degree and are looking to upskill, Grupman said. An online platform allows students to quickly start building projects and understanding how to implement these tools successfully for employment purposes.

AI vs. computer science

It’s important for students to think critically about the curriculum for the program they are considering, how it’s different from a standard computer science curriculum, the likely career trajectory for graduates of the program and economic outcomes for graduates. “As we see in product marketing, anyone can slap ‘AI’ onto an existing product. Students should ask what aspects of AI they will be learning,” Flynn said.

It’s also important for students to carefully consider what they want. Are they looking for a program that provides exposure to AI or practice using AI, or do they want a technical program that provides foundational content and courses on AI technology? They should also consider whether they ultimately want relevant skills and knowledge that will get them into the labor market right now or whether they want a broader degree that will be a foundation for longer-term advancement, Flynn said.

“If you’re an architect, you don’t want a degree in hammers. You want to understand hammers, you want to understand zoning and you want to understand how to build a house that helps a family come alive. The same is true in AI,” said Nichol Bradford, artificial intelligence and human intelligence executive-in-residence with SHRM, an organization for human resources professionals.

How to gain an edge with employers

Some employers may look more favorably upon an AI-specific degree versus a plain-vanilla computer science degree, said David Leighton, chief executive at WITI, an organization for technology-minded professionals. “I think it sets them apart.” 

On the other hand, no one really knows right now what the value of such a degree will be in a few years. “In the year 2000, if you had an internet degree, if there was such a thing, it would have looked great,” Koca said. “Now, it wouldn’t be as applicable. But if you had it in 2002, you could have gotten a job anywhere. The same could be true for a degree in AI.” 

Given the uncertainty, some professionals said students can’t go wrong with a traditional computer science degree or an AI-specific one, provided the fundamentals are covered. Those who take the former route, however, should consider taking classes related to AI and data science, which can be important for future employment. Otherwise, students might need to “close the practical application gap themselves post-graduation,” said Bryan Ackermann, head of AI strategy and transformation at the management consultancy Korn Ferry, in an email.

Honor to launch its first foldable flip phone to challenge Samsung (Feb. 28, 2024)


BARCELONA — Honor will launch a foldable flip phone this year, the company’s CEO told CNBC, marking the Chinese technology firm’s first foray into the vertical-folding style of smartphone that has been popularized by brands like Samsung and Motorola.

“Foldables” refer to smartphones with displays that bend. There are two styles right now on the market — smartphones that fold like a book and those that fold vertically like an old school flip phone but without a visible hinge.

The move to launch a flip foldable is in line with Honor’s push into the premium end of the market where it’s looking to challenge companies like Samsung and Apple.

“This year we are preparing for the flip phone launch — now that we are internally in the final stage,” George Zhao, CEO of Honor, told CNBC in an exclusive interview at the Mobile World Congress in Barcelona.

“We are very positive about foldables for the future.”

Honor already has a number of foldable phones on the market. The latest one is the Honor Magic V2 that it launched last year. But all of Honor’s current foldables are the book style of the folding device.

Honor already has a number of smartphones on the market that fold like a book, such as the Magic V2, which was launched in July 2023. The Chinese tech company is now preparing to launch a vertical-folding style smartphone, its CEO George Zhao told CNBC at the Mobile World Congress.

The entry into flip foldables will expose Honor to a different price point. Flip-style phones, such as Samsung’s Galaxy Z Flip 5, are often priced lower than the horizontal, book-style folding devices.

Foldables are seen as high-end devices in the world of smartphones. Sales for premium smartphones, those over $600, likely grew in 2023 while the overall market declined, according to Counterpoint Research. This is one reason that Honor is targeting the high-end market.

Sales of foldable phones are set to rise 40% year-on-year to 22 million units in 2024, according to Counterpoint Research. But Neil Shah, partner at Counterpoint Research, said demand for the flip foldable may be slowing because “expectations of aggressive pricing of the flip form factor has been difficult.”

“I believe this will create some market gap for newer vendors to enter or expand in the flip segment and test out the market.”

Honor, which was spun off from Huawei in 2020 as a result of U.S. sanctions, has sought to thrive as an independent brand. Honor found early success in China, and its focus is now on growing in international markets, where it does not yet rank among the top five players globally.

Zhao said his hope is that, in the next three to five years, overseas sales volumes will exceed those in China.

Honor’s AI push

Like most device makers at MWC, Honor touted the potential of AI.

For Zhao, the value of the technology will come when a device is able to anticipate what you’d like to do next.

For example, if you get a message from a friend suggesting dinner, the device could anticipate whether you want to look up the restaurant in a mapping app or reply to the message.

“In the future, Honor’s strategy is AI to reconstruct the user interaction. It will be intent-based, the user interaction. So maybe when you interact with your phone, it can understand what is your requirement,” Zhao said.

At MWC, Honor showed off an AI-powered feature on its Magic 6 Pro handset that can track the user’s eyes.

The company also showed off a demonstration of a chatbot built on Meta’s Llama 2 AI model.

Last year, Honor announced its intention to go public. Zhao said preparations are being made for the initial public offering but there is no final timing or destination for the listing.

Amazon launches generative AI tool to answer shoppers’ questions (Jan. 16, 2024)


Amazon is rolling out an artificial intelligence tool that can answer shoppers’ questions about a product, a spokesperson confirmed, as the company continues to experiment with generative AI.

The new feature in Amazon’s mobile app prompts users to ask questions about a specific item. It then returns an answer within a few seconds, primarily by summarizing information collected from product reviews and the listing itself.
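A minimal sketch of the general pattern described here, grounding a generative model’s answer in the listing and review text, might look like the following. This is not Amazon’s implementation; the `generate` callable is a hypothetical stand-in for whatever language-model API is actually used.

```python
# Hedged sketch: answer a shopper's question by prompting a generative model
# with the product listing and a few reviews. Not Amazon's actual system.
def answer_product_question(question: str, listing: str, reviews: list[str],
                            generate) -> str:
    context = listing + "\n\nCustomer reviews:\n" + "\n".join(reviews[:20])
    prompt = (
        "Answer the shopper's question using only the product information below. "
        "If the answer is not in the text, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)

# Example wiring with a dummy model, purely for illustration:
dummy_model = lambda prompt: "Yes, several reviewers say it runs small; consider sizing up."
print(answer_product_question(
    "Does this jacket run small?",
    "Lightweight rain jacket, sizes S-XL.",
    ["Runs a bit small, order one size up.", "Great in light rain."],
    generate=dummy_model,
))
```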

“We’re constantly inventing to help make customers’ lives better and easier, and are currently testing a new feature powered by generative AI to improve shopping on Amazon by helping customers get answers to commonly asked product questions,” Maria Boschetti, an Amazon spokesperson, said in an email.

The feature could keep shoppers from scrolling through pages of reviews or reading through a listing to find information about a product.

Unlike OpenAI’s ChatGPT, Amazon’s new feature isn’t equipped to carry out a conversation, but it can respond to creative prompts. On a listing for a women’s vest, it could write a haiku about the product. It was also able to describe the item in the style of Yoda from Star Wars. The tool is designed not to veer off subject, and will return an error message if it can’t answer questions such as, “Who is Jeff Bezos?”

The tool was first spotted by Marketplace Pulse, an e-commerce research firm.

Amazon has introduced several AI tools to its site in recent months. Last June, the company started testing AI-generated summaries of product reviews, and it has launched AI features for third-party sellers that help them write listings, as well as generate photos for ads. Elsewhere, it has rolled out “Q,” an AI chatbot for companies to assist with daily tasks, and Bedrock, a generative AI service for Amazon Web Services customers.

In Amazon’s latest earnings call, CEO Andy Jassy said the company is using generative AI to forecast inventory and to determine the best last-mile routes for drivers.

“Generative AI is going to change every customer experience, and it’s going to make it much more accessible for everyday developers, and even business users, to use,” Jassy told CNBC’s Jim Cramer last month. “So I think there’s going to be a lot of societal good.”

The first minds to be controlled by generative AI will live inside video games (Dec. 23, 2023)


It’s not just human life that will be remade by the rapid advance in generative artificial intelligence. NPCs (non-playable characters), the figures who populate generated worlds in video games but have to date largely run on limited scripts — think the proprietor of the store you enter — are being tested as one of the first core gaming aspects where AI can improve gameplay and immersiveness. A recent partnership between Microsoft Xbox and Inworld AI is a prime example.

Better dialogue is just the first step. “We’re creating the tech that allows NPCs to evolve beyond predefined roles, adapt to player behavior, learn from interactions, and contribute to a living, breathing game world,” said Kylan Gibbs, chief product officer and co-founder of Inworld AI. “AI NPCs are not just a technological leap. They’re a paradigm shift for player engagement.”

It’s also a big opportunity for the gaming companies and game developers. Shifting from scripted dialogue to dynamic player-driven narratives will increase immersion in a way that drives replayability, retention, and revenue.

The interaction between powerful chips and gaming has for years been part of the success story at Nvidia, but there is now a clear sense in the gaming industry that it is just beginning to get to the point where AI will take off, after some initial uncertainty. 

“All developers are interested in how artificial intelligence can impact game development process,” John Spitzer, vice president of developer and performance technology at Nvidia, recently told CNBC, and he cited powering non-playable characters as a key test case. 


It’s always been true that technological limits and possibilities overdetermine the gaming worlds developers can create. The technology behind AI NPCs, Gibbs says, will become a catalyst for a new era of storytelling, creative expression, and innovative gameplay. But much of what is to come will be “games we have yet to imagine,” he said.

Bing Gordon, an Inworld advisor and former chief creative officer at Electronic Arts, said the biggest advancements in gaming in recent decades have been through improvements in visual fidelity and graphics. Gordon, who is now chief product officer at venture capital firm Kleiner Perkins and serves on the board of gaming company Take-Two Interactive, believes AI will remake the world of both the gamer and game designer.

“AI will enable truly immersive worlds and sophisticated narratives that put players at the center of the fantasy,” Gordon said. “Moreover, AI that influences fundamental game mechanics has the potential to increase engagement and draw players deeper into your game.”  

The first big opportunity for gen AI may be in gaming production. “That’s where we expect to see a major impact first,” said Anders Christofferson, a partner within Bain & Company’s media & entertainment practice.

In other professional tasks, such as creating presentations using software like PowerPoint and first drafts of speeches, gen AI is already doing days of work in minutes. Initial storyboard design and NPC dialogue creation are made for gen AI, and that will free up developer time to focus on the more immersive and creative parts of game making, Christofferson said.

Creating unpredictable worlds

A recent Bain study noted that AI is already taking on some tasks, including preproduction and planning out of game content. Soon it will play a larger role in developing characters, dialogue, and environments. Gaming executives, Bain’s research shows, expect AI to manage more than half of game development within five years to a decade. This may not lead to lower production costs — blockbuster games can run up total development costs of $1 billion — but AI will allow games to be delivered more quickly, and with enhanced quality.

Ultimately, the proliferation of gen AI should allow the development process of games to include the average gamer in content creation. This means that more games will offer what Christofferson calls a “create mode” allowing for increased user-generated content — Gibbs referred to it as “player-driven narratives.” 

The current human talent shortage, a labor issue that exists across the software engineering space, isn’t something AI will solve in the short-term. But it may free developers up to put more time into creative tasks and learn how best to use the new technology as they experiment. A recent CNBC study found that across the labor force, 72% of workers who use AI say it makes them more productive, consistent with research Microsoft has conducted on the impact of its Copilot AI in the workplace.

“GenAI is very nascent in gaming and the emerging landscape of players, services, etc. is very dynamic – changing by the day,” Christofferson said. “As with any emerging technologies, we expect lots of learning to take place regarding GenAI over the next few years.”

Given how much change is taking place in gaming, it may simply be too difficult to forecast AI’s scale at the moment, says Julian Togelius, associate professor of computer science and engineering at New York University. He summed up the current state of AI implementation as a “medium-size deal.”

“In the game development process, generative AI is already in use by lots of people. Programmers use Copilot and ChatGPT to help them write code, concept artists experiment with Stable Diffusion and Midjourney, and so on,” said Togelius. “There is also a big interest in automated game testing and other forms of AI-augmented QA,” he added. 


The Microsoft and Inworld partnership will test two of the key AI implications for the video game industry: design-time assistance with narrative generation, and generative AI running live as players interact with NPCs. If a game has thousands of NPCs in it, having AI generate individual backstories for each of them can save enormous development time, and having generative AI working while players interact with NPCs could also enhance gameplay.
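A hedged sketch of the design-time side of that idea, batch-generating short backstories from structured seeds, is shown below. It is illustrative only; the `generate` callable stands in for a hypothetical text-generation API, not Inworld’s or Xbox’s actual tooling, and all NPC data is made up.

```python
# Illustrative design-time pipeline: generate short NPC backstories from seeds.
import json

npc_seeds = [
    {"name": "Mira the Blacksmith", "town": "Eastmoor", "trait": "secretly afraid of fire"},
    {"name": "Old Fenwick", "town": "Eastmoor", "trait": "claims to have met a dragon"},
]

def backstory_prompt(seed: dict) -> str:
    return (
        "Write a three-sentence backstory for a video game NPC.\n"
        f"Details: {json.dumps(seed)}\n"
        "Stay consistent with the details and do not invent new place names."
    )

def generate_backstories(seeds, generate):
    # One call per NPC; a real pipeline would batch, cache and review the output.
    return {seed["name"]: generate(backstory_prompt(seed)) for seed in seeds}

dummy = lambda prompt: "Placeholder backstory produced by a model."
print(generate_backstories(npc_seeds, dummy)["Mira the Blacksmith"])
```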

The latter will be trickier to achieve, Togelius said. “I think this is much harder to get right, partly because of the well-known hallucination issues of LLMs, and partly because games are not designed for this,” he said. 

Hallucinations occur when large language models (LLMs) generate responses that deviate from context or rational meaning: the output is grammatical but nonsensical, referring to things that don't exist or have no relation to the given context. "Video games are designed for predictable, hand-crafted NPCs that don't veer off script and start talking about things that don't exist in the game world," Togelius said.

Traditionally, NPCs behave in predictable ways that have been hand-authored by a designer or design team. Predictability, in fact, is a core tenet of the video game world and its design process. Open-ended games are thrilling because of their sense of infinite possibility, but to function reliably they have a great deal of control and predictability built into them. Unpredictability is a new realm for games, and it could be a barrier to wider adoption of AI. Working out this balance will be key to moving forward with AI.
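
One simple way a studio could work within that constraint is to keep a generated line only if it stays inside the authored world. The toy sketch below illustrates the idea; the entity list, fallback line, and filtering heuristic are invented for the example, not drawn from any shipping game.

```python
# Toy guard for live NPC dialogue: reject model replies that name entities
# outside the authored game world and fall back to a hand-written line.
# Entity list, fallback text, and heuristic are illustrative assumptions.
KNOWN_ENTITIES = {"ironhold", "mira", "eldoria"}
FALLBACK_LINE = "I've said all I know about that, traveler."

def mentions_unknown_entity(reply: str, known: set[str]) -> bool:
    # Naive check: flag capitalized tokens that aren't on the known-entity list.
    words = [w.strip(".,!?'\"") for w in reply.split()]
    proper_nouns = {w.lower() for w in words if w[:1].isupper()}
    return bool(proper_nouns - known)

def safe_npc_reply(raw_reply: str) -> str:
    # Keep the generated line only if it stays inside the authored world.
    if mentions_unknown_entity(raw_reply, KNOWN_ENTITIES):
        return FALLBACK_LINE
    return raw_reply

print(safe_npc_reply("Ironhold lies beyond the eastern hills."))      # kept
print(safe_npc_reply("Atlantis lies west of here, across the sea."))  # falls back
```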

“I think we are going to see modern AI in more and more places in games and game development very soon,” Togelius said. “And we will need new designs that work with the strengths and weaknesses of generative AI.”

The techno-optimists and doomsdayers inside Silicon Valley’s most dangerous AI debate https://digitaltechblog.com/the-techno-optimists-and-doomsdayers-inside-silicon-valleys-most-dangerous-ai-debate/ https://digitaltechblog.com/the-techno-optimists-and-doomsdayers-inside-silicon-valleys-most-dangerous-ai-debate/#respond Sun, 17 Dec 2023 16:19:34 +0000 https://digitaltechblog.com/the-techno-optimists-and-doomsdayers-inside-silicon-valleys-most-dangerous-ai-debate/

WASHINGTON, DC – SEPTEMBER 13: OpenAI CEO Sam Altman speaks with reporters on his arrival to the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, DC, on September 13, 2023. (Photo by Elizabeth Frantz for The Washington Post via Getty Images)


Now more than a year after ChatGPT's introduction, the biggest AI story of 2023 may have turned out to be less the technology itself than the drama in the OpenAI boardroom over its rapid advancement. With the ousting, and subsequent reinstatement, of Sam Altman as CEO, the underlying tension for generative artificial intelligence going into 2024 is clear: AI is at the center of a huge divide between those who are fully embracing its rapid pace of innovation and those who want it to slow down because of the many risks involved.

The debate — known within tech circles as e/acc vs. decels — has been making the rounds in Silicon Valley since 2021. But as AI grows in power and influence, it’s increasingly important to understand both sides of the divide.

Here’s a primer on the key terms and some of the prominent players shaping AI’s future.

e/acc and techno-optimism

The term “e/acc” stands for effective accelerationism.

In short, those who are pro-e/acc want technology and innovation to be moving as fast as possible.

“Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness,” the backers of the concept explained in the first-ever post about e/acc.

In terms of AI, it is "artificial general intelligence," or AGI, that underlies the debate here. AGI is a super-intelligent AI so advanced that it can do things as well as or better than humans. AGIs can also improve themselves, creating an endless feedback loop with limitless possibilities.


Some think that AGIs will have the capability to bring about the end of the world, becoming so intelligent that they figure out how to eradicate humanity. But e/acc enthusiasts choose to focus on the benefits that an AGI can offer. "There is nothing stopping us from creating abundance for every human alive other than the will to do it," the founding e/acc substack explained.

The founders of the e/acc movement have been shrouded in mystery. But @basedbeffjezos, arguably the biggest proponent of e/acc, recently revealed himself to be Guillaume Verdon after his identity was exposed by the media.

Verdon, who formerly worked for Alphabet, X, and Google, is now working on what he calls the “AI Manhattan project” and said on X that “this is not the end, but a new beginning for e/acc. One where I can step up and make our voice heard in the traditional world beyond X, and use my credentials to provide backing for our community’s interests.”

Verdon is also the founder of Extropic, a tech startup which he described as “building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics.”

An AI manifesto from a top VC

One of the most prominent e/acc supporters is venture capitalist Marc Andreessen of Andreessen Horowitz, who previously called Verdon the “patron saint of techno-optimism.”

Techno-optimism is exactly what it sounds like: believers think more technology will ultimately make the world a better place. Andreessen wrote the Techno-Optimist Manifesto, a 5,000-plus word statement that explains how technology will empower humanity and solve all of its material problems. Andreessen even goes as far as to say that “any deceleration of AI will cost lives,” and it would be a “form of murder” not to develop AI enough to prevent deaths.

Another techno-optimist piece he wrote, called Why AI Will Save the World, was reposted by Yann LeCun, chief AI scientist at Meta, who is known as one of the "godfathers of AI" after winning the prestigious Turing Award for his breakthroughs in AI.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

LeCun labels himself on X as a “humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism.”

LeCun, who recently said that he doesn’t expect AI “super-intelligence” to arrive for quite some time, has served as a vocal counterpoint in public to those who he says “doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good.”

Meta's embrace of open-source AI underlies LeCun's belief that the technology will offer more potential than harm, while others have pointed to the dangers of a business model like Meta's, which pushes for widely available gen AI models to be placed in the hands of many developers.

AI alignment and deceleration

In March, an open letter by Encode Justice and the Future of Life Institute called for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter was endorsed by prominent figures in tech, such as Elon Musk and Apple co-founder Steve Wozniak.

OpenAI CEO Sam Altman addressed the letter back in April at an MIT event, saying, “I think moving with caution and an increasing rigor for safety issues is really important. The letter I don’t think was the optimal way to address it.”


Altman was caught up in the battle anew when the OpenAI boardroom drama played out and the original directors of the company's nonprofit arm grew concerned about the rapid rate of progress in light of OpenAI's stated mission "to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity."

Some of the ideas from the open letter are key to decels, supporters of AI deceleration. Decels want progress to slow down because the future of AI is risky and unpredictable, and one of their biggest concerns is AI alignment.

The AI alignment problem tackles the idea that AI will eventually become so intelligent that humans won’t be able to control it.

“Our dominance as a species, driven by our relatively superior intelligence, has led to harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future — chimps are in zoos. Advanced AI systems could similarly impact humanity,” said Malo Bourgon, CEO of the Machine Intelligence Research Institute.

AI alignment research, such as MIRI’s, aims to train AI systems to “align” them with the goals, morals, and ethics of humans, which would prevent any existential risks to humanity. “The core risk is in creating entities much smarter than us with misaligned objectives whose actions are unpredictable and uncontrollable,” Bourgon said.

Government and AI’s end-of-the-world issue

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations. She recently told CNBC that the "mass scale death" AI could cause if used to oversee nuclear weapons is an issue that requires immediate attention.

But "staring at the problem" won't do any good, she stressed. "The whole point is addressing the risks and finding solution sets that are most effective," she said. "It's dual-use tech at its purest," she added. "There is no case where AI is more of a weapon than a solution." For example, large language models will become virtual lab assistants and accelerate medicine, but they could also help nefarious actors identify the best and most transmissible pathogens to use for an attack. This is among the reasons AI can't be stopped, she said. "Slowing down is not part of the solution set," Parthemore said.


Earlier this year, her former employer, the DoD, said that in its use of AI systems there will always be a human in the loop. That's a protocol she says should be adopted everywhere. "The AI itself cannot be the authority," she said. "It can't just be, 'the AI says X.' … We need to trust the tools, or we should not be using them, but we need to contextualize. … There is enough general lack of understanding about this toolset that there is a higher risk of overconfidence and overreliance."

Government officials and policymakers have started taking note of these risks. In July, the Biden-Harris administration announced that it secured voluntary commitments from AI giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to “move towards safe, secure, and transparent development of AI technology.”

Just a few weeks ago, President Biden issued an executive order that established new standards for AI safety and security, though stakeholder groups across society are concerned about its limitations. Similarly, the U.K. government introduced the AI Safety Institute in early November, the first state-backed organization focused on navigating AI.

Britain's Prime Minister Rishi Sunak (L) attends an in-conversation event with X (formerly Twitter) CEO Elon Musk (R) in London on November 2, 2023, following the UK Artificial Intelligence (AI) Safety Summit. (Photo by Kirsty Wigglesworth / POOL / AFP via Getty Images)


Amid the global race for AI supremacy, and links to geopolitical rivalry, China is implementing its own set of AI guardrails.

Responsible AI promises and skepticism

OpenAI is currently working on Superalignment, which aims to “solve the core technical challenges of superintelligent alignment in four years.”

At its recent AWS re:Invent 2023 conference, Amazon announced new capabilities for AI innovation alongside the implementation of responsible AI safeguards across the organization.

“I often say it’s a business imperative, that responsible AI shouldn’t be seen as a separate workstream but ultimately integrated into the way in which we work,” says Diya Wynn, the responsible AI lead for AWS.

According to a study commissioned by AWS and conducted by Morning Consult, responsible AI is a growing business priority for 59% of business leaders, with about half (47%) planning on investing more in responsible AI in 2024 than they did in 2023.

Although factoring in responsible AI may slow down AI’s pace of innovation, teams like Wynn’s see themselves as paving the way towards a safer future. “Companies are seeing value and beginning to prioritize responsible AI,” Wynn said, and as a result, “systems are going to be safer, secure, [and more] inclusive.”

Bourgon isn’t convinced and says actions like those recently announced by governments are “far from what will ultimately be required.”

He predicts that AI systems are likely to advance to catastrophic levels as early as 2030, and he says governments need to be prepared to indefinitely halt AI systems until leading AI developers can "robustly demonstrate the safety of their systems."



