Institute for Continuous Health Transformation
(InHealth)
Joaquim Cardoso MSc
Founder and Chief Researcher & Editor
January 28, 2023
EXECUTIVE SUMMARY
ChatGPT, a chatbot developed by OpenAI, has quickly gained popularity and mainstream acceptance, with Microsoft investing billions of dollars into the company and working to incorporate it into its office software.
- This surge in attention has prompted pressure on tech giants like Meta and Google to move faster on AI development, potentially sweeping safety concerns aside.
- Some experts fear that this rush to market could expose billions of people to potential harms, such as sharing inaccurate information, before trust and safety experts have been able to study the risks.
- However, others in the field share OpenAI’s philosophy that releasing the tools to the public is the only way to assess real world harms.
DEEP DIVE
Big Tech was moving cautiously on AI. Then came ChatGPT.
The Washington Post
Nitasha Tiku
Fri, January 27, 2023
Three months before ChatGPT debuted in November, Facebook’s parent company Meta released a similar chatbot.
But unlike the phenomenon that ChatGPT instantly became, with more than a million users in its first five days, Meta’s Blenderbot was boring, said Meta’s chief artificial intelligence scientist, Yann LeCun.
“The reason it was boring was because it was made safe,” LeCun said last week at a forum hosted by AI consulting company Collective[i].
He blamed the tepid public response on Meta being “overly careful about content moderation,” like directing the chatbot to change the subject if a user asked about religion.
ChatGPT, on the other hand, will converse about the concept of falsehoods in the Quran, write a prayer for a rabbi to deliver to Congress and compare God to a flyswatter.
ChatGPT is quickly going mainstream now that Microsoft — which recently invested billions of dollars in the company behind the chatbot, OpenAI — is working to incorporate it into its popular office software and selling access to the tool to other businesses.
The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside, according to interviews with six current and former Google and Meta employees, some of whom spoke on the condition of anonymity because they were not authorized to speak.
At Meta, employees have recently shared internal memos urging the company to speed up its AI approval process to take advantage of the latest technology, according to one of them.
Google, which helped pioneer some of the technology underpinning ChatGPT, recently issued a “code red” around launching AI products and proposed a “green lane” to shorten the process of assessing and mitigating potential harms, according to a report in the New York Times.
ChatGPT, along with text-to-image tools such as DALL-E 2 and Stable Diffusion, is part of a new wave of software called generative AI.
They create works of their own by drawing on patterns they’ve identified in vast troves of existing, human-created content.
This technology was pioneered at big tech companies like Google that in recent years have grown more secretive, announcing new models or offering demos but keeping the full product under lock and key.
Meanwhile, research labs like OpenAI rapidly launched their latest versions, raising questions about how corporate offerings, like Google’s language model LaMDA, stack up.
Tech giants have been skittish since public debacles like Microsoft’s Tay, which it took down in less than a day in 2016 after trolls prompted the bot to call for a race war, suggest Hitler was right and tweet “Jews did 9/11.”
Meta defended Blenderbot and left it up after it made racist comments in August, but pulled down another AI tool, called Galactica, in November after just three days amid criticism over its inaccurate and sometimes biased summaries of scientific research.
“People feel like OpenAI is newer, fresher, more exciting and has fewer sins to pay for than these incumbent companies, and they can get away with this for now,” said a Google employee who works in AI, referring to the public’s willingness to accept ChatGPT with less scrutiny.
Some top talent has jumped ship to nimbler start-ups, like OpenAI and Stability AI.
Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks.
Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.
“The pace of progress in AI is incredibly fast, and we are always keeping an eye on making sure we have efficient review processes, but the priority is to make the right decisions, and release AI models and products that best serve our community,” said Joelle Pineau, managing director of Fundamental AI Research at Meta.
“We believe that AI is foundational and transformative technology that is incredibly useful for individuals, businesses and communities,” said Lily Lin, a Google spokesperson.
“We need to consider the broader societal impacts these innovations can have. We continue to test our AI technology internally to make sure it’s helpful and safe.”
Microsoft’s chief of communications, Frank Shaw, said his company works with OpenAI to build in extra safety mitigations when it uses AI tools like DALL-E 2 in its products.
“Microsoft has been working for years to both advance the field of AI and publicly guide how these technologies are created and used on our platforms in responsible and ethical ways,” Shaw said.
OpenAI declined to comment.
The technology underlying ChatGPT isn’t necessarily better than what Google and Meta have developed, said Mark Riedl, professor of computing at Georgia Tech and an expert on machine learning.
But OpenAI’s practice of releasing its language models for public use has given it a real advantage.
“For the last two years they’ve been using a crowd of humans to provide feedback to GPT,” said Riedl, such as giving a “thumbs down” for an inappropriate or unsatisfactory answer, a process called “reinforcement learning from human feedback.”
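The “thumbs down” loop Riedl describes boils down to learning a reward signal from human labels. Below is a deliberately toy sketch of that idea, not OpenAI’s actual pipeline: the prompts, answers and feature scheme are hypothetical, and a production reward model would be a large neural network whose scores then steer the language model itself through reinforcement learning (typically PPO), rather than just re-ranking candidate answers.

```python
# Toy illustration of learning from thumbs-up / thumbs-down feedback.
# All names and data are hypothetical; this is NOT OpenAI's pipeline.

import math
import random

# Hypothetical logged feedback: (prompt, answer, thumbs_up)
feedback_log = [
    ("capital of France?", "Paris", 1),
    ("capital of France?", "Lyon is the capital", 0),
    ("2 + 2?", "4", 1),
    ("2 + 2?", "22", 0),
]

def featurize(prompt: str, answer: str) -> dict:
    """Toy bag-of-words features; a real reward model is a neural network over tokens."""
    words = (prompt + " " + answer).lower().split()
    return {w: 1.0 for w in words}

# Train a tiny logistic-regression "reward model" on the human labels.
weights: dict = {}
lr = 0.5
for _ in range(200):
    prompt, answer, label = random.choice(feedback_log)
    feats = featurize(prompt, answer)
    score = sum(weights.get(f, 0.0) * v for f, v in feats.items())
    prob_up = 1.0 / (1.0 + math.exp(-score))
    grad = label - prob_up                      # push score toward the human label
    for f, v in feats.items():
        weights[f] = weights.get(f, 0.0) + lr * grad * v

def reward(prompt: str, answer: str) -> float:
    """Higher means the model predicts a human thumbs-up for this answer."""
    feats = featurize(prompt, answer)
    score = sum(weights.get(f, 0.0) * v for f, v in feats.items())
    return 1.0 / (1.0 + math.exp(-score))

# The learned reward can rank candidate answers; in full RLHF this signal
# is used to fine-tune the language model itself.
print(reward("capital of France?", "Paris"))
print(reward("capital of France?", "Lyon is the capital"))
```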
Silicon Valley’s sudden willingness to consider taking more reputational risk arrives as tech stocks are tumbling.
When Google laid off 12,000 employees last week, CEO Sundar Pichai wrote that the company had undertaken a rigorous review to focus on its highest priorities, twice referencing its early investments in AI.
A decade ago, Google was the undisputed leader in the field.
- It acquired the cutting-edge AI lab DeepMind in 2014 and
- open-sourced its machine learning software TensorFlow in 2015.
- By 2016, Pichai pledged to transform Google into an “AI first” company.
- The next year, Google released transformers — a pivotal piece of software architecture that made the current wave of generative AI possible.
The company kept rolling out state-of-the-art technology that propelled the entire field forward, deploying some AI breakthroughs in understanding language to improve Google search.
Inside big tech companies, the system of checks and balances for vetting the ethical implications of cutting-edge AI isn’t as established as privacy or data security.
Typically, teams of AI researchers and engineers publish papers on their findings, incorporate their technology into the company’s existing infrastructure or develop new products, work that can sometimes clash with teams focused on responsible AI because of pressure to get innovations to the public sooner.
Google released its AI principles in 2018, after facing employee protest over Project Maven, a contract to provide computer vision for Pentagon drones, and consumer backlash over a demo for Duplex, an AI system that would call restaurants and make a reservation without disclosing it was a bot.
In August last year, Google began giving consumers access to a limited version of LaMDA through its app AI Test Kitchen.
Google has not yet released LaMDA fully to the general public, despite plans to do so at the end of 2022, according to former Google software engineer Blake Lemoine, who told The Washington Post that he had come to believe LaMDA was sentient.
But the top AI talent behind these developments grew restless.
In the past year or so, top AI researchers from Google have left to launch start-ups around large language models, including Character.AI, Cohere, Adept, Inflection.AI and Inworld AI, in addition to search start-ups using similar models to develop a chat interface, such as Neeva, run by former Google executive Sridhar Ramaswamy.
Character.AI founder Noam Shazeer, who helped invent the transformer and other core machine learning architecture, said the flywheel effect of user data has been invaluable.
The first time he applied user feedback to Character.AI, which allows anyone to generate chatbots based on short descriptions of real people or imaginary figures, engagement rose by more than 30 percent.
Bigger companies like Google and Microsoft are generally focused on using AI to improve their massive existing business models, said Nick Frosst, who worked at Google Brain for three years before co-founding Cohere, a Toronto-based start-up building large language models that can be customized to help businesses.
One of his co-founders, Aidan Gomez, also helped invent transformers when he worked at Google.
“The space moves so quickly, it’s not surprising to me that the people leading are smaller companies,” said Frosst.
AI has been through several hype cycles over the past decade, but the furor over DALL-E and ChatGPT has reached new heights.
Soon after OpenAI released ChatGPT, tech influencers on Twitter began to predict that generative AI would spell the demise of Google search.
ChatGPT delivered simple answers in an accessible way and didn’t ask users to rifle through blue links.
Besides, after a quarter of a century, Google’s search interface had grown bloated with ads and marketers trying to game the system.
“Thanks to their monopoly position, the folks over at Mountain View have [let] their once-incredible search experience degenerate into a spam-ridden, SEO-fueled hellscape,” technologist Can Duruk wrote in his newsletter Margins, referring to Google’s hometown.
On the anonymous app Blind, tech workers posted dozens of questions about whether the Silicon Valley giant could compete.
“If Google doesn’t get their act together and start shipping, they will go down in history as the company who nurtured and trained an entire generation of machine learning researchers and engineers who went on to deploy the technology at other companies,” tweeted David Ha, a renowned research scientist who recently left Google Brain for Stability AI, the open-source start-up behind the text-to-image model Stable Diffusion.
AI engineers still inside Google shared his frustration, employees say.
For years, employees had sent memos about incorporating chat functions into search, viewing it as an obvious evolution, according to employees.
But they also understood that Google had justifiable reasons not to be hasty about switching up its search product, beyond the fact that responding to a query with one answer eliminates valuable real estate for online ads.
A chatbot that pointed to one answer directly from Google could increase its liability if the response was found to be harmful or plagiarized.
Chatbots like OpenAI’s ChatGPT routinely make factual errors and often switch their answers depending on how a question is asked.
Moving from providing a range of answers to queries that link directly to their source material, to using a chatbot to give a single, authoritative answer, would be a big shift that makes many inside Google nervous, said one former Google AI researcher.
The company doesn’t want to take on the role or responsibility of providing single answers like that, the person said.
Previous updates to search, such as adding Instant Answers, were done slowly and with great caution.
Inside Google, however, some of the frustration with the AI safety process came from the sense that cutting-edge technology was never released as a product because of fears of bad publicity — if, say, an AI model showed bias.
Meta employees have also had to deal with the company’s concerns about bad PR, according to a person familiar with the company’s internal deliberations who spoke on the condition of anonymity to discuss internal conversations.
Before launching new products or publishing research, Meta employees have to answer questions about the potential risks of publicizing their work, including how it could be misinterpreted, the person said.
Some projects are reviewed by public relations staff, as well as internal compliance experts who ensure the company’s products comply with its 2011 Federal Trade Commission agreement on how it handles user data.
To Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute, the prospect of Google sidelining its responsible AI team doesn’t necessarily signal a shift in power or safety concerns, because those warning of the potential harms were never empowered to begin with.
“If we were lucky, we’d get invited to a meeting,” said Gebru, who helped lead Google’s Ethical AI team until she was fired for a paper criticizing large language models.
From Gebru’s perspective, Google was slow to release its AI tools because the company lacked a strong enough business incentive to risk a hit to its reputation.
After the release of ChatGPT, however, perhaps Google sees a change to its ability to make money from these models as a consumer product, not just to power search or online ads, Gebru said.
“Now they might think it’s a threat to their core business, so maybe they should take a risk.”
Rumman Chowdhury, who led Twitter’s machine-learning ethics team until Elon Musk disbanded it in November, said she expects companies like Google to increasingly sideline internal critics and ethicists as they scramble to catch up with OpenAI.
“We thought it was going to be China pushing the U.S., but looks like it’s start-ups,” she said.
Originally published at https://www.washingtonpost.com on January 27, 2023.
Names mentioned (selected list)
- Meta’s chief artificial intelligence scientist, Yann LeCun.
- Rumman Chowdhury, who led Twitter’s machine-learning ethics team
- Microsoft’s chief of communications, Frank Shaw,
- Mark Riedl, professor of computing at Georgia Tech and an expert on machine learning.
- Character.AI founder Noam Shazeer,
- Nick Frosst, who worked at Google Brain for three years before co-founding Cohere
- Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute
- former Google software engineer Blake Lemoine,
- Aidan Gomez, Cohere co-founder who also helped invent transformers when he worked at Google.
- Can Duruk
- David Ha, a renowned research scientist