Time 100 AI Leaders: Sam Altman, CEO @ OpenAI (ChatGPT)


the health strategist . institute

institute for
health transformation and digital health,
data-driven & AI-powered


Joaquim Cardoso MSc.

Chief Research and Strategy Officer (CRSO), 
Chief Editor and
Independent Senior Advisor


Key points


  • Sam Altman gives humanity a lot of credit. The human race is smart and adaptable enough, he believes, to cope with the release of increasingly powerful AIs into the world — so long as those releases are safe and incremental.

  • OpenAI doubled down a few months later and launched GPT-4, the most powerful large language model ever made available to the public.

  • Within months of the release of ChatGPT, the U.S. and U.K. governments had announced high-priority new AI initiatives, and the G-7 group of wealthy democracies unveiled a plan to coordinate on setting guardrails for the technology.

  • For all the awe ChatGPT and GPT-4 provoked, their most significant legacy may be prompting humanity to imagine what might come next. Today’s most advanced systems can write convincing text, pass the bar exam, debug computer code. What happens when they become capable of making complex plans, deceiving users into carrying out their goals, and operating independently of human oversight?

  • OpenAI pioneered the safety technique of “reinforcement learning from human feedback,” the innovation that meant ChatGPT didn’t fall into the toxicity trap that doomed earlier generations of AI chatbots.

  • In June, OpenAI announced it would devote 20% of its substantial computing-power resources toward solving the problem of “superalignment” — how to ensure that AI systems far smarter than even the most intelligent human will act with humanity’s best interests at heart.

DEEP DIVE

TIME
BILLY PERRIGO
SEPTEMBER 7, 2023 


Sam Altman gives humanity a lot of credit. 


The human race is smart and adaptable enough, he believes, to cope with the release of increasingly powerful AIs into the world — so long as those releases are safe and incremental. 

“Society is capable of adapting, as people are much smarter and savvier than a lot of the so-called experts think,” Altman, the 38-year-old CEO of OpenAI, told TIME in May. “We can manage this.”


That philosophy not only explains why OpenAI decided to release ChatGPT, its world-shaking chatbot, in November 2022. 


It’s also why the company doubled down a few months later and launched GPT-4, the most powerful large language model ever made available to the public.


As well as making Altman, a former president of startup accelerator Y Combinator, one of the hottest names in tech, these releases proved him right, at least so far: humanity was able to quickly adapt to these tools without collapsing in on itself. Lawmakers — propelled in equal measures by fear and awe — began seriously discussing how AI should be regulated, a conversation Altman has long wanted them to begin in earnest.


Within months of the release of ChatGPT, the U.S. and U.K. governments had announced high-priority new AI initiatives, and the G-7 group of wealthy democracies unveiled a plan to coordinate on setting guardrails for the technology.


Humans are gradually learning how to (and perhaps more importantly, how not to) use AI tools in the pursuit of productive labor, education, and fun.


For all the awe ChatGPT and GPT-4 provoked, their most significant legacy may be prompting humanity to imagine what might come next. 


Today’s most advanced systems can write convincing text, pass the bar exam, debug computer code. 


What happens when they become capable of making complex plans, deceiving users into carrying out their goals, and operating independently of human oversight? 


OpenAI’s breakout year made clear that the world’s most advanced AI companies, with enough supercomputers, data, and money, may soon be able to summon systems capable of such feats.


Machine learning is a new paradigm of computing; unlike earlier software, which was hard-coded for specific purposes, it results in systems that reveal their capabilities only after they’re built. As these systems are made more powerful, they also grow more dangerous. 


And right now at least, even the smartest minds in computer science don’t know how to reliably constrain them.


To its credit, as well as pushing the frontier in AI’s so-called capabilities, OpenAI under Altman has made tackling this unsolved problem a key part of its approach. 


It pioneered the safety technique of “reinforcement learning from human feedback,” the innovation that meant ChatGPT didn’t fall into the toxicity trap that doomed earlier generations of AI chatbots. 


(While that technique isn’t perfect — it relies on low-paid human labor and doesn’t always ensure chatbots respond with accurate information — it’s among the best the industry has so far.) 


And in June, OpenAI announced it would devote 20% of its substantial computing-power resources toward solving the problem of “superalignment” — how to ensure that AI systems far smarter than even the most intelligent human will act with humanity’s best interests at heart.


Altman’s philosophy points clearly to the task ahead. 


“It is our responsibility to educate policymakers and the public about what we think is happening, what we think may happen, and to put technology out into the world so people can see it,” Altman says. “It is our institutions’ and civil society’s role to figure out what we as a society want.”


This is an excerpt from the article “Time 100 AI”

To continue reading:

https://time.com/collection/time100-ai/6309022/sam-altman-ai/

Illustration by TIME; reference image: Joel Saget — AFP/Getty Images


