What to Expect in 2023 in AI


Institute for Health Transformation

Joaquim Cardoso MSc
January 1, 2023


SOURCE:


HAI Stanford — Human Centered AI
Shana Lynch

Dec 14, 2022


HAI faculty share their predictions for the coming year.


This year’s biggest headline might have been generative AI, but what should we expect from the field in 2023? 


Four Stanford HAI faculty members describe what they expect the biggest advances, opportunities, and challenges will be for the coming year.


  • 1. Better Foundation Models 

  • 2. Video’s Generative Moment 

  • 3. Changing Ecosystem, More Government Funding

  • 4. Immature AI Proliferates


DALL-E

1. Better Foundation Models


Foundation models — giant models that can be used for a variety of downstream tasks without additional training — have been seeing huge progress, and that will only improve next year, …


… says Chris Manning, the Thomas M. Siebel Professor in Machine Learning in the School of Engineering, professor of linguistics and of computer science, director of the Stanford Artificial Intelligence Laboratory, and associate director of Stanford HAI. 


He expects to see improvements in data and data curation …


… — “not just bigger data collections, but large efforts into improving the quality of the data and cleaning out toxic or biased information that comes from random trawls of the web.”


One area he expects to see growth: sparse models. 


A sparse model represents complex data in a more efficient or compact form, which can be faster to compute and require less memory to store.


“Generally, I expect to see algorithmic advances that let you have more scale,” he says.
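The efficiency idea behind sparsity can be shown with a minimal sketch (an illustration, not from the article, and far simpler than the sparse models Manning refers to, such as mixture-of-experts networks): by storing only nonzero entries, memory and compute scale with the number of nonzeros rather than with the full dimensionality.

```python
# Illustrative sketch: a sparse vector stores only its nonzero entries
# as {index: value}, so storage and arithmetic touch far fewer elements
# than the dense representation when most entries are zero.

def to_sparse(dense):
    """Map a dense list to a {index: value} dict of nonzero entries."""
    return {i: v for i, v in enumerate(dense) if v != 0}

def sparse_dot(a, b):
    """Dot product that only visits indices nonzero in both vectors."""
    if len(a) > len(b):
        a, b = b, a  # iterate over the smaller vector
    return sum(v * b[i] for i, v in a.items() if i in b)

dense_a = [0, 0, 3, 0, 5, 0, 0, 0]
dense_b = [1, 0, 2, 0, 0, 0, 0, 4]

a, b = to_sparse(dense_a), to_sparse(dense_b)
print(len(a), len(b))    # 2 and 3 stored entries instead of 8 each
print(sparse_dot(a, b))  # only index 2 overlaps: 3 * 2 = 6
```

The same principle, applied to model weights or activations rather than vectors, is what lets sparse architectures grow in parameter count without a proportional growth in compute.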



2. Video’s Generative Moment


While text and image generative AI was this year’s big story, video will be a big focus in 2023, …


… says Percy Liang, associate professor of computer science and director of Stanford HAI’s Center for Research on Foundation Models.

“Capturing long-range dependencies is challenging, but technology will continue to get better, at least with shorter videos to start,” he says. 

“We may be getting to a point next year where we won’t be able to distinguish whether a human or computer generated a video. 

Up to today, if you watch a video, you expect it to be real, but we’re seeing that hard line start to evaporate.”



DALL-E

3. Changing Ecosystem, More Government Funding


What does a healthy AI field look like? 


Fei-Fei Li, the Sequoia Capital Professor at Stanford University, professor of computer science, and co-director of Stanford HAI, notes that …

… too many startups are still depending on the stability of open models, unable to develop their own. 


But with the major attention on foundation models this year and venture money flowing, she expects to see more players come to the field in 2023.



Compute and data are bottlenecks for startups, though, so the federal government may step up investment in compute resources like a National Research Cloud or a Multilateral AI Research Institute.


“There’s concern that startups, which would make the ecosystem more vibrant and diverse, aren’t getting enough resources,” she says.




4. Immature AI Proliferates


2023 will see a “shocking rollout of AI way before it’s mature or ready to go,” …


… says Russ Altman, the Kenneth Fong Professor in the School of Engineering; professor of bioengineering, of genetics, of medicine, and of biomedical data science; and associate director of Stanford HAI. 

“I’m worried that our current government paralysis is not going to move forward on any kind of meaningful regulation, and some areas certainly need regulation.”



He points to the recent proposal in San Francisco to allow police to deploy potentially lethal remote-controlled robots, or to the potential misuse of tools that can generate human-like text from a short prompt — think how many smart fifth-graders could skip an essay assignment by asking an agent for help, he says.


For 2023, “I expect a hit parade of AI that’s not ready for prime time but coming out because it’ll be driven by over-zealous industry,” Altman says. 


“In some ways, it will make the whole mission of HAI more relevant and critical.”


Originally published at https://hai.stanford.edu


Names mentioned


Chris Manning, the Thomas M. Siebel Professor in Machine Learning in the School of Engineering, professor of linguistics and of computer science, director of the Stanford Artificial Intelligence Laboratory, and associate director of Stanford HAI.

Percy Liang, associate professor of computer science and director of Stanford HAI’s Center for Research on Foundation Models.

Fei-Fei Li, the Sequoia Capital Professor at Stanford University, professor of computer science, and co-director of Stanford HAI.

Russ Altman, the Kenneth Fong Professor in the School of Engineering; professor of bioengineering, of genetics, of medicine, and of biomedical data science; and associate director of Stanford HAI.
