The big tech players are muscling into AI, as recent developments from Google's PaLM and Microsoft-backed OpenAI show


Financial Times
April 14, 2022


Events over the past week or so show that AI is still advancing by leaps and bounds. They also confirm that the biggest tech companies are in the driving seat.


But given the potentially harmful uses to which the latest generation of AI could be put, the world might not feel any more equitable — nor any safer — if this technology were more widely diffused.


Two developments in the AI realm have brought this point home with renewed force. 


One involves a new Google language model called PaLM. According to Percy Liang, an associate professor of computer science at Stanford University, this is the biggest advance in such systems since the release of OpenAI’s GPT-3, the automatic-writing machine that took the AI world by storm two years ago.


PaLM’s main claim to fame is that it can explain why a joke is funny, with a reasonable degree of accuracy. 

That feat suggests that machines are starting to make headway on hard problems such as common sense and reasoning — though as always in the field of AI, designing a system to pull off one party trick doesn’t guarantee advances on a wider front.


The other development last week, from OpenAI, represents a leap forward in the new field of “multimodal” systems, which work with both text and images. 

Microsoft has funded OpenAI to the tune of $1bn and has an exclusive right to commercialise its technology.

OpenAI’s latest system, known as Dall-E 2, takes a text prompt (such as “an astronaut riding a horse”) and turns it into a photorealistic image. So long, Photoshop.


Given their obvious applications, the developers of systems like these are trying to rush them from the research lab into the mainstream. 

They are likely to have an impact in any data-rich field where machines can make recommendations or come up with suggested answers, says Liang. 

Already, OpenAI’s technology is being used to suggest lines to software coders.


Writers and graphic designers could be next. It still takes a human to pick through the output to find what is genuinely useful. But as engines of creativity, these systems are unrivalled.


In some ways, however, they are also the worst-behaved of all AI models, and they come wrapped in warnings. 

One problem is their impact on global warming: they take a huge amount of computing effort to train. And they reproduce all the biases of the (very large) data sets they are trained on.


They are also natural misinformation factories, mindlessly churning out their best guesses in response to prompts without any understanding of what they’re producing. Just because they know how to sound coherent doesn’t mean they are.


And then there is the risk of intentional misuse. Cohere, a start-up that has built a smaller version of GPT-3, reserves the right in its terms of service to cut users off for things like the “sharing of divisive generated content in order to turn a community against itself”.


The potential harms of generative AI models like these are not limited to language.


A company that has built a machine learning system to help in drug discovery, for instance, recently experimented with changing some of the parameters in its model to see if it would come up with less benign substances. 

As reported in Nature, the system promptly started designing chemical warfare agents, including some that were said to be more dangerous than anything that was publicly known.


For critics of Big Tech, it might set alarm bells ringing to think that a handful of powerful and unaccountable corporations control these tools. But it might be even more worrying if they didn’t.


It has been considered good practice in the past to publish models like these, so that other researchers can test the claims and anyone who uses them can see how they work. But given the risks, the developers of today’s largest models have kept them under wraps.


This is already fuelling a hunt for alternatives, with start-ups and open-source developers trying to wrest some control of the technology from Big Tech. 

Cohere, for instance, was able to raise $125mn in venture capital last month. It may sound prohibitively expensive for a start-up to compete with Google, but Cohere has an agreement to use the search company’s most powerful AI chips to train its own model.


Meanwhile, a group of independent researchers set out to build a similar system to GPT-3 with the express aim of putting it into the public domain. 

The group, who call themselves EleutherAI, released an open-source version of a smaller working model earlier this year.


Such moves suggest Big Tech won’t have this field all to themselves. Who will police the frontiers of this powerful new technology is another matter.


Originally published at https://www.ft.com on April 14, 2022.


Names mentioned


Percy Liang, an associate professor of computer science at Stanford University

Cohere

EleutherAI
