2023 is predicted as a breakout year for AI, with the launch of OpenAI’s ChatGPT [Large Language Model  — LLM]


the health strategist

multidisciplinary institute for 
continuous health transformation & digital health


Joaquim Cardoso MSc
Chief Researcher, Editor and Senior Advisor
January 4, 2023



Executive Summary


  • Venture capital investors are predicting that 2023 will be the breakout year for artificial intelligence (AI), with the launch of OpenAI’s ChatGPT language-generation model in November of last year being a particularly influential event. 

  • More than 160 start-ups have been launched to explore the potential of generative AI, which has the ability to recognize and replicate patterns of text, images, computer code, audio, and video. 

  • While the exact killer application of such models is not yet known, it is believed that they could boost the productivity of workers in creative industries, or potentially replace them altogether. 


  • Low code/no code software platforms are also growing in popularity, enabling non-expert users to create their own powerful mobile and web apps. At the same time, there are risks associated with the use of AI, including the possibility of wrong or hallucinatory output.


  • Users are advised to treat AI with caution.







DEEP DIVE


This is a republication of the article “A breakout year for artificial intelligence”, published in the Financial Times on January 3, 2023.



Farewell crypto, hello generative AI. 


With the selective amnesia that is one of the defining characteristics of their trade, venture capital investors have already moved on from their unfortunate dalliance with the imploding FTX crypto exchange and fallen in love with the next big thing. 

This year, they say, will be the breakout year for artificial intelligence. 

Although that statement might have been made in any of the past few years, this time they really mean it.


There are some good reasons to believe this assertion may be true. 


The launch in November of OpenAI’s ChatGPT language-generation model, with its astonishing ability to generate paragraphs of convincing text at remarkable speed, has opened users’ eyes to the power of generative AI. 

Large language models, such as ChatGPT, have been trained on vast amounts of data ingested from the internet and are almost instantaneously able to recognise and replicate patterns of text, images, computer code, audio and video. 

No one is quite sure yet what exactly their killer application will be. But more than 160 start-ups have already been launched to explore the answer.
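To make that opportunity concrete, the sketch below shows roughly how one of those start-ups might call a hosted language model to generate text. It is a minimal illustration only, assuming the pre-1.0 OpenAI Python client and the text-davinci-003 model available around the time of writing; the API key, prompt and parameters are placeholders, not a recommendation.

```python
# Minimal sketch: generating text with a hosted large language model.
# Assumes the pre-1.0 "openai" Python package and a valid API key;
# model name, prompt and parameters are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # text-generation model available in early 2023
    prompt="Write a two-sentence product description for a reusable water bottle.",
    max_tokens=120,
    temperature=0.7,  # higher values produce more varied output
)

print(response.choices[0].text.strip())
```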




The promise of generative AI is that it can boost the productivity of workers in creative industries, if not replace them altogether. 


Just as machines augmented muscle in the industrial revolution, so AI can augment brainpower in the cognitive revolution. 

This may be particularly good news for jaded copywriters, computer coders, TV scriptwriters and desperate school children late with their homework. 

But it may also have a big impact on areas as diverse as the automation of customer services, marketing material, scientific research and digital assistants. 

One intriguing open question is whether it will reinforce the dominance of existing search engines, such as Google’s, or usurp them.


Generative AI is a good example of a broader trend that is taking powerful technologies out of the hands of experts and putting them in those of everyday users. 


This democratisation of access may have huge implications, and create extraordinary opportunities, for many businesses. 

The increasing popularity of “low code/no code” software platforms, for example, will enable increasing numbers of non-expert users to create their own powerful mobile and web apps. 

No longer will product managers be so beholden to tech teams that set their own agenda.




This obviously carries risks, as well as opportunities. 


One of the biggest is that the output of generative AI is often wrong, or hallucinatory. 

Such models can sometimes give different answers to the same question depending on their human inputs and training data. 

Deterministic technologies, such as a pocket calculator, will always give you the same answer when you tap in 19 x 37. 

Probabilistic technologies, such as generative AI, will only give you a statistically probable approximation of an answer. 

They are “stochastic parrots”, as the former Google researcher Timnit Gebru described them.

For that reason, Stack Overflow, a Q&A website for computer programmers, has already banned ChatGPT-generated responses because they cannot be trusted.
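The calculator contrast above can be made concrete in a few lines of code: a deterministic function returns the same value on every run, while a generative model samples from a probability distribution and so may answer the same prompt differently each time. The toy vocabulary and probabilities below are invented purely for illustration.

```python
import random

# Deterministic: a calculator-style computation returns the same result every time.
def multiply(a: int, b: int) -> int:
    return a * b

assert multiply(19, 37) == 703  # identical on every run

# Probabilistic (toy illustration): a generative model samples its next token from
# a probability distribution, so repeated runs can diverge. These token
# probabilities are invented for illustration only.
next_token_probs = {"703": 0.90, "713": 0.06, "603": 0.04}

def sample_answer() -> str:
    tokens, weights = zip(*next_token_probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print([sample_answer() for _ in range(5)])  # e.g. ['703', '703', '713', '703', '703']
```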


The clear imperfections of generative AI put a particular responsibility on those who are developing these models to consider how they may be abused, before releasing them into the wild. 


But that is becoming increasingly difficult given the speed at which these models are developing. 

Users may well enjoy and profit from their use but they should always treat them with caution. While generative AI can help inspire the first thought, it should never be relied upon for the last word.
