What ChatGPT Reveals About the Urgent Need for Responsible AI


inhealth — institute for health transformation

Joaquim Cardoso MSc — 
Founder, CEO and CSO
January 21, 2023


As Generative AI democratizes adoption, new challenges loom for organizations.


ABHISHEK GUPTA, FRANÇOIS CANDELON, STEVEN D. MILLS, LEONID ZHUKOV

JANUARY 19, 2023


The need to integrate Responsible AI (RAI) practices has become an organizational imperative. 


As Generative AI systems such as ChatGPT gain traction, it will quickly become easier for companies to adopt AI, thanks to lowered barriers to access. 

Already, as many experiment with these systems, they are unearthing serious ethical issues: 


Our research has shown that investing in RAI early is essential; it minimizes failures as companies scale the development and deployment of AI systems within their organization. 


But we’ve also found that it takes three years on average for an RAI program to achieve maturity. 

Moreover, a significant gap persists between commitment and action: All too often, organizations pledge to implement RAI as they scale AI but are stymied by a lack of clear leadership, resource allocation, and alignment with their purpose and values.


With Generative AI, the traditional challenges to adopting RAI remain. 


These include, for example, 


But a new set of vulnerabilities, failures, and ethical issues (that currently have no well-formulated responses) now also demand leaders’ attention. 


Here, we highlight several of these challenges, as well as approaches and lessons learned that can help leaders harness Generative AI’s tremendous potential-in a manner consistent with the ethics commitments they’ve made to their stakeholders.



A new set of challenges


Organizations adopting Generative AI will confront issues that range from the individual user level to the larger ecosystem and society at large.


  • (1) Massive capability overhang 
  • (2) Limited governance 
  • (3) Unclear copyright and other legal liabilities 
  • (4) Erosion of customer trust 
  • (5) Environmental impac 
  • (6) Centralizing power at the top


1.Massive capability overhang


Large Generative AI systems like ChatGPT have exhibited a massive capability overhang-hidden skills and dangers that are not planned for in the development phase and are generally unknown even to the developers. Capability overhang is explored by users who poke and prod the system to see what it is capable of beyond what has been advertised and what others have managed to prompt it to do. Examples include simulating a Virtual Machine (VM) inside ChatGPT or transforming music into spectrograms and running it through Stable Diffusion to generate new music through textual prompts (a system ingeniously called Riffusion).


Capability overhang poses a significant and unique challenge, both for the scientific community and for businesses. At a scientific level, we don’t yet fully understand why capability overhang occurs and how to manage it. For businesses, it is impossible to predict the myriad ways in which teams will leverage Generative AI tools, making it difficult to ensure the right controls are in place. Creating a VM has vastly different controls, for example, than creating new marketing content. It also exposes businesses to various legal liabilities when someone abuses the system or uses it in ways outside of licensing agreements.


Generative AI systems enable the democratization of extant AI capabilities that were hitherto inaccessible because of the engineering skills and tailoring required to make them work in each organization’s context. This includes SMEs and other companies that might have small data science teams and often limited staff allocated toward privacy, in the risk and compliance, legal, and governance functions. The ease with which ChatGPT or Stable Diffusion can be chained and incorporated into a workflow or a product or service means that many organizations that previously struggled with adopting AI can now integrate these capabilities into their operations.


The wider adoption of AI is a good thing. But it can become problematic when those organizations don’t have appropriate governance structures in place, investments into RAI, and the leadership to handle issues.




Generative AI systems rely on foundation models that are trained on massive internet-scale datasets, like the Common Crawl and The Pile. But sometimes the data that gets scooped up contains copyrighted materials. This poses legal risks to organizations if they use the outputs from these systems in their products and services-unaware of their origin-without obtaining the required permissions to do so.


Early efforts from companies like Stability AI offer users the option to flag images that they own and to opt-out. Although such efforts will not directly affect the deployed version of the system, they serve as a useful control mechanism for future releases. While we don’t yet see alignment on best practices for protecting people’s IPR, a constellation of nascent efforts will help the community begin to coalesce around mechanisms that work well (and identify those that don’t).



3.Erosion of customer trust


Users may be put off if they don’t realize (and aren’t informed) that a familiar product now has Generative AI behind it, such as customer service emails powered by a sophisticated chatbot. Moreover, compared to prior deterministic systems, Generative AI systems like ChatGPT might provide different answers to the same question based on the precise wording of the prompt and chat history with the user. This can further complicate the issue of reliability and consistency, for example, if customers compare their experiences and realize that they’re being offered disparate resolutions.


Furthermore, even if customers are aware of the presence of Generative AI, the organization providing that product to them likely isn’t equipped to handle their questions, for example, about how the system works and where their data is being used. And the regulatory landscape doesn’t yet mandate (at least outside Europe) that users be provided with any recourse when things go wrong-as they inevitably can.


Looking beyond the above individual and organization specific risks, broader ecosystem risks also emerge. For example, training the foundation models that underpin these systems will require massive computational resources-which will have a significant impact on the environment. The carbon impact of training a single, large NLP model could approach that of the carbon emissions of five cars over their lifetime.


It is an example of Jevons paradox: As Generative AI democratizes AI capabilities and there is an increase in the efficiency and accessibility of this technology, its usage will go up, and the costs of further model training for new systems and even the inference costs associated with AI systems ( which are significant) will continue to rise.



4.Centralizing power at the top


Such problems are exacerbated by the fact that only the largest organizations-those with the required datasets, computing resources, and engineering chops-are currently able to build and deploy foundation models and Generative AI systems. It’s only use that is being democratized. And the requirements for large-scale data and computing are only continuing to increase, leading to the centralization of power. It can also result in the homogenization of ideas, with a small set of people at these organizations (and their intellectual stances and ideological bents) driving the development roadmap for the approaches, architectures, and directions the field takes.


For example, the process of choosing which issues to address with AI is inherently values-driven-at some point, one use case is selected over another. And these values are then hidden behind the solution or approach that is ultimately visible to the outside world. Without actively addressing this, organizations risk skewing toward problems that might overlook the needs of already marginalized populations.



Creating a living lab for RAI


Leaders’ approach to Generative AI systems should be centered on RAI to address these unique issues. To do so, they can explore adjacent industries that have faced radical explosions in capabilities-observing the elements, relationships, governance, and structures created in response-to identify both long-term possibilities and quick wins.


In the long term, re-imagining the insurance industry for AI systems offers some possibilities for creating a more responsible Generative AI. AI insurance offers protection against certain adverse outcomes that might arise from the use of AI systems. For example, an AI system doing content moderation may state that it will work well in English on subjects involving technology. The firm providing such a system can take out insurance to protect against any claims or legal action that customers initiate because something went wrong with the content moderation (such as failing to catch all instances of hate speech within the realm of English-language tech content).


Now, there might be known failure modes outside of the system’s stated domain-for instance, that the content moderation system doesn’t work well outside of tech topics and in languages other than English. But with more advanced Generative AI systems that have a capability overhang, there is an increased risk of users identifying unknown capabilities. Taking out insurance policies that can help protect against these unknowns can boost the confidence with which organizations adopt Generative AI. Insurance policies that cover unknown risks will likely come with higher premiums, but there might be a manageable cost-benefit tradeoff for the company adopting Generative AI systems: The benefits of using the systems exceed the costs.


Another approach companies can develop over time is to create clear guardrails, through a standardized certification process, for example, that will help set boundaries for users and manage expectations when the system doesn’t perform well for use cases that it wasn’t built and certified to address.


In the short term, companies can address issues such as copyright infringement by providing clear and actionable guidance to stakeholders who will use the systems, and they can coordinate closely with their legal departments to analyze the potential impact of including Generative AI outputs in their deliverables.


To improve their risk assessment capabilities, companies can borrow a popular approach from the world of cybersecurity: red-teaming, or deliberately trying to disrupt the normal operations of the system to find failure modes and vulnerabilities that would push the system beyond its typical operational bounds and trigger errors. When used with Generative AI, this could reveal system vulnerabilities that might be amplified in downstream uses.


For instance, red-teaming could reveal that a chatbot providing advice at university career centers carries a gender bias and is presenting fewer STEM-related career opportunities to female students. Companies will also need to rethink the skillsets required within their organizations, such as the need to train or hire prompt engineers who can work with red-teaming staff to test the Generative AI system more effectively.



While Generative AI systems present us with a bold new horizon of opportunities, over-reliance on existing RAI solutions in the face of an explosion of capabilities will leave issues unsatisfactorily addressed and, in the case of emergent risks, unmitigated. 


Embracing these challenges head-on to develop Generative AI systems responsibly can help an organization gain a long-term competitive edge.


Originally published at https://bcghendersoninstitute.com on January 19, 2023.

Total
0
Shares
Deixe um comentário

O seu endereço de e-mail não será publicado. Campos obrigatórios são marcados com *

Related Posts

Subscribe

PortugueseSpanishEnglish
Total
0
Share