AI biweekly - by Sylvain Duranton: Responsible AI

Sylvain Duranton
Senior Partner & Managing Director at Boston Consulting Group — Global 
Leader for BCG GAMMA

Dear Artificial Intelligence Enthusiasts,

Welcome to the first edition of AI biweekly — my Artificial Intelligence newsletter on LinkedIn. Every two weeks, I will bring you a handpicked selection of articles on a key topic that underlies much of our work. In this first edition, let me focus on Responsible AI.

At BCG GAMMA, we use AI every day, and we have a strong position on responsible AI. We surveyed senior executives at more than 1,000 large organizations and found that the business community is already strongly in favor of ethical AI: 98% of respondents said their company is making progress toward that goal.

Companies are intrinsically motivated to behave responsibly. The challenge many executives report is the lack of clear standards — which leaves many of them no choice but to create their own.

Right now, legislators from the European Union, the US, India, and other countries are in the process of drafting new laws that will soon shape how companies — and through them all of us as citizens and consumers — can use AI.

The new laws will set minimum legal requirements — but clearing those won’t be enough to gain society’s approval. If you want the social license to operate AI at scale, you will have to gain people’s trust. We advise businesses to take proactive steps toward using responsible AI and to be open and transparent about the steps they are taking. The best companies are already moving in that direction — and they will be greatly rewarded for doing so.

Please read my latest publication on this topic. 

Do you want your article featured in my newsletter? Feel free to get in touch with me — I am very excited to get the discussion going about AI!

Sylvain Duranton

 — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — 

Six ethical principles for AI in healthcare, from WHO guidelines. It is not just about data bias but also about accountability, transparency, and autonomy, among others. WHO Outlines Principles for Ethics in Health AI — The Verge — 4 min read

One of the most intriguing developments in AI is systems that can write or suggest code. But automating software development carries inherent risks that can create cascading problems, such as the automation of biases. Assessing the Safety Risks of Software Written by Artificial Intelligence — Tech Policy Press — 7 min read

An investigation into how AI-powered technologies are already being implemented in law enforcement efforts around the country, often with little oversight. How AI-Powered Tech Landed Man in Jail With Scant Evidence — AP News — 20 min read

Ayanna Howard, Dean of the College of Engineering at Ohio State, details four specific ways to mitigate bias within AI applications. Real Talk: Intersectionality and AI — MIT Sloan — 6 min read

Is “explainable AI” enough? According to a study by Cornell, IBM, and Georgia Tech researchers, people tend to place a high level of trust in AI explanations — regardless of their level of AI expertise. This over-trust has its own implications. Even experts are too quick to rely on AI explanations, study finds — VentureBeat — 6 min read

Which moves are regulators most likely to make, and what are the three main challenges businesses need to consider as they adopt and integrate AI? AI Regulation is Coming — HBR — 18 min read

