
Responsible AI: A Global Policy Framework 2021 Update

ITechLaw's updated 2021 edition of its Responsible AI global policy framework, edited by John Buyers of Osborne Clarke LLP, UK, and Susan Barty of CMS Cameron McKenna Nabarro Olswang LLP, once again draws on the global technology law community, with input from 38 technology lawyers from 17 countries. It provides substantive updates to the eight principles that shape the future of AI: ethical purpose and societal benefit, accountability, transparency and explainability, fairness and non-discrimination, safety and reliability, open data and fair competition, privacy, and intellectual property. The 2021 Update also includes a Responsible AI Impact Assessment tool designed to help industry leaders evaluate risks and develop risk mitigation strategies so that AI systems can be implemented responsibly.

As noted in the first edition of Responsible AI, the policy framework that we published in 2019 was necessarily embryonic. Artificial intelligence is still in its infancy, and its potential societal impact is difficult to fully grasp, particularly in a field in which the rate of change continues to be almost exponential. These factors place a great weight of responsibility on all those engaged in the development and deployment of AI systems. It is not surprising, therefore, that not only policymakers but also industry representatives and AI researchers are looking for solid legal and ethical guideposts. We are, collectively, participating in an ongoing dialogue.

It is in this context that I am pleased to welcome the publication of the 2021 Update to Responsible AI: A Global Policy Framework. As we undertook to carry on the dialogue, we could not have been better served than by the two editors of this update, John Buyers of Osborne Clarke LLP, UK, and Susan Barty of CMS LLP. Together with a team of 38 specialists from 17 countries, John and Susan have not only produced a substantive update to each of the eight principal chapters of Responsible AI and a comprehensive update to the original Global Policy Framework, but have also developed a practical "Responsible AI Impact Assessment" template that we hope will be of significant value to AI experts and industry leaders.

– Charles Morgan of McCarthy Tétrault LLP
2019-2021 Past President, International Technology Law Association

FOREWORD

It would be no overstatement to say that the world has changed beyond recognition since the publication of the first edition of Responsible AI. We have all been placed in the grip of a global pandemic that has dramatically changed our working and personal lives, forced distance between us and our loved ones, and transformed innocent gestures of social interaction, such as shaking hands and hugging, into potentially deadly encounters. Where once we might have flown or driven to a meeting or conference, we now use video conferencing.

Isolation has made us even more dependent upon technology: to work, to interact socially, to inform, to educate, and to entertain. Social media and predictive technologies have become ever-present in ways we could not have imagined: driving and manipulating opinions, influencing behaviours, and inevitably powering news cycles. Indeed, as we bring this update to publication, we are witnessing firsthand the impact of these technologies on a very unconventional US Presidential election.

A growing consensus holds that, rather than enriching us as human beings, too much exposure to technology diminishes us. This is perhaps not surprising: forced isolation has driven many to the conclusion that we need real social relationships and interaction to thrive as human beings.

It is in this environment that we bring you our 2021 update to Responsible AI. In a fast-moving world, artificial intelligence moves at light speed. We are now seeing the first nascent global steps towards regulation: the collective governmental realisation of the enormous harm that this technology can wield if left untrammelled. The EU appears to be "first out of the blocks", with a proposal that would subject machine learning to a regulatory environment not unlike the one Europeans already face for data protection. The EU's compliance-driven thinking is inevitably tempered by the more entrepreneurial and enterprise-friendly approaches advocated by the United States and China. Time will tell which vision prevails.

In the meantime, it has become ever more critical to measure and gauge the impact of artificial intelligence "on the ground", away from the academic debate. We are inevitably "wising up" to the consequences of ill-considered development and use, whether physical harm, exclusion, or erosion of personal liberty. It is in this environment that we launch our Responsible AI Impact Assessment tool (RAIIA for short), which is designed to help measure, in quantifiable and real terms, the impact of a proposed AI solution. We hope you find it a valuable and practical tool.

About The Authors

Edited by John Buyers of Osborne Clarke LLP, UK, and Susan Barty of CMS LLP, and written together with a team of 38 specialists from 17 countries.

SUGGESTED ACTIONS

  • Grounding the responsible AI framework in the human-centric principle of “accountability” that makes organizations developing, deploying, or using AI systems accountable for harm caused by AI;
  • Promoting a context-sensitive framework for transparency and explainability;
  • Elaborating the notion of elegant failure, as well as revisiting the “tragic choices” dilemma;
  • Supporting open-data practices that help encourage AI innovation while ensuring reasonable and fair privacy, consent, and competitiveness;
  • Encouraging responsible AI by design.
