Responsible AI: A Global Policy Framework 2021 Update
As noted in the first edition of Responsible AI, the policy framework that we published in 2019 was necessarily embryonic. Artificial intelligence is still in its infancy, and its potential societal impact is difficult to fully grasp, particularly in a field in which the rate of change continues to be almost exponential. These factors have placed a great weight of responsibility on all those engaged in the development and deployment of AI systems. It is not surprising, therefore, that not only policymakers but also industry representatives and AI researchers are looking for solid legal and ethical guideposts. We are, collectively, participating in an ongoing dialogue.
It is in this context that I am pleased to welcome the publication of the 2021 Update to Responsible AI: A Global Policy Framework. As we undertook to carry on the dialogue, we could not have been better served than by the two editors of this update, John Buyers of Osborne Clarke LLP, UK, and Susan Barty of CMS LLP. Together with a team of 38 specialists from 17 countries, John and Susan have not only produced a substantive update to each of the eight principal chapters of Responsible AI and a comprehensive update to the original Global Policy Framework, but have also developed a practical “Responsible AI Impact Assessment” template that we hope will be of significant value to AI experts and industry leaders.
Charles Morgan of McCarthy Tétrault LLP
2019-2021 Past President, International Technology Law Association
FOREWORD
It is no exaggeration to say that the world has changed beyond recognition since the publication of the first edition of Responsible AI. We have all been placed in the grip of a global pandemic that has dramatically changed our working and personal lives, forced distance between us and our loved ones, and transformed innocent gestures of social interaction, such as shaking hands and hugging, into potentially deadly encounters. Where once we might have flown or driven to a meeting or conference, we now use video conferencing.
Isolation has made us even more dependent upon technology: to work, to socialise, to inform, to educate, and to entertain. Social media and predictive technologies have become ever-present in ways we could not even have imagined: driving and manipulating opinions, influencing behaviours, and inevitably powering news cycles. Indeed, as we bring this update to publication, we are witnessing first-hand the impact of these technologies on a very unconventional US Presidential election.
A growing consensus holds that, rather than enriching us as human beings, too much exposure to technology diminishes us. This is perhaps not surprising: forced isolation has driven many to the conclusion that we need real social relationships and interaction to thrive as human beings.
It is in this environment that we bring you our 2021 update to Responsible AI. In a fast-moving world, artificial intelligence moves at light speed. We are now seeing the first nascent global steps towards regulation: the collective governmental realisation of the enormous harm that this technology can inflict if left untrammelled. It looks as though the EU is “first out of the blocks”, with a proposal that would subject machine learning to a regulatory environment not dissimilar to the one Europeans already face with personal data under the GDPR. The EU’s compliance-driven thinking is inevitably tempered by the more entrepreneurial and enterprise-friendly approaches advocated by the United States and China. Time will tell which vision will prevail.
In the meantime, it has become ever more critical to measure and gauge the impact of artificial intelligence “on the ground”, away from the academic debate. We are inevitably “wising up” to the consequences of ill-thought-through development and use, whether that is physical harm, exclusion, or the erosion of personal liberty. It is in this environment that we launch our Responsible AI Impact Assessment tool (or RAIIA for short), which is designed to help measure, in quantifiable and real terms, the impact of a proposed AI solution. We hope you find it a valuable and practical tool.
SUGGESTED ACTIONS
- Grounding the responsible AI framework in the human-centric principle of “accountability”, which holds organisations that develop, deploy, or use AI systems accountable for harm caused by AI;
- Promoting a context-sensitive framework for transparency and explainability;
- Elaborating the notion of elegant failure, as well as revisiting the “tragic choices” dilemma;
- Supporting open-data practices that help encourage AI innovation while ensuring reasonable and fair privacy, consent, and competitiveness;
- Encouraging responsible AI by design.