Report: Looking before we leap
25th January, 2023, by Ismael Kherroubi Garcia
Ada Lovelace Institute / Looking before we leap / CC-BY 4.0
On 24th January, the report Kairoi contributed to – Looking before we Leap: Expanding Ethical Review Processes for AI and Data Science Research¹ – was officially launched at an event hosted by the Ada Lovelace Institute. The report is the result of efforts led by the Ada Lovelace Institute, the University of Exeter’s Institute for Data Science and Artificial Intelligence, and the Alan Turing Institute, with £100,000 funding from the Arts and Humanities Research Council.
The panel included the report’s co-authors Mylene Petermann (University of Bristol), Niccolò Tempini (University of Exeter and Alan Turing Institute), Ismael Kherroubi Garcia (Kairoi), and Andrew Strait (Ada Lovelace Institute). The discussion was chaired by Wendy Hall (University of Southampton), and the co-authors were joined by panellists Dawn Bloxwich (DeepMind), Madhulika Srikumar (Partnership on AI), and Quinn Waeiss (Stanford University). Around 260 people signed up to join the event, and over 80 attended.
The report – at over 100 pages long – studies the state of research ethics committees (RECs) working in the context of artificial intelligence (AI) and data science research. RECs – or Institutional Review Boards (IRBs) in some countries – have been around for several decades. They predominantly deal with research from the biomedical sciences. This is due to their history, as they were initially established in response to heinous practices on human subjects conducted in the name of “science.” Infamous examples include experiments perpetrated by the Nazi regime,² and the Tuskegee syphilis study,³ which involved recruiting black men, deceiving them about the study’s objectives, and withholding treatment when it became available – an ordeal that lasted over four decades. The question we seek to answer in the report is: How can RECs best manage the novel concerns raised by AI and data science research?
There is no question that AI and data science research leads to advancements that can raise significant risks in the short and long term. The chair of the report’s launch event – Wendy Hall – asked us to ponder whether the World Wide Web would have passed a research ethics review process. The consequences of the internet that we see today would have been unimaginable just a third of a century ago. The same can be said for cryptographers of the early 1990s setting the foundations for what we now call blockchain.⁴ This is a scale at which RECs will necessarily struggle, but it points to the question of scope: what challenges can RECs raise, and how far into the future should they speculate? Consider Dawn Bloxwich’s description of post-launch monitoring of innovations at DeepMind. As noted, this is no easy task.
Although the report is a call for not reinventing the wheel – for refocusing a well-established governance function – we acknowledge the limitations of traditional RECs in handling the questions raised by advancements in AI research. In particular, we highlight six challenges RECs in AI and data science face:
- Many RECs lack the resources, expertise and training to appropriately address the risks that AI and data science pose.
- Traditional research ethics principles are not well suited for AI research, as they assume the closer researcher-subject relationship found in biomedical research.
- Specific principles for AI and data science research are still emerging and are not consistently adopted by RECs.
- Multi-site and public-private partnerships can exacerbate existing challenges of governance and consistency in decision-making processes.
- RECs struggle to review potential harms and impacts that arise throughout the lifecycle of AI and data science projects.
- Corporate RECs lack appropriate transparency in relation to their processes.
We stand by RECs’ potential as well-tested governance mechanisms for informing responsible research practices. To ensure RECs’ effectiveness in the novel terrain of AI research, we conducted workshops with industry and academic leaders in the field, as well as follow-up interviews. We make the following recommendations, geared towards different actors within the research ecosystem:
For Academic and Corporate RECs
#1: Incorporate broader societal impact statements from researchers.
AI and data science research communities have called for researchers to incorporate moral considerations at various stages of their work, from the peer review process to conference submissions, among others. RECs can support these efforts by incentivising researchers to engage in reflexive exercises to consider and document the broader societal impacts of their research.
#2: RECs should adopt multi-stage ethics review processes of high-risk AI and data science research.
Many of the challenges that AI and data science raise will arise at different research stages. RECs should experiment with requiring multiple evaluation stages for high-risk research. For example, an REC can evaluate projects both at the point of data collection and at the point of publication. This is particularly important in the context of the forthcoming EU AI Act, which will require more robust governance mechanisms for “high-risk AI.”⁵
#3: Include interdisciplinary and experiential expertise in REC membership.
Many of the risks that AI and data science research pose cannot be understood without engaging with diverse experiences and expertise. RECs must be interdisciplinary bodies if they are to address the myriad issues that AI and data science can pose in different domains. RECs must also incorporate the perspectives of those who will be impacted by the research and its outputs.
For Academic and Corporate Research Institutions
#4: Create internal training hubs for researchers and REC members, and enable cross-institutional knowledge sharing.
Cross-institutional knowledge-sharing can ensure institutions do not develop standards of practice in silos. Training hubs should collect and share information on the kinds of ethical issues and challenges AI and data science research might raise, including case studies that support reflexive exercises in the domain. In addition to our report, we have developed a resource consisting of six case studies that highlight ethical challenges that RECs might face.⁶
#5: Corporate labs must be more transparent about their decision-making, and engage more with external partners.
Corporate labs face specific challenges when it comes to AI and data science reviews. While many are better resourced and have experimented with broader societal impact thinking (compared to academic RECs), some of these labs have faced criticism for being opaque about their decision-making processes. Many of these labs make consequential decisions about their research without engaging with local, technical or experiential expertise that resides outside their organisations.
For funders, conference organisers and the broader research ecosystem
#6: Develop standardised principles and guidance for AI and data science research.
National research governance bodies like UKRI should work to create a new set of “Belmont 2.0” principles⁷ that offer standardised approaches, guidance and methods for evaluating AI and data science research. Developing these principles should draw on diverse perspectives from different disciplines and communities impacted by AI and data science research, including multinational perspectives – particularly from regions that have been historically underrepresented in the development of past research ethics principles.
#7: Actors across the research ecosystem should incentivise a responsible research culture.
AI and data science researchers lack incentives to reflect on and document the societal impacts of their research. Different actors in the research ecosystem can encourage ethical behaviour. Funders, for example, can create requirements that researchers develop societal impact statements in order to receive a grant. Meanwhile, conference organisers and journal editors can encourage researchers to include such statements when submitting research. By creating incentives throughout the research ecosystem, ethical reflection can become more desirable and be rewarded.
#8: Policymakers should increase funding and resources for ethical reviews of AI and data science research.
There is an urgent need for institutions and funders to support RECs, including paying for the time of staff, and funding external experts to engage in questions of research ethics. The traditional approach to RECs has treated their labour as voluntary and unpaid. RECs must be properly resourced to meet the challenges that AI and data science pose.
There is no need to reinvent the wheel for the purpose of AI and data science research. RECs and broader research governance departments have been around for decades. Panellist Quinn Waeiss, for example, told us of the great work they’re doing at Stanford’s Ethics & Society Review.⁸ Our report highlights the opportunity to tap into this rich resource. But RECs must be properly equipped.
The landscape of AI Ethics is currently in flux, but more services have emerged to plug the newly uncovered gaps.⁹ Algorithmic auditing, for example, has been on the rise. In such an audit, the data, governance and implications of a given AI system are analysed and a decision is reached as to whether changes must be made or the system must be taken offline. Whilst this practice has become more common, a recent report has identified the rise of “audit washing,” where audits are conducted inappropriately and with unclear standards, thereby not adequately engaging with questions of AI ethics.¹⁰ Some businesses have taken REC matters into their own hands, without adequate scrutiny, raising alarm bells.¹¹
RECs have experience in ensuring good practice in research, including handling questions of consent, avoiding conflicts of interest, and guaranteeing their independence – a key principle for RECs.¹² In this sense, RECs’ know-how can immediately inform the responsible practice of research in AI and data science.
The rise of easily available AI systems – such as text-to-image tools and large language models – has meant questions about the limitations of AI are reaching the general public. Meanwhile, legislation on AI is being developed around the world, and those best prepared will be the organisations that have robust governance infrastructures. The question is not if but how AI research organisations will meet increased public and legal scrutiny. At Kairoi, we can help you design and implement the policies that safeguard your organisation, and place you at the forefront of the responsible AI revolution.
¹ Petermann et al. (2022) Looking before we Leap: Expanding Ethical Review Processes for AI and Data Science Research, Ada Lovelace Institute: Ethics and accountability in practice
³ Centers for Disease Control and Prevention (2022) The Syphilis Study at Tuskegee Timeline
⁴ Haber & Stornetta (1991) How to time-stamp a digital document, Journal of Cryptology