Kairoi contributes to CDEI Portfolio of AI Assurance Techniques
19th September 2023, by Ismael Kherroubi Garcia
Anton Grabolle / Better Images of AI / Classification Cupboard / CC-BY 4.0
On 19th September, Kairoi’s Responsible AI Interview Questions were added to the Centre for Data Ethics and Innovation’s (CDEI) portfolio of AI assurance techniques. The CDEI leads the UK government’s work to enable trustworthy innovation and is part of the Department for Science, Innovation and Technology. The portfolio showcases AI assurance techniques being used in the real world to support the development of trustworthy AI.¹
Since Kairoi’s establishment in 2022, we have collaborated on projects resulting in impactful and freely available resources that enable organisations to get the best of what AI research and systems have to offer, including the Ada Lovelace Institute’s report Looking Before We Leap on evaluating the ethics of AI research projects,² and the Scottish AI Alliance’s Living with AI course.³ We have also proudly led collaborative efforts to co-create adaptable tools that promote more reflexive AI workplace practices. The Responsible AI Interview Questions are one such resource; all of these tools are hosted on GitHub and open to public scrutiny and feedback.⁴
The Responsible AI Interview Questions serve as an introduction to an organisation’s responsible AI culture. They challenge job candidates to reflect critically on AI. Doing so at the interview stage can empower successful candidates to voice their perspectives when designing, developing, deploying and/or using AI tools. New staff may be further trained on responsible AI during onboarding, may have to adhere to internal AI-related policies, and may be expected to contribute to the organisation’s AI-related work. The interview questions establish clear expectations about the organisation’s culture, and allow job candidates to demonstrate their willingness to engage authentically with a responsible AI culture.
The interview questions meet three of the UK government’s five cross-sectoral regulatory principles:
- Safety, security and robustness: By testing job candidates’ responsible AI readiness, employers can mitigate risks posed by the misuse of AI tools and ensure more rigorous approaches to the design, development and implementation of such tools.
- Fairness: Fairness is promoted by bringing diverse perspectives into decision-making processes about AI. The interview questions are adapted to different roles, enabling successful job candidates to voice their perspectives through cross-departmental collaboration.
- Accountability and governance: The interview questions can straightforwardly be adapted for interviews for the C-suite and non-executive directors. This demonstrates an organisation’s leadership in responsible AI – both to external parties and internal staff, thus furthering the impact on the AI ecosystem.⁵
The addition of the Responsible AI Interview Questions to the CDEI’s portfolio is a welcome step towards leading systematic change as informed by organisational development theory and practices. The portfolio does not confer government endorsement on any of the tools it includes but, as we do at Kairoi, celebrates the great variety of approaches to AI assurance.
¹ CDEI & DSIT (2023) CDEI portfolio of AI assurance techniques, online [accessed 19 September 2023]
² Petermann et al. (2022) Looking before we leap: Expanding ethical review processes for AI and data science research, online [accessed 19 September 2023]
³ Scottish AI Alliance (2023) Living with AI, online [accessed 19 September 2023]
⁴ Kairoi Ltd (2023) Responsible AI Interview Questions, Kairoi on GitHub, online [accessed 19 September 2023]
⁵ CDEI & DSIT (2023) Kairoi: Responsible AI Interview Questions, CDEI portfolio of AI assurance techniques, online [accessed 19 September 2023]