Another Piece of the AI Ethics Puzzle
18th January 2023, by Ismael Kherroubi Garcia
David Man & Tristan Ferne / Better Images of AI / Trees / CC-BY 4.0
The marketplace of AI Ethics and Responsible Tech solutions is becoming busier and busier. Recent surveys show growing concerns on the part of both business executives and the general public about the ethics of artificial intelligence (AI).¹ In response, more and more startups are emerging with diverse solutions to these concerns. The Ethical AI Database (EAIDB), which provides data and analyses about the nascent and growing landscape of responsible tech firms, increased its number of listed organisations by almost a third between Q1 and Q2 of 2022.² The diversity of startups listed by EAIDB also points to the many types of solutions needed to work on the ethics of AI technologies. The EAIDB recognises five categories of "ethical AI company": those working in:
- Data for AI;
- AI Audits, Governance, Risk and Compliance;
- ModelOps, Monitoring and Observability;
- Targeted Solutions and Technologies; and
- Open-Source Solutions.
The growth and diversity of the marketplace of AI Ethics and Responsible Tech point to the scale and importance of the task at hand. Indeed, AI systems have seeped into almost every aspect of our lives. AI ethics is a social question that requires interdisciplinary solutions.
AI Ethics as a Social Question
The question of AI Ethics is far too grand to embark on in a short blogpost, but we can take a moment to identify some of its stakeholders, each of which plays a different role in the ethics of AI technologies.
- Innovators: These are the people and organisations developing the latest AI. We might think of Meta, Amazon, Google and the like, but also research institutions such as universities, institutes and think tanks. Innovators are those whose tech EAIDB's companies generally look to audit and improve; their research, products and services are becoming a part of our lives.
- Buyers: These are the people and organisations who make decisions about deploying technologies that impact groups of people. The usual suspects include government bodies and justice systems. Employers can also play this role by using technologies that influence decisions about their staff or customers. Schools, hospitals and other organisations – even innovators – can also implement technologies that affect groups of people.
- End users: These are individuals who use AI technologies, whether they know it or not, either directly from Innovators or indirectly through Buyers. Our mobile phones, internet browsers and social media platforms are just some of the ways we expose ourselves to AI technologies from Innovators. Meanwhile, Buyers may use AI technologies on us when we apply for jobs or loans, or seek medical attention.
In this complex landscape of AI Ethics and Responsible Tech solutions, which continually uncovers moral concerns from diverse social parties, where does Kairoi sit? And what makes Kairoi different?
Kairoi works with organisations grappling with questions such as:
- How can we embed responsible practices throughout our organisation?
- How can we operationalise our organisational values?
- How can we ensure our staff champion responsible research and innovation?

To answer these questions, we:
- Empower staff with analytical tools to better reflect on morality;
- Co-design mechanisms to ensure responsible practices through:
  - Appropriate and accurate Communications;
  - Identification of relevant Technical Solutions;
  - Public Engagement; and
  - Governance Structures;
- Implement policies through workshops to ensure staff and executive buy-in.
Our approach draws on practices from diverse industries, as well as rigorous research. At Kairoi, we believe most decisions in tech have the potential to lead to great social impacts. We help our partners identify these crucial decisions, anticipate their consequences and implement safeguards to guide decision-making processes.