Another Piece of the AI Ethics Puzzle

18th January 2023, by Ismael Kherroubi Garcia

David Man & Tristan Ferne / Better Images of AI / Trees / CC-BY 4.0

The marketplace of AI Ethics and Responsible Tech solutions is becoming busier and busier. Recent surveys show growing concerns on the part of both business executives and the general public about the ethics of artificial intelligence (AI).¹ In response, more and more startups are emerging with diverse solutions to these concerns. The Ethical AI Database, which provides data and analyses about the nascent and growing landscape of responsible tech firms, increased its number of listed organisations by almost a third between Q1 and Q2 of 2022.² The diversity of startups listed by EAIDB also points to the many types of solutions needed to work on the ethics of AI technologies. The EAIDB recognises five categories of “ethical AI company”: those working in:

  • Data for AI;
  • AI Audits, Governance, Risk and Compliance;
  • ModelOps, Monitoring and Observability;
  • Targeted Solutions and Technologies; and
  • Open-Source Solutions.

The growth of and diversity in the marketplace of AI Ethics and Responsible Tech point to the scale and importance of the task at hand. Indeed, AI systems have seeped into almost every aspect of our lives. AI ethics is a social question that requires interdisciplinary solutions.

AI Ethics as a Social Question

The question of AI Ethics is far too grand to tackle in a short blogpost, but we can take a moment to identify some of its stakeholders, each of which plays a different role in the ethics of AI technologies.

  1. Innovators: These are the people and organisations who are developing the latest AI. We might think of Meta, Amazon, Google and the like; but also research institutions such as universities, institutes and think tanks. Innovators are the ones whose technology the companies listed by EAIDB generally look to audit and improve. Innovators are those whose research, products and services are becoming a part of our lives.
  2. Buyers: These are the people and organisations who make decisions about deploying technologies that impact groups of people. The usual suspects include government bodies and justice systems. Employers can also play this role by using technologies that influence decisions about their staff or customers. Schools, hospitals and other organisations – even innovators – can also implement technologies that affect groups of people.
  3. End users: These are individuals who use AI technologies, whether they know it or not, either directly from Innovators or indirectly through Buyers. Our mobile phones, internet browsers and social media platforms are just some of the ways we expose ourselves to AI technologies from Innovators. Meanwhile, Buyers may deploy AI technologies that affect us when we apply for jobs, apply for loans or seek medical attention.

Each of these parties has a different role to play in the ethics of AI. Innovators may be seen as those who hold the most responsibility. After all, they seem to be calling the shots as to what technology gets made and how. But where does this leave Buyers? Their decisions seem to influence the lives of individuals more clearly: they decide the contexts in which AI technologies are implemented. Finally, End users seem to be at the mercy of both Innovators and Buyers, navigating the ever more pervasive presence of AI technologies, all whilst potentially generating the data that allow Innovators to create more sophisticated tools.

What about Kairoi?

In this complex landscape of AI Ethics and Responsible Tech solutions, which continually uncovers moral concerns from diverse social parties, where does Kairoi sit? And what makes Kairoi different?

At Kairoi, we understand that the question of AI Ethics is difficult and can be overwhelming. But we also know that Innovators and Buyers have the capacity to research, design and deploy technology responsibly. We begin by looking at the resources, practices and constraints of our clients. We focus on clients who are Innovators and Buyers, but we also work with those who seek to better inform End users.

Our main focus is on the organisational culture and decision-making processes that make responsible innovation possible. Some of the questions we help clients answer are:

  • How can we embed responsible practices throughout our organisation?
  • How can we operationalise our organisational values?
  • How can we ensure our staff champion responsible research and innovation?

Of course, the answers to these questions are never the same. The context of any client is key to ensuring answers are useful to them. Generally, our work with clients will follow three stages:

  1. Empower staff with analytical tools to better reflect on morality;
  2. Co-design mechanisms to ensure responsible practices through:
    • Appropriate and accurate Communications
    • Identification of relevant Technical Solutions
    • Public Engagement
    • Governance Structures
  3. Implement policies through workshops to ensure staff and executive buy-in.

Our approach draws on practices from diverse industries, as well as rigorous research. At Kairoi, we believe most decisions in tech have the potential to lead to great social impacts. We help our partners identify these crucial decisions, anticipate their consequences and implement safeguards to guide decision-making processes.

References

¹ IBM Institute for Business Value (2022) AI Ethics in Action
² Ethical AI Database (2022) Q2 2022 Ethical AI Ecosystem

Author

Ismael Kherroubi Garcia, FRSA

Ismael is the founder and CEO of Kairoi.

You can find him on LinkedIn, Mastodon and Twitter.

Contact us

hello@kairoi.uk