
AI Ethics & Research Governance Consultancy
Kairoi
/kye-roy/
Kairoi (pronounced /kye-roy/) comes from the Ancient Greek word for the opportune moment. It is about seizing the moments when decisions are critical.
At Kairoi, we believe most decisions in tech have the potential for significant social impact. We help our clients identify these crucial decisions, anticipate their consequences and implement safeguards to guide their decision-making processes.
Uncovering Values behind Sustainable Innovation
We work with innovators to articulate what they believe in. Kairoi doesn’t offer a one-size-fits-all approach to responsible AI governance. Rather, we help organisations uncover the values they hold dear, so these can be applied consistently across their structures.
Developing Structures that Drive Positive Impact
Communicating moral principles isn’t enough. At Kairoi, we design tailored mechanisms that put otherwise abstract values into practice. These mechanisms are implemented across departments, ensuring responsible practices are embedded throughout organisations.
Contact us
hello@kairoi.uk
Highlights
The Centre for Data Ethics and Innovation’s portfolio of AI assurance techniques now includes Kairoi’s Responsible AI Interview Questions. The addition of the questions to the portfolio is a welcome step towards systematic change, informed by organisational development theory and practice.
Data protection, reputation, intellectual property and organisational culture are all affected when staff adopt accessible generative AI tools. This post explores why workplaces need relevant policies and how these can protect both companies and employees.
Kairoi contributes to a ground-breaking report on research ethics committees in AI and data science research. The report’s launch event was hosted by the Ada Lovelace Institute on 24th January 2023, with a panel including colleagues from the Universities of Southampton and Exeter, Stanford University and DeepMind.