Why Workplaces Need Policies For Generative AI Tools
24th April 2023, by Ismael Kherroubi Garcia
Image credit: Khyati Trehan / DeepMind, "Safety" (Unsplash Licence)
As generative artificial intelligence (AI) tools become more accessible and sophisticated, they are increasingly being used to streamline tasks, create content, and improve productivity. It is no wonder, then, that they are proliferating in knowledge workers’ toolboxes.
One particular generative AI tool taking the workplace by storm is OpenAI's "ChatGPT", a chatbot built on a large language model (LLM) that provides lengthy and detailed responses. According to a recent report, almost one in ten employees has used ChatGPT in the workplace. Unfortunately, 11% of what employees paste into the system is confidential information.¹ This poses a major security risk, as what goes into ChatGPT can be reviewed by OpenAI's "AI trainers" to improve their systems.² What's more, nearly 70% of employees using ChatGPT at work do so without their bosses knowing.³
The need to establish protocols for using generative AI systems at work has been demonstrated by companies such as JP Morgan and Verizon, both of which have banned ChatGPT outright, and Samsung, which is now developing an AI chatbot for internal use after employees input proprietary source code into ChatGPT.⁴
But rules on how generative AI tools are used in the workplace also matter for much smaller organisations and businesses. In this blog post, we outline four reasons organisations need clear and robust use policies for generative AI tools.
1. Data Protection
When using generative AI tools, users provide a prompt and the tool returns some result. Prompts may be relatively innocuous. Users might ask an AI chatbot to rephrase a sentence or check a text for grammatical errors. But it turns out these tools can do much more than that: they can help with creating reports, writing blog posts, summarising texts and drafting emails. There is a significant risk, then, that users include sensitive business details in their prompts.
Prompts made to generative AI tools can become part of their training data. That is, there is a risk that your prompts are not only seen by the company behind the tool, but feed into future responses the tool gives to other users. It is crucial, then, to understand that prompting a tool is freely giving away information. Much as with sharing passwords or client information on social media, users must not share confidential information with generative AI tools.
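How might such a rule work in practice? The sketch below, in Python, shows the kind of pre-submission check a Generative AI Use Policy might mandate. It is purely illustrative: the patterns, the codename "Project Nightingale" and the function name are hypothetical examples of ours, not part of any existing tool.

```python
import re

# Illustrative patterns only; a real policy would define what counts as
# confidential for your organisation (client details, codenames, keys, etc.).
CONFIDENTIAL_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API-key-like string": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "internal codename": re.compile(r"\bproject\s+nightingale\b", re.IGNORECASE),
}


def check_prompt(prompt: str) -> list[str]:
    """Return a warning for each kind of confidential material found."""
    return [
        f"Prompt appears to contain an {label}."
        for label, pattern in CONFIDENTIAL_PATTERNS.items()
        if pattern.search(prompt)
    ]


# Flag a prompt before it ever leaves the organisation.
for warning in check_prompt(
    "Summarise Project Nightingale and reply to ana.lopez@example.com"
):
    print(warning)
```

No automated filter will catch everything, of course; checks like this complement, rather than replace, clear rules and staff training.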
From a cybersecurity standpoint, generative AI systems have also been "faked". In one case, a counterfeit ChatGPT extension for the Chrome web browser was used to hijack Facebook accounts, potentially exposing thousands of Facebook users' data.⁵ The genuine systems can also pose problems of their own. On 20th March, ChatGPT had to be taken offline after the titles of some users' chat histories, as well as some users' payment details, were found to be visible to other users.⁶ That incident came down to a bug that OpenAI was able to patch, but AI systems are also subject to novel kinds of attack.⁷
2. Reputational Damage
Samsung employees reportedly leaked three sets of sensitive information to ChatGPT within a few weeks of the tech giant lifting a ban on its use.⁸ In one case, an engineer pasted proprietary source code into ChatGPT to ask for help fixing a bug. Another employee used ChatGPT to optimise a process for identifying defective chips. A third fed ChatGPT the transcript of an internal meeting to produce its minutes. All three employees are under disciplinary investigation. There is no doubt that these blunders are embarrassing for Samsung's reputation as a world-leading electronics company.
In another case, misuse of a generative AI tool came with a significant financial cost. Alphabet, the conglomerate holding company behind Google, recently saw its market value plunge by 9% after Google launched "Bard", another AI chatbot. Google's own advertising showed Bard claiming that the James Webb Space Telescope (launched in 2021) took the first image of a planet beyond the solar system; in fact, that feat was achieved in 2004 by the Very Large Telescope.⁹ Here we see how the misuse of generative AI systems, which can be inaccurate and whose answers must be checked every time, can directly cost a company a great deal of money: in this case, $100 billion.¹⁰
3. Intellectual Property
The question of intellectual property (IP) in generative AI tools is murky, and different jurisdictions may respond to its issues in different ways. There are at least two distinct IP questions here: who owns the IP of a tool's training data, and who owns the IP of its outputs? To consider these questions, let's take the text-to-image tool "Midjourney" as an example.
Regarding ownership of training data, Midjourney draws on an enormous dataset of images linked to by LAION, a German research non-profit.¹¹ These images, in turn, may be under copyright licences that do not allow for Midjourney's usage. Indeed, Midjourney and other tools that scrape artists' works have been sued for precisely this reason.¹² Relatedly, we may reach a point where customers demand assurance that our services do not infringe on others' intellectual property rights,¹³ and standards are already being developed to trace the origins of different types of media.¹⁴
On the ownership of the outputs of generative AI tools, Midjourney is quite clear: if you are not a paying user, you do not own the IP of the images it generates.¹⁵ An employee creating images with Midjourney for blog posts, for example, must therefore take care to share them under the correct licence.
But IP in generative AI systems is a hot topic, and very little is settled. The US Copyright Office recently ruled that Midjourney-generated images are not protected by copyright law.¹⁶ Meanwhile, organisations expect to own the IP in their own works, which is why IP clauses often feature in employment contracts. Even so, staff might use generative AI tools in a personal capacity. The question of IP alone is a strong reason to clarify the appropriate uses of generative AI systems in the workplace.
4. Fostering a Responsible Innovation Culture
Workplace policies can help educate staff about a myriad of topics. Equity, diversity and inclusion policies can bring important questions about social injustice to staff's attention; wi-fi use policies can inform staff about the dangers of public wi-fi networks; something as simple as a health and safety policy might remind staff to bend their knees when picking things up from the floor; and so on. Company policies are highly informative if built right.
A robust Generative AI Use Policy can also help mitigate extreme responses to AI systems. Those who fear AI can be helped to see that tools like ChatGPT or Bard are simply built on LLMs, which find patterns in how words tend to link together in texts. Similarly, those who view AI systems as the answer to all our problems can be reminded that they are not only prone to error (what has been called "artificial hallucination"), but simply encapsulate the limited understandings of their creators; that is, they are biased.
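For staff who want a concrete intuition of what "finding patterns in how words link together" means, here is a deliberately toy sketch in Python. It is a simple bigram frequency model, nothing like the neural networks behind ChatGPT or Bard in scale or sophistication, but it conveys the basic idea: text is continued according to statistical patterns, not understanding.

```python
from collections import Counter, defaultdict

# Count which word tends to follow which in a tiny "training" text.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# "Generate" text by always picking the most common continuation.
word = "the"
generated = [word]
for _ in range(4):
    word = following[word].most_common(1)[0][0]
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the"
```

The toy model produces fluent-looking but meaningless continuations, which is also part of why full-scale LLMs can "hallucinate": they generate what is statistically plausible, not what is true.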
Robust policies encourage better-informed notions about generative AI tools. By knowing more about these systems, we help staff engage with them on more nuanced terms, if they choose to engage with them at all. Indeed, part of such a policy’s role would be to clarify whether the onus of the decision to use generative AI tools lies with the individual or the organisation. In other words, it establishes who is accountable for a system’s misuse.
Concluding: It’s Just Good Practice
Organisations have many policies to ensure their staff’s and their own safety. Internet use policies establish rules as to what can and cannot be browsed from company-owned devices; social media policies provide guidelines about what should not be posted online about the company; data protection policies can ensure compliance with relevant regulation; and so on. A policy setting out how to use generative AI tools is not so far-fetched.
As with other company policies, a robust Generative AI Use Policy can help raise awareness about a topic; in this case, the increasingly accessible tools based on AI systems. Importantly, such a policy demonstrates an organisation's commitment to equitable and consistent standards, as well as its ability to keep abreast of technological advancements.
If you are unsure where to start with your own Generative AI Use Policy, we at Kairoi have made a template available for you to copy, which you can freely access here. We encourage you to reach out so we can help design a more tailored policy that accounts for your organisation's context and practices. We also support the effective rollout of AI governance policies and mechanisms to ensure the safe and responsible design, development, deployment and use of AI research and systems.
Contact us
hello@kairoi.uk
References
¹ Coles (2023) 3.1% of workers have pasted confidential company data into ChatGPT, Cyberhaven
² OpenAI (n.d.) What is ChatGPT?
³ Jackson (2023) Nearly 70% of people using ChatGPT at work haven’t told their bosses about it, survey finds, Business Insider
⁴ Milmo (2023) ChatGPT limited by Amazon and other companies as workers paste confidential data into AI chatbot, iNews
⁵ Rees (2023) Fake ChatGPT Chrome Extension Steals Facebook Logins, Make Use Of
⁶ OpenAI (2023) March 20 ChatGPT outage: Here’s what happened
⁷ Federal Office for Information Security (2023) AI Security Concerns in a Nutshell
⁸ Jeong Doo-yong (2023) Concerns turned into reality… As soon as Samsung Electronics unlocks ChatGPT, ‘misuse and abuse’ appears one after another, The Economist
⁹ Exoplanet Exploration (2023) 2M1207 b – First image of an exoplanet, NASA
¹⁰ Coulter and Bensinger (2023) Alphabet shares dive after Google AI chatbot Bard flubs answer in ad, Reuters
¹¹ Large-scale Artificial Intelligence Open Network, https://laion.ai/
¹² Vincent (2023) AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit, The Verge
¹³ Appel, Neelbauer & Schweidel (2023) Generative AI has an Intellectual Property Problem, Harvard Business Review
¹⁴ Coalition for Content Provenance and Authenticity, https://c2pa.org/
¹⁵ Midjourney, Terms of Service, https://docs.midjourney.com/docs/terms-of-service
¹⁶ Novak (2023) AI-Created Images Aren’t Protected By Copyright Law According To U.S. Copyright Office, Forbes