
Generative Artificial Intelligence (GenAI) policy

Use of Generative Artificial Intelligence (AI)

1. Purpose

The purpose of this policy document is to provide a framework for the use of Generative Artificial Intelligence Large Language Model tools (referred to in the rest of this document as GenAI), such as ChatGPT, Bard, Bing or other similar tools, by council employees, contractors, developers, vendors, temporary staff, consultants and other third parties, hereinafter referred to as 'staff'.

This policy is designed to ensure that the use of GenAI is ethical, complies with all applicable laws, regulations and council policies, and complements the council’s existing information and security policies.

The pace of development and application of GenAI means that this policy will be kept under continual review and development.

Generative artificial intelligence (GenAI) can create realistic, human-like text, images, code and art based on the huge amounts of (usually public) data it has been trained on. It:

  • can produce a range of useful outputs, such as text, audio, images and code
  • responds to natural language questions, so any employee can use it
  • is very good at understanding different types of data, which is useful given that councils hold large amounts of unstructured data in a wide variety of formats.

2. Use

This policy applies to all staff using any GenAI tools, whether on council-owned devices or on personal devices used for council activities. GenAI can also be embedded in other software, such as email clients or video-conferencing tools; for example, Microsoft 365 includes a number of authorised GenAI features, such as Teams transcription.

GenAI must be used in a way that promotes fairness, avoids bias, prevents discrimination and promotes equal treatment, and that contributes positively to the council's goals and values.

Staff may use GenAI for work-related purposes provided they adhere to this policy. This includes tasks such as generating text, images or other content for reports, emails, presentations and customer service communications.

Particular attention should be given to Governance, Vendor practices, Copyright, Accuracy, Confidentiality, Disclosure and Integration with other tools.

2.1 Governance

Before entering any kind of personal or confidential information into a GenAI website, tool or app that has not been supplied by DDaT, staff must first complete a Data Protection Impact Assessment detailing the intended use, the reason for use, the information expected to be input, and the expected output and how it will be distributed.

2.2 Vendors

Any use of GenAI technology in pursuit of council activities should be undertaken with full knowledge of the policies, practices, terms and conditions of the developers or vendors of that tool.

2.3 Copyright

Staff must adhere to copyright law when using GenAI. It is prohibited to use GenAI to generate content that infringes the intellectual property rights of others, including but not limited to copyrighted material. If a staff member is unsure whether a particular use of GenAI constitutes copyright infringement, they should contact Legal Services or the Information Governance Team before using GenAI. For example, asking GenAI to produce a logo could yield output based on an existing logo that is trademarked or copyrighted.

2.4 Accuracy

GenAI can simply make up "facts". These tools have ingested a large number of data sources, some of which may be fiction, and they generate text that merely looks factual. It is therefore important to fact-check any content produced.

All information generated by GenAI must be reviewed and edited for accuracy prior to use. Users of GenAI are responsible for reviewing output and are accountable for ensuring the accuracy of GenAI-generated output before use or release. If staff have any doubt about the accuracy of information generated by GenAI, they should not use that output without correcting it.

2.5 Confidentiality

Confidential and personal information must not be entered into a public GenAI tool (such as ChatGPT).  This is because the information will then enter the public domain and may be used for further training of the publicly available tool.  This would amount to a data breach.   Staff must follow all applicable data privacy laws and organisational policies when using GenAI. For example: 

  • Staff must not use an unauthorised GenAI tool to write a letter to a customer containing any personal details (for example, 'Mr A N Other at 123 Acacia Avenue'), as that data will be ingested and retained by the GenAI tool for re-use.
  • Staff must not use GenAI apps on personal phones to record and summarise work meetings, or for translation services.
  • Staff must not upload spreadsheets full of customer data for GenAI analysis.

If staff have any doubt about the confidentiality of information or what will happen to the data they enter, they should not use that GenAI tool. Confidential or personal information should only be entered into a GenAI tool that has been built or procured specifically for Leicester City Council use, where the data entered is confined to the council's sole use and use of that tool has been specifically sanctioned for that purpose by the Information Governance Team. For example, using Microsoft Teams with a council login to transcribe meetings is authorised; however, using a free tool downloaded to a personal phone to transcribe a work meeting is not authorised and could constitute a data breach.

2.6 Social Impact and Equality

Staff must be aware that the use of GenAI may affect different groups of people in different ways, as it may have inherent social bias or have been trained on stereotypes. It may also reflect inappropriate cultural values or display sensitive content. For example, GenAI must not be allowed solely to determine which customers should have access to services; humans must be involved in such decision-making where needed, and there must be an appeal process for any automated or AI-informed decisions. This process will be undertaken by the Information Governance & Risk Team.

2.7 Ethical Use

GenAI must be used ethically and in compliance with all applicable legislation, regulations and organisational policies. Staff must not use GenAI to generate content that is discriminatory, offensive or inappropriate. If there are any doubts about the appropriateness of using GenAI in a particular situation, staff should consult their supervisor or the Information Governance Team.

2.8 Disclosure

Content produced via GenAI must be identified and disclosed as containing GenAI-generated information. 

Footnote example: 

Note: This document contains content generated by Artificial Intelligence (AI). AI generated content has been reviewed by the author for accuracy and edited/revised where necessary. The author takes responsibility for this content. 

2.9 Integration with other tools

API and plugin tools give other services (such as email, Teams or search engines) access to GenAI and extended functionality, improving automation and productivity. Staff building or configuring such integrations should follow OpenAI's Safety Best Practices:

  • Adversarial testing 
  • Human in the loop (HITL) 
  • Prompt engineering 
  • “Know your customer” (KYC) 
  • Constrain staff input and limit output tokens 
  • Allow staff to report issues 
  • Understand and communicate limitations 
  • End-user IDs.

API and plugin tools must be rigorously tested for:

  • Moderation – to ensure the model handles hateful, discriminatory, threatening or similar inputs appropriately.
  • Factual responses – provide a source of ground truth for the API and review responses against it.
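
To illustrate how several of these practices can fit together in a single integration, the following is a minimal sketch in Python using the OpenAI API. It is illustrative only: the draft_reply helper, the model name, the system message and the token limit are assumptions for this example rather than council standards, and any real integration must still go through the governance, DPIA and Information Governance approvals described above. The sketch shows input moderation, a constrained prompt, a limited output-token count and an end-user ID; a human must still review the returned draft before use (see section 2.4).

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    def draft_reply(staff_prompt: str, staff_id: str) -> str:
        # Moderation: reject hateful, threatening or otherwise flagged input
        moderation = client.moderations.create(input=staff_prompt)
        if moderation.results[0].flagged:
            raise ValueError("Prompt rejected by the moderation check")

        # Prompt engineering: constrain the model with a fixed system message
        # (the wording here is an assumption, not a council-approved prompt)
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # assumption: whichever model the council has licensed
            messages=[
                {"role": "system",
                 "content": "You draft council correspondence. Do not include personal data."},
                {"role": "user", "content": staff_prompt},
            ],
            max_tokens=300,        # limit output tokens
            user=staff_id,         # end-user ID, so requests can be traced and issues reported
        )
        return response.choices[0].message.content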

3. Risks

Use of GenAI carries inherent risks. A comprehensive risk assessment should be conducted, via a Data Protection Impact Assessment and DDaT assessments, for any project or process where use of GenAI is proposed. The risk assessment should consider potential impacts including: legal compliance; bias and discrimination; security (including technical protections and security certifications); and data sovereignty and protection.

GenAI may store sensitive data and information, which could be at risk of being breached or hacked. The council must assess technical protections and security certification of a GenAI tool before use. If staff have any doubt about the security of information input into GenAI, they should not use GenAI. 

Data entered into GenAI may enter the public domain. This can release non-public information and breach regulatory requirements, customer or vendor contracts, or compromise intellectual property. Any release of private/personal information without the authorisation of the information’s owner could result in a breach of relevant data protection laws. Use of GenAI to compile content may also infringe on regulations for the protection of intellectual property rights. Staff should ensure that their use of any GenAI complies with all applicable laws and regulations and with council policies. 

3.2 Data sovereignty and protection

While a GenAI platform may be hosted internationally, under data sovereignty rules information created or collected in the originating country remains under the jurisdiction of that country's laws. The reverse also applies: if information is sourced from GenAI hosted overseas, the laws of the source country regarding its use and access may apply. GenAI service providers should be assessed for their data sovereignty practices by any organisation wishing to use their GenAI.

4. Compliance

Any violations of this policy should be reported to the council's Information Governance Team or senior management. Failure to comply with this policy may result in disciplinary action, in accordance with the council's Human Resources policies and procedures.

5. Review

This policy will be reviewed periodically and updated as necessary to ensure continued compliance with all applicable legislation, regulations and organisational policies.  

6. Acknowledgement

By using GenAI, staff acknowledge that they have read and understood these guidelines, including the risks associated with the use of GenAI. 

This policy is based on guidance prepared by ALGIM (Aotearoa - New Zealand) and Socitm (UK). 

www.algim.org.nz  

www.socitm.net