Generative Artificial Intelligence: Five things to consider

Posted: 3 November 2023

Artificial Intelligence (AI) has been dominating headlines following the world’s first summit on artificial intelligence safety held at Bletchley Park this month.

Large language models (LLMs) like OpenAI’s ChatGPT have been stirring up discourse on the future of Generative AI (GenAI), how regulation will be implemented, and how this impacts businesses and people across the world. 

The promise of AI is undeniable, offering transformative solutions and insights. When exploring new tools, including AI products, we first need to consider the implications for data protection, intellectual property and export control, so that we can embed safeguards into our usage.

Here are five things to consider when evaluating the use of AI: 

Data Ownership 

Treat any information or data shared with an AI system as potentially becoming publicly accessible. Some GenAI systems learn continually from their interactions with users, meaning the information you input becomes part of the model and you effectively lose control of that data.

We have already seen this happen with corporate data: Samsung trade secrets were accidentally leaked via ChatGPT after engineers used the service to help fix issues with source code. Without a full understanding of how GenAI systems operate, or control over what happens to the data you input, sharing personal data with these systems is unwise.
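One practical safeguard, in the spirit of embedding protections into our usage, is to strip obvious personal identifiers from text before it ever reaches an external GenAI service. The Python sketch below is a minimal illustration only: the regex patterns for email addresses and UK phone numbers are simplified assumptions, and real-world redaction would need far more robust, properly assured tooling.

```python
import re

# Illustrative patterns only; genuine redaction requires far more robust tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"(?:\+44|0)\d{2,4}[\s-]?\d{3}[\s-]?\d{3,4}"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise this email from jane.doe@example.com (tel 0131 496 1000)."
print(redact(prompt))
# Summarise this email from [EMAIL REDACTED] (tel [UK_PHONE REDACTED]).
```

Redaction of this kind reduces, but does not eliminate, the risk; the safest option remains not to share personal or confidential data with these systems at all.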

Ethics and Bias 

We must be mindful of content generated by AI, as it may not align with the University’s values and ethics. AI can produce biased content or misinterpret prompts if not developed with careful consideration, and without a greater understanding of how AI models are trained, it is challenging to trust their outputs.

Automated Decision-Making

Under data protection legislation, individuals have the right not to be subject to solely automated decision-making, with no human involvement, where it may have legal or other significant effects on them.

Transparency  

Transparency is essential when handling personal data if we are to meet our obligations under data protection legislation. We must be able to document how personal data is processed, limit what we process as far as possible, and communicate this effectively to the individuals whose data we are handling. We must also consider where AI servers are based: under data protection legislation, personal data must not be transferred outside the UK/EEA without appropriate due diligence and safeguards in place.

Accountability and Limitations

Do you have the expertise to verify that the output is accurate? And are you willing to take full responsibility for any inaccuracies you miss? AI systems still suffer from memorisation, where data you input can be reproduced verbatim in later outputs, disclosing information you did not intend to share more widely, and from hallucination, where the model generates plausible-sounding but untrue responses. Ultimately, you are accountable for any content you use.

If you are interested in finding out more, relevant guidance and resources are available online.

For information on managing data risks in particular, please take a look at the AI Toolkit from the Information Commissioner’s Office.