Although many of the conversations around Artificial Intelligence are complex, they should by no means be exclusionary. AI should be accessible to all.
Given the deep impact this technology will have on society, everyone should have the opportunity to participate in discussions about AI - whether you have a PhD in Machine Learning, or whether the AI Fringe is your first tech event.
Below are a few terms attendees can expect to hear throughout the week. You will also find resources we recommend at the bottom of this page for anyone who wants to learn more.
AGI (Artificial General Intelligence) refers to AI systems that can perform a wide range of tasks (unlike ‘narrow’ AI) and match, if not exceed, human performance.
There are many different definitions of AI. They generally refer to computer-based systems that can perform complex tasks considered to require ‘human-like’ intelligence or learning, like playing chess or navigating roads.
Assurance refers to the processes that ensure humans can understand and control AI systems during operation.
Bias occurs when an algorithm produces results or outcomes that are systematically prejudiced or unfair. This is often - but not always - because the algorithm has been trained on a biased dataset.
Explainability is when humans are able to understand how an AI system works and why it produces certain outputs.
There are several definitions of frontier AI. Generally, these refer to the most advanced and capable AI systems. To read more about how the government will be defining frontier AI at the upcoming UK AI summit, see here.
Governance refers to the policies and practices an organisation puts in place to oversee, safely manage, and control the development and deployment of AI systems.
Machine learning is the process whereby machines develop the capability to complete tasks through ‘learning’: spotting patterns in datasets, or repeatedly attempting a task and improving each time, rather than following a fixed set of instructions.
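To make this concrete, here is a deliberately tiny, hypothetical sketch in Python. Nobody tells the program the rule ‘multiply by two’; instead it adjusts a single number to shrink its error over repeated passes through the examples, which is the core idea behind machine learning.

```python
# Examples where the hidden rule is: output = 2 * input
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

weight = 0.0  # the model's single adjustable parameter
for _ in range(100):          # repeated passes over the examples
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= 0.01 * error * x  # nudge the parameter to shrink the error

print(round(weight, 2))  # the learned rule: multiply by roughly 2
```

Real systems work the same way in spirit, but with millions or billions of adjustable parameters instead of one.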
Narrow AI is an AI system that can perform a single specific task. For example, it might be able to play chess or navigate roads, but not both. Most of today’s AI systems are ‘narrow’.
Reinforcement learning occurs when a machine repeats a task and learns from a ‘reward signal’ when it has got something right (like giving a well-behaved dog a treat).
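The dog-treat analogy above can be sketched in a few lines of Python. This is a hypothetical toy example: the program tries two actions, receives a reward signal only for the ‘right’ one, and gradually comes to prefer it.

```python
import random

random.seed(0)

reward = {"sit": 1.0, "bark": 0.0}   # only "sit" earns a treat
value = {"sit": 0.0, "bark": 0.0}    # the machine's estimate of each action

for _ in range(200):
    # mostly pick the action it currently rates highest, occasionally explore
    if random.random() < 0.1:
        action = random.choice(["sit", "bark"])
    else:
        action = max(value, key=value.get)
    # move the estimate towards the reward actually received
    value[action] += 0.1 * (reward[action] - value[action])

print(max(value, key=value.get))  # the learned preference: "sit"
```

The same reward-driven loop, scaled up enormously, is how reinforcement learning systems learn to play games or control robots.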
Risk refers to outputs of an AI system that may cause harm or damage to humans and society. The extent of potential risk from AI systems is highly contentious and debated, and ranges from minor to existential (threatening human existence itself).
Safety is the concern with preventing any level of harm resulting from the outputs of an AI system, including accidental harm and misuse.
Training data is the large set of data used to train a machine learning model. The machine is fed the data to teach it to identify patterns and make predictions.
Transparency means providing information and data that allow humans to have visibility over a model and understand how it works, including its development, training, operation, deployment and outputs.