A

Accountability
The obligation to take responsibility for the outcomes of AI decisions and to provide remedies for harm.
Adversarial Attack
A technique that deliberately manipulates input data to exploit vulnerabilities and cause machine learning/AI systems to behave incorrectly.
Agentic AI
AI systems that can plan, call tools, and take multi-step actions toward goals, within set boundaries.
Algorithm
A set of step-by-step instructions or rules that an AI/machine learning system follows to solve a problem or make a decision.
Algorithmic Accountability
The responsibility of organizations/developers to explain, rationalize, and take ownership of AI decisions and their impacts.
Algorithmic Drift (or Model Drift)
A gradual decline in AI/machine learning model performance over time. Algorithmic drift is a type of Concept Drift.
Algorithmic Impact Assessment (AIA)
A structured assessment of the potential impacts and risks of an AI system.
Alignment
Ensuring AI systems act consistently with an organization's preferences, goals, and ethics.
Anonymization
Irreversibly altering data so individuals cannot be identified.
Application Programming Interface (API)
A set of rules and protocols that allows different software applications to communicate and interact with each other, serving as an intermediary that enables applications to request and exchange data or functionality seamlessly.
Artificial Intelligence (AI)
The field of computer science focused on creating systems or machines that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, understanding natural language, perception (like vision and hearing), and decision-making.
Audit (AI Audit)
A systematic review of AI systems for compliance with ethical and regulatory standards.
Audit Trail (AI)
Documented evidence of AI system development, deployment, and updates to enable oversight. AI audit trails also record inputs, outputs, model behaviour, and decision-making logic so that decisions can be traced back to specific data and model parameters.
Autonomy
The ability of an AI system to operate without human intervention.

B

Backdoor Attack
A hidden trigger planted during training that makes a model behave incorrectly when that trigger appears.
Benchmarking
Comparing model performance against standard datasets or other metrics.
Bias
Systematic errors in AI outputs that result in unfair treatment or outcomes for certain groups.
Bias Mitigation
The processes and techniques used during the AI lifecycle (data collection, model training, deployment) to identify, measure, and reduce the systemic errors or prejudices that lead to unfair outcomes.

C

Cache-Augmented Generation (CAG)
A technique that improves generative AI models by using a cache to store and reuse past information, making responses faster and more contextually relevant.
Capability Control (AI Confinement)
Techniques to limit or constrain what an AI system can do (through monitoring and control).
Chatbot
A software application designed to simulate human conversation through text or voice commands.
Compliance
The act of adhering to established standards, policies, or regulations.
Compute
The computational power (hardware, energy, and time) required to train or run AI models.
Computer Vision
AI systems that analyze and interpret visual data (e.g., images and video).
Concept Drift
When the statistical relationships a model learned (i.e., between the input data and the target variable) have changed, making its predictions less accurate. This frequently occurs as real-world conditions evolve, so the relationships the model was trained on no longer hold.
Convolutional Neural Network (CNN)
A neural network for images and video that scans small patches to detect edges, textures, and shapes.
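A minimal sketch of the patch-scanning step a CNN performs: sliding a small filter (kernel) over an image to detect a feature. The toy image and vertical-edge kernel below are illustrative assumptions, not a full CNN.

```python
# Valid 2D convolution (no padding, stride 1) over a 2D list of pixel values.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Multiply the kernel against the image patch and sum.
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# Toy image: a single bright vertical stripe in the middle column.
image = [[0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0]]

# Classic vertical-edge detector: responds where brightness changes left-to-right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

print(convolve2d(image, kernel))  # strong positive then negative response around the stripe
```

A real CNN stacks many such filters (with learned values) and pooling layers, but the sliding-window computation is the same.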

D

Data Governance
Policies and practices that ensure data quality, privacy, and ethical compliance.
Data Leakage (train-test contamination)
When evaluation data overlaps with training data, giving an overly optimistic performance score.
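A hedged sketch of one common contamination pattern: computing preprocessing statistics (here, a mean) on all data before splitting lets information about the test set leak into training. The numbers are toy values.

```python
data = [1.0, 2.0, 3.0, 100.0]  # the last point will become the "test" item

# LEAKY: the mean is computed over train AND test data before the split,
# so the extreme test point influences training-time preprocessing.
leaky_mean = sum(data) / len(data)
train, test = data[:3], data[3:]

# CORRECT: compute preprocessing statistics on the training split only.
train_mean = sum(train) / len(train)

leaky_train = [x - leaky_mean for x in train]
clean_train = [x - train_mean for x in train]
print(leaky_mean, train_mean)
```

The same principle applies to scaling, feature selection, and any other step fitted on data: fit on the training split only, then apply to the test split.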
Data Minimization
Collecting and using only the data needed for a specific, stated purpose.
Data Provenance
The documentation of where data comes from and how it has been used or altered.
Data Protection
Legal frameworks (e.g., GDPR, PIPEDA) ensuring responsible data handling and safeguarding individual rights.
Data Sovereignty
Data is subject to the laws of the country or region where it was generated.
Decommissioning / Retiring
The process of safely phasing out or disabling an AI system.
Deep Learning
A type of AI that learns from data by way of multi-layered neural networks to process complex data patterns.
De-identification
Reducing the link to an individual by removing or masking identifiers; re-identification may still be possible.
Differential Privacy
A technique that adds carefully calibrated noise so insights can be shared without revealing information about any one person.
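A minimal sketch of the Laplace mechanism, one standard way to achieve differential privacy: noise scaled to sensitivity/epsilon is added to a count before release. The sensitivity of 1 (one person changes a count by at most 1) and the example count are assumptions for illustration.

```python
import math
import random

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Release a count with Laplace noise of scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                 # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-transform sampling of Laplace(0, scale) noise.
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(42)
releases = [laplace_count(100, epsilon=1.0) for _ in range(20000)]
print(sum(releases) / len(releases))  # unbiased: the average hovers near the true count
```

Smaller epsilon means more noise and stronger privacy; each single release can differ noticeably from the true count, which is the point.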
Diffusion Model
A generative model that starts with noise and gradually removes it to create realistic images, audio, or video.

E

Edge AI
AI processing done locally on a device (e.g., smartphone) rather than in the cloud.
Embedding
A numerical representation of data (e.g., words or images) that captures semantic meaning.
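A toy illustration of why embeddings are useful: semantically similar items get nearby vectors, so a measure like cosine similarity captures relatedness. The 3-dimensional vectors below are made-up stand-ins for real learned embeddings, which typically have hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: "cat" and "dog" point in similar directions.
emb = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

print(cosine_similarity(emb["cat"], emb["dog"]))  # high: related concepts
print(cosine_similarity(emb["cat"], emb["car"]))  # low: unrelated concepts
```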
Ethical Impact Assessment (EIA)
A process to evaluate potential social, cultural, and ethical implications of AI before and during deployment. Related assessments are mandated in some frameworks; for example, the EU AI Act requires a fundamental rights impact assessment for certain high-risk AI systems before deployment.
Ethics (AI)
The study and application of moral principles in the design, development, and use of AI.
Explainability
The degree to which an AI system’s decision-making process can be understood by humans.
Explainable AI (XAI)
A field of AI focused on improving explainability, transparency, and interpretability of AI systems.

F

Federated Learning
A technique allowing models to be trained across multiple decentralized devices or servers while keeping data local for privacy.
Feedforward Neural Network (FNN)
The simplest neural network where information flows straight from input to output.
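A minimal forward pass through such a network, in pure Python: inputs flow through one hidden layer to an output with no loops back. The weights are hand-picked (rather than learned) so this tiny network computes XOR, a classic example of what a hidden layer makes possible.

```python
def relu(v):
    # Rectified linear activation: negative values become zero.
    return [max(0.0, x) for x in v]

def linear(x, W, b):
    # One fully connected layer: weighted sums plus biases.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x):
    h = relu(linear(x, W1, b1))   # hidden layer
    return linear(h, W2, b2)[0]   # single output unit

# Hand-picked weights that make this network compute XOR of two inputs.
W1, b1 = [[1.0, 1.0], [1.0, 1.0]], [0.0, -1.0]
W2, b2 = [[1.0, -2.0]], [0.0]

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, forward(x))  # 0, 1, 1, 0 respectively
```

In practice the weights are learned from data via backpropagation; only the flow of computation is shown here.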
Few-shot Learning
Adapting a model to a new task using only a small number of labeled examples.
Few-shot Prompting
Improving outputs by including a few examples in the prompt.
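A sketch of how a few-shot prompt is typically assembled: a handful of worked examples is prepended so the model can infer the task and output format. The sentiment task, examples, and template below are illustrative assumptions.

```python
def few_shot_prompt(examples, query):
    """Build a prompt containing labeled examples followed by the new query."""
    lines = ["Classify the sentiment as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The new input ends with an open slot for the model to complete.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Great product, works perfectly.", "positive"),
    ("Broke after one day.", "negative"),
]
prompt = few_shot_prompt(examples, "Exceeded my expectations!")
print(prompt)
```

The assembled string would then be sent to a generative model; contrast this with zero-shot prompting, where no examples are included.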
Fine-Tuning
Adapting a pre-trained model to a specific domain or use case using specialized datasets.
Foundation Model
A large, pre-trained model that can be fine-tuned for other tasks.
Frontier Model
A highly advanced, large-scale AI model that pushes the boundaries of current capabilities.

G

Generalization
A model’s ability to perform well on new, unseen data.
Generative Adversarial Network (GAN)
Two models trained together: one generates synthetic data while the other tries to detect fakes, pushing the generator toward greater realism.
Generative AI (GenAI)
AI systems capable of creating new content—text, images, music, code—based on training data (e.g., ChatGPT, DALL·E, Claude).
Goodhart's Law
In AI, it is the principle that “when a measure becomes a target, it ceases to be a good measure.”
Governance (AI)
The systems, policies, and processes used to direct and control AI technologies responsibly.

H

Hallucination
A generative AI output that is fabricated or factually incorrect but presented as fact.
Human-in-the-Loop (HITL)
An approach where humans are actively involved with the AI through oversight, judgement, and feedback.

I

Inclusive AI
The intentional practice of designing, developing, and deploying AI systems that are accessible, fair and beneficial to all people from all backgrounds.
Inference
The process that a trained AI model uses to generate predictions or outputs from new data.
Inference Engine
The component of an AI system that applies logical rules to a knowledge base in order to make predictions, reach decisions, or derive new information.

K

Knowledge Graph
A structured network of entities and relationships used to ground answers and reasoning.

L

Label (target)
The outcome a model is trained to predict.
Large Language Model (LLM)
A type of foundation model trained on massive text data to understand and generate human-like language outputs.
Library (AI)
A reusable collection of code, tools, and sometimes prebuilt models that developers import to handle common AI tasks—such as data processing, training, and inference—without writing everything from scratch.
Lifecycle (AI)
The stages of AI system development from design and training to deployment, monitoring, and retirement.

M

Machine Learning (ML)
A subset of AI that enables systems to learn from data and improve performance over time without being explicitly programmed.
Model
An AI program that makes predictions or generates outputs by recognizing patterns in data.
Model Card / System Card
Documentation that explains how a model was trained and evaluated, and its appropriate use cases.
Multimodal Model
A model that can process and combine multiple data types (e.g., text + images + sound).

N

Natural Language Processing (NLP)
A branch of AI that helps machines recognize, understand, and respond to human language.
Neural Network
A type of AI inspired by the human brain’s structure, used to identify patterns and relationships in data.

O

Oversight Committee (AI)
A body responsible for monitoring AI use within an organization to ensure ethical and regulatory compliance.

P

Parameter
An internal variable of a model, adjusted during training, that determines how input data is transformed into outputs.
Pipeline
A series of data processing steps that prepare and train AI models.
Policy (AI)
A set of rules and/or guidelines governing how AI can be developed, deployed, and used – consistent with applicable AI regulation.
Predictive Analytics
The use of data, statistical algorithms, and AI to forecast future outcomes or behaviors.
Privacy Impact Assessment
A method to assess and address privacy risks when collecting, using or sharing personal information.
Prompt Engineering
Designing effective instructions and questions (aka prompts) to guide generative AI outputs.

R

Red Teaming
Testing AI systems by intentionally probing for vulnerabilities, bias, or harmful outputs.
Regulation
Legally binding rules set by governments or regulatory authorities to control aspects of AI deployment.
Regulation (AI)
Public sector policies and laws setting rules for AI design, deployment, and accountability (e.g., EU AI Act).
Regulatory Sandbox
A controlled setting that allows testing AI systems under relaxed regulatory conditions to support innovation.
Reinforcement Learning
A machine learning approach where models learn by receiving feedback (rewards or penalties) for their actions.
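The feedback loop above can be sketched with the core tabular Q-learning update, which nudges a value estimate toward the observed reward plus discounted future value. The two-state environment, actions, and constants below are illustrative assumptions.

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward reward + discounted best future value."""
    best_next = max(q[next_state].values())
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])

# Q-table for states "start" and "goal" with actions "left"/"right".
q = {"start": {"left": 0.0, "right": 0.0},
     "goal":  {"left": 0.0, "right": 0.0}}

# Repeated experience: in "start", taking "right" reaches "goal" with reward 1.
for _ in range(20):
    q_update(q, "start", "right", reward=1.0, next_state="goal")

print(q["start"])  # "right" is now clearly preferred over "left"
```

After enough rewarded experiences, the value of the rewarded action dominates, which is how the model "learns from feedback" without explicit labels.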
Responsible AI
An umbrella term and practice that focuses on developing and deploying AI systems in a safe, fair, transparent, and environmentally conscious manner, aligning with human values.
Retraining
Updating a model with new or more representative data to maintain accuracy.
Retrieval Augmented Generation (RAG)
A pattern where a model looks up trusted documents and uses them to ground its answer.
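A toy sketch of the retrieval half of RAG: score documents against the question, then build a grounded prompt from the best match. Real systems retrieve with embedding search over a vector store; the word-overlap scorer and documents here are simplifying assumptions.

```python
DOCS = [
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping takes 5 to 7 business days for standard orders.",
    "Support is available by email from 9am to 5pm on weekdays.",
]

def words(text):
    # Lowercase word set with trailing punctuation stripped.
    return set(w.strip(".,!?") for w in text.lower().split())

def retrieve(question, docs):
    # Pick the document sharing the most words with the question.
    q = words(question)
    return max(docs, key=lambda d: len(q & words(d)))

def grounded_prompt(question):
    context = retrieve(question, DOCS)
    return (f"Answer using only this context.\n"
            f"Context: {context}\n"
            f"Question: {question}")

print(grounded_prompt("What is the refund policy for returns?"))
```

The grounded prompt is then sent to the generative model, which answers from the retrieved text rather than from memory alone.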
Risk-Based Approach
Regulatory method that tailors oversight based on the potential risk an AI system poses to individuals or society.

S

Safety by Design
A principle emphasizing the proactive integration of safety and risk prevention into AI development at the design level.
Shadow Uses
Employees using unauthorized AI tools without IT's knowledge or oversight.
Soft Law
Non-binding standards, guidelines, or best practices (e.g., OECD AI Principles, UNESCO AI Ethics Recommendation).
Synthetic Data
Artificially generated data used to train AI systems when real data is unavailable or sensitive.
Synthetic Media (Deep Fakes)
AI-generated content, such as virtual influencers or fabricated videos, that appears real but is entirely machine-made.

T

Temperature
A setting that controls randomness in generation; higher means more varied outputs.
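A small sketch of how temperature actually works: a model's raw scores (logits) are divided by the temperature before the softmax that turns them into probabilities, so low temperature sharpens the distribution and high temperature flattens it. The logits below are made-up values for three candidate tokens.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.5)  # peaked: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: more varied sampling
print(cold[0], hot[0])
```

Sampling from the "cold" distribution almost always picks the top token; sampling from the "hot" one produces more varied outputs.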
Token
A unit of text (e.g., a word, word fragment, or punctuation symbol) that a language model processes.
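An illustrative sketch only: production LLM tokenizers (e.g., byte-pair encoding) learn subword vocabularies from data, but this simple regex split shows the basic idea of text becoming discrete units a model can process.

```python
import re

def simple_tokenize(text):
    # Split into word runs and individual punctuation symbols.
    return re.findall(r"\w+|[^\w\s]", text)

print(simple_tokenize("AI models read tokens, not sentences!"))
```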
Training Data
The dataset used to teach an AI model to recognize patterns or perform tasks.
Transfer Learning
Reusing knowledge from a pre-trained model to improve performance on a new task.
Transformer
A neural network architecture that uses attention to process sequences efficiently (the basis for most modern LLMs).
Transparency
Openness about how an AI system operates, including data sources, algorithms, and intended uses.

U

Unsupervised Learning
Finding patterns or clusters in unlabeled data.

V

Validation Data
A set of data used during model development to tune parameters and prevent overfitting.

Z

Zero-shot Learning
Solving a new task with no labeled examples by relying on what the model already knows and a clear description.
Zero-shot Prompting
Asking a model to perform a task without providing any examples in the prompt.