AI Act's Glossary: Essential Terms and Definitions
Our extensive glossary is crafted to guide you through the language of Artificial Intelligence (AI). This collection is meticulously assembled from terms featured in the EU Artificial Intelligence Act[1] along with additional vocabulary essential for AI fluency. Vaultinum, as a leading provider of AI assessment services, understands the importance of clarity and understanding in navigating this evolving field. With the intention of aiding professionals, enthusiasts, and stakeholders alike, we've curated this glossary to offer concise definitions and explanations of key terms outlined in the AI Act. Whether you're a seasoned expert or just beginning your journey in the realm of AI regulation, this resource serves as a valuable reference point.
We invite you to explore these terms at your convenience, empowering yourself with the knowledge necessary to engage effectively in discussions surrounding AI ethics, governance, and compliance.
What is an ‘AI system’?
An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
What is the ‘Artificial Intelligence Office’?
‘Artificial Intelligence Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems, general purpose AI models and AI governance. References in the AI Act to the Artificial Intelligence Office shall be understood as references to the Commission.
What is ‘AI literacy’?
‘AI literacy’ refers to the skills, knowledge and understanding that allow providers, users and affected persons, taking into account their respective rights and obligations in the context of the AI Act, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
What is an ‘AI regulatory sandbox’?
‘AI regulatory sandbox’ means a concrete and controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision.
What is a ‘sandbox plan’?
‘Sandbox plan’ means a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox.
What is an ‘authorised representative’?
‘Authorised representative’ means any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by the AI Act.
What is a ‘basic AI tool’?
A rule-based AI system comprising a set of human-coded rules that result in pre-defined outcomes would be a basic AI tool. These systems are suited to projects and applications requiring small amounts of data and simple, straightforward rules.
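By way of illustration only, the short Python sketch below shows what such a human-coded, rule-based system might look like; the refund rules, thresholds and categories are entirely hypothetical.

    # A minimal, hypothetical rule-based system: every outcome is a
    # pre-defined result of human-coded rules, with no learning involved.
    def approve_refund(amount_eur: float, item_returned: bool) -> str:
        # Rule 1: refunds are only possible once the item has been returned.
        if not item_returned:
            return "rejected"
        # Rule 2: small amounts are approved automatically.
        if amount_eur <= 50:
            return "approved"
        # Rule 3: anything larger is escalated to a human reviewer.
        return "manual_review"

    print(approve_refund(30, True))    # approved
    print(approve_refund(200, True))   # manual_review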
What is a ‘biometric categorisation system’?
‘Biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons.
What is ‘biometric data’?
‘Biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data.
What is ‘biometric identification’?
‘Biometric identification’ means the automated recognition of physical, physiological, behavioural, and psychological human features for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database.
What is ‘biometric verification’?
‘Biometric verification’ means the automated verification of the identity of natural persons by comparing biometric data of an individual to previously provided biometric data (one-to-one verification, including authentication).
What is a ‘remote biometric identification system'?
‘Remote biometric identification system’ means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.
What is meant by ‘real-time remote biometric identification system’?
‘Real-time remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention.
What is a ‘post remote biometric identification system'?
‘Post remote biometric identification system’ means a remote biometric identification system other than a ‘real-time’ remote biometric identification system.
What does ‘CE marking of conformity’ mean?
‘CE marking of conformity’ (CE marking) means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 of the AI Act and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing.
What does ‘common specification' mean?
‘Common specification’ means a set of technical specifications, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain requirements established under the AI Act.
What is a ‘conformity assessment’?
‘Conformity assessment’ means the process of demonstrating whether the requirements set out in Title III, Chapter 2 of the AI Act relating to a high-risk AI system have been fulfilled.
What is a ‘conformity assessment body’?
‘Conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection.
What is ‘context management’?
Context management involves integrating changes in context to enable the AI model to maintain performance levels in an evolving real world. Caching, on the other hand, involves reusing and repurposing cached data to minimize resource usage. Both are important elements for the continuous improvement of AI models.
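As a purely illustrative sketch of the caching side of this, the Python snippet below reuses previously computed results instead of recomputing them; the function and cache size are hypothetical.

    from functools import lru_cache

    # Hypothetical example: cache an expensive computation so that repeated
    # requests reuse previously computed results, minimizing resource usage.
    @lru_cache(maxsize=1024)
    def compute_features(user_id: int) -> tuple:
        # Placeholder for an expensive preprocessing step.
        return (user_id % 7, user_id % 13)

    print(compute_features(42))              # computed
    print(compute_features(42))              # served from the cache
    print(compute_features.cache_info())     # hit/miss statistics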
What is ‘critical infrastructure’?
‘Critical infrastructure’ means an asset, a facility, equipment, a network or a system, or a part thereof, which is necessary for the provision of an essential service within the meaning of Article 2(4) of Directive (EU) 2022/2557.
What is ‘data augmentation’?
Data augmentation is a technique used to artificially expand and diversify training datasets. This helps improve the quality and quantity of available data without using up resources. It is especially important in deep learning (DL), where limited data might lead to overfitting.
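For illustration, here is a minimal NumPy sketch of image-style augmentation; the random array stands in for a real training image, and the transformations shown are only a small subset of common techniques.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((32, 32))          # stand-in for one training image

    # Each transformed copy is added to the dataset, artificially
    # expanding and diversifying it without collecting new data.
    augmented = [
        image,
        np.fliplr(image),                 # horizontal flip
        np.flipud(image),                 # vertical flip
        np.rot90(image),                  # 90-degree rotation
        np.clip(image + rng.normal(0, 0.05, image.shape), 0, 1),  # added noise
    ]
    print(len(augmented), "training samples derived from 1 original image")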
What is ‘data visualization’?
Data visualization is important in terms of transparency, as it bridges the gap between AI decision-making and users. It converts complex AI outputs into intuitive, visual formats, making the abstract and often intricate patterns of AI algorithms accessible and comprehensible.
What is a ‘deep fake’?
‘Deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
What is ‘deep learning’?
Deep Learning (DL) is a type of AI and machine learning (ML) that uses multilayered neural networks to build models inspired by the human brain.
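As a purely illustrative sketch, the PyTorch snippet below defines a small multilayered network; the layer sizes and activation choices are arbitrary.

    import torch
    from torch import nn

    # A small multilayered ("deep") network: stacked layers of artificial
    # neurons loosely inspired by the structure of the human brain.
    model = nn.Sequential(
        nn.Linear(16, 32),   # input layer -> first hidden layer
        nn.ReLU(),
        nn.Linear(32, 32),   # second hidden layer
        nn.ReLU(),
        nn.Linear(32, 2),    # output layer (e.g. two classes)
    )

    x = torch.randn(4, 16)   # a batch of 4 example inputs
    print(model(x).shape)    # torch.Size([4, 2])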
What is a ‘deployer’?
‘Deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
What is a ‘disruptor’ in the context of AI models?
In the context of economic disruption, a disruptor refers to AI's capacity to challenge traditional business models and industries, potentially reshaping entire sectors or causing significant economic dislocation.
What is a ‘distributor’?
‘Distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
What is a ‘downstream provider’?
‘Downstream provider’ means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
What is an ‘emotion recognition system’?
‘Emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
What is an ‘enabler’ in the context of AI models?
In the social and economic sense, AI acts as an enabler by enhancing productivity and, in particular, augmenting human capabilities (helping accomplish tasks faster, more efficiently and with greater accuracy, and helping analyse huge quantities of data faster in order to make quicker and better-informed decisions), by solving societal problems such as those related to climate change, population growth or health issues, and by spurring innovation through helping generate new technologies.
What is meant by ‘experiments’?
Experiments can involve a variety of tests, and tracking them is important from an efficiency and productivity perspective. They typically include using different training and testing data, models with differing hyperparameters, running different code, or running the same code in different environment configurations.
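A minimal sketch of experiment tracking in plain Python is shown below; the hyperparameter grid and the metric are hypothetical, and dedicated experiment-tracking tools would normally be used instead.

    import csv
    import itertools

    # Hypothetical grid of hyperparameters to try.
    learning_rates = [0.1, 0.01]
    depths = [2, 4]

    def train_and_evaluate(lr, depth):
        # Placeholder for a real training run; returns a made-up accuracy.
        return round(0.7 + 0.05 * depth - lr, 3)

    # Record every experiment so that runs remain comparable and traceable.
    with open("experiments.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["learning_rate", "depth", "accuracy"])
        for lr, depth in itertools.product(learning_rates, depths):
            writer.writerow([lr, depth, train_and_evaluate(lr, depth)])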
What is ‘explainable AI (XAI)’?
Explainable AI (XAI) encompasses techniques and processes designed to make AI models' decisions and outputs understandable and transparent to human users. These methods aim to foster trust and clarity in AI applications by elucidating how models derive their predictions, facilitating both user confidence and regulatory compliance.
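One widely used technique in this family is permutation feature importance; the scikit-learn sketch below is illustrative, with the dataset and model chosen purely for convenience.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Permutation importance estimates how much each input feature
    # contributes to the model's predictions, helping explain its behaviour.
    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    top = result.importances_mean.argsort()[::-1][:3]
    print("Most influential features (by index):", top)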
What is a ‘feedback loop’?
A feedback loop is essential to enable the AI model to confirm or invalidate its decisions. It allows for the adjustment of parameters in order to enhance performance. A beneficial feedback loop typically involves bringing unbiased, external information into the system.
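By way of illustration, the scikit-learn sketch below mimics a simple feedback loop: external labels confirm or invalidate recent predictions, and the corrected examples are folded back into training. The data and model are synthetic placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

    # Initial model trained on the first batch of data.
    model = LogisticRegression().fit(X[:150], y[:150])

    # Feedback step: unbiased, external ground truth for recent predictions
    # confirms or invalidates the model's decisions...
    X_new, y_feedback = X[150:], y[150:]
    print("accuracy before feedback:", model.score(X_new, y_feedback))

    # ...and the corrected examples are folded back into training.
    model = LogisticRegression().fit(X, y)
    print("accuracy after retraining with feedback:", model.score(X_new, y_feedback))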
What is a ‘floating-point operation’?
‘Floating-point operation’ means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.
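As a concrete illustration, Python's math.frexp exposes a closely related decomposition of a float into a significand scaled by a base-2 exponent; any arithmetic on such numbers counts as a floating-point operation.

    import math

    x = 6.75
    mantissa, exponent = math.frexp(x)       # x == mantissa * 2**exponent
    print(mantissa, exponent)                # 0.84375 3
    print(mantissa * 2 ** exponent == x)     # True

    # Any arithmetic on such numbers is a floating-point operation;
    # the single multiplication below counts as one FLOP.
    y = x * 1.5
    print(y)                                 # 10.125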
What is a ‘general purpose AI model’?
‘General purpose AI model’ means an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities.
What is a ‘general purpose AI system’?
‘General purpose AI system’ means an AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.
What is ‘generative AI’?
Generative AI uses ML and DL (through techniques such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs)) to generate new, unique content, such as images, videos, or even text.
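Purely as a sketch of the idea behind GANs, the PyTorch snippet below defines a toy generator and discriminator; the training loop that pits them against each other is omitted, and all sizes are arbitrary.

    import torch
    from torch import nn

    # Generator: maps random noise to a synthetic sample (a fake 28x28 "image").
    generator = nn.Sequential(
        nn.Linear(8, 64), nn.ReLU(),
        nn.Linear(64, 28 * 28), nn.Tanh(),
    )
    # Discriminator: estimates the probability that a sample is real.
    discriminator = nn.Sequential(
        nn.Linear(28 * 28, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid(),
    )

    noise = torch.randn(1, 8)
    fake_image = generator(noise)
    print(discriminator(fake_image))   # probability the fake is judged "real"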
What does a ‘harmonised standard’ mean?
‘Harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012.
What are ‘high-impact capabilities’?
‘High-impact capabilities’ in general purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general purpose AI models.
What is an ‘importer’?
‘Importer’ means any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union.
What is ‘informed consent’?
‘Informed consent’ means a subject's freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate.
What is ‘input data’?
‘Input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output.
What does ‘instructions for use’ mean?
‘Instructions for use’ means the information provided by the provider to inform the user of, in particular, an AI system’s intended purpose and proper use.
What does ‘intended purpose' mean?
‘Intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
What is an ‘invariance’?
An invariance refers to a property whereby an output remains unchanged irrespective of transformations applied to the input. For example, in image processing, this could mean that the content of an image remains relevant and recognizable regardless of whether the image is enlarged or rotated. Identifying such invariances is crucial for ensuring that AI models remain robust and accurate across various input modifications.
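As a small illustration, the Python sketch below checks whether a hypothetical classifier's output is unchanged under rotations of its input; predict_label is a placeholder, not a real model.

    import numpy as np

    def predict_label(image: np.ndarray) -> int:
        # Hypothetical classifier: it simply thresholds the mean intensity,
        # a rule that happens to be rotation-invariant.
        return int(image.mean() > 0.5)

    rng = np.random.default_rng(1)
    image = rng.random((32, 32))

    # The output should be identical for every rotated version of the input.
    rotations = [np.rot90(image, k) for k in range(4)]
    print({predict_label(r) for r in rotations})   # a single value if invariant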
What is meant by ‘law enforcement authority’?
‘Law enforcement authority’ means:
(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or
(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security.
What is meant by ‘law enforcement’?
‘Law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security.
What is ‘machine learning’?
Machine Learning (ML) is a type of AI using algorithms to learn and improve from training data.
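As an illustrative example, the scikit-learn snippet below fits a simple classifier to training data and evaluates it on data it has not seen; the dataset and model are chosen purely for convenience.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # The algorithm learns patterns from training data rather than
    # following hand-written rules.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("accuracy on unseen data:", model.score(X_test, y_test))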
What does ‘making available on the market’ mean?
‘Making available on the market’ means any supply of an AI system or a general purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.
What is the ‘market surveillance authority’?
‘Market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020.
What are ‘MLOps’?
MLOps (Machine Learning Operations) refers to a set of practices aimed at automating and standardizing processes across the ML lifecycle, from data collection to post-deployment.
What does ‘notifying authority’ mean?
‘Notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.
What is a ‘notified body’?
‘Notified body’ means a conformity assessment body notified in accordance with the AI Act and other relevant Union harmonisation legislation.
What does ‘national competent authority’ mean?
‘National competent authority’ means any of the following: the notifying authority and the market surveillance authority. As regards AI systems put into service or used by EU institutions, agencies, offices and bodies, any reference to national competent authorities or market surveillance authorities in the AI Act shall be understood as referring to the European Data Protection Supervisor.
What does ‘performance of an AI system' mean?
‘Performance of an AI system’ means the ability of an AI system to achieve its intended purpose.
What is ‘personal data’?
‘Personal data' means personal data as defined in Article 4, point (1) of Regulation (EU) 2016/679.
What is ‘non-personal data’?
‘Non-personal data’ means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679.
What is ‘profiling’?
‘Profiling’ means any form of automated processing of personal data as defined in point (4) of Article 4 of Regulation (EU) 2016/679 or, in the case of law enforcement authorities, in point (4) of Article 3 of Directive (EU) 2016/680 or, in the case of Union institutions, bodies, offices or agencies, in point (5) of Article 3 of Regulation (EU) 2018/1725.
What does ‘placing on the market’ mean?
‘Placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market.
What is a ‘post-market monitoring system’?
‘Post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions.
What is ‘prediction’ in the context of ML?
Prediction in ML generally refers to a model's ability to estimate possible outcomes on the basis of historical data. An AI model is a program or algorithm that at its core relies on training data to recognize patterns and make predictions or decisions.
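For illustration only, the sketch below fits a simple model to hypothetical historical data and predicts the next value; the figures are made up.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical historical data: twelve past monthly sales figures.
    months = np.arange(1, 13).reshape(-1, 1)
    sales = np.array([100, 104, 110, 113, 120, 123, 129, 133, 140, 143, 150, 154])

    # The model learns the historical pattern and predicts the next month.
    model = LinearRegression().fit(months, sales)
    print("predicted sales for month 13:", model.predict([[13]])[0])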
What is a ‘provider’?
‘Provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.
What is ‘publicly accessible space’?
‘Publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.
What does ‘putting into service’ mean?
‘Putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
What is an ‘operator’?
‘Operator’ means the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor.
What is a ‘real world testing plan’?
‘Real world testing plan’ means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions.
What does ‘reasonably foreseeable misuse’ mean?
‘Reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems.
What does ‘recall of an AI system' mean?
‘Recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system made available to deployers, or at taking it out of service or disabling its use.
What is ‘reproducibility’?
In the domain of AI, particularly machine learning, reproducibility refers to the extent to which identical or similar results can be achieved by rerunning the algorithm on specific datasets within a given project. Depending on the model and its objectives, reproducibility may emphasize outcomes, analyses, or inferences, with a focus on drawing consistent conclusions.
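A minimal illustration in Python: fixing the random seed makes a rerun of the same pipeline produce the same result.

    import numpy as np

    def run_experiment(seed: int) -> float:
        # Fixing the seed makes the "random" parts of the pipeline repeatable.
        rng = np.random.default_rng(seed)
        data = rng.normal(size=1000)
        return float(data.mean())

    # Rerunning with the same seed reproduces the result exactly.
    print(run_experiment(42) == run_experiment(42))   # True
    print(run_experiment(42) == run_experiment(43))   # False (different seed)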
What is ‘risk’?
‘Risk’ means the combination of the probability of an occurrence of harm and the severity of that harm.
What does ‘safety component of a product or system’ mean?
‘Safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons or property.
What are ‘special categories of personal data’?
‘Special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725.
What is ‘sensitive operational data’?
‘Sensitive operational data’ means operational data related to activities of prevention, detection, investigation and prosecution of criminal offences, the disclosure of which can jeopardise the integrity of criminal proceedings.
What is a ‘serious incident’?
‘Serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
(a) the death of a person or serious damage to a person’s health;
(b) a serious and irreversible disruption of the management and operation of critical infrastructure;
(ba) breach of obligations under Union law intended to protect fundamental rights;
(bb) serious damage to property or the environment.
What is a ‘subject’?
‘Subject’ for the purpose of real world testing means a natural person who participates in testing in real world conditions.
What is a ‘substantial modification’?
‘Substantial modification’ means a change to the AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of the AI Act is affected or results in a modification to the intended purpose for which the AI system has been assessed.
What is ‘systemic risk at Union level'?
‘Systemic risk at Union level’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the internal market due to its reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.
What is ‘testing data’?
‘Testing data’ means data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service.
What does ‘testing in real world conditions’ mean?
‘Testing in real world conditions’ means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of the AI Act; testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of the AI Act, provided that all conditions under Article 53 or Article 54a are fulfilled.
What is ‘training data’?
‘Training data’ means data used for training an AI system through fitting its learnable parameters.
What is ‘validation data’?
‘Validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset is a separate dataset or part of the training dataset, either as a fixed or variable split.
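The training, validation and testing data definitions above correspond to the familiar dataset split in ML practice; the scikit-learn sketch below is illustrative and the proportions are arbitrary.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # First split off the test set (kept aside for the final, independent
    # evaluation), then split the remainder into training and validation data.
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

    print(len(X_train), "training /", len(X_val), "validation /", len(X_test), "testing samples")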
What is ‘widespread infringement’?
‘Widespread infringement’ means any act or omission contrary to Union law that protects the interest of individuals:
(a) which has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State in which:
(i) the act or omission originated or took place;
(ii) the provider concerned, or, where applicable, its authorised representative is established; or
(iii) the deployer is established, when the infringement is committed by the deployer;
(b) which has caused, causes or is likely to cause harm to the collective interests of individuals and which has common features, including the same unlawful practice or the same interest being infringed, and is occurring concurrently, committed by the same operator, in at least three Member States.
What does ‘withdrawal of an AI system' mean?
‘Withdrawal of an AI system’ means any measure aimed at preventing an AI system in the supply chain being made available on the market.
References:
[1] https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf