Description of the technology

Artificial intelligence (AI) algorithm security focuses on protecting systems based on machine learning and AI algorithms from unwanted manipulation, attacks, malicious use, and data falsification. Threats affect both model training (attacks on training data, such as data poisoning) and use in production environments (attacks on input data, such as adversarial examples). With the growing use of AI in critical areas such as finance, health care, and public safety, protecting the integrity, credibility, and reliability of AI algorithms is essential to ensure their safe and ethical use.

Mechanism of action

  • AI algorithm security encompasses a set of practices for protecting the machine learning process, including selecting appropriate training data, validating input data, monitoring model performance, and testing resistance to attacks. These mechanisms help identify and block attempts to manipulate results and ensure that AI models behave as designed. Protection covers both pre-deployment testing and real-time monitoring.
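The input-validation and monitoring practices above can be sketched in a few lines. This is a minimal illustration, not a production defence: the `InputMonitor` class, the per-feature z-score check, and the threshold are all illustrative assumptions. A real deployment would combine statistical drift detection with model-specific checks.

```python
import statistics

class InputMonitor:
    """Minimal sketch: flag production inputs that drift far from
    training-data statistics using a per-feature z-score check."""

    def __init__(self, training_rows, z_threshold=3.0):
        # Column-wise mean and standard deviation of the training data.
        cols = list(zip(*training_rows))
        self.means = [statistics.fmean(c) for c in cols]
        # Guard against zero deviation on constant features.
        self.stdevs = [statistics.pstdev(c) or 1e-9 for c in cols]
        self.z_threshold = z_threshold

    def is_suspicious(self, row):
        # An input is suspicious if any feature lies more than
        # z_threshold standard deviations from the training mean.
        return any(
            abs(x - m) / s > self.z_threshold
            for x, m, s in zip(row, self.means, self.stdevs)
        )

# Toy training data: two features per row (illustrative values).
training = [[1.0, 10.0], [1.2, 11.0], [0.8, 9.5], [1.1, 10.5]]
monitor = InputMonitor(training)

print(monitor.is_suspicious([1.0, 10.2]))   # in-distribution -> False
print(monitor.is_suspicious([50.0, 10.2]))  # far outside training range -> True
```

In practice such a check would run before each prediction, rejecting or logging inputs that fall outside the distribution the model was trained on.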

Implementation of the technology

Required resources

  • Resistance testing tools: Systems to test models’ resistance to various types of attacks (e.g. the Fast Gradient Sign Method, FGSM, and Projected Gradient Descent, PGD).
  • Monitoring systems: Software for monitoring the performance of algorithms and their results.
  • Data analytics platforms: Tools to help assess the reliability of training data.
  • Model management systems: Tools for the protection and version control of AI models.
  • Data encryption systems: Mechanisms to secure input and training data.
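Since the resource list names FGSM, a minimal sketch of the attack may help: each input feature is nudged by a fixed step in the direction that increases the model's loss. The toy logistic-regression weights, the example input, and the step size `eps` below are illustrative assumptions, not a trained model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 from a toy logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method: perturb x in the direction that
    increases the cross-entropy loss. For logistic regression the
    gradient of the loss w.r.t. the input is (p - y) * w, and FGSM
    uses only its sign, scaled by eps."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0        # toy model parameters (assumed)
x, y = [1.5, 0.5], 1           # a correctly classified positive example
x_adv = fgsm(w, b, x, y, eps=0.8)

print(predict(w, b, x))        # high probability for class 1
print(predict(w, b, x_adv))    # probability drops after the perturbation
```

Resistance testing tools apply the same idea to full neural networks (where the gradient comes from backpropagation) and measure how much accuracy is lost under a given perturbation budget; PGD repeats this step iteratively with projection back into the allowed budget.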

Required competences

  • Threat analysis: Ability to identify threats typical of AI systems.
  • Machine learning: Knowledge of algorithms and processes for creating AI models.
  • Data security: Ability to secure data at various stages of processing.
  • Penetration testing: Knowledge of attack techniques and the corresponding defence methods in the context of AI.
  • Model management: Ability to monitor and audit AI algorithms in a production environment.

Environmental aspects

  • Energy consumption: High demand for computing resources to train and test AI models.
  • Emissions of pollutants: Emissions from the operation of data centres that support machine learning.
  • Raw material consumption: High demand for rare materials and electronic components in AI servers.
  • Recycling: Problems with recovering materials from obsolete computing systems.
  • Waste generated: Problems with disposal of equipment used for AI calculations.

Legal conditions

  • AI ethics regulations: Standards for transparency and ethics in the use of AI (e.g. AI Act).
  • AI security standards: Standards for protecting algorithms from attacks (e.g. ISO/IEC TR 24028).
  • Data protection regulations: Regulations for the protection of data used in AI processes (e.g. GDPR).
  • Standards for critical systems: AI security standards in sectors such as medicine and finance.
  • IT security: Regulations for IT risk management in the context of AI applications.

Companies using the technology