The EU AI Act becomes applicable across the EU, including Malta, on 2 August 2026 (you may read our general overview here). However, the Act's general provisions and its provisions on prohibited AI practices presenting an unacceptable level of risk will apply as early as 2 February 2025. With this deadline fast approaching, organisations subject to the AI Act must ensure compliance accordingly.
AI Literacy
By 2 February 2025, providers and deployers of AI systems, including those based in Malta, must take measures to ensure a sufficient level of AI literacy among their staff and any other persons operating or using AI systems on their behalf. In doing so, they must consider the context in which the AI systems will be used and their potential impact on affected persons or groups.
Prohibited AI Practices
In addition to the above, from 2 February 2025, the following AI practices shall be prohibited (effectively banned) across the EU, including Malta:
1. AI systems deploying subliminal, manipulative or deceptive techniques having the objective or the effect of materially distorting a person's behaviour or significantly impairing their ability to make informed decisions, causing, or being likely to cause, significant harm to that person or to other individuals;
2. AI systems exploiting a person's vulnerabilities arising from their age, disability or socioeconomic situation, with the objective or the effect of materially distorting their behaviour and causing, or being likely to cause, significant harm to that person or to other individuals;
3. AI systems which evaluate or classify individuals according to their social behaviour or personality characteristics, leading to unfavourable treatment of certain persons in social contexts which are unrelated to the context in which the data was originally obtained, or leading to unjustifiable unfavourable treatment of certain persons which is disproportionate to their social behaviour;
4. AI systems used specifically to assess the personality traits of individuals in order to predict criminal behaviour, except where such systems support a human assessment of a person's involvement in a criminal activity based on verifiable criminal activity data;
5. AI systems used specifically to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
6. AI systems used to infer a person's emotions in the workplace or in educational settings, except where intended and used for medical or safety reasons;
7. Biometric categorisation systems which individually categorise persons on the basis of their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, except for the labelling or filtering of biometric datasets acquired lawfully for law enforcement purposes;
8. AI systems which use real-time biometric identification in public spaces for law enforcement purposes, unless this is strictly necessary to:
- Find missing persons or victims of abduction, human trafficking, or sexual exploitation;
- Prevent imminent threats to life or safety, or terrorist threats;
- Identify suspects of crimes, conduct criminal investigations, or prosecute offences punishable by at least four years of imprisonment.
Consequences of Non-Compliance
In the event of non-compliance, offenders will be subject to administrative fines of up to €35,000,000 or, where the offender is an undertaking, up to 7% of its total worldwide annual turnover, whichever is higher.
With just days until the above prohibitions become applicable, organisations are urged to take steps to ensure compliance. Understanding which AI practices are being prohibited and training relevant personnel are crucial to aligning with the EU's expectations.
This document does not purport to give legal, financial or tax advice. Should you require further information or legal assistance, please do not hesitate to contact the authors and/or anyone from our Technology Law Team at iptmt@mamotcv.com