Questo sito utilizza cookie tecnici e di terze parti per migliorare l'esperienza.
Within the context of growing global concern about the responsible use of artificial intelligence, a not-for-profit association dedicated to training on the risks of AI represents an initiative of significant social value. This organisation would position itself within a rapidly evolving regulatory landscape, where AI legislation is being simultaneously disseminated at both global and European levels, creating a strong need for awareness and competence amongst developers, business users, professionals and citizens. The Association could establish itself as a third-party educational entity, providing awareness programmes, specialised training and educational resources on AI-related risks, raising funds through partnerships, public and private contributions, membership fees and the support of entities that recognise the importance of responsible technological awareness.
At the global level, the regulatory landscape for AI is profoundly fragmented and asymmetrical. There is not yet a unified global law; rather, different countries and regions have adopted radically different approaches, reflecting their geopolitical, cultural and economic priorities. Whilst some countries have legislated firmly, others have chosen lighter and more flexible approaches, encouraging self-regulation and voluntary compliance.
The European Union has established global primacy with the approval of the Artificial Intelligence Act (AI Act), formally the Regulation (EU) 2024/1689. This represents the first comprehensive and systematic legal framework ever developed at global level specifically dedicated to AI. Approved on 13 June 2024 and published in the Official Journal of the European Union on 12 July 2024, the AI Act entered into force 20 days after publication, with phased implementation of its various articles through to February 2025 and beyond. This regulation of 113 articles and 13 annexes represents a sophisticated and systemic regulatory response.
Structure and Risk Classification: The AI Act adopts a risk-based approach, classifying AI systems into four distinct categories:
Unacceptable Risk: Systems of AI wholly prohibited, such as remote facial recognition in public spaces used by law enforcement for systematic monitoring of persons. These applications represent a direct threat to fundamental rights and personal security.
High Risk: Applications requiring the highest level of compliance, including systems used in critical infrastructure, education, employment, essential services and administrative decisions. For these systems, providers must implement conformity assessments, registration, rigorous technical documentation, human oversight, elevated data quality, transparency and information security robustness.
Limited Risk: Technologies such as AI-based chatbots that must explicitly inform users that they are interacting with automated systems, ensuring transparency in interactions.
Minimal Risk: Applications already widely deployed such as spam filters and intelligent video games, subject to minimal regulatory constraints.
Obligations for Suppliers and Developers: The AI Act imposes a complex series of binding obligations upon providers of high-risk AI systems:
Risk assessment: All systems must be subject to risk assessments based on their potential decision-making impact
Data management: Rigorous requirements for the training, validation and testing of datasets
Technical documentation: Obligation to prepare and maintain comprehensive documentation of the system prior to its placing on the market
Operational logging: Maintenance of detailed logs of operations carried out (Article 12)
Human oversight: Implementation of robust mechanisms for human control over automated decision-making processes
Transparency: Provision of clear information to installers and end users
Fundamental rights impact assessment: Execution of Fundamental Rights Impact Assessment (FRIA) for systems processing personal data
Accuracy and robustness: Ensuring cybersecurity, accuracy and system resilience over time
Special Provisions for Generative AI: The AI Act introduces specific rules for generative AI developers, classifying them as high-risk systems. These obligations include rigorous transparency regarding training algorithms, detailed information on data used, security measures to prevent misuse (such as generation of deceptive deepfakes), and the obligation of registration in public databases to ensure traceability and accountability.
Severe Sanctions: The AI Act establishes a system of graduated sanctions that may reach up to 6% of annual global turnover for violations of the most serious rules, making compliance a mandatory strategic priority for any organisation developing or using AI systems within the EU.
Canada: In June 2022, Canada introduced the Artificial Intelligence and Data Act (AIDA), still in the legislative approval phase. The AIDA aims to protect Canadian citizens and promote responsible AI. In anticipation of its entry into force, the Canadian government published in September 2023 a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, a code which enterprises may voluntarily subscribe to in order to adopt specific measures for preventing risks from generative AI. This hybrid approach combines self-regulation with future legislative intent.
China: China has adopted a sectoral and thematic approach, with the Cyberspace Administration of China (CAC) promulgating concrete rules for specific technologies. This includes regulations on the management of algorithmic recommendations (2022), on deepfakes or face synthesis (2022) and on generative AI services (2023), with particular attention to content moderation, intellectual property and cybersecurity. The Chinese approach is centralised and permits rapid large-scale implementation, but with less public transparency compared to Western models.
United States: The USA has adopted a fragmented and multi-level approach. At the federal level, the National Institute of Standards and Technology (NIST) has published the AI Risk Management Framework and established the AI Safety Institute, tasked with assessing the risks of "frontier" AI models, developing safety standards and providing testing environments. At state level, various states such as California have proposed specific legislation (for example, Senate Bill 1047). Agencies such as the FTC (Federal Trade Commission) and the FDA (Food and Drug Administration) are inviting companies to avoid unfair or deceptive practices, especially in the medical sector. The American approach is progressively oriented towards a risk-based framework similar to the European one, but remains fundamentally primarily self-regulated at the federal level.
Japan: On 28 May 2025, Japan approved the Law on the Promotion of Research, Development and Use of Artificial Intelligence Technologies (Japanese AI Bill). This represents a "light touch" approach deliberately contrasted with the European hard-law model. The Japanese law adopts a multi-stakeholder governance model involving public institutions, research entities, private sector and civil society, placing innovation at the centre rather than restriction. The Japanese approach emphasises voluntary cooperation amongst parties rather than binding legal obligations, assigning to the State a coordinating role rather than rigid regulation.
Singapore: Singapore has deliberately chosen an incremental and "light-touch" approach. Rather than hastening to enact specific legislation, Singapore's government actively promotes AI adoption as a driver of national growth through flexible frameworks regularly updated. In 2019, it published the National AI Strategy, and in 2025 launched the Agentic AI Primer to provide guidance on autonomous AI systems. Singapore adopts clear principles for responsible development and deployment of AI, encouraging voluntary compliance rather than mandatory regulatory compliance.
Australia: Australia has adopted an even more pragmatic position, contending that existing laws are sufficient to regulate AI risks. Australia's national plan promotes a "technology-neutral" and "light touch" approach, intervening normatively only when absolutely necessary. The assumption is that matters such as privacy breaches, consumer fraud, discrimination and breaches of intellectual property rights can be managed through existing regulation, with a "monitoring and advisory" role assigned to the Australian AI Safety Institute.
The European AI Act represents a normative masterpiece that transcends simple technological regulation to address fundamental issues of human rights, transparency, accountability and protection of personal dignity. The regulation comprises 13 chapters (Titles) structured as follows:
Titles I and II - General Provisions and Definitions: Establish basic definitions of "AI system" and delineate the scope of application of the regulation.
Title III - Prohibited AI Systems: Prohibits specific categories of systems deemed incompatible with European values, including those exploiting vulnerabilities (for example, based on age, disability, or socio-economic conditions) and those employing subliminal or deceptive manipulation techniques.
Title IV - Requirements for High-Risk Systems: Imposes proportionate and rigorous obligations upon providers, including risk assessment, data quality, technical documentation, human oversight, registration and traceability.
Titles V-VII - Obligations of Providers, Importers and Distributors: Specifies the responsibilities of each actor in the AI supply chain.
Titles VIII-X - Notification, Market Surveillance and Compliance: Establishes monitoring mechanisms and sanctions.
Title XI - Delegation of Powers: Permits the European Commission to enact delegated acts to adapt the regulation to technological evolution.
Title XII - Penalties: Defines administrative financial penalties, with ceilings of up to 6% of annual global turnover for serious violations.
Title XIII - Final Provisions: Includes transitional and implementation clauses.
The AI Act provides for phased and strategic entry into force:
1 August 2024: General entry into force of the Regulation (20 days after publication on 12 July 2024)
2 February 2025: Entry into force of key provisions on prohibited systems, high risks, human oversight, and penalties
2 August 2025: Full application for providers of general-purpose AI models
2 November 2025 and subsequently: Progressive deadlines for specific adjustments and full compliance
This graduated implementation timeline allows organisations to adapt progressively, but also creates a limited timeframe for preparation and achieving compliance.
A crucial aspect for training and compliance is the intersection between AI Act and GDPR (General Data Protection Regulation). Where AI systems process personal data, they must address both AI Act requirements and GDPR requirements:
Integrated Risk Assessment: Both the AI Act and GDPR require risk assessments, which must be integrated and coherent.
DPIA and FRIA: The Data Protection Impact Assessment (DPIA) of the GDPR and the Fundamental Rights Impact Assessment (FRIA) of the AI Act overlap. For high-risk AI systems processing personal data, organisations must perform combined assessments addressing both data protection and AI-specific risks.
Privacy by Design: The GDPR approach of "privacy by design" must be extended to "privacy and security by design" for AI.
International Data Transfers: Where an AI system operates on infrastructure outside the EU, the organisation must verify that equivalent protective measures are in place to those required by the GDPR.
Explicit Consent: Where personal data is used to train AI models, specific, informed and explicit consent must be obtained for this particular use.
The AI Act devotes particular attention to generative AI, an area of growing concern. Specific risks include:
Bias and Discrimination: Generative models learn from available online data, which is not neutral. Cultural prejudices, gender stereotypes and latent discrimination can infiltrate AI outputs, producing results that inadvertently discriminate against specific demographic groups. For example, image generation systems may predominantly represent white men when asked for images of people in positions of importance.
Deceptive Content: Generative AI can produce convincing text that mimics human style, opening avenues for abuse such as deepfakes, false news and sophisticated phishing messages.
Privacy and Data Leakage: Generative AI risks learning and reproducing confidential information, including sensitive personal information that should not be stored.
Manipulation and Prompt Injection: External attacks can push models to behave anomalously through sophisticated manipulation techniques.
Intellectual Property: Training on data from public sources (including unauthorised collection of copyright-protected data) raises significant intellectual property questions.
Robustness and Cybersecurity: Vulnerability to adversarial inputs and to cybersecurity attacks remains a critical concern.
Complementary to the AI Act, a specific international standard for AI security has emerged: ETSI TS 104 223 - Securing Artificial Intelligence (SAI): Baseline Cyber Security Requirements for AI Models and Systems. This standard establishes a framework of reference with 14 basic principles extensible to 72 principles to ensure protection against cybersecurity attacks and AI-specific vulnerabilities, including protection against data corruption, model obscuration, insertion of falsified elements and data management vulnerabilities.
An often overlooked aspect of the AI Act is the explicit requirement for "AI Literacy". Article 4 of the regulation states that "providers and deployers of AI systems shall take measures to ensure, to the greatest extent possible, a sufficient level of AI literacy of their personnel and other persons operating and using AI systems on their behalf".
This means that training is not optional, but a binding legal obligation for organisations implementing AI systems, particularly high-risk ones. A not-for-profit association dedicated to providing this training would fit perfectly into the European regulatory ecosystem, providing critical educational resources for legal compliance.
The regulatory landscape for Artificial Intelligence is in rapid and profound evolution. Whilst the European Union has established a global standard with the AI Act, the rest of the world is following with diversified approaches. A not-for-profit association dedicated to training on the risks of AI would represent a critical resource in this context of normative and technical transition.
The organisation could position itself as a knowledge hub that:
Translates regulatory complexity (AI Act, GDPR, DPIA, FRIA) into accessible educational programmes
Develops organisational competencies necessary for legal compliance and responsible AI management
Promotes awareness of specific AI risks (bias, privacy, cybersecurity, deceptive content)
Provides practical resources for risk assessment, AI governance and continuous monitoring
Creates a community of practice of professionals, educators and stakeholders interested in responsible AI
AI Risk Assessment Tool for School Governance