The world's first AI law: why it must protect refugees in particular
Agreement on the first law to regulate artificial intelligence
As part of its digital strategy, the European Commission proposed a legal act on the regulation of artificial intelligence (the AI Act) to ensure better conditions for the development and use of artificial intelligence (AI) in Europe and to prevent risks. The legislators' aim was to reconcile the advantages and opportunities of AI with the protection of fundamental rights and the prevention of threats.
After long and complicated negotiations, in December 2023 we, as Parliament, reached an agreement with the Council of the EU (i.e. the member states). This is the first ever comprehensive regulation of AI, which is a great success. Even though many areas of application for artificial intelligence are still being researched, it is already clear that further regulation will be needed in the future. We Greens would also have liked to see more extensive rules, particularly with a focus on the protection of fundamental rights and vulnerable groups. For example, there is still a risk that AI reinforces prejudice and discrimination. In a detailed report, Amnesty International also shows what risks the digital age and artificial intelligence pose for the rights of asylum seekers.
The dangers of artificial intelligence: the example of migration management
In the area of border protection, we unfortunately did not succeed in securing comprehensive regulation of real-time surveillance and other measures; the use of artificial intelligence here remains a major challenge. Furthermore, there is a great danger that the use of artificial intelligence violates the rights of marginalized groups, for example asylum seekers or migrants. This can happen through profiling, automated "risk assessments" and pervasive surveillance practices. EU governments are increasingly deploying AI-powered surveillance systems at borders. These systems use algorithms to analyze data from cameras, drones and sensors to help border guards make decisions in real time. AI is also to be used in asylum procedures, for example in the processing of asylum applications. This can lead to serious misjudgements and complicated, bureaucratic procedures. The AI Act will only make a limited contribution to preventing such risks.
Certain AI applications raise significant ethical and legal concerns, such as lie detectors and biometric recognition systems. This is where the AI Act comes in and regulates such surveillance options. However, we Greens were not able to prevail in all areas, meaning that there is still a risk of misuse of the technology, for example in border surveillance. There is currently a clear lack of reliable data on how error-prone such technologies are, particularly in the case of facial recognition. Such systems carry the risk of violating fundamental human rights, such as the right to privacy and the principle of non-refoulement, which prohibits turning people back to areas where they are in imminent danger.
What needs to be considered in the further development of relevant legislation
For further development, it is important to point out significant weaknesses in the AI Act, even if it is fundamentally a great success that Europe has taken a first step towards regulating AI. The compromise found in the AI Act is to ban certain forms of artificial intelligence that are classified as dangerous, while other AI applications are classified as high-risk and subject to strict monitoring and regulatory standards.
Despite considerable concessions that we as the Greens had to make, such as the lack of a ban on biometric surveillance, significant shortcomings in the classification system for high-risk AI and broad exemptions for the use of AI in law enforcement, we as a group are satisfied with the outcome of the negotiations. The future will show how robust and future-proof this regulation will be in view of the rapid technological developments surrounding AI. There will probably have to be adjustments in the near future.
The most important successes for our Group include:
- The scope of the AI Regulation, which now also covers general-purpose AI.
- Definitions of AI systems that are consistent with international standards and the OECD principles.
- A ban on real-time remote biometric identification and similar practices in publicly accessible areas.
- Categorization of high-risk AI systems and associated obligations and restrictions.
- A fundamental rights impact assessment before the introduction of a high-risk system.
- Obligations for general-purpose AI models, including technical documentation and transparency.
- Environmental obligations, which are a new focus of the law.
- A new Commission "AI Office" to monitor and enforce the rules for general-purpose AI models.
- Transparency rules for deepfakes and regulatory sandboxes to support start-ups and SMEs in developing AI that is fully compliant with the Regulation.