Let's Compare the EU AI Act and Turkey's Proposed AI Law
As with the KVKK (Personal Data Protection Law) and e-commerce laws, Turkey is following the European Union in the field of artificial intelligence regulations. The Proposed AI Law, which appears to be modeled after the EU AI Act that took effect in the EU on August 1, 2024, was submitted to parliament on June 1, 2024. In this article, we will summarize what we find to be the significant differences between the Turkish proposal and the EU AI Act, and what gray areas and opportunities this law might create.
First, we should mention that the EU AI Act has a long history. Drafting began in 2021, and the law took its final form over the past year with articles aimed at keeping general-purpose AI technologies, such as advanced large language models (LLMs), safe and within human control and expectations. It is important to underline that the Turkish text is a legislative proposal, not a draft law, so it is reasonable to expect it to be less lengthy and detailed than a full law. At roughly 10% of the EU AI Act's length, its content is correspondingly simpler and less comprehensive. Nevertheless, this proposal is a good start for establishing certain principles.
Banned AI Application Areas
The EU AI Act has strictly banned the use of artificial intelligence in the following areas:
- Deceiving people, attempting to manipulate their behavior for malicious purposes, and exploiting people’s vulnerabilities
- Collecting sensitive data from people without their consent, leading to bias and discrimination with this data
- Developing social scoring systems that affect people’s access to services and job opportunities
- Real-time identification of people in public spaces without their knowledge or consent, and related surveillance and image collection
- Detecting people’s emotions in educational and professional settings
The Turkish law proposal, on the other hand, states that AI must adhere to the following principles but does not set specific limits on application areas:
- Security: Ensuring AI systems operate safely and without risk
- Transparency: Ensuring AI systems and their processes are open and understandable
- Fairness: Ensuring AI systems are non-discriminatory and fair
- Accountability: Ensuring that individuals and institutions responsible for AI outcomes can be identified and audited
- Privacy: Ensuring AI systems comply with the principles of protecting and keeping personal data confidential
Although the proposal states that some application areas, such as autonomous vehicles and medical diagnostic systems, will be considered high-risk systems subject to stricter rules, it does not draw the sharp distinctions and prohibitions centered directly on citizens' rights that the EU AI Act does. The most questionable aspect of this difference is the lack of clarity on how flexibly applications that are explicitly banned in the EU will be treated in Turkey, especially given the example of China, which uses AI technology to keep its citizens under tight surveillance. The fact that such a fundamental sanction is absent even at the proposal stage raises questions about whether it will be considered during the drafting phase. On the other hand, we can certainly foresee that the five listed principles will play an important role in preventing risks arising from AI. These principles also overlap almost completely with the principles our domestic AI ecosystem is expected to follow under the National Artificial Intelligence Strategy announced for 2021-2025. In this respect, the new law proposal strengthens and supports the current national AI strategy.
Definitions of Actors in the AI Ecosystem
A second difference is that the actors in the AI ecosystem are defined more clearly in the EU AI Act than in the Turkish law proposal. Since legal sanctions are directly based on these definitions, the clarity of these definitions is also very important.
The EU AI Act's definitions article contains 68 definitions, ranging from basic concepts like artificial intelligence and risk to niche concepts like AI model deployer and AI model importer. The Turkish law proposal likewise defines AI, provider, deployer/user, importer, distributor, and AI operator (terms we rarely use in daily life but which are directly relevant to the law) in parallel with the definitions in the EU AI Act.
These definitions appear in perhaps the most striking part of the proposal, where the responsibilities and penalties of the defined actors are laid out. In its section listing responsibilities and penalties, the EU AI Act dedicates separate headings to distributors, model developers, model deployers, and many other actors. The Turkish law proposal, however, specifies penalties only for "AI operators," a term defined to cover all actors in the ecosystem, as quoted verbatim below.
AI Operators: This definition has been kept broad to encompass all stakeholders in the AI ecosystem. Including providers, deployers, users, importers, and distributors, this definition plays a central role in determining the responsibilities and obligations of all parties within the scope of the law.
For example, the proposal does not currently make clear whether a fine for violation of obligations, which can reach 15 million TL or 3% of an organization's annual turnover, would be applied to the company that developed a non-compliant model, the user who deployed the model without checking its compliance, or perhaps the distributor who introduced this foreign-developed model to Turkey. These penalties will no doubt be a deterrent, but in their current form they may cause anxiety among actors in the AI ecosystem, who cannot clearly determine where their responsibilities begin and end.
Advanced General-Purpose AI
The third and final point we will touch upon is that while general-purpose AI systems have a significant place in the EU AI Act, these models are not mentioned in our proposal. These are very advanced, multi-purpose models, like the one behind ChatGPT, whose technical development alone can cost millions of dollars and whose training requires computing power above a certain threshold. Although current models are not yet seen as capable of endangering humanity, AI experts who observe how quickly these models acquire new capabilities predict that far more advanced and risky models will soon be produced. They emphasize how critical it is that these models be reliable, transparent, and aligned with human values and intentions. Turkey has also participated in the AI Safety Summit series, initiated to ensure international cooperation in this field, and we can expect it to participate in the next summit, to be held in Paris. Companies capable of producing models at the level of sophistication defined in the EU AI Act, such as OpenAI and DeepMind, are currently few, and we have no clear information on whether any private or national initiative in Turkey is producing models at this level. For now, therefore, this part of the law has little direct relevance to Turkey. However, in an era where AI is part of national security and economic development plans, we do not know what policy Turkey will adopt toward advanced AI models, or what steps Turkish companies will take in this area. Auditing and testing these advanced models, and controlling the applications built on them, will bring with it a very large regulatory ecosystem. If you are interested in these areas, you can contact us for more information.
What Awaits Us?
If the AI law proposal in Turkey is accepted, most of the implementation-level details will likely be defined later. The same is true for the EU AI Act. Under that law, the EU has established two organizations: the European AI Board and the EU AI Office. Between the two, operational tasks such as implementing the law, defining the relevant tests, monitoring compliance, and coordinating member states appear to fall mainly to the EU AI Office. All of these processes will undoubtedly give rise to many new private-sector actors in the EU and change the business processes of existing ones. What we foresee for actors in Turkey is as follows:
- If you want to establish an auditing company within the framework of this law in Turkey, provide consultancy for the regulation of this law, or closely follow the legislation as a lawyer, the EU AI Office will likely be one of the centers kept on the radar while detailed applications are defined in Turkey.
- If you are an organization that produces or uses AI, you will be more interested in how the law is implemented than in how it is drafted. On the implementation side, 3 different groups of actors come to the forefront:
- Organizations that produce or use AI
- Entrepreneurs or existing actors who want to start an auditing business in this field
- Actors who may be indirectly affected by applications in this field, such as insurance and occupational safety
Most companies that will face significant audits under this law will need some form of consultancy or product to prepare beforehand. They may build this capacity internally or procure it as external consultancy or a solution. It is also foreseeable that the bodies officially auditing compliance with this regulation will be private audit firms that have passed some form of accreditation. This law proposal may therefore pioneer many new ventures in the field of auditing. Likewise, it is not too early for sectors that interact with these areas indirectly, such as insurance and occupational safety, to follow this proposal closely and prepare the products they will offer in response to companies' demand for protection under the law.
So what about those who will be audited? They probably face the greatest uncertainty. Both the risk of high penalties and the current vagueness of the proposal may cause unease among companies; from an investment perspective, it could even be read as a move that will curb innovation and companies' R&D investment in AI. On the other hand, for the sustainable growth of the technology ecosystem, it is essential that innovations take people and societies into account, improve them, and act in harmony with them. From this perspective, we can expect that such regulations, which may create uncertainty and turbulence in the short term, will ultimately benefit every actor in a system where they are clearly defined, grounded in information, and applied fairly.
Of course, while examining this law proposal, we must keep in mind that the text is only a proposal, and that the highly specific language used in final legal texts can sometimes narrow a law's scope. On the journey from proposal to draft to adoption, many details will be settled, such as the full definitions and which institution will maintain and enforce which parts of the legislation. This proposal, which we consider a good start, will doubtless need more headings than it currently has. For example, if the EU harmonization process comes to the fore again, it will be important for high-risk and banned applications to be stated explicitly in our law, as in the EU AI Act. Making the definitions of AI actors, and the responsibilities of each, much more detailed than in the current proposal will also support the reliability of the legal processes that operate under this law. We hope such positive changes will find a place in the later stages of the proposal, and we will continue to follow the process closely and share it with you.
Resources
We recommend that you follow our monthly newsletter, where we will feature important developments regarding the EU AI Act, and this newsletter organized by Risto Uuk, which focuses solely on the EU AI Act.