In the future, strict rules for the use of artificial intelligence will apply in the EU


In the future, strict rules for the use of artificial intelligence will apply in the EU. The law is important, says expert Lukowicz in an interview. Although the technology is not actually intelligent, it will massively change our lives.

tagesschau.de: The EU has decided on a position on the planned first AI law. It is intended to ban or regulate high-risk and risky applications. How useful are the rules from your point of view?

Paul Lukowicz: It’s a very good approach. Artificial intelligence (AI) is enormously powerful. It will influence our lives like no other technology in recent years. If we want it to change our lives for the better, it must also be regulated by law.

Regulation that does not regulate the technology itself, but rather its effects, makes a lot of sense. That way we prevent harm without hindering innovation and the creation of the technology.

“AI can endanger security”

tagesschau.de: The planned EU law differentiates between the applications – among other things, they are classified as risky and high-risk. High-risk applications should be banned, risky ones should be subject to strict requirements. When do you think artificial intelligence is risky and should be banned?

Lukowicz: Risky and forbidden – those are two different things. AI is risky – like any other technology – when it has an impact on human well-being, human life and the security of certain things that are important to us in society. Especially if it does something wrong, it can endanger security.

However, AI is also capable of doing things that we fundamentally do not want. For example, certain surveillance techniques such as the famous “Social Scoring System”, in which AI systems are used to evaluate people’s behavior and see whether they behave the way the state would want them to. We basically don’t want something like that. It is right that this is simply forbidden by law.

tagesschau.de: Where should the limits be for the use of AI – for example when used in the medical field?

Lukowicz: It is always problematic when the AI does things without humans being able to intervene or take a second look at them. This generally also applies in the medical field. When it comes to high-risk applications, it’s not so much about whether we want to use the technology, but about the requirements that the technology must meet so that it can be used safely.

AI should always be used in medicine if the use of AI increases the likelihood that the medical intervention will be successful and benefit people.

“There is no real intelligence behind it”

tagesschau.de: What exactly is artificial intelligence?

Lukowicz: AI is nothing more than a set of mathematical methods and algorithms that have been found to be able to do things that we previously thought were only possible for humans. For example, 20 years ago an AI won against a human grandmaster in chess for the first time. But AI can also generate complex images or pieces of music.

It’s important to understand that no matter how amazing this is, there is no real intelligence behind it. At least not in the sense that we might understand intelligence. They are very precisely defined, but often quite simple mathematical procedures that are applied to large amounts of data.

tagesschau.de: Does that mean the AI only does what was programmed?

Lukowicz: It’s not that simple. In what is known as machine learning, the computer is usually given a large number of examples that illustrate what should be done. The computer is then told, step by step, how to work out from these examples how the problem can actually be solved.

The system does not learn in the sense that it does something completely independently. We have taught it how to derive something from the data, and it cannot do anything else.

But usually this data is so complex that we as humans cannot really say with 100 percent certainty what the system will actually extract from the data. And that is precisely where the big problem lies, and hence the need for regulation.

If we don’t look closely at these data sets, these “sample sets”, and if we don’t build in certain security mechanisms, then we can end up with a system that we believe does A when in reality it does B, because we didn’t properly understand the data we provided to it.
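A toy example can make this concrete (the scenario and features are invented for illustration; they do not come from the interview). A simple nearest-centroid classifier is trained on examples in which every wolf photo happens to have a snowy background, so the learner keys on the background rather than the animal, and a dog photographed on snow gets labeled a wolf:

```python
# Illustrative sketch: a model trained on biased examples "does B"
# when we believed it "does A". Features are (has_pointy_ears,
# background_is_snow); the labels are "wolf" and "dog".

def train_centroids(examples):
    """Compute the mean feature vector (centroid) for each class."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

training = [
    ((1, 1), "wolf"), ((1, 1), "wolf"),  # wolves, always on snow
    ((1, 0), "dog"), ((0, 0), "dog"),    # dogs, never on snow
]
centroids = train_centroids(training)

# A pointy-eared dog photographed on snow is misclassified as a wolf,
# because the model silently learned "snow means wolf":
print(predict(centroids, (1, 1)))  # → wolf
```

The failure is invisible if you only ever test on data that shares the same quirk, which is why Lukowicz argues for looking closely at the sample sets themselves.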

“The fact that AI is displacing humans is science fiction”

tagesschau.de: So we don’t have to worry and we can continue to work with AI?

Lukowicz: Given the current state of AI, the idea that AI will eventually establish a new intelligent species and displace humans definitely belongs in the realm of science fiction films.

But it is a technology that is influencing more and more areas of our lives – for example the way we consume information. Or in traffic with self-driving cars. AI can control energy grids and many other things. That’s why regulation by the European Parliament is so important.

We don’t need to be afraid, but we need to use this technology thoughtfully and with appropriate caution. We should always ask ourselves: Is the use of technology in one place or another something that really benefits us as humans, or is it something that might put us in danger?

The interview was conducted by Anja Martini, tagesschau.de

The interview was edited and shortened for the written version.


In order to perform any task on a computer, you must instruct your device on which application to utilize. While you can utilize Microsoft Word and Google Docs to compose a business proposal, these programs cannot assist you in sending an email, sharing a selfie, analyzing data, scheduling an event, or purchasing movie tickets. Additionally, even the most advanced applications lack a comprehensive understanding of your professional work, personal life, interests, and relationships, and have limited capability to utilize this information to perform actions on your behalf. Currently, this type of functionality is only achievable with another human being, such as a close friend or a personal assistant.

Over the next five years, this will undergo a complete transformation. You will no longer need to use different applications for various tasks. Instead, you will simply inform your device, in everyday language, about the action you want to carry out. Based on the level of information you choose to share, the software will be able to provide personalized responses due to its thorough comprehension of your life. In the near future, anyone with online access will be able to have a personal assistant powered by artificial intelligence that surpasses current technology.

This kind of software, which can understand natural language and execute various tasks based on its knowledge of the user, is referred to as an agent. I have been contemplating agents for nearly thirty years and discussed them in my 1995 book, The Road Ahead, but they have only recently become viable due to advancements in AI.

Agents will not only revolutionize how everyone interacts with computers but will also disrupt the software industry, leading to the most significant computing revolution since the transition from command typing to icon clicking.

A personal assistant for all

Certain critics have highlighted that software companies have previously offered similar solutions, which users did not wholeheartedly embrace (e.g., people still mock Clippy, the digital assistant included in Microsoft Office and later discontinued). So, why will people adopt agents?

The answer lies in their substantial improvement. Users will be able to engage in nuanced conversations with them. Agents will be highly personalized and won’t be limited to simple tasks like composing a letter. Clippy shares as much similarity with agents as a rotary phone does with a mobile device.

If desired, an agent will be able to assist with all of your activities. By obtaining permission to monitor your online interactions and physical locations, it will develop a profound understanding of the people, places, and activities you are involved in. It will comprehend your personal and professional relationships, hobbies, preferences, and schedule. You will have the freedom to choose how and when it assists with a task or prompts you to make a decision.

“Clippy was a bot, not an agent.”

To comprehend the substantial impact that agents will bring, let’s compare them to the current AI tools. Most of these tools are bots, confined to a single application and typically only intervene when a particular word is written or when assistance is requested. Since they do not remember previous interactions, they do not improve or learn any user preferences. Clippy was a bot, not an agent.

Agents are more intelligent. They are proactive, capable of offering suggestions before being prompted. They can carry out tasks across applications and improve over time by recalling your activities and recognizing intentions and patterns in your behavior. Drawing from this information, they will offer to provide what they believe you need, while you always retain the final decision-making authority.

Imagine that you wish to plan a trip. While a travel bot may identify affordable hotels, an agent will have knowledge of your travel dates and, based on its understanding of whether you prefer new destinations or repeat ones, can suggest suitable locations. Upon request, it will recommend activities based on your interests and adventure tendencies and book reservations at restaurants that align with your preferences. As of now, achieving this level of personalized planning requires engaging a travel agent and spending time detailing your preferences to them.

The most exciting impact of AI agents is the democratization of services that are currently unaffordable for many people. They will have a particularly significant impact on four areas: healthcare, education, productivity, and entertainment and shopping.

Healthcare

Presently, AI primarily assists in healthcare by handling administrative tasks. For instance, applications like Abridge, Nuance DAX, and Nabla Copilot can capture audio during a medical appointment and create notes for the doctor to review.

The significant transformation will occur when agents can aid patients in basic triage, provide guidance on managing health issues, and assist in determining the need for further treatment. These agents will also support healthcare professionals in making decisions and increasing productivity. (For example, applications such as Glass Health can analyze a patient summary and suggest diagnoses for the doctor to consider.) Providing assistance to patients and healthcare workers will be especially beneficial for individuals in underprivileged countries, where many individuals never have the opportunity to consult a doctor.

These medical AI assistants will take longer to be implemented compared to others because ensuring accuracy is a matter of life and death. People will require convincing evidence of the overall benefits of health AI assistants, even though they won’t be flawless and will make errors. Human errors occur as well, and lack of access to medical care is also a significant issue.

A significant number of U.S. military veterans who require mental health treatment do not receive it.

Mental health care is another example of a service that AI assistants will make accessible to almost everyone. Currently, weekly therapy sessions may seem like a luxury, but there is substantial unmet demand, and numerous individuals who would benefit from therapy do not have access to it. For example, a study conducted by RAND revealed that half of all U.S. military veterans who require mental health care do not receive it.

Well-trained AI assistants in mental health will make therapy more affordable and accessible. Wysa and Youper are among the early chatbots in this field, but AI assistants will delve much deeper. If you choose to share enough information with a mental health assistant, it will comprehend your life history and relationships. It will be available when needed and won’t become impatient. With your consent, it could even monitor your physical responses to therapy through your smartwatch—such as noticing if your heart rate increases when discussing an issue with your boss—and recommend when you should consult a human therapist.

Education

For years, I have been enthusiastic about the ways in which software can ease teachers’ workload and aid student learning. It won’t supplant teachers but will complement their efforts by customizing work for students and freeing teachers from administrative tasks to allow more focus on the most crucial aspects of their job. These changes are finally beginning to materialize in a significant manner.

The current pinnacle of this development is Khanmigo, a text-based bot developed by Khan Academy. It can provide tutoring in subjects such as math, science, and the humanities—for instance, explaining the quadratic formula and creating math problems for practice. It can also aid teachers in tasks like lesson planning. I have been a long-time admirer and supporter of Sal Khan’s work and recently had him on my podcast to discuss education and AI.

Text-based bots are just the initial phase—AI assistants will unlock numerous additional learning opportunities.

For instance, only a few families can afford a tutor who provides one-on-one supplementary instruction to complement classroom learning. If assistants can capture the effectiveness of a tutor, they will make this supplementary instruction available to everyone who desires it. If a tutoring assistant knows that a child enjoys Minecraft and Taylor Swift, it will utilize Minecraft to teach them about calculating the volume and area of shapes, and use Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far more immersive—with graphics and sound, for example—and more tailored than today’s text-based tutors.

Productivity

There is already substantial competition in this field. Microsoft is integrating its Copilot into Word, Excel, Outlook, and other services. Similarly, Google is employing its Assistant with Bard and productivity tools to accomplish similar tasks. These copilots can perform numerous functions, such as transforming a written document into a presentation, responding to questions about a spreadsheet using natural language, and summarizing email threads while representing each person’s perspective.

AI assistants will do much more. Having one will be akin to having a dedicated personal aide who helps with a variety of tasks and executes them independently at your request. If you have a business idea, an assistant will help you draft a business plan, create a presentation, and even generate images depicting your product. Companies will be able to make assistants available for their employees to consult directly and to sit in on every meeting so they can answer questions.

Whether working in an office or not, your assistant will be able to support you in the same way personal assistants aid executives today. For instance, if your friend recently underwent surgery, your assistant will offer to arrange flower delivery and can place the order for you. If you express a desire to reconnect with your college roommate, it will collaborate with their assistant to schedule a meeting, and just before the meeting, it will remind you that their eldest child recently commenced studies at the local university.

Entertainment and shopping

AI can already assist in selecting a new TV and recommend movies, books, shows, and podcasts. Additionally, a company I have invested in recently launched Pix, which allows you to pose questions (such as “Which Robert Redford movies might appeal to me and where can I watch them?”) and then offers suggestions based on your past preferences. Spotify features an AI-powered DJ that not only plays songs based on your tastes but also engages in conversation and can even address you by name.

Agents will not only provide suggestions but also assist you in taking action based on those suggestions. For instance, if you wish to purchase a camera, your agent will go through all the reviews, summarize them, recommend a product, and place an order once you’ve made a decision. If you express a desire to watch Star Wars, the agent will check if you have the appropriate streaming service subscription, and if not, offer to help you sign up for one. Additionally, if you’re unsure about what you want to watch, the agent will make personalized recommendations and facilitate the process of playing your chosen movie or show.

Moreover, you will have access to personalized news and entertainment tailored to your interests. An example of this is CurioAI, which can generate a customized podcast on any topic you inquire about.

This advancement spells a significant change in the tech industry. Essentially, agents will be capable of aiding in almost any activity and aspect of life. This will bring about profound implications for both the software industry and society.

In the realm of computing, we often refer to platforms as the underlying technologies on which apps and services are built. Android, iOS, and Windows are all examples of platforms. Agents are poised to be the next major platform.

In the future, creating a new app or service will not require expertise in coding or graphic design. Instead, you will simply communicate your requirements to your agent. It will have the ability to write code, design the app’s interface, create a logo, and publish the app on an online store. The recent introduction of GPTs by OpenAI offers a glimpse into a future where individuals who are not developers can easily create and share their own assistants.

Agents will revolutionize both the use and development of software. They will replace search engines because of their superior ability to find and synthesize information for users. They will also supplant many e-commerce platforms by identifying the best prices across a wider range of vendors. Additionally, they will supersede traditional productivity apps such as word processors and spreadsheets. Sectors that are currently distinct—like search advertising, social networking with advertising, shopping, and productivity software—will merge into a single industry.

It is unlikely that a single company will dominate the agents business. Rather, there will be numerous different AI engines available. While some agents may be free and ad-supported, most will likely be paid for. Therefore, companies will be motivated to ensure that agents primarily serve the user’s interests rather than the advertisers’. The high level of competition among companies entering the AI field this year suggests that agents will be very cost-effective.

However, before the sophisticated agents described earlier become a reality, we need to address several technical and usage-related questions about the technology. I have previously written about the ethical and societal issues surrounding AI, so in this discussion, I will focus specifically on agents.

There is as yet no established data structure for an agent. Developing personal agents will necessitate a new type of database capable of capturing the intricacies of individuals’ interests and relationships and swiftly recalling this information while upholding privacy. New methods of information storage, such as vector databases, are emerging and may be better suited for housing data generated by machine learning models.
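The brute-force core of the vector-database idea can be sketched in a few lines (the stored "facts" and their three-dimensional embedding vectors below are made up for illustration; a real system would index high-dimensional embeddings produced by a machine learning model and use approximate nearest-neighbor search for speed):

```python
# Minimal sketch of vector recall: store facts as embedding vectors,
# retrieve the ones most similar to a query vector by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical agent "memory": fact -> embedding (values invented).
memory = {
    "friend's birthday is in June": [0.9, 0.1, 0.0],
    "prefers window seats":         [0.1, 0.8, 0.2],
    "allergic to peanuts":          [0.0, 0.2, 0.9],
}

def recall(query_vec, store, top_k=1):
    """Return the top_k stored facts most similar to the query vector."""
    ranked = sorted(store, key=lambda k: cosine(query_vec, store[k]), reverse=True)
    return ranked[:top_k]

# A query vector close to the "birthday" embedding recalls that fact:
print(recall([0.85, 0.15, 0.05], memory))  # → ["friend's birthday is in June"]
```

The privacy requirement the essay raises is the hard part: such a store would hold exactly the kind of intimate detail that must not leak, which plain similarity search does nothing to protect.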

Additionally, it remains uncertain how many agents users will interact with. Will a personal agent be distinct from a therapist agent or a math tutor? If so, there is the question of when and how these agents might collaborate.

The manner in which users will interact with their agents also presents a challenge. Companies are exploring various options, including apps, glasses, pendants, pins, and even holograms. Although all of these are viable possibilities, the milestone breakthrough in human-agent interaction could be earbuds. If an agent needs to communicate with you, it might speak to you or appear on your phone. For example, it may say, “Your flight is delayed. Would you like to wait, or can I assist in rebooking?” Additionally, it can enhance the sound coming into your ear by eliminating background noise, amplifying difficult-to-hear speech, or clarifying heavily accented speech.

Other challenges include the absence of a standardized protocol for agent-to-agent communication, the need to make agents affordable for all users, the necessity for more effective prompting to obtain the desired response, the avoidance of misinformation—particularly in crucial domains like healthcare—and ensuring that agents do not cause harm due to biases. Moreover, it is imperative to prevent agents from performing unauthorized actions. While concerns about rogue agents persist, the potential misuse of agents by malicious individuals is a more pressing issue.

Privacy and other significant concerns

As these developments unfold, the issues surrounding online privacy and security will become even more pressing than they already are. It will be important for you to have the ability to determine what information the agent can access, so you can be confident that your data is only shared with the individuals and companies of your choosing.

However, who has ownership of the data you share with your agent, and how can you ensure that it is used appropriately? No one wants to start receiving advertisements related to something they confided in their therapist agent. Can law enforcement use your agent as evidence against you? When might your agent refuse to engage in actions that could be detrimental to you or others? Who determines the values that are embedded in agents?

There is also the issue of how much information your agent should disclose. For instance, if you want to visit a friend, you wouldn’t want your agent to say, “Oh, she’s meeting other friends on Tuesday and doesn’t want to include you.” Additionally, if your agent assists you in composing work emails, it needs to know not to use personal information about you or proprietary data from a previous job.

Many of these concerns are already at the forefront of the technology industry and among legislators. I recently took part in a forum on AI with other technology leaders, which was organized by Sen. Chuck Schumer and attended by numerous U.S. senators. During the event, we exchanged ideas about these and other issues and discussed the necessity for lawmakers to implement robust legislation.

However, some issues will not be determined by companies and governments. For example, agents could impact how we interact with friends and family. Today, expressing care for someone can involve remembering details about their life, such as their birthday. But if they know that your agent likely reminded you and handled sending flowers, will it hold the same significance for them?

In the distant future, agents may even compel humans to contemplate profound questions about purpose. Consider a scenario where agents become so advanced that everyone can enjoy a high quality of life without having to work as much. In such a future, what would people do with their time? Would obtaining an education still be desirable when an agent provides all the answers? Can a safe and flourishing society be sustained when most individuals have significant amounts of free time?

Nevertheless, we have a long way to go before reaching that stage. In the meantime, agents are on the horizon. Over the next few years, they will completely transform how we lead our lives, both online and offline.

What is the significance of artificial intelligence?

AI streamlines repetitive learning and exploration through data. Rather than automating manual tasks, AI carries out frequent, high-volume, computerized tasks reliably and without fatigue. Human involvement is still crucial for setting up the system and asking the appropriate questions.

AI enhances the intelligence of existing products. Many products that are currently in use will benefit from AI capabilities, similar to the way Siri was integrated into a new generation of Apple products. Automation, conversational platforms, bots, and smart machines can be combined with large amounts of data to improve many technologies, from security intelligence and smart cameras in the home and workplace to investment analysis.

AI adjusts through progressive learning algorithms to enable data to dictate the programming. AI identifies patterns and regularities in data to allow algorithms to acquire skills. Just as an algorithm can teach itself to play chess, it can also learn what product to recommend next online. Furthermore, the models adapt when presented with new data.
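"Progressive learning" can be shown in miniature (this toy example is my construction, not from the text): a perceptron adjusts its weights each time labeled data arrives, so the data, rather than hand-written rules, ends up determining its behavior. Here it learns the logical AND function purely from examples:

```python
# Toy online learner: a perceptron updates its weights from each
# labeled example it sees, illustrating "data dictates the programming".

def train_perceptron(examples, epochs=10, lr=0.1):
    """Repeatedly sweep over the examples, nudging weights on each error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, ±1 when wrong
            w[0] += lr * err * x1        # move the decision boundary
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical AND purely from labeled examples:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Feed the same code different examples (say, a different truth table or purchase histories) and, without any change to the program, it learns a different behavior; that is the adaptation the paragraph describes, in its simplest form.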

AI analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers was once considered unfeasible; that has changed thanks to enormous computing power and large data sets. Extensive data is necessary to train deep learning models because they learn directly from the data.
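What a "hidden layer" buys you can be shown with the classic XOR example (the weights below are set by hand for clarity; a trained network would learn them from data). No single-layer network can represent XOR, but one hidden layer of two units is enough:

```python
# Minimal two-layer network computing XOR with hand-set weights.
# Deep learning stacks many such layers and learns the weights from data.

def step(x):
    """Threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if x > 0 else 0

def forward(x1, x2):
    # Hidden layer: one unit detects "at least one input on" (OR),
    # the other detects "both inputs on" (AND).
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # OR
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # AND
    # Output layer combines them: "one, but not both" = XOR.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", forward(a, b))  # → 0, 1, 1, 0
```

Each added layer lets the network compose features detected by the layer below, which is why depth, paired with large data sets, unlocked problems like the five-hidden-layer fraud detector mentioned above.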

AI achieves remarkable precision through deep neural networks. For instance, Alexa and Google interactions are primarily based on deep learning, and these products become more accurate with increased usage. In the medical field, AI techniques from deep learning and object recognition can now be employed to precisely identify cancer in medical images.

AI maximizes the potential of data. When algorithms are self-learning, the data itself becomes a valuable asset where the solutions lie. Applying AI is the key to uncovering these answers. Since the significance of data has now become more pronounced than ever, it can confer a competitive edge. In a competitive industry, possessing the best data is advantageous, even if similar techniques are being utilized by everyone, as the best data will emerge triumphant.

Top digital technology news:

Upcoming EU AI regulations set to take effect; Concerns raised about the digitalization of finance and banking; UK communications watchdog enhances digital safety guidelines.

1. EU’s AI Act set to take effect

The European Union’s regulations regarding artificial intelligence (AI) are scheduled to be implemented in June following the approval of a political agreement by member states that was reached in December. These regulations may establish a global standard for the technology.

“This historic legislation, the first of its kind globally, addresses a worldwide technological issue that presents both opportunities for our societies and economies,” stated Mathieu Michel, Belgium’s digitization minister.

The new regulations introduce stringent transparency requirements for high-risk AI systems, while the guidelines for general-purpose AI models will be less rigorous, according to Reuters.

The deployment of real-time biometric surveillance in public areas is also limited to instances of specific crimes, such as preventing terrorism and apprehending individuals suspected of severe offenses.

2. Digitalization of banking creating new risks

The Basel Committee on Banking Supervision has issued a warning regarding the safety risks associated with the digital transformation of the banking sector. In a recent report, the Committee highlighted that this transformation is generating new vulnerabilities and exacerbating existing ones, indicating that additional regulations may be necessary to address these emerging challenges.

The expansion of cloud computing, the advent of AI, and the data-sharing practices of external fintech companies, among other factors, contribute to new risks.

“These may involve increased strategic and reputational dangers, a wider range of factors that could challenge banks’ operational risk and resilience, and potential system-wide threats due to heightened interconnections,” the report stated.

The Committee includes central bankers and regulators from the G20 and other nations that have committed to implementing its regulations.

3. News in brief: Digital technology stories from around the world

Microsoft has joined forces with an AI company based in the UAE to invest $1 billion in a data center in Kenya.

The EU’s data privacy authority has cautioned that OpenAI is still failing to comply with data accuracy requirements.

Research has utilized AI to detect as many as 40 counterfeit paintings listed for sale on eBay, including pieces falsely attributed to Monet and Renoir, according to The Guardian.

TikTok will begin employing digital watermarks to identify AI-generated content that has been uploaded from other platforms. Content created with TikTok’s own AI tools is already automatically marked.

The UK’s communications authority Ofcom has introduced a new safety code of conduct, urging social media companies to “moderate aggressive algorithms” that promote harmful content to children.

The House Foreign Affairs Committee has voted to move forward a bill that facilitates the restriction of AI system exports.

A global AI summit, co-hosted by South Korea and the UK, concluded with commitments to safely advance the technology from both public and private sectors.

OpenAI has established a new Safety and Security Committee that will be headed by board members as it begins the development of its next AI model.

The adoption of Generative AI tools has been gradual, according to a survey of 12,000 individuals across six countries, but is most pronounced among those aged 18-24.

4. More about technology on Agenda

For businesses to bridge the gap between the potential and reality of generative AI, they must focus on return on investment, says Daniel Verten, Head of Creative at Synthesia. This entails setting clear business goals and ensuring that GenAI effectively addresses challenges from start to finish.

Climate change threatens agriculture, with innovative strategies crucial for protecting crops while minimizing environmental impact. AI can facilitate the acceleration of these solutions, explains Tom Meade, Chief Scientific Officer at Enko Chem.

What does the future hold for digital governance? Agustina Callegari, Project Lead of the Global Coalition for Digital Safety at the World Economic Forum, delves into the outcomes of the NetMundial+10 event and the establishment of the São Paulo Guidelines.

European Union member nations reached a final agreement on Tuesday regarding the world’s first major law aimed at regulating artificial intelligence, as global institutions strive to impose limits on the technology.

The EU Council announced the approval of the AI Act — a pioneering regulatory legislation that establishes comprehensive guidelines for artificial intelligence technology.

Mathieu Michel, Belgium’s secretary of state for digitization, stated in a Tuesday announcement that “the adoption of the AI Act marks a significant milestone for the European Union.”

Michel further noted, “With the AI Act, Europe underscores the significance of trust, transparency, and accountability in handling new technologies while also ensuring that this rapidly evolving technology can thrive and contribute to European innovation.”

The AI Act utilizes a risk-based framework for artificial intelligence, indicating that various applications of the technology are addressed differently based on the potential threats they pose to society.

The legislation bans AI applications deemed “unacceptable” due to their associated risk levels, which include social scoring systems that evaluate citizens based on data aggregation and analysis, predictive policing, and emotion recognition in workplaces and educational institutions.

High-risk AI systems encompass autonomous vehicles and medical devices, assessed based on the risks they present to the health, safety, and fundamental rights of individuals. They also cover AI applications in finance and education, where embedded biases in the algorithms may pose risks.

Matthew Holman, a partner at the law firm Cripps, mentioned that the regulations will significantly impact anyone involved in developing, creating, using, or reselling AI within the EU — with prominent U.S. tech firms facing close scrutiny.

Holman stated, “The EU AI legislation is unlike any law in existence anywhere else globally,” adding, “It establishes, for the first time, a detailed regulatory framework for AI.”

According to Holman, “U.S. tech giants have been closely monitoring the evolution of this law.” He remarked that there has been substantial investment in public-facing generative AI systems that must comply with the new, sometimes stringent, law.

The EU Commission will be authorized to impose fines on companies that violate the AI Act, potentially as high as 35 million euros ($38 million) or 7% of their total global revenue, whichever amount is greater.
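The fine is capped at whichever amount is greater: the fixed sum or the revenue share. A minimal sketch of that arithmetic (illustrative only, not legal advice; the figures are the ones quoted above):

```python
# Illustrative sketch of the AI Act's maximum fine: the greater of a
# fixed amount (EUR 35 million) or 7% of total global annual revenue.
FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the upper bound of the fine, whichever amount is greater."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# A company with EUR 1 billion in revenue: 7% (EUR 70 million) exceeds
# the fixed cap, so the revenue-based figure applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# A company with EUR 100 million in revenue: 7% is only EUR 7 million,
# so the EUR 35 million fixed cap applies instead.
print(max_fine_eur(100_000_000))
```

Note that for smaller companies the fixed cap dominates, while for large multinationals the 7% share is the binding figure.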

This shift in EU law follows OpenAI’s launch of ChatGPT in November 2022. At that time, officials recognized that existing regulations lacked the necessary detail to address the advanced capabilities of emerging generative AI technologies and the risks linked to the use of copyrighted materials.

Implementing these laws will be a gradual process.

The legislation enforces strict limitations on generative AI systems, which the EU refers to as “general-purpose” AI. These limitations include adherence to EU copyright laws, disclosure of transparency concerning how the models are trained, routine testing, and sufficient cybersecurity measures.

However, it will take some time before these stipulations come into effect, as indicated by Dessi Savova, a partner at Clifford Chance. The restrictions on general-purpose systems will not take effect until 12 months after the AI Act is enacted.

Additionally, generative AI systems currently available on the market, such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot, will benefit from a “transition period” that allows them 36 months from the date of enactment to comply with the new legislation.

Savova conveyed to CNBC via email, “An agreement has been established regarding the AI Act — and that regulatory framework is about to be realized.” She emphasized the need to focus on the effective implementation and enforcement of the AI Act thereafter.

The Artificial Intelligence Act (AI Act) of the European Union marks a significant development in global regulations concerning AI, addressing the growing demand for ethical standards and transparency in AI applications. Following thorough drafting and discussions, the Act has been provisionally agreed upon, with final compromises struck and its adoption by the European Parliament scheduled for March 13, 2024. Expected to come into effect between May and July 2024, the AI Act creates a detailed legal framework aimed at promoting trustworthy AI both within Europe and globally, highlighting the importance of fundamental rights, safety, and ethical principles.

Managed by the newly established EU AI Office, the Act imposes hefty penalties for noncompliance, subjecting businesses to fines of €35 million or 7 percent of annual revenue, whichever is higher. This compels stakeholders to recognize its implications for their enterprises. This blog offers a comprehensive analysis of the Act’s central provisions, ranging from rules concerning high-risk systems to its governance and enforcement structures, providing insights into its potential effects on corporations, individuals, and society as a whole.

How does this relate to you?

AI technologies shape the information you encounter online by predicting which content will engage you, gathering and analyzing data from facial recognition to enforce laws or tailor advertisements, and are utilized in diagnosing and treating cancer. In essence, AI has an impact on numerous aspects of your daily life.

Similar to 2018’s General Data Protection Regulation (GDPR), the EU AI Act could set a global benchmark for ensuring that AI positively influences your life rather than negatively, regardless of where you are located. The EU’s AI regulations are already gaining international attention. If you are involved in an organization that uses AI/ML techniques to develop innovative solutions for real-world challenges, you will inevitably encounter this Act. Why not familiarize yourself with its intricacies right now?

The AI Act is designed to “enhance Europe’s status as a worldwide center of excellence in AI from research to market, ensure that AI in Europe adheres to established values and rules, and unlock the potential of AI for industrial purposes.”

A risk-based approach

The foundation of the AI Act is a classification system that assesses the level of risk an AI technology may present to an individual’s health, safety, or fundamental rights. The framework categorizes risks into four tiers: unacceptable, high, limited, and minimal.
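The four tiers can be pictured as a simple lookup, sketched below in Python. The example use cases are hypothetical illustrations drawn from the categories described later in this article; real classification depends on the legal text, not on a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # unrestricted use

# Hypothetical example mapping for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "medical device AI": RiskTier.HIGH,
    "autonomous vehicle": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Unlisted applications default to minimal risk in this sketch,
    # mirroring the Act's treatment of most everyday AI uses.
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
```

The key design idea is that obligations attach to the tier, not to the underlying technology: the same model could be minimal-risk in a video game and high-risk in a medical device.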

Unacceptable Risk Systems

The AI regulations from the EU consist of several important provisions aimed at ensuring the ethical and responsible use of AI. Prohibited AI practices include the banning of manipulative techniques, exploitation of vulnerabilities, and classification based on sensitive characteristics. Real-time biometric identification for law enforcement requires prior authorization and notification to the relevant authorities, with member states having flexibility within defined limits. Moreover, obligations for reporting necessitate annual reporting on the use of biometric identification, promoting transparency and accountability in AI deployment.

High Risk Systems

The EU identifies several high-risk AI systems across various sectors, including critical infrastructure, education, product safety, employment, public services, law enforcement, migration management, and justice administration. These systems must adhere to strict obligations, including conducting risk assessments, using high-quality data, maintaining activity logs, providing detailed documentation, ensuring transparency during deployment, having human oversight, and guaranteeing robustness.

High-risk AI systems must fulfill rigorous requirements before they can be marketed. We have simplified these for your convenience:

  • Assess the application’s impact: determine the risk level of the system.
  • Know the regulatory requirements: familiarize yourself with the obligations for your use case and risk classification. Standards will be established by the AI Office in collaboration with standardization organizations such as CEN/CENELEC.
  • Implement a risk management system: evaluate and monitor risks associated with the application in real-world scenarios.
  • Data and data governance: ensure that data is representative, accurate, and complete; keep training, testing, and validation sets independent; ensure annotation quality; and work toward fairness and bias mitigation while safeguarding personal data privacy.
  • Technical documentation and transparency for deployers: keep and make available the information needed to assess compliance, and ensure full transparency about critical information and procedures for regulators as well as for users of the application.
  • Human oversight: design the system so that humans can monitor it and intervene after deployment.
  • Accuracy, robustness, and cybersecurity: ensure the model’s robustness and run continuous integrity checks on the data and the system.
  • Quality management system: implement a comprehensive system for managing the quality of data and learning processes.

Limited Risk Systems

Limited risk pertains to the dangers associated with a lack of clarity in AI utilization. The AI Act establishes particular transparency requirements to ensure individuals are informed when necessary, promoting trust. For example, when engaging with AI systems like chatbots, individuals should be made aware that they are communicating with a machine, allowing them to make an educated decision to proceed or withdraw. Providers are also required to ensure that content generated by AI is recognizable. Moreover, any AI-generated text that aims to inform the public on issues of public significance must be labeled as artificially generated. This requirement also extends to audio and video content involving deepfakes.

Minimal or no risk

The AI Act permits the unrestricted use of AI systems categorized as minimal risk. This encompasses applications like AI-powered video games or spam detection systems. The majority of AI applications currently utilized in the EU fall under this classification.

General Purpose AI Systems

From a broad perspective, a general-purpose AI model is deemed to carry systemic risk if its training necessitates more than 10^25 floating point operations (FLOPs), signifying substantial impact capabilities. These are primarily generative AI models.
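That threshold can be checked with a quick back-of-the-envelope calculation. The sketch below uses the common rule of thumb from the scaling-law literature that training cost is roughly 6 × parameters × tokens; that heuristic is an assumption of this example, not part of the Act, which names only the 10^25 FLOPs figure:

```python
# Threshold named in the AI Act for presumed systemic risk.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb estimate (an assumption, not from the Act):
    # training compute is roughly 6 * parameters * training tokens.
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 threshold.
print(presumed_systemic_risk(7e10, 1.5e13))   # False

# A hypothetical 1.8T-parameter model trained on 13T tokens lands
# well above the threshold.
print(presumed_systemic_risk(1.8e12, 1.3e13))  # True
```

The takeaway is that the threshold is a compute proxy for capability: only the very largest generative models trained today cross it.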

General obligations can be fulfilled through self-assessment and include the following:

  • Codes of Practice: Utilize codes of practice to demonstrate compliance until standardized norms are established.
  • Technical Documentation and Information Sharing: Provide essential information to evaluate compliance with the requirements and ensure ongoing access for regulators.
  • Model Evaluation: Conduct model evaluation using standardized protocols and tools, including adversarial testing, to identify and address systemic risks.
  • Risk Assessment: Evaluate and manage systemic risks that arise from the development or application of AI models.