We will employ AI in a manner that respects human dignity, rights, and freedoms

How safe is the use of artificial intelligence? The EU states have now agreed on rules. These are intended to ensure that AI systems are safe and comply with fundamental rights. Consumer advocates still see dangers.

For the first time, EU member states have laid down comprehensive rules for the use of artificial intelligence (AI). The decision is intended to ensure that AI systems are safe and respect fundamental rights, the Council of EU states announced. At the same time, innovation should be promoted.

Praise from Buschmann and Habeck

“The EU is well on its way to setting the world’s first binding standard for trustworthy AI,” said Federal Justice Minister Marco Buschmann (FDP). However, he sees room for improvement: for example, in ensuring anonymity in public spaces and transparency in the use of AI systems.

Federal Minister of Economics Robert Habeck (Greens) also welcomed the agreement, calling artificial intelligence “crucial for the competitiveness of the EU”.

Before the new rules actually come into force, the EU states must reach an agreement with the European Parliament.

Ban: AI for evaluating people

The EU Commission proposed the law in April 2021 with the aim of setting global standards. The greater the potential dangers of an application, the higher the requirements should be. High penalties are provided for violations of the rules. Above all, the Commission wants to create a basis on which users can trust AI applications.

Among other things, the telecommunications ministers agreed on a ban on using AI to evaluate people based on their social behavior or personality traits if this leads to disadvantages. In addition, the regulation should specify how to deal with particularly risky AI systems.

These include biometric recognition systems and systems used in water and electricity supplies. The use of AI in the military sector and for purely research purposes is to be exempted from the rules.

AI already in many areas

Artificial intelligence usually refers to applications based on machine learning, in which software searches large volumes of data for patterns and draws conclusions from them.

They are already being used in many areas. For example, such programs can evaluate CT scans faster and with greater accuracy than humans. Self-driving cars also try to predict the behavior of other road users in this way. And chatbots or automatic playlists from streaming services also work with AI.
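As a minimal sketch of how such pattern-finding works, the toy example below (written in Python with the scikit-learn library; the data is invented purely for illustration) trains a model on labelled examples and lets it draw a conclusion about a case it has never seen:

```python
# Toy illustration of machine learning: the software finds patterns
# in labelled examples and draws conclusions about new data.
# The data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours of daylight, temperature in C] -> season label
features = [[8, 2], [9, 4], [16, 22], [15, 19], [12, 11], [11, 9]]
labels = ["winter", "winter", "summer", "summer", "autumn", "autumn"]

model = DecisionTreeClassifier().fit(features, labels)

# The trained model generalizes to an unseen case.
print(model.predict([[14, 20]]))  # likely "summer"
```

Real systems such as CT-scan analyzers follow the same principle, only with vastly larger datasets and more sophisticated models.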

Critics: “important questions remain unanswered”, “full of loopholes”

The EU consumer association Beuc complained that the decision of the EU states left too many important questions unanswered, such as the use of facial recognition by private companies in public places. In addition, provisions classifying systems as high-risk had been watered down.

Dutch Green MEP Kim van Sparrentak likewise criticized the decision. The agreement text lacks “necessary safeguards for fundamental rights” and is “full of loopholes,” van Sparrentak wrote on Twitter.

AI’s potential benefits and risks

The wide range of potential applications of AI also means there is a similarly broad spectrum of possible benefits and risks associated with using such technology. The potential benefits of AI at a societal level, as outlined by the European Parliament, include the following:

AI has the potential to improve healthcare, enhance the safety of cars and other transportation systems, and provide personalized, affordable, and longer-lasting products and services. It can also improve access to information, education, and training. Furthermore, AI can enhance workplace safety by utilizing robots for hazardous job tasks and create new job opportunities as AI-driven industries evolve and transform.

For businesses, AI can facilitate the development of innovative products and services, increase sales, optimize machine maintenance, enhance production output and quality, improve customer service, and conserve energy.

The use of AI in public services can result in cost reductions and provide new opportunities in public transportation, education, energy, and waste management. It can also contribute to improving the sustainability of products.

Data-based scrutiny can strengthen democracy, help prevent disinformation and cyber attacks, and ensure access to high-quality information.

AI is expected to play a larger role in crime prevention and the criminal justice system, as it can process massive datasets more quickly, assess prisoner flight risks more accurately, and help predict and prevent crime or terrorist attacks. In military contexts, AI could be used for defensive and offensive strategies in hacking and phishing, as well as for targeting key systems in cyberwarfare.

However, the article also highlighted some of the risks associated with AI. These include issues of liability, such as determining who is accountable for any harm or damage caused by the use of AI. Similarly, in an article on Forbes’ website, futurist Bernard Marr suggested that the major risks of AI at a broad level are:

  • A lack of transparency, especially in the development of deep learning models (including the “black box” issue, where AI generates unexpected outputs and human scientists and developers are unclear about the reasons behind them).

  • Bias and discrimination, particularly when AI systems inadvertently perpetuate or amplify societal biases.
  • Privacy concerns, particularly regarding AI’s ability to analyze large amounts of personal data.
  • Ethical concerns, especially related to the challenges of instilling moral and ethical values in AI systems.
  • Security risks, including the development of AI-driven autonomous weaponry.
  • Concentration of power, given the risk of AI development being dominated by a small number of corporations.
  • Dependence on AI, including the risk that overreliance on AI leads to a decline in creativity, critical thinking skills, and human intuition.
  • Job displacement, as AI has the potential to render some jobs unnecessary, while potentially creating the need for others.
  • Economic inequality, and the possibility that AI will disproportionately benefit wealthy individuals and corporations.
  • Legal and regulatory challenges, and the necessity for regulation to keep pace with rapid innovation.
  • An AI arms race, involving companies and nations competing to develop new capabilities at the expense of ethical and regulatory considerations.
  • Loss of human connection, and concerns that reliance on AI-driven communication and interactions could lead to reduced empathy, social skills, and human connections.
  • Misinformation and manipulation, including the risk that AI-generated content fuels the spread of false information and manipulation of public opinion.
  • Unintended consequences, particularly related to the complexity of AI systems and the lack of human oversight leading to undesired outcomes.
  • Existential risks, including the emergence of artificial general intelligence (AGI) surpassing human intelligence and posing long-term risks for humanity’s future.
On the issue of misinformation and manipulation, several observers have suggested that the 2024 elections, particularly the US presidential election, may be the first elections significantly influenced by AI in the campaigning process.

Potential impact on the employment market in the UK

A government-commissioned report by PwC in 2021 estimated that 7 percent of jobs in the UK workforce faced a high risk of automation within the next five years, rising to just under 30 percent over a 20-year period:

Based on our analysis, it is estimated that approximately 7 percent of current UK jobs could be highly likely (over 70 percent probability) to be automated in the next five years, which could rise to around 18 percent after 10 years and just under 30 percent after 20 years.

These estimates align with previous studies and incorporate feedback from an expert workshop on the automatability of different occupations alongside a detailed examination of OECD and ONS data relating to task composition and required skills for various occupations.

The report highlighted the manufacturing sector as being particularly susceptible to job losses over the next 20 years, with anticipated reductions also in transport and logistics, public administration and defense, and the wholesale and retail sectors. Conversely, the health and social work sector was anticipated to experience the most significant job growth, along with expected gains in the professional and scientific, education, and information and communications sectors.

Jobs in lower-paid clerical and process-oriented roles were identified as being particularly at risk of being lost. On the other hand, the report indicated that there would be increases in jobs within managerial and professional occupations.

The report suggested that the most probable scenario is that the long-term impact of AI on employment levels in the UK would be largely neutral, although the specific impacts within this framework remain uncertain.

Subsequent analyses of AI, especially since the introduction of LLMs such as ChatGPT and Google Bard, have raised questions about whether the impact of AI will predominantly affect lower-paid or manual jobs. A report published in March 2023 by OpenAI, the creator of ChatGPT, suggested that higher-paying jobs are more likely to be affected by LLMs. The analysis also indicated that there would be variations depending on the nature of the tasks involved:

The significance of science and critical thinking skills is strongly negatively linked to exposure, indicating that occupations requiring these skills are less likely to be influenced by current LLMs. Conversely, programming and writing skills show a strong positive correlation with exposure, suggesting that occupations involving these skills are more susceptible to LLM influence.

On April 21, 2023, the House of Commons Business, Energy, and Industrial Strategy Committee released a report on post-pandemic economic growth and the UK labor market. This report emphasized the potential impact of AI on productivity within the UK. It mentioned research from Deloitte which found that “by 2035 AI could enhance UK labor market productivity by 25%”, and that “four out of five UK organizations stated that the use of AI tools had heightened their employees’ productivity, improved decision-making, and made their processes more efficient”.

The report also argued that AI and related technologies might have a positive effect on facilitating labor market access for individuals who have experienced difficulty finding and maintaining employment, such as disabled individuals.

Estimates of AI’s impact on the UK and global economy continue to be released as these products evolve. Recent examples include research from McKinsey, which estimated that generative AI could add value to the global economy each year comparable to the UK’s entire GDP:

Generative AI’s effect on productivity could add trillions of dollars in value to the global economy. Our latest analysis estimates that generative AI could add the equivalent of $2.6tn to $4.4tn annually across the 63 use cases we analyzed—by comparison, the United Kingdom’s entire GDP in 2021 was $3.1tn.

This impact would raise the overall influence of all artificial intelligence by 15 to 40 percent. This estimate would approximately double if we factor in the impact of integrating generative AI into software currently utilized for tasks beyond those use cases.
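To put these figures in proportion, a few lines of arithmetic (values taken directly from the quote above) reproduce the comparison:

```python
# Scale comparison using the McKinsey figures quoted above.
gen_ai_low, gen_ai_high = 2.6, 4.4  # estimated annual value added, in $tn
uk_gdp_2021 = 3.1                   # UK GDP in 2021, in $tn

midpoint = (gen_ai_low + gen_ai_high) / 2
print(f"Midpoint estimate: ${midpoint:.1f}tn per year")          # $3.5tn
print(f"Relative to UK 2021 GDP: {midpoint / uk_gdp_2021:.0%}")  # ~113%
```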

Case study: Potential impact on the knowledge and creative industries (House of Lords Communications and Digital Committee report, January 2023)

AI has potential applications across nearly all aspects of human life, too many to discuss here. However, in January 2023 the House of Lords Communications and Digital Committee examined the potential effect of AI on the UK’s creative industries as part of a broader assessment of the sector, providing an illustrative example.

The committee received testimony indicating that new technologies and the rise of digitized culture will alter the way creative content is created, distributed, and monetized in the next five to ten years.

The committee emphasized the importance of protecting intellectual property (IP) and its significance to the creative industries. It also highlighted the impact of AI technologies, particularly the use of text and data mining, through which generative AI models learn from existing materials and develop new content.

The committee also brought to attention the proposed reforms to IP law:

The government’s proposed changes to IP law illustrated the tension between developing new technologies and supporting rights holders in the creative industries. In 2021, the Intellectual Property Office (IPO) sought input on the relationship between IP and AI. In 2022, the IPO outlined its conclusions, including “a new copyright and database right exception which allows text and data mining for any purpose”.

The committee expressed concerns that such proposals were “misguided” and did not adequately consider the potential harm to the creative industries. They argued that while AI development was important, it should not be pursued at the expense of the creative industries. As a result, the committee recommended that the IPO immediately pause its proposed changes to the text and data mining regime. The committee also urged the IPO to conduct and publish an impact assessment on the implications for the creative industries. If the assessment revealed negative effects on businesses in the creative industries, the committee suggested pursuing alternative approaches, such as those utilized by the European Union (EU), which are detailed in section 5.1 of this briefing.

Additionally, the committee cautioned against using AI to produce, reproduce, and distribute creative works and image likenesses without proper consent or consideration of the rights of performers and original creators.

In response to the committee, the government stated that, considering additional evidence of the impact on the creative sector, it would not move forward with the proposals for an exception for text and data mining of copyrighted works. Instead, the government announced plans to collaborate with users and rights holders to establish a “code of practice by the summer [2023]” on text and data mining by AI.

Several legal challenges are currently underway regarding the use of existing written content and images to train generative AI. Authors Paul Tremblay and Mona Awad, for instance, have initiated legal action in the United States against OpenAI, alleging unauthorized use of their work to develop its ChatGPT LLM.

The debate on how best to safeguard copyright and creative careers like writing and illustrating is ongoing. The Creators’ Rights Alliance (CRA), a coalition of organizations from across the UK cultural sector, contends that current AI technology is advancing without sufficient consideration of ethical, accountability, and economic issues related to creative human endeavor.

The CRA advocates for clear definition and labeling of solely AI-generated work and work involving creators’ input. It also emphasizes the need to protect the distinct characteristics of individual performers and artists. Furthermore, the CRA calls for copyright protection, including no data mining of existing work without consent, and urges increased transparency regarding the data used to create generative AI. Additionally, the CRA seeks enhanced protection for creative roles such as visual artists, translators, and journalists, to prevent these roles from being displaced by AI systems.

On May 20, 2024, a proposed new Italian law on artificial intelligence (AI) was presented to the Italian Senate.

The proposed law contains (1) general principles for the development and utilization of AI systems and models; (2) specific provisions, especially in the healthcare domain and for scientific research in healthcare; (3) regulations on the national strategy on AI and governance, including the identification of the national competent authorities as per the EU AI Act; and (4) modifications to copyright law.

Below, we present an outline of the significant provisions of the proposal.

Aims and General Principles

The proposed law seeks to encourage a “fair, transparent, and responsible” use of AI, following a human-centered approach, and to monitor potential economic and social risks, as well as risks to fundamental rights. The law is intended to operate alongside the EU AI Act. (Article 1)

The proposed law specifies general principles, founded on those developed by the Commission’s High-Level Expert Group on Artificial Intelligence, pursuing three broad objectives:

Equitable algorithmic processing. Research, testing, development, implementation, and application of AI systems must respect individuals’ fundamental rights and freedoms, and the principles of transparency, proportionality, security, protection of personal data and confidentiality, accuracy, non-discrimination, gender equality, and inclusion.

Data protection. The development of AI systems and models must be based on data and processes that are appropriate to the sectors in which they are intended to be used, and must ensure that data is accurate, reliable, secure, of high quality, appropriate, and transparent. Cybersecurity must be guaranteed throughout the systems’ lifespan, and specific security measures adopted.

Digital sustainability. The development and implementation of AI systems and models must ensure human autonomy and decision-making, prevention of harm, transparency, and explainability. (Article 3)

Definitions

The definitions used in the proposed law, such as “AI system” and “[general-purpose] AI model” are the same as those in the EU AI Act, and the definition of the term “data” is based on the Data Governance Act. (Article 2)

Processing of Personal Data Related to the Use of AI Systems

Information and disclosures concerning the processing of data must be written in clear and simple language to ensure complete transparency and the ability to object to unfair processing activities.

Minors aged 14 or older can consent to the processing of personal data related to the use of AI systems, provided that the relevant information and disclosures are easily accessible and understandable. Access to AI by minors under 14 requires parental consent. (Article 4)
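Expressed as simple logic, the consent rule reads roughly as follows (a hypothetical sketch for illustration only; the proposal does not prescribe any implementation):

```python
# Hypothetical sketch of the Article 4 consent rule: minors aged 14 or
# older may consent themselves, provided disclosures are accessible and
# understandable; younger minors require parental consent.
def consent_required_from(age: int) -> str:
    """Return who must consent to AI-related personal data processing."""
    if age >= 14:
        return "the data subject (with accessible, understandable disclosures)"
    return "a parent or guardian"

print(consent_required_from(15))  # the data subject (...)
print(consent_required_from(11))  # a parent or guardian
```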

Use of AI in the Healthcare Sector

As a general goal, the proposed law stipulates that AI systems should contribute to improving the healthcare system, preventing and treating diseases while respecting the rights, freedoms, and interests of individuals, including their data protection rights.

The use of AI systems in the healthcare system must not select or influence access to medical services on a discriminatory basis. Individuals have the right to be informed about the use of AI and its benefits related to diagnosis and therapy, and to receive information about the logic involved in decision-making.

Such AI systems are intended to support processes of prevention, diagnosis, treatment, and therapeutic choice. Decision-making must remain within the healthcare professional’s purview. (Article 7)

Scientific Research to Develop AI Systems for the Healthcare Sector

The proposed law aims to streamline data protection-related obligations for scientific research conducted by public and private not-for-profit entities, for processing of personal data, including health data, for scientific research purposes to develop AI systems for the prevention, diagnosis, and treatment of diseases, development of medicines, therapies, and rehabilitation technologies, and manufacturing of medical devices. (Article 7)

Specifically, the proposed legislation:

– Removes the need to obtain consent from the individual whose data is being used, by categorizing the stated purposes as “significant public interests,” as outlined in Article 9(2)(g) of the GDPR. This exemption does not apply to commercial and for-profit activities.

– Allows for the secondary usage of personal data, including special categories of data, with direct identifiers removed, for processing related to the aforementioned “significant public interests.” Consequently, a new consent is not required if there are changes to the research.

– In such instances, the following conditions are applicable:

– The obligations of transparency and providing information to data subjects can be fulfilled in a simplified manner, such as by posting a privacy notice on the data controller’s website.

– The processing activities must (1) be approved by the relevant ethics committee and (2) be communicated to the Italian data protection authority (“Garante”); and (3) certain information, including a data protection impact assessment and any processors identified, must be shared with the Garante. Processing may commence 30 days after this communication unless the Garante issues a blocking measure. (Article 8)

These provisions are consistent with a recent revision of the Italian Privacy Code pertaining to processing for medical research purposes.

Other Industry-Specific Provisions

– The utilization of AI systems in the workplace must be secure, dependable, transparent, and respectful of human dignity and personal data protection. The employer is required to notify the employee about the use of any AI, along with other pertinent information that must be provided prior to commencing employment. (Article 10)

– In regulated professions, AI may only be used for supportive tasks. To maintain the trust-based relationship with the client, information about any AI systems used by the professional must be communicated in a clear, straightforward, and comprehensive manner. (Article 12)

National AI Strategy

– The proposed legislation introduces a national strategy on AI, to be updated biennially, with the aim of establishing a public-private partnership, coordinating the activities of public entities, and implementing measures and economic incentives to foster business and industrial development in the AI domain. (Article 17)

Governance

– The proposed legislation designates two national competent authorities for AI, as required by the EU AI Act, with powers to enforce and implement national and EU AI laws, as follows:

– Agenzia per l’Italia digitale (“AgID”, the agency for “digital Italy”). AgID will be responsible for (1) promoting innovation and AI development, and (2) establishing procedures and carrying out functions related to the notification, evaluation, accreditation, and monitoring of the notified bodies tasked with conducting conformity assessments of AI systems pursuant to the EU AI Act.

– Agenzia per la cybersicurezza nazionale (“ACN”, the agency for national cybersecurity). ACN will be (1) tasked with monitoring, inspecting, and enforcing powers over AI systems, in accordance with the regulations set forth in the EU AI Act, and (2) responsible for promoting and developing AI from a cybersecurity perspective.

Although not designated as a competent authority for AI, the Garante maintains its competence and authority in relation to the processing of personal data. (Article 18)

The Italian government is also empowered to enact, within 12 months from the enactment of the law, the necessary legislation to align national law with the EU AI Act. (Article 22)

Labeling of AI-Generated News and Information

– The proposed legislation establishes a requirement to label any news or informational content that is entirely generated by AI, or has been partially modified or altered by AI in a way that presents fictional data, facts, and information as genuine, with an “AI” mark, label, or announcement. (Article 23)
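To illustrate what compliance might look like in practice, the sketch below tags content metadata with an “AI” marker based on provenance flags. The field names and logic are hypothetical; the proposal does not prescribe any technical format:

```python
# Hypothetical sketch of the Article 23 labelling requirement: content
# fully generated by AI, or AI-altered so that fictional material is
# presented as genuine, receives an "AI" label. Field names are invented.
def label_content(metadata: dict) -> dict:
    fully_generated = metadata.get("ai_generated", False)
    misleading_edit = (
        metadata.get("ai_altered", False)
        and metadata.get("presents_fiction_as_fact", False)
    )
    if fully_generated or misleading_edit:
        metadata["label"] = "AI"
    return metadata

print(label_content({"title": "Example story", "ai_generated": True}))
# {'title': 'Example story', 'ai_generated': True, 'label': 'AI'}
```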

Copyright Protection and AI-Generated Works

– The proposed legislation introduces specific amendments to copyright law. Notably, regarding AI-generated works, it clarifies that only works resulting from human intellectual effort are protected by copyright, including those created with the assistance of AI tools, to the extent that they reflect the author’s intellectual endeavor. (Article 24)

Criminal Provisions

Among other provisions, the proposed legislation establishes a new offense targeting the unauthorized dissemination of images, videos, or audio that have been falsified or altered by AI in a manner that can be misleading about their authenticity. The new offense carries a penalty of 1-3 years of imprisonment. (Article 25)

Next Steps

As part of the legislative process, the proposed legislation will need to undergo review, discussion, and approval by the Senate, and will subsequently be transmitted to the Chamber of Deputies, which must also approve the same text. Once formally approved, the law will come into effect on the 15th day following its publication in the Italian Official Journal.

Technological advancements are exerting a rapidly increasing influence on our lives with the advent of artificial intelligence (AI). AI has swiftly become an integral part of daily life, transforming business and society.

Nonetheless, as AI technologies gain popularity, they raise moral, legal, and social concerns. Many countries across the globe are adopting laws to govern the design, deployment, and use of AI. This article discusses the relevant regulations and details about AI in specific countries and regions. It also outlines the main considerations and issues related to AI.

AI Regulations Across Different Countries

1. The United States of America

The United States’ decentralized approach to regulating artificial intelligence aligns with its general governance model. Most regulatory practices and policies in the US are focused on specific sectors, and this approach similarly extends to the field of AI.

Overall, there is no comprehensive federal regulation framework specifically for artificial intelligence. However, the US has set up various sector-specific agencies and organizations to address some of the challenges arising from the development of AI.

For instance, the Federal Trade Commission (FTC) focuses on consumer protection when it comes to AI applications and aims to enforce fair and transparent business practices in the industry. Similarly, the National Highway Traffic Safety Administration (NHTSA) regulates the safety aspects of AI-powered technologies, particularly in autonomous vehicles.

Additionally, some states have implemented their own regulations. For example, California’s Consumer Privacy Act (CCPA) imposes strict requirements on businesses handling consumer data, including those using AI technologies. Although AI regulation in the United States lacks centralization, extensive sector-specific oversight compensates for this.

2. The European Union (EU)

The European Union (EU) has taken a proactive approach to AI legislation, driven by measures such as the General Data Protection Regulation (GDPR) and ongoing discussions about the proposed Artificial Intelligence Act. These initiatives aim to establish stringent guidelines for the collection, use, and preservation of personal data.

Since AI systems operate based on the collection and use of personal data, there is a need for strict rules to respect and safeguard individual privacy. The EU’s proposed legislation aims to control the unchecked operation of AI systems. The AI Act complements the GDPR and seeks to give the EU significant authority over the development, use, and regulation of AI. Importantly, the Act is anticipated to be guided by transparency, accountability, and ethical principles to address the concerns and interests of users.

By leveraging these principles and considerations, the EU aims to position itself as the global leader in setting ethical standards and, consequently, in promoting competitiveness and innovation in AI deployment.

3. China

China has emerged as a major force in the AI sector, positioning itself as a leading global power in AI. The country’s objective of becoming the premier AI innovation hub by 2030 is well underway. Despite the government’s assertive drive to reshape all aspects of technology through AI, there is a high level of awareness of AI’s ethical and security implications.

Consequently, the Chinese government has formulated regulations to govern the growth and operations of AI. Moreover, China’s extensive regulations on AI and cybersecurity encompass most of the guiding principles applied to AI.

The Chinese Cybersecurity Law and the New Generation AI Development Plan provide measures for data protection and cybersecurity in AI, emphasizing compliance and timely risk management. With an integrated strategy aimed at attaining AI supremacy while ensuring its ethical and secure application, China is prudently navigating the use of the technology while guarding against its recognized risks.

In this respect, China is confident in implementing AI safety measures in line with emerging global standards, while striving to establish a new operational paradigm for AI that can position it as the eminent AI superpower.

4. Canada

Canada has taken a proactive approach to AI regulation by striking a delicate balance between fostering innovation and upholding ethical standards and societal interests. The country has introduced significant government-led initiatives, such as the Pan-Canadian AI Strategy and the Canadian AI Ethics Council, to advocate for the responsible advancement of AI and address pertinent ethical issues in the AI sector.

These initiatives play a crucial role in facilitating collaboration among stakeholders to develop policies that align with respect for ethical values and the advancement of technology.

Furthermore, Canada has enacted the Personal Information Protection and Electronic Documents Act (PIPEDA) to regulate the collection, use, and disclosure of individuals’ personal information, including by AI technologies. The Act preserves individuals’ privacy rights and mandates that AI technology meet rigorous data protection criteria.

5. Australia

In Australia, several laws promote effective governance of AI. The National Artificial Intelligence Ethics Framework is central to AI regulation in Australia. It outlines the ethical principles guiding the development and implementation of AI systems. This framework is used in Australia to ensure the ethical development of AI technologies, fostering public trust in the technology.

Moreover, regulatory authorities in Australia, such as the Australian Competition and Consumer Commission (ACCC), play a crucial role in enforcing regulations. They are responsible for monitoring compliance with competition and consumer protection laws in the context of AI applications. Through these efforts, Australia aims to create a supportive environment for AI innovation while safeguarding consumer interests and upholding AI ethics.

6. International organizations

International organizations like the Organization for Economic Co-operation and Development (OECD) and the United Nations are actively engaged in establishing global guidelines for AI regulation. For instance, the OECD’s AI Principles advocate for transparency, responsibility, and inclusion in AI development and implementation. Similarly, the United Nations Sustainable Development Goals emphasize the use of AI for global benefits and sustainability.

Given the varying regulatory landscapes for AI, collaboration between countries and international organizations is increasingly essential. By standardizing approaches and guidelines, cooperation helps ensure that nations develop and apply AI responsibly to address global challenges. Collaborative efforts and dialogue will help reconcile regulatory challenges and harness AI for the shared social good.

Key Considerations for Developing Legislation

The following is a list of essential considerations in shaping AI legislation, encompassing ethical principles, data privacy, algorithmic bias, transparency, explainability, and international cooperation.

  • Ethical principles: Regulations should uphold ethical principles such as transparency, fairness, and accountability to ensure responsible AI development and use.
  • Data privacy: Legislation should include guidelines on how AI collects, uses, and protects personal data to mitigate privacy concerns.
  • Algorithmic bias: Measures should be integrated to address algorithmic bias and facilitate fair and impartial AI decision-making.
  • Transparency and explainability: AI systems should be transparent and comprehensible, enabling users to understand decision-making processes and ensuring accountability.
  • International collaboration: Governments should collaborate with international organizations to establish unified regulations that address global challenges.

Takeaway

AI regulations significantly influence the technology’s future impact on society. They should establish clear requirements and support AI across various sectors, always prioritizing consumer protection principles. As AI grows more advanced, regulations should become more adaptable, up to date, and coordinated among all regulatory bodies. Stakeholders should work together at national and global levels to ensure the responsible implementation of AI and maximize the technology’s potential benefits.

As artificial intelligence (AI) becomes more significant in society, professionals in the field have recognized the importance of establishing ethical guidelines for the creation and use of new AI technologies. While there isn’t a comprehensive governing organization to draft and enforce these regulations, numerous tech companies have implemented their own versions of AI ethics or codes of conduct.

AI ethics encompass the moral guidelines that organizations utilize to promote responsible and equitable development and application of AI. This article will examine the concept of ethics in AI, its significance, as well as the challenges and advantages of formulating an AI code of conduct.

AI ethics refer to the framework of guiding principles that stakeholders (which include engineers and government representatives) employ to ensure the responsible development and application of artificial intelligence technologies. This entails adopting a safe, secure, humane, and eco-friendly approach to AI.

A robust AI code of ethics can involve avoiding biases, safeguarding user privacy and their data, and addressing environmental concerns. The two primary avenues for implementing AI ethics are through company-specific ethics codes and government-driven regulatory frameworks. By addressing both global and national ethical concerns in AI and laying a policy foundation for ethical AI within organizations, both methods contribute to regulating AI technologies.

Discussion surrounding AI ethics has evolved from its initial focus on academic studies and non-profit organizations. Presently, major tech firms like IBM, Google, and Meta have assembled teams dedicated to addressing the ethical issues arising from the accumulation of vast data sets. Concurrently, governmental and intergovernmental bodies have begun to formulate regulations and ethical policies grounded in academic research.

Creating ethical principles for responsible AI development necessitates collaboration among industry stakeholders. These parties need to analyze how social, economic, and political factors intersect with AI and determine how humans and machines can coexist effectively.

Each of these groups plays a vital role in minimizing bias and risk associated with AI technologies.

Academics: Scholars and researchers are tasked with generating theory-based statistics, studies, and concepts that assist governments, corporations, and non-profit organizations.

Government: Various agencies and committees within a government can promote AI ethics at a national level. An example of this is the 2016 report from the National Science and Technology Council (NSTC), titled Preparing for the Future of Artificial Intelligence, which outlines the relationship between AI and public outreach, regulation, governance, economy, and security.

Intergovernmental entities: Organizations such as the United Nations and the World Bank are crucial for enhancing awareness and formulating international agreements concerning AI ethics. For instance, UNESCO’s 193 member states adopted a global agreement on the Ethics of AI in November 2021, which aims to uphold human rights and dignity.

Non-profit organizations: Groups like Black in AI and Queer in AI work to elevate the representation of diverse communities within AI technology. The Future of Life Institute formulated 23 guidelines that have since become the Asilomar AI Principles, detailing specific risks, challenges, and outcomes tied to AI technologies.

Private companies: Leaders at tech giants like Google and Meta, as well as industries such as banking, consulting, and healthcare that utilize AI, are accountable for establishing ethics teams and codes of conduct. This often sets a standard for other companies to follow.

The significance of AI ethics arises from the fact that AI technologies are designed to enhance or substitute human intelligence; however, issues that can impair human judgment may inadvertently impact these technologies as well. AI initiatives developed on biased or unreliable data can have detrimental effects, especially for underrepresented or marginalized individuals and groups. Moreover, if AI algorithms and machine learning models are hastily constructed, it may become difficult for engineers and product managers to rectify embedded biases later on. Implementing a code of ethics during the development phase is a more effective way to address potential future risks.

Instances of AI ethics can be illustrated through real-world cases. In December 2022, the application Lensa AI employed artificial intelligence to create stylized, cartoon-like profile pictures from users’ standard images. Ethically, some criticized the application for failing to provide credit or adequate compensation to the artists whose original digital works the AI was trained on. Reports indicated that Lensa was trained on billions of photographs obtained from the internet without prior consent.

Another instance is the AI model ChatGPT, which allows users to interact with it by posing questions. ChatGPT generates responses from patterns learned in its training data, producing a poem, Python code, or a proposal on request. One ethical concern is that individuals are using ChatGPT to win coding competitions or to compose essays on their behalf. It also raises questions similar to those around Lensa, but concerning text rather than images.

These two instances exemplify prevalent issues in AI ethics. As AI has advanced in recent years, impacting nearly every sector and significantly benefiting areas such as health care, the discussion surrounding AI ethics has become increasingly important. How can we ensure that AI is free from bias? What steps can be taken to reduce future risks? There are various potential solutions, but stakeholders need to operate responsibly and collaboratively to achieve positive results worldwide.

Ethical issues related to AI

There are numerous real-world situations that can effectively illustrate AI ethics. Here are just a few.

AI and bias

If AI fails to gather data that accurately reflects the population, its decisions may be prone to bias. In 2018, Amazon faced criticism for its AI recruiting tool, which penalized resumes containing the term “women” (such as “Women’s International Business Society”) [3]. Essentially, the AI software discriminated against women, leading to legal liability for the tech giant.
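A common first step in auditing a system like this is to compare selection rates across groups. The sketch below uses invented screening results; the 0.8 threshold is the US EEOC “four-fifths” guideline, shown here only as one possible benchmark:

```python
# Toy bias audit: compare the rate at which a screening model selects
# candidates from each group. The numbers are invented for illustration.
selections = {
    "group_a": {"selected": 40, "total": 100},
    "group_b": {"selected": 22, "total": 100},
}

rates = {g: v["selected"] / v["total"] for g, v in selections.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                 # {'group_a': 0.4, 'group_b': 0.22}
print(f"Selection-rate ratio: {ratio:.2f}")  # 0.55
if ratio < 0.8:  # the US EEOC "four-fifths" guideline
    print("Potential adverse impact: review the model and training data.")
```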

AI and privacy

As noted earlier with the Lensa AI example, AI depends on data sourced from internet searches, social media images and comments, online transactions, and more. While this personalization enhances customer experience, it raises concerns regarding the apparent absence of genuine consent for these companies to access our private information.

AI and the environment

Certain AI models are extensive and demand substantial energy to train on data. Although research is being conducted to create energy-efficient AI methods, more efforts could be made to include environmental ethical considerations in AI-related policies.

How to foster more ethical AI

Developing more ethical AI necessitates a thorough examination of the ethical ramifications of policy, education, and technology. Regulatory frameworks can help ensure that technologies serve societal benefits rather than causing harm. Globally, governments are starting to implement policies for ethical AI, including guidelines on how companies should address legal concerns when bias or other harms occur.

Everyone who interacts with AI should be aware of the risks and potential adverse effects of unethical or deceptive AI. The development and distribution of accessible resources can help to reduce these types of risks.

It may seem paradoxical to utilize technology to identify unethical conduct in other technological forms, but AI tools can assist in determining whether video, audio, or text (hate speech on Facebook, for instance) is genuine or not. These tools can identify unethical data sources and bias more accurately and efficiently than humans.
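As a minimal sketch of such a tool, the toy classifier below (Python with scikit-learn, trained on a handful of invented labelled examples) learns to flag abusive text; production moderation systems rely on far larger models and datasets:

```python
# Toy content classifier: learns to flag abusive text from labelled
# examples. The training data is invented and deliberately tiny.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "have a great day", "thanks for your help", "lovely work everyone",
    "you are worthless", "nobody wants you here", "get lost, idiot",
]
labels = ["ok", "ok", "ok", "abusive", "abusive", "abusive"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["what a wonderful idea"]))  # likely "ok"
print(classifier.predict(["you are an idiot"]))       # likely "abusive"
```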

Continue learning

The fundamental question for our society is: how do we manage machines that surpass our intellect? Lund University’s course Artificial Intelligence: Ethics & Societal Challenges examines the ethical and societal implications of AI technologies. Covering topics from algorithmic bias and surveillance to AI in democratic versus authoritarian contexts, it explores AI ethics and its significance in our society.
