Exploring the cutting edge of technology, batteries, and green energy for a sustainable future

AI companies promised to self-regulate one year ago

Posted by:

|

On:

|

Tech giants in the USA are committed to using artificial intelligence responsibly. Risk assessments are intended to curb misinformation and increase the security of use.

Seven leading developers of software with artificial intelligence (AI) in the USA have made a commitment to use the technology responsibly and safely. This includes checking AI programs for risks before publication. In addition, content created or modified by AI software should be labeled, as US President Joe Biden said at a meeting with representatives of major technology and internet companies in Washington.

Participating companies include Google, Amazon, the Facebook group Meta and Microsoft. Also involved is the company OpenAI, whose technology is behind the popular chatbot ChatGPT.

Growing concern about misinformation

With the success of the chatbot ChatGPT developed by OpenAI, concerns have grown that AI software can be used to create and spread false information, including deceptively real-looking photos and videos. Because the program has no understanding of the content, it can make claims that are completely false, even with supposedly persuasive power.

The companies committed to “creating a more comprehensive regime that makes it easier for consumers to know whether a piece of content is artificially generated or not.” “There is still technical work to be done, but the point is that it will apply to audio and visual content and will be part of a more comprehensive system,” a White House official said.

In the USA, with a view to the 2024 presidential election,ways are being sought to detect whether audio or image material has been artificially generated in order to prevent deception and forgery.

Combating prejudice

But critics also point out that AI programs are not free from prejudice and discrimination. The companies committed to resolve the problem. They also declared their willingness to focus artificial intelligence on major challenges such as cancer research and climate change.

“Limits and supervision needed”

Artificial intelligence poses risks for society, the economy and national security – but also incredible opportunities, said Biden. The voluntary commitments are “a promising step,” but new laws and oversight will also be needed, said Biden after a meeting with company representatives in the White House. The companies Anthropic and Inflection, which develop AI assistants, also joined the voluntary commitment.

The White House says it wants to work with the US’s allies on international rules for AI. The topic was already on the agenda at the G7 summit in Japan in May. Great Britain is expected to host an international AI summit in the autumn.

At EU level, a comprehensive labeling requirement for AI-generated content is currently being discussed as part of a comprehensive AI law. In addition to labeling image and sound recordings, Federal DigitalMinister Volker Wissing is also calling for the introduction of an AI seal fortexts.

Discover how AI improves efficiency and fosters innovation within legal departments, enhancing research, compliance, and decision-making.

Artificial intelligence (AI) is revolutionizing global business operations. According to a 2023 report by Goldman Sachs, AI has the potential to automate two-thirds of US occupations to some extent. Although it’s commonly believed that the legal industry is slow to adopt technology, AI is making significant strides in this field. A recent study by ContractPodAi in January 2024 revealed that over half of in-house legal professionals (51%) report that their company’s leadership encourages the use of GenAI tools, and more than a third (35 %) require it.

The advantages of AI in the legal sector are plentiful. This article examines how corporate legal departments are integrating AI into routine and complex legal matters to improve efficiency, enhance compliance, facilitate better decision-making, and elevate client service. It will discuss how artificial intelligence is empowering legal professionals to meet the demands of the modern legal landscape.

Artificial Intelligence in Corporate Legal Departments

Similar to other parts of a company, legal departments are under pressure to demonstrate tangible value while managing costs. As leading organizations strive to gain a competitive edge and boost productivity, adopting new legal technology helps achieve these objectives. AI technology is reshaping the delivery of legal services by automating repetitive tasks, streamlining document management and retrieval, processing vast amounts of information, and refining contract review. AI software and systems are capable of more than just executing tasks based on predetermined programming.

Over time, modern AI systems enhance their performance through human review and feedback, enabling legal professionals to make better, data-driven decisions. While AI will never replace human intelligence or legal experts, it will continue to provide invaluable support and transform practices within a law firm or in-house counsel’s profession.

AI tools are already being utilized in various legal practices, including:

-Due Diligence

– Predictive Analytics

– Contract Analysis

– Contract Review

– Legal Document Generation

– e-Discovery

– Legal Research

– Contract Negotiation

– Document Management

Legal professionals report that their teams have been proactive in providing guidance on when, how, or if to use GenAI tools for legal work. A large majority of respondents (85%) state that their company’s legal department has established guidelines, best practices, or operating procedures for using GenAI tools.

1. Streamlining Legal Processes

We all understand the value of a lawyer’s time and the associated costs. Therefore, finding ways to save time in the legal field while maintaining accuracy and compliance is crucial – benefiting both the attorney and the client. Law firms and in-house counsel can assess existing workflows to identify time-consuming tasks, prone to human error, or suitable for automation, and introduce an AI solution to assist.

AI can help streamline vital aspects of legal services, such as comprehensive document review, thorough proofreading, and in-depth legal research. This, in turn, allows regret to dedicate more time to advising and counseling clients. Artificial intelligence tools are adept at handling large data sets (documents, etc.) with high precision, while simultaneously recognizing patterns in word relationships or data to identify key information and detect errors or inconsistencies. It can analyze contracts and other legal documents, extract relevant information, and complete these manual tasks instantly. This not only saves time and reduces the laborious nature of routine tasks but also helps humans evade errors and burnout.

2. Risk Assessment and Compliance

Corporate governance is constantly, presenting complex legal and compliance challenges within an organization. AI systems possess robust functionality to help ensure compliance and pinpoint legal risks by continuously monitoring regulatory changes and correlating them with potential implications for the evolving business. These tools notify the legal team of updates or changes, enabling legal professionals to remain proactive in meeting compliance requirements and make necessary adjustments promptly.

Likewise, artificial intelligence can sift through extensive data (particularly beneficial in due diligence processes) to identify potential risks and offer comprehensive guidance on mitigating them, ultimately averting future disruptions for the legal department and the company’s bottom line.

3. Quality Assurance in Legal Documentation

Utilizing AI for Quality Assurance (QA) and legal reviews is increasingly essential as it meticulously scans and summarizes relevant documents, revealing any discrepancies or inaccurate data findings. It quickly identifies specific clauses, potential risks, and company obligations, saving time and improving the comprehensiveness and accuracy of legal document analysis for litigation and legal professionals.

4. Organizational Efficiency

In today’s legal industry, AI is rapidly becoming a vital tool for staying competitive, especially in the time-consuming task of legal research. Specialized vertical legal language models and AIs, like those found in ContractPodAi’s Leah software, excel in legal research by locating and analyzing essential information from various sources such as articles, statutes, and relevant case law. The AI ​​assists lawyers by identifying authoritative sources, extracting key information, providing document summaries, and offering insights to help legal professionals make better decisions.

5. Strategic Decision-Making

While human judgment and expertise are always essential in legal practice, AI can be leveraged by general counsels to review contracts, capture detailed data, and provide trend analytics to assist in making more-informed decisions. AI can examine past case laws and outcomes to predict risk, compliance issues, and future results, allowing legal professionals to form new strategies based on concrete data. Additionally, AI can aid in managing outside counsel by identifying the best legal representation for a case, thereby saving time and costs.

6 .Reducing Workload and Stress

AI not only relieves redundant, time-consuming in-house staff workload, but also contributes to higher job satisfaction, reduced attorney stress, and minimized work frustration. By AI to perform administrative tasks and offer support in drafting and document analysis, legal professionals can focus on higher-value, strategic duties. This ultimately leads to increased productivity, a healthier work-life balance, and improved job satisfaction without compromising work quality. Leveraging new technology that frees up time and brainpower ultimately contributes to a healthier work-life balance.

7. Enhancing In-House Client Service

AI enables in-house lawyers to focus more on strategic legal advising and less on mundane tasks, leading to improved service for both internal and external clients. The time saved on low-level responsibilities allows sorry to engage more in human-specific activities, such as improving client response times, personalizing client communication, and strategic brainstorming, ultimately leading to better client satisfaction. Additionally, AI equips legal teams with better information and legal research, helping build better cases and ultimately making their clients happy.

Summary

As legal departments in corporations explore the use of AI in the workplace, they will uncover the myriad ways in which AI can aid them in their daily and long-term tasks. A study by Mordor Intelligence revealed that the AI ​​Software Market in the Legal Industry is projected to expand from USD 2.19 billion in 2024 to USD 3.64 billion by 2029.

The integration of AI into various aspects of the legal profession, such as research, analytics, due diligence, compliance, and contract management, is having a significant impact within corporate legal teams. Will AI replace lawyers? No, it will empower them to perform their jobs more effectively, efficiently, and intelligently. The emergence of AI systems places legal departments in an advantageous position to drive profitability, reduce costs, and enhance productivity like never before.

Generative Artificial Intelligence (GenAI) is a branch of AI, including deep learning and machine learning, that uses vast amounts of collected data to generate human-consumable output based on user input. To create the datasets that underlie GenAI tools, large volumes of human -created data were collected and processed into mathematical predictive models. GenAI excels at extensive processing information to recognize, summarize, translate, compare, and predict. Users utilize GenAI to produce original text, images, audio, video, and code.

GenAI tools are revolutionizing the provision of legal services due to their potential impact on various business areas, including content discovery, creation, authenticity, regulations, automation, and the customer and employee experience. Many legal practices already rely on generative artificial intelligence to expedite and improve their work, including the drafting of contracts, trial briefs, legal research, pleadings, discovery requests, deposition questions, and legal marketing materials.

Law.com conducted a survey of 100 law firms to understand their use of generative AI, and 41 firms confirmed their adoption of generative AI, although Law.com noted that the actual number is likely much higher. Even though many of these firms and other companies have established policies, there is also use of individual subscriptions or public services, known as “shadow AI” usage.

The terms AI and GenAI are often used interchangeably, but they serve different functions. Traditional AI models perform specific tasks or solve specific problems based on pre-defined rules and algorithms. GenAI models are not restricted to specific tasks; they are trained on vast amounts of data and can generate entirely new content based on that training. This makes the potential of generative AI very compelling for the legal field to expedite and enhance its role in content creation.

GenAI Provides the Following Key Advantages to In-House Legal Teams

Efficiency

Remaining competitive in today’s legal landscape entails finding ways to create more efficiencies that will continue to grow the business. GenAI can be utilized to expedite time-consuming legal research tasks, such as locating relevant laws and rulings, searching through case law databases, and reviewing evidence. Once it locates the information, it can then convert it into the requested format (legal documents, contracts, and letters).

AI can also streamline document management and contract review. AI can quickly identify errors and inconsistencies, as well as draft preferred contract language. For instance, a global venture capital firm initiated a pioneering GenAI legal contract management endeavor with support from ContractPodAi and PwC. Leah Legal, ContractPodAi’s GenAI solution, demonstrated its ability to extract complex logic-oriented data points and conduct sophisticated analysis across nearly 16,500 contract pages with over 98% accuracy.

Risk Assessment and Compliance

For example, the legal team at the Southeast Asian e-commerce business used generative AI to accelerate its contract review process and to identify common risks across different agreements. The team estimates that contract review is already 20% to 30% faster than its standard workflow .

GenAI is capable of digesting continually changing regulatory laws and requirements and then highlighting any disparities that do not align with the company. As the models continue to learn, they can pinpoint compliance gaps, which helps leaders adjust to meet new or changing obligations.

Data Augmentation:

Generate realistic synthetic data that mimics real-world data, which can be particularly beneficial in legal situations sensitive involving data, as it enables legal departments to conduct thorough analyzes without risking the confidentiality of the data.

Scenario Simulation and Analysis:

Produce data representing potential scenarios, allowing legal professionals to assess the impact of various risk factors and address vulnerabilities before they manifest.

Predictive Modeling:

Utilize learned patterns and training to forecast future outcomes, providing valuable insights for legal professionals to identify potential risks and predict court decisions.

Decision-making:

Provide recommendations based on simulated outcomes and predictive analytics.

Preparing for AI Integration:

According to a recent study by ContractPodAi, over half of in-house legal professionals (51%) report that their company’s leadership encourages the use of GenAI tools, while more than a third (35%) require it as of January 2024. Successfully integrating GenAI into an organization requires careful consideration, planning, and evaluation to yield positive results for the business.

Best Practices for Integrating AI:

Assess Legal Needs

When integrating AI into business operations, it is essential to meticulously evaluate the legal requirements to ensure compliance with relevant laws and regulations, including data privacy laws, intellectual property rights, and industry-specific regulations, as well as the influx of new laws and regulations. governing AI usage.

Identify High-Impact Areas for AI Applications:

AI can support various legal practice areas, and it is crucial to identify the key areas where GenAI can play a significant role in achieving operational goals, such as drafting communications, legal research, narrative summaries, document review, contract creation, due diligence, discovery review, and contract redlines.

Evaluate Current Workflows and Technology Infrastructure:

Assess how GenAI will integrate into existing workflows and whether the current technology infrastructure supports its implementation without causing disruptions. Any changes made should adhere to industry and internal regulations and standards.

Set Objectives for GenAI Adoption:

It is important to clearly define the goals and consider critical variables, such as the cost of the technology, budget, scalability, ease of use, and compatibility with the current technology infrastructure.

Develop a Phased Approach:

Taking a phased approach to integration can help users adapt more seamlessly. Communication with the company should be open and transparent, providing employees with the necessary information to understand the positive impact of GenAI integration and the defined goals.

Implementing GenAI in Legal Departments:

A. Technical Setup and Compliance

The implementation of GenAI in legal departments requires consideration of its legal and regulatory implications. Establish a framework outlining how the legal department will utilize GenAI, identify potential risks, and involve stakeholders in developing company policies and procedures, including ethical standards, responsibilities, privacy, non-discrimination, and compliance with applicable laws.

B. Employee Training and Change Management

The field of technology is constantly changing. Laws and regulations are frequently evolving. When combined, these factors create ongoing and potentially overwhelming changes. Therefore, it is crucial for legal teams to continuously adapt their use of GenAI and the accompanying training. Tools and algorithms are always progressing, making it essential to stay current in order to fully utilize the capabilities of GenAI. Initial and ongoing training helps all users grasp best practices, effectively integrate GenAI into their work, adopt new methods, and understand privacy and ethical considerations.

Like any new technology, there may be resistance, confusion, and pushback. However, a few straightforward steps can help overcome obstacles and set your legal team up for success.

An illustrated representation outlining the process of preparing a legal team for success

Ethical and Legal Concerns

GenAI tools offer numerous advantages. In fact, a recent report by PwC revealed that 70% of CEOs believe that GenAI will significantly impact the way their companies generate, deliver, and capture value over the next three years. However, it is important not to overlook ethical and legal considerations.

All software and technology should undergo an initial risk assessment to identify potential harm, appropriateness of input, reliability of outputs, and effectiveness of practices. Legal professionals need to ensure that all outputs are completely reliable and accurate. When reviewing the generated content, the following aspects must be taken into account:

Bias and fairness

GenAI may unintentionally draw biased historical data, potentially leading to unfair outcomes and discrimination.

Accuracy

Inaccurate GenAI-generated content is referred to as “hallucinations”. Lawyers must carefully review any content suggested or edited by GenAI.

Privacy

GenAI technology relies on vast amounts of data, often including highly sensitive and confidential information. Attorneys must ensure that GenAI systems comply with strict data privacy regulations and that the data is only used for its intended purposes.

Accountability

Lawyers must be proactive and fully involved when incorporating GenAI into their legal practices. GenAI technology should complement their work rather than replace it.

Ethical Aspects of GenAI

As we have discussed, the deployment of AI tools and technology carries significant risks and potential legal repercussions, particularly in the realm of law. The European Commission has established an Expert Group on AI to develop Ethical Guidelines for Trustworthy AI., additionally the United Nations has formed an AI and Global Governance Platform to address the global policy challenges presented by AI.

At the organizational level, leadership must establish GenAI governance, incorporating:

  • Clear policies that direct and embed ethical practices across all areas using AI
  • Strategies for negotiating AI-specific contractual terms and addressing potential AI failures
  • Designation of individuals to oversee AI governance and provide reports to management
  • Risk assessments and audits of AI models to ensure compliance with ethical standards

Transparency and accountability in AI not only protect against potential mishaps and legal consequences but also help adhere to company policies by ensuring that AI algorithms are thoroughly tested and explainable. This also builds trust among users and clients. At an individual level, collaborating with existing data privacy teams can provide an advantage in responding promptly to generative AI issues, as many of the tools and techniques learned by data privacy professionals are equally applicable to generative AI.

Spotlight on Leah Legal

Leah Legal, developed by ContractPodAi, is specifically designed for legal and compliance scenarios, utilizing state-of-the-art Large Language Models (LLMs). Leah, your customized GenAI solution, simplifies the execution of legal responsibilities, making them faster, more intelligent, and completely reliable. It incorporates ethical guardrails and rigorous testing, aligning with your organization’s standards to instill trust in AI solutions. Leah promotes strategic thinking and offers real-time, precedent-based legal analysis.

Leah provides a range of specialized legal modules equipped with cutting-edge GenAI and rigorously tested for maximum accuracy. Each module is supported by customized frameworks for specific legal tasks to ensure efficiency and dependable results for your unique workflow. The modules include Extract, Redline, Discovery, Deals, Claims, Playbook, Helpdesk, and Draft.

Leah is tailored specifically for contract management and legal operations, including contract negotiations. Within minutes, she can deliver results that significantly enhance your legal workflows:

  • Examine your contracts and establish a record
  • Recognize critical clauses, compare them with your historical data, and emphasize relevant insights to expedite and enhance negotiations
  • Discover advantageous language from your previous legal documents, propose evidence-based counterpoints, and notify you of potential risks based on your established legal framework
  • Offer proactive guidance based on successful previous negotiations
  • Recommend clauses, terms, and edits that align with your company’s goals and proven strategies
  • Provide insight into all your vendor and customer contract data
  • Speed ​​up your negotiations with real-time data driven by predictive analytics, presented in a visual dashboard

The Future of GenAI in the Legal Field

Legal professionals are realizing the advantages of employing AI in their field and acknowledge its significance for the future. To stay competitive and enhance efficiency, GenAI must be adopted and comprehended. The future of GenAI will continue to bring progress in both application and function, leading to changes and shifts in the way legal professionals operate.

More intricate research applications, including case search capabilities, case citations, and strategic decision-making, will allow lawyers to dedicate time to other advanced tasks. Traditional legal work will be streamlined, leading to improved accuracy and overall job satisfaction.

On the other hand, clients will be able to leverage GenAI by selecting lawyers and firms based on more precise criteria such as success rates, client feedback, expertise, and more. Cultivating trustworthy, confident relationships will become more straightforward and require less guesswork.

The realm of predictive analytics will expand and become more advanced. In-house legal teams and law firms will be able to more precisely anticipate service costs, enabling better pricing and smoother agreements.

GenAI is an Enduring Presence

Whether or not legal professionals embrace GenAI, it is here to stay. How can legal professionals fully accept this advanced technology? Be open to change and embrace a growth-oriented mindset. Understand AI’s potential and acknowledge its limitations. Learn how it can help you perform your job more effectively and efficiently with greater job satisfaction. Even the author – an early adopter of the technology and an avid user in the legal field – has discovered numerous ways in which generative AI expedites legal work and makes it more efficient.

Ensure your company is investing in suitable software and technology and request involvement in its implementation. Pursue additional educational opportunities related to it. Ensure that GenAI is used fairly, accurately, and in compliance with the law to safeguard your rights, the company’s reputation, and your clients’ relationships. Last, and perhaps most importantly, always uphold the highest standards of professionalism and ethics.

One year prior, on July 21, 2023, seven prominent AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—made pledges to the White House regarding eight voluntary commitments aimed at the safe and trustworthy development of AI.

These commitments included pledges to enhance the testing and transparency of AI systems, as well as to share information about potential risks and harms.

On the anniversary of these commitments, MIT Technology Review reached out to the AI companies that signed them for insights into their progress thus far. Their responses indicate that while there has been some positive movement in the tech sector, there are significant caveats.

The voluntary commitments emerged during a period when generative AI enthusiasm was arguably at its peak, with companies competing to launch larger and more advanced models than their rivals. Simultaneously, issues such as copyright disputes and deepfakes began to surface. A prominent group of influential tech figures, including Geoffrey Hinton, also expressed concerns about the existential risks that AI could pose to humanity. Suddenly, the discourse surrounding the urgent need for AI safety intensified, putting pressure on regulators globally to take action.

Up until recently, the development of AI resembled a chaotic environment. Historically, the US has hesitated to impose regulations on its tech giants, preferring instead to let them self-regulate. The voluntary commitments exemplify this approach: they represent some of the initial prescriptive guidelines for the AI industry in the US, but their non-enforceable and voluntary nature remains.

Since then, the White House has issued an executive order that expands these commitments and extends them to other tech companies and government entities.

“One year later, we observe some positive practices regarding their own products, but they are far from where we need to be concerning effective governance or protection of broader rights,” states Merve Hickok, president and research director of the Center for AI and Digital Policy, who evaluated the companies’ responses upon MIT Technology Review’s request. Hickok adds that many of these companies continue to make unsubstantiated claims about their offerings, asserting that they can surpass human intelligence and capabilities.

A notable trend from the companies’ responses is their increased focus on technical solutions like red-teaming (where humans assess AI models for vulnerabilities) and implementing watermarks for AI-generated content.

However, it remains uncertain what changes can be attributed to the commitments and whether these companies would have adopted such measures independently, notes Rishi Bommasani, society lead at the Stanford Center for Research on Foundation Models, who also reviewed the responses for MIT Technology Review.

A year represents a significant duration in the AI landscape. Since the signing of the voluntary commitments, Inflection AI founder Mustafa Suleyman has departed from the company to join Microsoft and spearhead its AI initiatives. Inflection has opted not to comment on this.

“We appreciate the strides that leading companies have made in adhering to their voluntary commitments alongside the requirements of the executive order,” remarks Robyn Patterson, a White House spokesperson. Nevertheless, Patterson emphasizes that the president continues to urge Congress to enact bipartisan AI legislation.

In the absence of comprehensive federal laws, the best course of action for the US at this moment is to insist that companies uphold these voluntary commitments, according to Brandie Nonnecke, director of the CITRIS Policy Lab at UC Berkeley.

It is important to remember that “these are still companies that are largely responsible for determining their own evaluation criteria,” observes Nonnecke. “Thus, we must carefully consider whether they are… rigorously verifying themselves.”

Here’s our evaluation of the progress made by AI companies in the past year.

Commitment 1

The companies agree to conduct both internal and external security testing of their AI systems prior to their launch. This testing, which will involve independent experts, aims to address critical AI risks, such as biosecurity and cybersecurity, along with their wider societal impacts.

All the companies, except for Inflection which chose not to provide comments, report that they undertake red-teaming exercises that engage both internal and external testers to identify flaws and risks in their models. OpenAI states that it has a distinct preparedness team that assesses models for cybersecurity threats, as well as chemical, biological, radiological, and nuclear risks, and for scenarios where a sophisticated AI system may lead a person to act in harmful ways.

Anthropic and OpenAI mention they also collaborate with external experts in their testing prior to launching new models. For instance, during the launch of Anthropic’s latest model, Claude 3.5, the company involved experts from the UK’s AI Safety Institute in pre-launch testing. Anthropic has additionally permitted METR, a research nonprofit organization, to conduct an “initial exploration” into Claude 3.5’s autonomy capabilities.

Google states that it also performs internal red-teaming to evaluate the limitations of its model, Gemini, in relation to election-related content, societal risks, and national security issues.

Microsoft mentions that it has collaborated with third-party evaluators at NewsGuard, an organization promoting journalistic integrity, to assess risks and reduce the threat of misuse of deepfakes in Microsoft’s text-to-image tool. In addition to red-teaming, Meta reports that it has assessed its newest model, Llama 3, to gauge its effectiveness in various risk areas such as weapons, cyberattacks, and child exploitation.

However, regarding testing, it’s insufficient for a company merely to indicate that it is taking action, notes Bommasani. For instance, Meta, Amazon, and Anthropic indicated they had partnered with the nonprofit Thorn to address the dangers to child safety posed by AI. Bommasani expressed a desire to see more detailed information on how the companies’ interventions are effectively reducing those threats.

“It should be evident to us that it’s not just companies engaging in activities, but that those activities yield the intended results,” Bommasani states.

RESULT: Positive. The initiative for red-teaming and assessing a variety of risks is both good and necessary. Nonetheless, Hickok would have appreciated if independent researchers had broader access to the companies’ models.

Commitment 2

The companies pledge to share knowledge across the industry and with governments, civil society, and academic institutions regarding the management of AI risks. This encompasses best safety practices, information on efforts to bypass safeguards, and technical cooperation.

Following their commitments, Anthropic, Google, Microsoft, and OpenAI established the Frontier Model Forum, a nonprofit designed to encourage dialogue and actions concerning AI safety and accountability. Amazon and Meta have also joined this initiative.

Engagement with nonprofits that the AI companies themselves funded might not align with the spirit of the voluntary commitments, according to Bommasani. However, the Frontier Model Forum could facilitate collaboration among these companies, enabling them to exchange safety information that they typically cannot share as competitors, he notes.

“Even if they won’t be transparent with the public, one expectation could be for them to collectively devise measures to truly mitigate risks,” Bommasani suggests.

All seven signatories are also members of the Artificial Intelligence Safety Institute Consortium (AISIC), launched by the National Institute of Standards and Technology (NIST), which formulates guidelines and standards for AI policy and the evaluation of AI performance. This consortium includes a mixture of public and private sector participants. Representatives from Google, Microsoft, and OpenAI are also part of the UN’s High-Level Advisory Body on Artificial Intelligence.

Many of the labs emphasized their research partnerships with academic institutions. For example, Google is involved in MLCommons, where it collaborated with scholars on a cross-industry AI Safety Benchmark. Google also states that it actively contributes tools and resources, including computing credits, to initiatives like the National Science Foundation’s National AI Research Resource pilot, which aims to democratize AI research in the United States. Meta adds that it is a member of the AI Alliance, a network of companies, researchers, and nonprofits that specifically focuses on open-source AI and the developer community.

Numerous companies have also contributed to guidelines set forth by the Partnership on AI, another nonprofit initiated by Amazon, Facebook, Google, DeepMind, Microsoft, and IBM, regarding the deployment of foundational models.

RESULT: More effort is required. Enhanced information sharing is a beneficial development as the industry strives to collaboratively ensure that AI systems are safe and reliable. Nonetheless, it remains uncertain how much of the promoted activity will result in substantial changes and how much is mere superficiality.

Commitment 3

The companies vow to invest in cybersecurity and measures to mitigate insider threats in order to safeguard proprietary and unreleased model weights. These model weights are the core component of an AI system, and the companies concur that it is crucial that these weights are disclosed only when appropriate and with a full consideration of security risks.

Many companies have put new cybersecurity protocols in place over the past year. For instance, Microsoft has initiated the Secure Future Initiative to combat the escalating scale of cyberattacks. The company claims that its model weights are encrypted to lessen the risk of model theft and that it enforces strict identity and access controls when deploying highly capable proprietary models.

Similarly, Google has introduced an AI Cyber Defense Initiative. In May, OpenAI announced six new measures it is implementing to enhance its existing cybersecurity practices, such as extending cryptographic protections to AI hardware. It also operates a Cybersecurity Grant Program that allows researchers access to its models in order to develop cyber defenses.

Amazon stated that it has implemented specific measures against threats related to generative AI, including data poisoning and prompt injection, where an individual uses prompts to instruct the language model to disregard its previous instructions and safety safeguards.

Just a few days after making these commitments, Anthropic shared details about its safeguards, which consist of standard cybersecurity practices like regulating access to the models and sensitive resources such as model weights, as well as monitoring and managing the third-party supply chain. The organization also collaborates with independent evaluators to assess whether the controls it has established meet its cybersecurity requirements.

RESULT: Positive. All companies indicated that they have instituted additional measures to secure their models, even though there seems to be little agreement on the most effective methods to protect AI models.

Commitment 4

The companies agree to support third-party discovery and reporting of vulnerabilities within their AI systems. Some problems may remain even after an AI system is launched, and an effective reporting system allows for quicker identification and resolution.

For this commitment, one prevalent approach has been the establishment of bug bounty programs that reward individuals who identify flaws in AI systems. Anthropic, Google, Microsoft, Meta, and OpenAI all have such programs for their AI systems. Anthropic and Amazon also mentioned having forms on their websites where security researchers can submit reports of vulnerabilities.

It may take years for us to learn how to perform third-party audits effectively, says Brandie Nonnecke. “It’s not solely a technical challenge; it involves socio-technical factors. It generally takes years to figure out both the technical and socio-technical standards of AI, and it’s a complex and difficult process,” she explains.

Nonnecke expresses concern that the first companies to conduct third-party audits may set unfavorable examples for addressing the socio-technical risks associated with AI. For instance, audits might define, assess, and tackle certain risks while neglecting others.

RESULT: More effort is needed. While bug bounty programs are beneficial, they are far from being exhaustive. New regulations, like the EU’s AI Act, will mandate tech companies to perform audits, and it would have been advantageous for tech companies to showcase successful audit examples.

Commitment 5

The companies pledge to create robust technical mechanisms that inform users when content is generated by AI, such as a watermarking system. This action promotes creativity with AI while minimizing the risks of fraud and deception.

Many of the companies have developed watermarks for AI-generated content. For instance, Google introduced SynthID, a watermarking tool for images, audio, text, and video created by Gemini. Meta offers Stable Signature for images and AudioSeal for AI-generated speech. Amazon now incorporates an invisible watermark on all images produced by its Titan Image Generator. OpenAI also applies watermarks in Voice Engine, its custom voice model, and has created an image-detection classifier for images generated by DALL-E 3. Anthropic was the only company without a watermarking tool, as watermarks are mainly associated with images, which aren’t supported by the company’s Claude model.

All the companies, aside from Inflection, Anthropic, and Meta, are part of the Coalition for Content Provenance and Authenticity (C2PA), an industry alliance embedding information about when content was produced and whether it was generated or edited by AI into an image’s metadata. Microsoft and OpenAI automatically attach the C2PA’s provenance metadata to images generated with DALL-E 3 and videos created with Sora. Although Meta is not a member, it has announced its use of the C2PA standard to identify AI-generated images on its platforms.

The six companies that signed the commitments show a “natural preference for more technical solutions to address risk,” says Bommasani, “and indeed, watermarking specifically tends to lean in this direction.”

“The key question is: Does [the technical solution] genuinely make progress and tackle the fundamental social issues that prompt our desire to know whether content is machine-generated or not?” he adds.

RESULT: Positive. Overall, this is an encouraging outcome. Although watermarking remains in the experimental phase and is still inconsistent, it’s beneficial to observe research in this area and a commitment to the C2PA standard. It’s certainly better than nothing, especially during a hectic election year.

Commitment 6

The companies pledge to disclose their AI systems’ capabilities, limitations, and suitable and unsuitable applications. This disclosure will include both security and societal risks, such as impacts on fairness and bias.

The commitments made by the White House allow for considerable interpretation. For instance, companies can technically satisfy this public reporting requirement with varying degrees of transparency, as long as they make some effort in that direction.

The most frequently proposed solutions by tech firms in this category were model cards. Although each company may refer to them differently, essentially they serve as a type of product description for AI models. These cards can cover aspects ranging from model capabilities and limitations (including performance concerning benchmarks in fairness and explainability) to issues of veracity, robustness, governance, privacy, and security. Anthropic has stated it also examines models for potential safety concerns that could emerge later.

Microsoft has released an annual Responsible AI Transparency Report, which sheds light on how the company develops applications utilizing generative AI, makes decisions, and manages the deployment of these applications. The company also claims it provides clear notifications about where and how AI is implemented in its products.

Meta, too, has introduced its new Llama 3 model accompanied by a detailed and thorough technical report. Additionally, the company has updated its Responsible Use Guide, which contains instructions on how to effectively and responsibly use advanced large language models.

RESULT: Progress is still required. One area for improvement identified by Hickok is for AI companies to enhance transparency concerning their governance frameworks and the financial ties between corporations. She also expressed a desire for companies to be more forthcoming about data origins, model training methods, safety incidents, and energy consumption.

Commitment 7

The companies have vowed to emphasize research on the societal risks posed by AI systems, such as preventing harmful bias and discrimination and safeguarding privacy. The historical evidence surrounding AI highlights the pervasive and insidious nature of these threats, and the companies are committed to developing AI that alleviates them.

Tech companies have been active in the safety research arena, integrating their findings into products. Amazon has established safeguards for Amazon Bedrock that can identify hallucinations and implement safety, privacy, and truthfulness measures. Anthropic claims to maintain a dedicated research team focused on societal risks and privacy. Over the past year, the company has released research addressing deception, jailbreaking, methods to combat discrimination, and emergent capabilities, including models’ ability to manipulate their code or engage in persuasion.

OpenAI asserts that it has trained its models to avoid generating hateful content and to decline requests related to hateful or extremist material. Its GPT-4V model is specifically trained to reject many inquiries that involve stereotypes. Google DeepMind has also conducted research to assess dangerous capabilities and has studied potential misuses of generative AI.

All these companies have invested significant resources into this area of research. For instance, Google has dedicated millions of dollars to establish a new AI Safety Fund aimed at enhancing research in this field through the Frontier Model Forum. Microsoft has pledged $20 million in computing resources to support research into societal risks via the National AI Research Resource and launched its own AI model research accelerator program for academia, called the Accelerating Foundation Models Research initiative. The company has also appointed 24 research fellows focusing on AI and societal issues.

RESULT: Very good. This commitment is relatively easy for the signatories, as they represent some of the largest and wealthiest corporate AI research laboratories globally. While increased research on AI system safety is a positive advancement, critics argue that the emphasis on safety research diverts attention and funding from addressing more immediate issues like discrimination and bias.

Commitment 8

The companies have committed to creating and implementing advanced AI systems aimed at tackling some of society’s most pressing challenges. From cancer detection to combating climate change and beyond, AI—when managed properly—has the potential to significantly contribute to prosperity, equality, and security for everyone.

Since making this pledge, tech companies have addressed a variety of issues. For example, Pfizer utilized Claude to analyze trends in cancer treatment research after collecting pertinent data and scientific information, while Gilead, a U.S.-based biopharmaceutical firm, employed generative AI from Amazon Web Services to conduct feasibility studies on clinical trials and evaluate data sets.

Google DeepMind has established a particularly strong track record in providing AI tools that assist scientists. For instance, AlphaFold 3 is capable of predicting the structure and interactions of all life forms’ molecules. AlphaGeometry can solve geometry problems at a level that compares favorably to the world’s best.

Microsoft has utilized satellite imagery and artificial intelligence to enhance wildfire responses in Maui and to identify populations at risk from climate change, aiding researchers in uncovering threats such as hunger, forced relocations, and health issues.

On the other hand, OpenAI has revealed collaborations and financial support for various research initiatives, including one that examines the safe usage of multimodal AI models by educators and scientists in laboratory environments. Additionally, it has provided credits to assist researchers in utilizing its platforms during clean energy development hackathons.

Generally, some advancements in employing AI to enhance scientific discoveries or forecast weather phenomena are genuinely thrilling. While AI companies have yet to use AI to avert cancer, that represents a significant expectation.

In summary, there have been positive transformations in AI development, such as implementing red-teaming methodologies, watermarks, and innovative approaches for the industry to exchange best practices. However, these improvements are merely a few innovative technical responses to the complex socio-technical challenges associated with AI harm, requiring much more effort. A year later, it is peculiar to observe commitments addressing a specific kind of AI safety focused on theoretical risks, like bioweapons, while neglecting to mention consumer protection, nonconsensual deepfakes, data privacy, copyright issues, and the environmental impact of AI systems. These omissions seem peculiar in today’s context.

The brief November 2023 boardroom coup that ousted OpenAI CEO Sam Altman showcased both the potential and limitations of OpenAI’s unique governance model, wherein the leading AI lab was (at least temporarily) governed by a nonprofit board of directors that could (and did, albeit briefly) act in ways that jeopardized the company’s future and profitability.

However, the board’s attempt to reinforce its authority was short-lived. Altman made a return as CEO a week after his termination, and the board members who voted for his dismissal, including OpenAI co-founder and chief scientist Ilya Sutskever, ultimately exited the organization.

The situation surrounding Altman brings forth several inquiries regarding the function of unconventional governance models—specifically, those diverging from standard for-profit corporate structures—in the evolution of AI. Given the customary predominance of profit-driven corporate models in the tech sector, the debate on whether commercial AI would benefit from a nonprofit framework would be a theoretical one, but for the notable fact that two leading AI laboratories—OpenAI and Anthropic—have opted against the conventional for-profit model (which has led to an increasing volume of insightful academic analysis).

Both organizations made this decision due to explicit worries regarding AI safety, based on the belief that a sole focus on profits might lead AI developers to choose unsafe paths if such choices yield greater financial gain. Thus, it deserves examination whether unconventional corporate governance can meet the objectives it is expected to achieve.

In this context, we intend to outline the landscape of corporate governance within the AI sector, critically assess whether nontraditional governance can effectively address the unique risks associated with AI, and propose policy recommendations that will assist these alternative governance frameworks in aligning AI development with broader societal interests.

The Dangers of AI

According to their own statements, the two foremost AI labs that have chosen not to function as conventional, for-profit entities made that choice primarily due to concerns related to AI safety.

The organizational structure of AI laboratories poses a significant policy consideration since the advancement of increasingly sophisticated AI technologies carries considerable externalities, both beneficial and detrimental. On the positive side, AI has the potential to boost productivity and drive technological progress. In the most optimistic outlooks, it could herald a time of post-material wealth. Therefore, society should aim to foster those positive innovation outcomes as much as possible.

Conversely, AI also poses considerable social risks. Some implications are relatively minor and localized, such as the damage a specific AI system might cause to an individual—for example, an AI that dispenses poor health guidance or slanders a third party. Others are more widespread, such as the hazards of AI being utilized to disseminate misinformation and propaganda on a large scale or exacerbate surveillance and job displacement. At the most severe end of the spectrum, AI raises various “existential” dangers, whether by enabling malicious entities to create weapons of mass destruction or by autonomous AI systems possibly acting in ways that could harm humanity as a whole.

Conventional regulation may find it challenging to tackle the threats posed by AI. The existing gap in expertise between regulators and the entities they oversee may be even more pronounced in this swiftly changing domain than in other sectors. The issue of regulatory capture could be particularly acute because individuals outside the field may not fully grasp the risks involved or take them seriously. Since AI research can be conducted globally, national regulators may struggle to rein in AI companies that can operate beyond their jurisdiction. Perhaps most alarmingly, governments might become the most perilous actors in this scenario if they engage in an arms race, given the clear military ramifications associated with AI. Governments eager to harness AI’s capabilities may lack the motivation to regulate its potentially harmful aspects.

What Nontraditional Corporate Governance Can Achieve

Given that traditional regulation is challenging due to the unique and potentially disastrous risks associated with AI, there is hope that self-regulation by AI developers can help mitigate those dangers. The objective is to align the interests of companies and their management with societal goals, seeking to harness the remarkable benefits of AI while steering clear of catastrophic risks.

Regrettably, conventional for-profit corporations appear ill-equipped to exercise sufficient self-restraint in mitigating social risks. When faced with a choice between safety and profit, the norm of maximizing shareholder wealth prevailing in U.S. corporate law (particularly in Delaware, where most large U.S. companies are incorporated) suggests that increasing financial returns for shareholders should take precedence. Although doctrines like the business judgment rule provide safety-conscious managers with considerable discretion to weigh social risks, various legal and informal norms and practices still push managers to prioritize profits.

Nonprofit organizations, as their name indicates, offer a pathway to shift away from a profit-centric focus. Instead, they emphasize mission-driven objectives, such as promoting social, educational, or charitable endeavors. To retain nonprofit status, these organizations must comply with specific legal stipulations, such as prohibiting the distribution of profits to private individuals or shareholders and ensuring that their activities primarily benefit the public. Any surplus income must be reinvested in the organization’s goals, reinforcing a commitment to long-term societal advantages rather than short-term financial rewards.

Nonetheless, nonprofits also face their own limitations as a structure for companies engaged in AI development. Excluding equity investors will place them at a significant disadvantage in securing the substantial capital needed for AI research and development, which may also hinder their ability to attract top talent in the field.

They might be excessively cautious, causing delays in realizing the potential benefits from AI innovations. Additionally, nonprofits may struggle with accountability issues since their boards are generally self-selected, with the current board determining its successors, and they lack the mechanisms of shareholder voting and lawsuits that impose at least some checks on for-profit boards.

Recently, there has been considerable focus on hybrid legal structures for social enterprises that lie between for-profits and nonprofits. Benefit corporations represent a prominent new legal category designed to capture some benefits of both types. However, benefit corporations lack robust governance mechanisms to ensure that profit motives do not overpower social objectives (such as preventing human extinction).

They depend on purpose statements, fiduciary duties, and disclosure to foster a commitment to public interests beyond mere profit. However, as currently formulated, companies can easily use public interests as a façade while prioritizing profit, and none of these mechanisms will effectively restrain their actions or impede their momentum.

In this context, both OpenAI and Anthropic have been experimenting with more complex individualized hybrid structures that appear to offer greater promise than benefit corporations. Each organization has established a for-profit entity capable of attracting equity investors, along with a nonprofit entity that maintains overall control. OpenAI’s structure is particularly intricate. The organization started as a nonprofit, hoping that donations would provide the necessary capital, but the amount raised fell short. In response, OpenAI created a for-profit LLC under Delaware law to facilitate investment opportunities and offer financial returns, albeit with a cap on those returns.

There are multiple layers of entities connecting the nonprofit and the for-profit LLC, including a holding company and a management company. Nevertheless, the ultimate authority still rests with the nonprofit corporation’s board, which is self-perpetuating, overseeing the for-profit LLC.

Anthropic’s organizational structure is not exactly the same, but it shares similarities and aims at the same fundamental concept. Anthropic operates as a Delaware public benefit corporation, which we previously mentioned has minimal impact on its own. More intriguingly, it has created a long-term benefit trust overseen by five independent directors who possess expertise in AI safety, national security, public policy, and social enterprise. This trust holds a special category of Anthropic shares, granting it the authority to appoint certain directors to Anthropic’s board. Within four years, this trust will select a majority of the board members at Anthropic. The trust’s mission aligns with that of the benefit corporation, specifically to responsibly develop and maintain AI for the betterment of humanity.

For both entities, the expectation is that the controlling nonprofit will insulate the business from a profit-driven focus that could compromise the essential goal of ensuring the product’s safety, while still drawing enough investment to enable the company to lead in AI development. This framework protects the nonprofit board, which holds ultimate authority, from the pressures exerted by shareholders demanding financial returns. In contrast to for-profit corporations, shareholders cannot elect nonprofit directors or trustees, nor is there a risk of lawsuits from shareholders for breaches of fiduciary duty. Unlike the statutes governing benefit corporations, this issue directly addresses governance: determining who wields decision-making power and who gets to select those decision-makers.

Although unconventional, the governance models of OpenAI and Anthropic are not entirely unique. They have counterparts with established histories. For example, nonprofit foundations have frequently owned and operated for-profit companies in various countries. While foundation enterprises are rare in the U.S. due to discouraging tax regulations, they are more common in parts of Europe, notably Denmark, where the regulations are more favorable.

The available evidence regarding the performance of foundation enterprises is varied but predominantly positive. In terms of profit and other indicators of financial and economic success, research generally indicates (though not always) that they match or even outperform similar standard for-profit firms, often assuming lower levels of risk and demonstrating greater long-term stability. Limited evidence regarding social performance suggests that foundation enterprises either perform comparably or better than traditional for-profits in generating social benefits and mitigating potential harms.

Researchers studying enterprise foundations have noted that these findings challenge the prevailing views among corporate governance scholars about the advantages of for-profit organizational models in incentivizing focus and ensuring accountability. Directors or managers of foundation enterprises operate independently from shareholders and donors. Their boards are self-perpetuating, and without shareholders (or equivalent parties) to step in, there is no one to sue if managers breach their fiduciary obligations.

This separation from accountability mechanisms might suggest that foundation enterprises may show less efficiency and financial success; however, the evidence does not seem to support this notion. Academics propose that this insulation from accountability could enable managers to more thoroughly consider long-term outcomes and stakeholder interests, even when such considerations might jeopardize profits. Nonetheless, this detachment may pose challenges in holding boards accountable if they stray from their mission due to self-interest, incompetence, or a misinterpretation of that mission.

OpenAI, Anthropic, and foundation enterprises hone in on the board and its governance, concluding that the board should be self-governing. In emphasizing who holds control over the board, they bear resemblance to alternative corporate governance models. Stakeholder governance structures, for instance, empower stakeholders other than shareholders to appoint some or all board members. This could include employees, as seen in worker cooperatives, or customers, as in credit unions and insurance mutuals.

It could also involve suppliers, such as in agricultural cooperatives. In the context of AI developers, one might envision AI safety organizations having the power to appoint certain board members. Similar to OpenAI and Anthropic, these organizations retract the authority of shareholders to choose (some or all) directors. However, instead of completely removing control over the selection process, these alternatives grant that power to different groups of stakeholders, whereas in the OpenAI and Anthropic frameworks, the board itself becomes self-perpetuating.

There are valid reasons to believe that the hybrid governance models of OpenAI and Anthropic might strike a better balance by attracting investment while maintaining a significant emphasis on safe and responsible AI development. Nonetheless, even if the advantages of unconventional AI governance outweigh their drawbacks for a specific lab, it does not ensure that nontraditional AI organizations will fulfill their safety commitments amidst competition from for-profit rivals.

From the standpoint of existential or general social risk, it is ineffective for OpenAI or Anthropic to act conservatively if competitors like Microsoft or Google accelerate their advancements at a rapid pace. The challenge of AI safety stems from it being a significant negative externality; if one organization creates a superintelligent paperclip maximizer, it endangers not just that organization but all of humanity.

Moreover, the competitive landscape is not uniform. For-profit AI firms—promising increased profitability and, consequently, higher stock values and dividends—are likely to attract more investment, which is crucial for success due to the immense expenses associated with data and computing. Of course, nonprofit AI organizations have also secured substantial funding, and OpenAI’s current funding round—a staggering, oversubscribed request of $6.5 billion, which ranks among the largest ever and would appraise the company at an astonishing $150 billion—indicates that there is investor interest even in not-for-profit entities. Nevertheless, even the current funding OpenAI is acquiring may fall short of covering future computation costs.

For-profit AI companies might entice skilled engineers away from nonprofit rivals, either through superior salaries or simply the allure of pursuing the development of grander and more impressive systems at a faster pace. Additionally, engineers who aren’t primarily driven by financial gain and are mindful of AI risks may still find themselves drawn to environments where they can engage with cutting-edge innovations, echoing a sentiment expressed by J. Robert Oppenheimer, the creator of the atomic bomb.

Nonprofits have options to counter these challenges, but their responses will likely necessitate adopting characteristics of their for-profit rivals, potentially undermining the justification for their unique corporate structure. A case in point is OpenAI itself. After Altman was dismissed from the organization, Microsoft swiftly recruited him and OpenAI co-founder Greg Brockman to effectively recreate OpenAI within Microsoft—had Altman remained at Microsoft, it’s probable that many of OpenAI’s top researchers and engineers would have followed him.

Once Altman returned to OpenAI and the board underwent changes, Microsoft obtained a nonvoting position on OpenAI’s board (which it has since relinquished), illustrating a shift in power dynamics favoring the for-profit AI sector.

Over the last year, as Altman has strengthened his influence at OpenAI, the organization has increasingly begun to resemble a traditional tech company in Silicon Valley, aiming to roll out products as rapidly as possible while compromising its alleged commitments to AI safety, according to numerous insiders.

Perhaps most significantly, reports indicate that OpenAI is contemplating the complete transition from its nonprofit status to becoming a for-profit public benefit corporation, wherein Altman would hold a considerable 7 percent equity stake, despite his earlier assertions, including to the U.S. Senate, claiming he had no ownership in OpenAI. (Altman has subsequently refuted the claims regarding his equity stake, labeling the 7 percent figure as “absurd.”) If OpenAI does eventually become a for-profit entity, it would serve as a striking illustration of the challenges faced by nonprofit leading-edge AI laboratories in remaining faithful to their initial objectives. The public benefit company designation would act merely as a superficial cover—offering little defense against profit imperatives overshadowing OpenAI’s mission.

Government “Subsidies” for Unconventional Corporate Governance

Given the hurdles that both traditional regulation and corporate governance present, a combination of the two may be the most effective solution. Corporate governance could enhance regulation, while regulation might promote governance structures that can mitigate the tendency to overlook safety and abandon nonprofit status. This approach could take the form of responsive regulation, a strategy where state regulators involve businesses and stakeholders in a more adaptable and dynamic regulatory process compared to conventional methods.

Regulators could motivate organizations with superior corporate governance in various ways. Entities adhering to a preferred governance model could benefit from reduced regulatory scrutiny. Specific regulatory obligations might be relaxed or waived for organizations with stronger governance frameworks. For instance, if a jurisdiction mandates companies to assess their products for safety, it could afford preferred companies greater flexibility in designing those tests, or scrutinize their testing procedures less frequently.

An extreme interpretation of this strategy would permit only those entities with a preferred governance structure to develop AI, while still subjecting these preferred organizations to regulation (i.e., one should not entirely depend on internal governance as a resolution). The notion of a federal charter for AI developers is one potential method of implementing this. If all AI developers were mandated to obtain a charter from a federal oversight body, that regulator could enforce any governance requirements deemed beneficial and monitor chartered companies, with the authority to revoke a charter if necessary.

Alternatively, businesses with better governance might be prioritized when it comes to receiving government contracts for the development or implementation of AI. In addition to contracts or financial support, another approach for governments to influence private AI development could involve establishing a nongovernmental organization that holds intellectual property (such as trade secrets, copyrights, and patents) accessible to companies that adhere to proper governance and commit to safety protocols.

Reduced regulation or financial incentives through contracts or access to intellectual property for unconventional entity types could somewhat alleviate the concerns surrounding the for-profit competitors mentioned earlier. Such lighter regulations and incentives could at least create a more equitable environment against rivals with greater financial resources and, if substantial enough, could even favor enterprises with more responsible governance.

In extreme cases, if only those with suitable governance frameworks were permitted to develop AI, the issue of competition from more profit-driven firms would be resolved (at least within the jurisdiction enforcing this rule—avoiding the issue by relocating outside the jurisdiction would still pose a challenge).

If regulators were to pursue this approach, a crucial question would arise regarding which governance frameworks would be regarded as favorable. This strategy is only valid if one believes that a governance framework significantly deters irresponsible risk-taking. At best, the effectiveness of the nonprofit/for-profit hybrid governance that OpenAI and Anthropic have tried remains uncertain. In fact, a significant risk associated with nontraditional corporate governance in AI laboratories is that it might mislead regulators into a comfort level that prompts reduced oversight, which could be less than ideal.

Nonetheless, despite the challenges highlighted by the Altman incident, this structure may still hold value, either in its current form or potentially with modifications to address the shortcomings that have come to light.

To support this idea, having a governmental role in evaluating governance structures could create new opportunities to enhance accountability and shield against the pressure to compromise safety for profit, thus tackling concerns that alternative governance models do not truly provide the safety advantages they claim. For example, regulators might require the inclusion of government-appointed directors or board observers.

This could bolster the internal safety benefits of alternative governance models if one agrees that they are currently not fulfilling their intended promise. As previously mentioned regarding the potential of nontraditional governance, the nonprofit model relies on self-sustaining boards, believing that the absence of profit-driven shareholders and investors will empower those in charge.

Other types of stakeholder governance focus on ensuring that non-investor stakeholders play a role in determining the composition of the governing board. Appointing government directors is one method to achieve this, addressing the dilemma of who should represent the public interest. The state bears the ultimate responsibility for safeguarding the public, so it is a reasonable option, although there are numerous challenges related to government control over private businesses.

We would not recommend that government regulators appoint the majority of the board in AI firms, but rather one or a few positions. This could provide regulators with critical insights and a degree of influence in decision-making without granting them complete authority. This approach is akin to proposals for awarding banking regulators a “golden share” in banks of significant systemic importance, although that proposal comes with its own set of controversies. Instead of government-appointed directors, regulators might consider including other stakeholder representatives, such as employee representatives or directors suggested by AI safety organizations.

While discouraging for-profit competitors and potentially introducing internal safety governance mechanisms like government-appointed directors or observers may heighten the risk of stifled innovation, this is a legitimate concern. Nevertheless, a more gradual approach to achieving the ideal scenario envisioned by some may be a worthwhile sacrifice in order to mitigate the risk of truly existential threats.

Leave a Reply

Your email address will not be published. Required fields are marked *