Exploring the cutting edge of technology, batteries, and green energy for a sustainable future

OpenAI has raised $6.6 billion in a round led by Thrive Capital

OpenAI announced on Thursday that it has obtained a new $4 billion revolving credit line, shortly after closing a $6.6 billion funding round, solidifying its position as one of the most valuable private companies globally.

The new credit line will increase OpenAI’s liquidity to $10 billion, enabling the company to purchase expensive computing capacity, including Nvidia chips, in its competition with tech giants like Google, which is owned by Alphabet.

OpenAI’s finance chief, Sarah Friar, stated, “This credit facility further strengthens our balance sheet and provides flexibility to seize future growth opportunities.”

The credit line was provided by a syndicate that includes JPMorgan Chase, Citi, Goldman Sachs, Morgan Stanley, Santander, Wells Fargo, SMBC, UBS, and HSBC.

Following the latest funding round, OpenAI is now valued at nearly $157 billion. Returning venture capital investors such as Thrive Capital and Khosla Ventures participated, as did major corporate backer Microsoft and new investor Nvidia, with the funding taking the form of convertible notes.

The conversion to equity is contingent on a successful structural change into a for-profit company and the removal of the cap on returns for investors.

Despite recent executive changes, including the departure of Chief Technology Officer Mira Murati, most investors remain optimistic about significant growth based on CEO Sam Altman’s projections.

OpenAI is projected to generate $3.6 billion in revenue this year, despite losses surpassing $5 billion. It anticipates a substantial revenue increase to $11.6 billion next year, according to sources familiar with the figures.

The recent funding round also involved Altimeter Capital, Fidelity, SoftBank, and Abu Dhabi’s state-backed investment firm MGX.

Following the funding, OpenAI’s Chief Financial Officer, Sarah Friar, informed employees that the company will offer liquidity through a tender offer to buy back their shares, although details and timing have yet to be confirmed.

Thrive Capital, which committed approximately $1.2 billion, negotiated the option to invest another $1 billion next year at the same valuation if the AI firm meets a revenue goal, as Reuters reported last month.

Apple, which was reportedly in discussions to invest in OpenAI, ultimately did not join the funding, according to sources.

Most investors remain optimistic about OpenAI’s growth, despite recent personnel changes, and have secured protections as the company undergoes a complex corporate restructuring.

OpenAI has experienced a rapid increase in both product popularity and valuation, capturing the world’s attention. Since the launch of ChatGPT, the platform has amassed 250 million weekly active users. The company’s valuation has soared from $14 billion in 2021 to $157 billion, with revenue growing from zero to $3.6 billion, surpassing Altman’s initial projections.

The company has indicated to investors that it remains committed to advancing artificial general intelligence (AGI), aiming to develop AI systems that surpass human intelligence, while also focusing on commercialization and profitability. OpenAI has successfully concluded a widely-watched funding round, securing $6.6 billion from investors such as Microsoft, Nvidia, and venture capitalists.

The funding round has placed OpenAI’s valuation at $157 billion, with Thrive Capital alone contributing $1.2 billion, alongside investments from Khosla Ventures, SoftBank, and Fidelity, among others. This marked Nvidia’s first investment in OpenAI, while Apple, despite previous speculations, did not participate in the funding round.

In a statement confirming the raise, OpenAI expressed that the funding will enable them to further establish their leadership in frontier AI research, expand compute capacity, and continue developing tools that facilitate problem-solving.

This investment follows a week of significant changes for OpenAI, including restructuring as a for-profit company, with CEO Sam Altman expected to gain a substantial equity stake. Additionally, the company experienced departures of key personnel, raising concerns among some AI observers. However, the successful funding round has alleviated such concerns, at least for the time being.

Notably, Thrive Capital has the option to invest an additional $1 billion next year at the same valuation, contingent on OpenAI achieving an undisclosed revenue goal. On the other hand, some investors have clauses that allow them to renegotiate or retract funds if specific restructuring changes are not completed within two years, according to a source.

OpenAI has reported that 250 million individuals utilized ChatGPT on a weekly basis. Sarah Friar, the chief financial officer, highlighted the impact of AI in personalizing learning, accelerating healthcare breakthroughs, and driving productivity, emphasizing that this is just the beginning.

Reports indicate that OpenAI set conditions for investors, requesting them not to fund five competing firms, including Anthropic, xAI, and Safe Superintelligence. These firms develop leading large language models, directly competing with OpenAI. SoftBank and Fidelity have previously funded xAI, but it is understood that OpenAI’s terms are not retroactive.

The funding arrives at a crucial time for OpenAI, as the company requires significant capital to sustain its operations, especially considering the substantial computing requirements for AI and the high salaries of top AI researchers. Reports earlier this year suggested that OpenAI’s costs for training and inference could exceed $7 billion in 2024, with an additional $1.5 billion spent on staff, well above rival Anthropic’s $2.7 billion.

Furthermore, OpenAI continues to invest in developing artificial general intelligence (AGI), while also striving to maintain a competitive edge in AI for business applications. Although OpenAI is projected to generate $3.6 billion in revenue this year, it is expected to incur a loss due to costs exceeding $5 billion. Sources from Reuters suggest that the company anticipates generating over $11 billion in revenue next year.

An additional challenge for OpenAI is the return on investment, as it remains uncertain how much companies will benefit from utilizing these costly technologies. Despite the unclear ROI, CIOs are not deterred. However, if prices rise to support the AI industry and encourage further investment, it could potentially hinder adoption.

OpenAI’s shift to a for-profit company

OpenAI’s decision to transition to a for-profit company could lead to potential safety issues, according to a whistleblower. William Saunders, a former research engineer at OpenAI, expressed concerns about the company’s reported change in corporate structure and its potential impact on safety decisions. He also raised worries about the possibility of OpenAI’s CEO holding a stake in the restructured business. Saunders emphasized that the governance of safety decisions at OpenAI could be compromised if the non-profit board loses control and the CEO gains a significant equity stake.

OpenAI, initially established as a non-profit organization committed to developing artificial general intelligence (AGI) for the benefit of humanity, is now facing scrutiny over its shift to a for-profit entity. Saunders, who previously worked on OpenAI’s superalignment team, highlighted his apprehensions about the company’s ability to make responsible decisions regarding AGI and its alignment with human values and goals.

Saunders pointed out that the transition to a for-profit entity may contradict OpenAI’s original structure, which aimed to limit profits for investors and employees, with the surplus being directed back to the non-profit for the betterment of society. He expressed concerns that a for-profit entity might not prioritize giving back to society, especially if its technology leads to widespread unemployment.

Although OpenAI has made recent changes, such as establishing an independent safety and security committee and considering restructuring as a public benefit corporation, concerns remain about the potential impact of the company’s transition. Reports about the CEO possibly receiving a stake in the business and the company seeking significant investment have sparked debate about the company’s direction and its commitment to its original mission.

Additionally, OpenAI’s decision to delay the release of its Voice Engine technology aligns with its efforts to minimize the risk of misinformation, particularly during a crucial year for global elections. The AI lab has deemed the technology too risky for general release, emphasizing the need to mitigate potential threats of misinformation in the current global political landscape.

Voice Engine was initially created in 2022, and a first version was utilized for the text-to-speech feature integrated into ChatGPT, the primary AI tool of the organization. However, its full potential has not been publicly disclosed, partially due to OpenAI’s careful and well-informed approach towards its broader release.

OpenAI mentioned in an unsigned blog post that they aim to initiate a discussion on the responsible implementation of synthetic voices and how society can adjust to these new capabilities. The organization stated, “Based on these discussions and the outcomes of these small-scale tests, we will make a more informed decision regarding whether and how to deploy this technology on a larger scale.”

In their post, the company provided instances of real-world applications of the technology from various partners who were granted access to integrate it into their own applications and products.

Age of Learning, an educational technology company, utilizes it to produce scripted voiceovers. Meanwhile, the “AI visual storytelling” app HeyGen enables users to generate translations of recorded content that are fluent while retaining the original speaker’s accent and voice. For example, using an audio sample from a French speaker to generate English results in speech with a French accent.

Notably, researchers at the Norman Prince Neurosciences Institute in Rhode Island employed a low-quality 15-second clip of a young woman presenting a school project to “restore the voice” she had lost due to a vascular brain tumor.

OpenAI stated, “We have chosen to preview but not widely release this technology at this time,” in order “to strengthen societal resilience against the challenges posed by increasingly realistic generative models.” In the near future, the organization encouraged actions such as phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information.

OpenAI also advocated for the exploration of “policies to safeguard the use of individuals’ voices in AI” and “educating the public about understanding the capabilities and limitations of AI technologies, including the potential for deceptive AI content.”

OpenAI mentioned that Voice Engine generations are watermarked, enabling the organization to trace the source of any generated audio. Currently, it added, “our agreements with these partners necessitate explicit and informed consent from the original speaker, and we do not permit developers to create methods for individual users to generate their own voices.”
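
OpenAI has not published how Voice Engine’s watermark works, so the sketch below is purely illustrative: a classic spread-spectrum approach in Python, in which a key-derived pseudorandom sequence is mixed into the audio at low amplitude and later detected by correlation. The function names, strength, and threshold are assumptions for this example, not OpenAI’s scheme.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.002) -> np.ndarray:
    """Mix a key-derived pseudorandom +/-1 sequence into the signal at low amplitude."""
    rng = np.random.default_rng(key)
    chip = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * chip

def detect_watermark(audio: np.ndarray, key: int, strength: float = 0.002) -> bool:
    """Correlate with the key's sequence: the mean is ~strength if marked, ~0 otherwise."""
    rng = np.random.default_rng(key)
    chip = rng.choice([-1.0, 1.0], size=audio.shape)
    return float((audio * chip).mean()) > strength / 2

# Toy check: random noise stands in for a real recording (10 s at 16 kHz).
voice = 0.1 * np.random.default_rng(0).standard_normal(16_000 * 10)
marked = embed_watermark(voice, key=42)
assert detect_watermark(marked, key=42) and not detect_watermark(voice, key=42)
```

A production watermark would also have to survive compression, resampling, and deliberate removal attempts, which is where the real engineering difficulty lies.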

While OpenAI’s tool is distinguished by its technical simplicity and the minimal amount of original audio required to create a convincing replica, competing products are already accessible to the general public.

Companies such as ElevenLabs can produce a complete voice clone with just “a few minutes of audio”. To mitigate potential harm, the company has introduced a “no-go voices” protection mechanism designed to identify and prevent the creation of voice clones that mimic political candidates actively involved in presidential or prime ministerial elections, starting with those in the US and the UK.
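
ElevenLabs has not detailed how its “no-go voices” check works. A common building block for this kind of safeguard is speaker-embedding similarity, sketched below under that assumption; the embeddings, threshold, and function name are hypothetical.

```python
import numpy as np

def is_no_go_voice(candidate: np.ndarray, protected: list[np.ndarray],
                   threshold: float = 0.85) -> bool:
    """Reject a cloning request whose speaker embedding is too close to a protected voice."""
    c = candidate / np.linalg.norm(candidate)
    return any(float(c @ (p / np.linalg.norm(p))) >= threshold for p in protected)
```

In practice the embeddings would come from a speaker-verification model, and the threshold would be tuned to balance false accepts against false rejects.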

AI systems could be taught to collaboratively solve important business issues

Ever since AI emerged in the 1950s, games have been utilized to assess AI progress. Deep Blue excelled at Chess, Watson triumphed over Jeopardy’s top players, AlphaGo defeated a world Go champion 4-1, and Libratus outplayed the best Texas Hold’Em poker players. Each victory marked a significant advancement in AI history. The next frontier is real-time, multiplayer strategy games.

OpenAI, a non-profit research group based in San Francisco, achieved a breakthrough earlier this year, joining the race alongside other AI researchers and organizations. In a benchmark game in August, OpenAI Five, a team of five neural networks, learned to cooperate and won a best-of-three against a team of professional players in a simplified version of Dota 2.
Dota 2, one of the most popular eSports games globally, has seen 966 tournaments with over $169 million in prize money and more than 10 million monthly active users as of July 2018. In this game, each player is part of a five-player team, controls a “hero” with specific strengths and weaknesses, and battles the opposing team to destroy the “Ancient,” a structure in the opposite team’s base. Collaboration and coordination among players are crucial for success.

Games like Dota 2 pose challenges for AI programmers for several reasons (a toy training sketch follows the list):

– Continuous action space: Each hero can make thousands of decisions within fractions of a second.

– Continuous observation space: Each hero can encounter various objects, teammates, or enemies, with over 20,000 observations per fraction of a second.

– Long time horizons: Short-term actions have minor impacts, so success requires a focus on long-term strategy.

– Incomplete information: Each hero has limited visibility and must explore the hidden battlefield.

– The need for collaboration: Unlike one-on-one games like Chess or Go, Dota 2 requires high levels of communication and collaboration.
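
OpenAI Five was trained with large-scale self-play reinforcement learning (a scaled-up version of Proximal Policy Optimization, per OpenAI’s write-ups). The production system is far beyond a blog snippet, but the toy REINFORCE sketch below shows the core idea of agents learning to cooperate from reward alone; the two-agent matching game is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros((2, 2))  # two agents, each with preferences over two actions
lr = 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    probs = [softmax(logits[i]) for i in range(2)]
    actions = [rng.choice(2, p=probs[i]) for i in range(2)]
    reward = 1.0 if actions[0] == actions[1] else 0.0  # paid only for coordinating
    for i in range(2):
        grad = -probs[i].copy()
        grad[actions[i]] += 1.0          # gradient of log pi(chosen action)
        logits[i] += lr * reward * grad  # REINFORCE: reinforce what earned reward

print([softmax(logits[i]).round(2) for i in range(2)])  # both collapse onto one action
```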

The fact that an AI system was able to challenge and win against professionals in this environment is a remarkable achievement. However, it does not signify that Artificial General Intelligence (AGI) is imminent.

OpenAI Five’s spectacular results were achieved under restricted rules that significantly altered the game in its favor. After the last major game restriction was lifted, OpenAI Five lost two games against top Dota 2 players at The International in August. The matches lasted about an hour and were considered “vigorous Dota matches.”

While OpenAI Five had an advantage in precision and reaction time, it fell behind in long-term planning and in connecting events minutes apart. Connecting cause and effect in indirect scenarios proved challenging for the AI. The bots’ tendency to play aggressively, even when not warranted, highlighted their shortcomings. The teams that defeated OpenAI Five exploited this weakness and learned to quickly outmaneuver the AI.

Despite these defeats, the progress made by OpenAI Five in just a few weeks is impressive and promising. The hope is that these superhuman skills will contribute to building advanced systems for real-life challenges in the future.

Could this superhuman skill acquired on the battlefield be applied to business?

Although OpenAI has not yet commercialized any of its AI technology, the potential applications are fascinating. Psychologists and management scientists have identified a key human limitation known as Bounded Rationality, which refers to the fact that we often make decisions under time constraints and with limited processing power, preventing us from fully understanding all available information.

For example, when investing in the stock market, it is impractical for individuals to process and access all the information for each stock. As a result, humans often rely on heuristics or seek advice from others when making investment decisions.

However, an algorithm capable of making decisions under incomplete information, in real-time, and with a long-term strategic focus has the potential to overcome this significant human constraint. Many business tasks, such as product launches and negotiations, require these abilities. It could be argued that a majority of business tasks involve collaboration, incomplete information, and a long-term focus.

Over time, AI systems could serve as partners that enhance managers’ capabilities. Rather than replacing or competing with managers, these systems could be taught to collaboratively solve important business issues. The nearly unlimited rationality of AI processing power, combined with the intuition and judgment of skilled managers, could be an unbeatable combination in business.

The future of AI raises an urgent question: Who will control it? The rapid progress in artificial intelligence forces us to consider what kind of world we want to live in. Will it be a world where the United States and its allies advance a global AI that benefits everyone and provides open access to the technology? Or will it be an authoritarian world where nations or movements with different values use AI to strengthen and expand their power? There is no third option, and the time to choose a path is now.

Currently, the United States leads in AI development, but maintaining this leadership is not guaranteed. Authoritarian governments around the world are willing to invest significant resources to catch up with and surpass the US. Russian President Vladimir Putin has ominously stated that the country leading the AI race will “become the ruler of the world,” and the People’s Republic of China has announced its aim to become the global leader in AI by 2030.

These authoritarian regimes and movements will tightly control the scientific, health, educational, and societal benefits of AI to solidify their own power. If they take the lead in AI, they may compel US companies and others to share user data, using the technology for surveillance or developing advanced cyberweapons.

The first chapter of AI has already been written. Systems like ChatGPT and Copilot are already functioning as limited assistants, such as by generating reports for medical professionals to allow more time with patients or assisting with code generation in software engineering. Further advancements in AI will mark a critical period in human society.

To ensure that the future of AI benefits the greatest number of people, a US-led global coalition of like-minded countries and an innovative strategy are needed. The United States must get four key things right to shape a future driven by a democratic vision for AI.

First, American AI firms and industry must establish strong security measures to maintain the lead in current and future AI models and support innovation in the private sector. These measures should include cyberdefense and data center security innovations to prevent theft of crucial intellectual property like model weights and AI training data.

Many of these defenses can benefit from the power of AI, making it easier and faster for human analysts to identify risks and respond to attacks. The US government and private sector can collaborate to develop these security measures as quickly as possible.

Second, infrastructure plays a crucial role in the future of AI. The early deployment of fiber-optic cables, coaxial lines, and other broadband infrastructure allowed the United States to lead the digital revolution and build its current advantage in AI. US policymakers must work with the private sector to establish a larger physical infrastructure, including data centers and power plants, that support AI systems.

Establishing partnerships between the public and private sectors to construct essential infrastructure will provide American businesses with the computational capabilities necessary to broaden the reach of AI and more equitably distribute its societal advantages.

The development of this infrastructure will also generate fresh employment opportunities across the country. We are currently witnessing the emergence and progression of a technology that I consider to be as significant as electricity or the internet. AI has the potential to serve as the cornerstone of a new industrial foundation, and it would be prudent for our nation to embrace it.

In addition to traditional physical infrastructure, we must also make substantial investments in human capital. As a nation, we must support and cultivate the next generation of AI innovators, researchers, and engineers. They represent our true strength.

Furthermore, we need to formulate a coherent commercial diplomacy strategy for AI, which includes providing clarity on how the United States plans to enforce export controls and foreign investment regulations for the global expansion of AI systems.

This will involve establishing guidelines for the types of chips, AI training data, and other sensitive code — some of which may need to remain within the United States — that can be housed in the data centers being rapidly constructed around the world to localize AI information.

Maintaining our current lead in AI, especially at a time when nations worldwide are competing for greater access to the technology, will facilitate the inclusion of more countries in this new coalition. Ensuring that open-source models are readily accessible to developers in those nations will further strengthen our advantage. The question of who will take the lead in AI is not solely about exporting technology; it is about exporting the values that the technology embodies.

Finally, we must think innovatively about new approaches for the global community to establish standards for the development and deployment of AI, with a specific emphasis on safety and on ensuring the participation of the global south and other nations that have historically been marginalized. As with other globally significant issues, this will require us to engage with China and maintain an ongoing dialogue.

I have previously discussed the idea of creating an entity similar to the International Atomic Energy Agency for AI, but that is only one potential model. One possibility could involve connecting the network of AI safety institutes being established in countries such as Japan and Britain and creating an investment fund from which countries committed to adhering to democratic AI protocols could draw to enhance their domestic computing capabilities.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the US government in 1998, less than a decade after the inception of the World Wide Web, to standardize the navigation of the digital world. ICANN is now an independent nonprofit organization with representatives from around the world dedicated to its fundamental mission of maximizing access to the internet in support of an open, interconnected, and democratic global community.

While identifying the appropriate decision-making body is crucial, the fundamental point is that democratic AI holds an advantage over authoritarian AI because our political system has empowered US companies, entrepreneurs, and academics to conduct research, innovate, and build. We will not be able to develop AI that maximizes the technology’s benefits while minimizing its risks unless we strive to ensure that the democratic vision for AI triumphs.

If we desire a more democratic world, history teaches us that our only option is to formulate an AI strategy that will contribute to its creation, and that the nations and technologists who have an advantage have a responsibility to make that choice — now.

AGI To Outperform Human Capability

OpenAI is said to be monitoring its advancement in creating artificial general intelligence (AGI), which refers to AI that can surpass humans in most tasks. The company uses a set of five levels to assess its progress towards this ultimate goal.

According to Bloomberg, OpenAI believes its technology is approaching the second level out of five on the path to artificial general intelligence. Anna Gallotti, co-chair of the International Coaching Federation’s special task force for AI and coaching, referred to this as a “super AI” scale on LinkedIn, envisioning potential applications for entrepreneurs, coaches, and consultants.

Axios reported that AI experts are divided on whether today’s large language models, which excel at generating text and images, will ever be capable of comprehensively understanding the world and adapting flexibly to new information and circumstances. Disagreement implies the existence of blind spots, which in turn present opportunities.

Setting aside expert opinions, how much AI are you currently utilizing in your business? What is in the pipeline, and what actions are you taking today? Here are the five steps and their implications for you.

OpenAI’s Metrics: The 5 Steps towards Artificial General Intelligence

Level one: conversational AI

At this stage, computers can engage in conversational language with people, such as customer service support agents, AI coaches, ChatGPT, and Claude assisting with team communication and social media content creation. Hopefully, you are currently implementing something at this level.

Since its launch in November 2022, ChatGPT has attracted 180.5 million users, including many entrepreneurs. Three million developers utilize OpenAI’s API to build their tools, and ChatGPT consulting is one of the highest-paid roles in AI. This marks the beginning.
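
As a concrete level-one example, here is roughly what a minimal call through OpenAI’s Python SDK (the v1-style `openai` package) looks like; the model name and prompt are placeholders.

```python
# pip install openai  (assumes OPENAI_API_KEY is set in the environment)
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your account offers
    messages=[{"role": "user", "content": "Draft a friendly reply to a customer asking for a refund."}],
)
print(response.choices[0].message.content)
```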

Level two: reasoning AI

Reportedly forthcoming, this stage involves systems (referred to as “reasoners”) performing basic problem-solving tasks at a level comparable to a human with doctorate-level education but without access to any tools.

According to a Hacker News forum, the transition from level one to level two is significant as it entails a shift from basic and limited capabilities to a more comprehensive and human-like proficiency. This transition presents possibilities and opportunities for all businesses, but it is not yet fully realized.

Level three: autonomous AI

At level three, AI systems known as “agents” can operate autonomously on a user’s behalf for several days. Imagine having such agents in your business while you take a vacation. Currently, automations are not flawless and require monitoring. Technology is progressing towards a reality where they rarely fail, and when they do, they can self-repair without human intervention.

Similar to team members, but at a fraction of the cost. Similar to suppliers, but operating strictly on rules and processes without deviation. How much more could your business accomplish at level three of AI?
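
No public API for level-three agents exists yet, so the sketch below only illustrates the supervise-retry-repair pattern described above, with `task`, `diagnose`, and `repair` as hypothetical caller-supplied callables.

```python
import time

def run_with_self_repair(task, diagnose, repair, max_attempts: int = 3):
    """Run a task; on failure, attempt an automated fix before retrying.

    Today a human typically plays the `repair` role; a level-three agent
    would automate it and escalate only when its own fixes run out.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_attempts or not repair(diagnose(exc)):
                raise  # out of automated options: hand back to a human
            time.sleep(2 ** attempt)  # back off before the next attempt
```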

Level four: innovating AI

Referred to as “Innovators,” these AI systems can independently develop innovations. They do not just run your processes, but also enhance them. They do not just follow rules and make predictions, but critically think about how to improve performance and achieve the goal more effectively or efficiently.

How many individuals in your business are actively contemplating its improvement right now? Could you benefit from an AI tool that comprehends your objectives and provides ideas? Currently, you can prompt ChatGPT to help you significantly improve your business, but it will not do so autonomously. This would represent a substantial leap in the capabilities and applications of AI.

Level five: organizational AI

Known as “organizations,” this final stage of super AI involves artificial intelligence capable of performing the work of an entire organization. Every function currently carried out by human personnel can be executed by agents working together, making enhancements, and managing all required tasks without human involvement.

Sam Altman, CEO of OpenAI, anticipates reaching level five within ten years, while some in the field believe it might take up to fifty years. The precise timeline remains uncertain, but the rapid pace of AI development is undeniable.
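
To pin down the reported taxonomy, here is a minimal sketch encoding the five levels as a Python enum; the labels come from press reports, not from any official OpenAI artifact.

```python
from enum import IntEnum

class OpenAILevel(IntEnum):
    """OpenAI's reported five levels, as summarized above."""
    CONVERSATIONAL = 1  # chatbots such as ChatGPT
    REASONER = 2        # doctorate-level problem solving without tools
    AGENT = 3           # works autonomously on a user's behalf for days
    INNOVATOR = 4       # independently develops improvements
    ORGANIZATION = 5    # does the work of an entire organization

current = OpenAILevel.CONVERSATIONAL    # where reports place today's systems
approaching = OpenAILevel(current + 1)  # the "reasoners" stage said to be near
print(current.name, "->", approaching.name)
```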

Achieving Artificial General Intelligence: OpenAI’s Five-Step Process

The more you comprehend what AI can do for your business, the more you will be able to achieve with fewer resources at each stage. Implementing stage one now will position you for success as the technology progresses.

This applies to everyone, including you. Some people will take action now, while others will be left behind, thinking they can catch up but never doing so. From conversational to reasoning, then autonomous, innovating, and organizational AI, each level has significantly different implications for how you operate your business and live your life.

If OpenAI is on the brink of AGI as suggested, why do prominent individuals keep departing?

OpenAI has recently undergone significant changes in leadership, as three key figures announced major transitions over the past week. Greg Brockman, the president and co-founder of the company, will be on an extended sabbatical until the end of the year. Another co-founder, John Schulman, has departed for rival Anthropic, while Peter Deng, VP of Consumer Product, has also left the ChatGPT maker.

In a post on X, Brockman mentioned, “I’m taking a sabbatical through end of year. First time to relax since co-founding OpenAI 9 years ago. The mission is far from complete; we still have a safe AGI to build.”

These changes have led some to question how near OpenAI is to a long-rumored breakthrough in reasoning artificial intelligence, considering the ease with which high-profile employees are departing (or taking extended breaks, in the case of Brockman). As AI developer Benjamin De Kraker stated on X, “If OpenAI is right on the verge of AGI, why do prominent people keep leaving?”

AGI refers to a hypothetical AI system that could match human-level intelligence across a wide range of tasks without specialized training. It’s the ultimate goal of OpenAI, and company CEO Sam Altman has mentioned that it could emerge in the “reasonably close-ish future.” AGI also raises concerns about potential existential risks to humanity and the displacement of knowledge workers. However, the term remains somewhat ambiguous, and there’s considerable debate in the AI community about what truly constitutes AGI or how close we are to achieving it.

Critics such as Ed Zitron view the emergence of the “next big thing” in AI as a necessary step to justify the substantial investments in AI models that aren’t yet profitable. The industry is hopeful that OpenAI, or a competitor, has a secret breakthrough waiting in the wings that will justify the massive costs associated with training and deploying LLMs.

On the other hand, AI critic Gary Marcus has suggested that major AI companies have reached a plateau of large language model (LLM) capability centered around GPT-4-level models since no AI company has yet made a significant leap past the groundbreaking LLM that OpenAI released in March 2023.

Microsoft CTO Kevin Scott has challenged these claims, stating that LLM “scaling laws” (which suggest LLMs increase in capability proportionate to more compute power thrown at them) will continue to deliver improvements over time, and that more patience is needed as the next generation (say, GPT-5) undergoes training.
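
The “scaling laws” Scott refers to are empirical fits of model loss against parameter count and training data. As an illustration only, the sketch below uses the published Chinchilla fit from Hoffmann et al. (2022); the constants describe that study’s models, not any OpenAI system.

```python
def scaling_loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style fit: L(N, D) = E + A / N^alpha + B / D^beta."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # Hoffmann et al. (2022)
    return E + A / n_params**alpha + B / n_tokens**beta

# Predicted loss keeps falling as parameters and training tokens scale together:
print(scaling_loss(7e9, 140e9))    # smaller model, less data -> ~2.18
print(scaling_loss(70e9, 1.4e12))  # 10x both                 -> ~1.94
```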

In the grand scheme of things, Brockman’s move seems like a long-overdue extended vacation (or perhaps a period to address personal matters beyond work). Regardless of the reason, the duration of the sabbatical raises questions about how the president of a major tech company can suddenly be absent for four months without impacting day-to-day operations, especially during a critical time in its history.

Unless, of course, things are relatively calm at OpenAI—and perhaps GPT-5 won’t be released until at least next year when Brockman returns. However, this is mere speculation on our part, and OpenAI (whether voluntarily or not) sometimes surprises us when we least expect it. (Just today, Altman posted a hint on X about strawberries, which some people interpret as a sign that a major new model is undergoing testing or nearing release.)

One of the most significant impacts of the recent departures on OpenAI might be that a few high-profile employees have joined Anthropic, a San Francisco-based AI company established in 2021 by ex-OpenAI employees Daniela and Dario Amodei.

Anthropic provides a subscription service called Claude.ai, which is similar to ChatGPT. Its most recent LLM, Claude 3.5 Sonnet, along with its web-based interface, has quickly gained favor over ChatGPT among some vocal LLM users on social media, although it likely does not yet match ChatGPT in terms of mainstream brand recognition.

In particular, John Schulman, an OpenAI co-founder and key figure in the company’s post-training process for LLMs, revealed in a statement on X that he’s leaving to join rival AI firm Anthropic to engage in more hands-on work: “This decision stems from my desire to deepen my focus on AI alignment and to start a new chapter of my career where I can return to hands-on technical work.” Alignment is a field that aims to guide AI models to produce helpful outputs.

In May, Jan Leike, an alignment researcher at OpenAI, left the company to join Anthropic while criticizing OpenAI’s handling of alignment safety.

According to The Information, Peter Deng, a product leader who joined OpenAI last year after working at Meta Platforms, Uber, and Airtable, has also left the company, although his destination is currently unknown. In May, OpenAI co-founder Ilya Sutskever departed to start a competing startup, and prominent software engineer Andrej Karpathy left in February to launch an educational venture.

De Kraker raised an intriguing point, questioning why high-profile AI veterans would leave OpenAI if the company was on the verge of developing world-changing AI technology. He asked, “If you were confident that the company you are a key part of, and have equity in, is about to achieve AGI within one or two years, why would you leave?”

Despite the departures, Schulman expressed optimism about OpenAI’s future in his farewell note on X. “I am confident that OpenAI and the teams I was part of will continue to thrive without me,” he wrote. “I’m incredibly grateful for the opportunity to participate in such an important part of history and I’m proud of what we’ve achieved together. I’ll still be rooting for you all, even while working elsewhere.”

Former employees of OpenAI, Google, and Meta testified before Congress on Tuesday about the risks associated with AI reaching human-level intelligence. They urged members of the Senate Subcommittee on Privacy, Technology, and the Law to advance US AI policy to protect against harms caused by AI.

Artificial general intelligence (AGI) is an AI system that achieves nearly human-level cognition. William Saunders, a former member of technical staff at OpenAI who resigned from the company in February, testified during the hearing that AGI could lead to “catastrophic harm” through autonomously conducting cyberattacks or assisting in the creation of new biological weapons.

Saunders suggested that while there are significant gaps in AGI development, it is conceivable that an AGI system could be built in as little as three years.

“AI companies are making rapid progress toward building AGI,” Saunders stated, citing OpenAI’s recent announcement of GPT-o1. “AGI would bring about significant societal changes, including drastic shifts in the economy and employment.”

He also emphasized that no one knows how to ensure the safety and control of AGI systems, which means they could be deceptive and conceal misbehaviors. Saunders criticized OpenAI for prioritizing speed of deployment over thoroughness, leaving vulnerabilities and increasing threats such as theft of the US’s most advanced AI systems by foreign adversaries.

During his time at OpenAI, he observed that the company did not prioritize internal security. He highlighted long periods in which vulnerabilities could have allowed employees to bypass access controls and steal the company’s most advanced AI systems, including GPT-4.

“OpenAI may claim they are improving,” he said. “However, I and other resigning employees doubt that they will be ready in time. This is not only true for OpenAI. The industry as a whole has incentives to prioritize rapid deployment, which is why a policy response is imperative.”

AGI and the lack of AI policy are top concerns for insiders

Saunders urged policymakers to prioritize policies that mandate testing of AI systems before and after deployment, require sharing of testing results, and implement protections for whistleblowers.

“I resigned from OpenAI because I no longer believed that the company would make responsible decisions about AGI on its own,” he stated during the hearing.

Helen Toner, who served on OpenAI’s nonprofit board from 2021 until November 2023, testified that AGI is a goal many AI companies believe they could achieve soon, making federal AI policy essential. Toner currently serves as director of strategic and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

“Many top AI companies, including OpenAI, Google, and Anthropic, are treating the development of AGI as a serious and attainable goal,” Toner stated. “Many individuals within these companies believe that if they successfully create computers as intelligent as or even more intelligent than humans, the technology will be extraordinarily disruptive at a minimum and could potentially lead to human extinction at a maximum.”

Margaret Mitchell, a former research scientist at Microsoft and Google who now serves as chief ethics scientist at the AI startup Hugging Face, emphasized the need for policymakers to address the numerous gaps in AI companies’ practices that could result in harm. David Harris, senior policy advisor at the University of California Berkeley’s California Initiative for Technology and Democracy, stated during the hearing that voluntary self-regulation on safe and secure AI, which multiple AI companies committed to last year, is ineffective.

Harris, who was employed at Meta working on the teams responsible for civic integrity and responsible AI from 2018 to 2023, mentioned that these two safety teams no longer exist. He highlighted the significant reduction in the size of trust and safety teams at technology companies over the past two years.

Harris pointed out that numerous AI bills proposed in Congress offer strong frameworks for ensuring AI safety and fairness. Although several AI bills are awaiting votes in both the House and the Senate, Congress has not yet passed any AI legislation.

During the hearing, Senator Richard Blumenthal (D-Conn.), chair of the subcommittee, expressed concern that we might repeat the same mistake made with social media by acting too late. He emphasized the need to learn from the experience with social media and not rely on big tech to fulfill this role.

Companies that fail to utilize AI are at risk of falling behind their competitors. While the concept of AI as a fundamental business principle is not new, businesses must ensure they fully exploit the potential of AI as new advancements emerge. Technology-driven businesses use AI to foster innovation, maintain quality control, and monitor employee productivity. Additionally, AI can serve as a valuable tool for enhancing cybersecurity and providing personalized consumer experiences.

Businesses recognize that AI is the future, but integrating it into existing infrastructure poses a common challenge for business decision-makers, as indicated by an HPE survey. Addressing skill and knowledge gaps during implementation and justifying costs are also obstacles to achieving success with AI. Overcoming these challenges is crucial for businesses seeking to leverage new AI technology.

Businesses require a scalable AI-optimized solution that can adapt to heavy AI workloads while ensuring security and ease of management. This solution should also be capable of proactively addressing fluctuating data demands and infrastructure maintenance needs.

The Advancement of AI in the Data Center

As the pace of AI innovation and advancement continues, data centers must keep pace with this evolution. AI not only supports operations but also drives strategic business decisions by using analytics to provide insights. Integrating AI enterprise-wide creates operational efficiencies, positioning businesses ahead of their competitors and delivering significant productivity gains. These efficiencies include time savings, accelerated ideation, and new insights to automate and simplify workflow and processes.

Like any technological advancement, it is crucial to consider potential challenges alongside the benefits. Complete transparency is vital, and when implementing AI in the business, various factors must be taken into account. It is important to carefully plan and assess, considering long-term strategies and providing training and development for employees.

Understanding potential challenges is essential. Traditional data centers designed for CPU-intensive tasks face specific obstacles; for instance, GPUs require more physical space and higher power for operation and cooling. By planning for these challenges and other likely hurdles, businesses can set themselves up for success.

The benefits of AI for any enterprise are extensive and continually expanding. By building an in-house AI ecosystem with pre-trained models, tools, frameworks, and data pipelines, businesses can power new AI applications that drive innovation and expedite time-to-value. Leveraging AI allows data centers to maintain control of their data and ensure more predictable performance for their enterprise.

This places businesses and AI practitioners in control of navigating their AI journey, giving them a competitive edge. While implementing and scaling AI for production is challenging, the right partner and technology stack can mitigate risks and streamline operations to facilitate success.

Using solutions specifically engineered and optimized for AI in the data center mitigates risks and simplifies IT operations. The HPE ProLiant DL380a Gen11 Server with Intel® Xeon® Scalable Processors is an ultra-scalable platform for AI-powered businesses. It serves as an ideal solution for AI infrastructure within the data center and can support generative AI, vision AI, and speech AI initiatives.

The HPE ProLiant DL380a Gen11 server is designed for fine tuning and inference, featuring leading Intel® Xeon® Scalable Processors and NVIDIA GPUs.

The role of AI in modern business is constantly evolving. Integrating AI into the data center presents an opportunity for growth, business success, and operational efficiency. Businesses seeking exceptional processing power, performance, and efficiency to support their AI journey can benefit from solutions like the HPE ProLiant DL380a Gen11 server with Intel® Xeon® Scalable Processors. With AI-driven automation and insights, intelligent businesses can become more resilient, secure, and responsive to market needs.

OpenAI recently introduced a five-tier system to assess its progress toward developing artificial general intelligence (AGI), as reported by Bloomberg. This new classification system was shared with employees during a company meeting to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not currently exist, and it may be seen as a move to attract investment.

OpenAI has previously stated that AGI, referring to an AI system capable of performing tasks like a human without specialized training, is its primary goal. The pursuit of technology that can replace humans at most intellectual work has generated significant attention, even though it could potentially disrupt society.

OpenAI CEO Sam Altman has expressed his belief that AGI could be achieved within this decade. Much of the CEO’s public messaging has focused on how the company and society might handle the potential disruption brought about by AGI. Therefore, a ranking system to communicate internal AI milestones on the path to AGI makes sense.

OpenAI’s five levels, which it plans to share with investors, range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology, such as GPT-4o that powers ChatGPT, currently falls under Level 1, encompassing AI capable of engaging in conversational interactions. Additionally, OpenAI executives have informed staff that they are close to reaching Level 2, known as “Reasoners.”

OpenAI is not the only entity attempting to quantify levels of AI capabilities. Similar to the levels of autonomous driving mapped out by automakers, OpenAI’s system resembles efforts by other AI labs, such as the five-level framework proposed by researchers at Google DeepMind in November 2023.

OpenAI’s classification system also bears some resemblance to Anthropic’s “AI Safety Levels” (ASLs) published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, although they focus on different aspects.

While Anthropic’s ASLs are explicitly focused on safety and catastrophic risks, OpenAI’s levels track general capabilities. However, any AI classification system raises questions about whether it is possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry has a history of overpromising AI capabilities, and linear progression models like OpenAI’s potentially risk fueling unrealistic expectations.

There is currently no consensus in the AI research community on how to measure progress toward AGI, or even on whether AGI is a well-defined or achievable goal. Therefore, OpenAI’s five-tier system should be viewed as a communications tool to attract investors, showcasing the company’s aspirational goals rather than a scientific or technical measurement of progress.
