The future of artificial intelligence involves finding new methods to create AI models

In view of a planned EU law regulating artificial intelligence, the head of OpenAI had threatened to withdraw from the European market. Today, the ChatGPT operator walked that threat back.

OpenAI now apparently has no plans to withdraw from the European Union (EU). “We are happy to continue to operate here and of course have no plans to leave Europe,” wrote Sam Altman, co-founder and CEO of OpenAI, on Twitter today. He thus reversed Wednesday’s threat to turn his back on the European market over the planned regulations for artificial intelligence (AI).

EU will not be intimidated

“The current draft of the EU AI law would be over-regulation,” Altman had criticized. Yesterday, however, the head of the Microsoft-backed company was already more conciliatory. “AI should be regulated,” he said at a discussion event at the Technical University (TU) in Munich. “We have called for this.” There are also approaches in Europe that are quite good. “But we need more clarity.” One should wait and see how AI develops further, and only then should the state intervene.

His threat to leave Europe had drawn criticism from EU industry chief Thierry Breton and a number of other legislators. Altman had spent the past week traveling across Europe, meeting with top politicians in France, Spain, Poland, Germany and the UK to discuss the future of AI and the progress of ChatGPT. He called his tour a “very productive week of conversations in Europe about how best to regulate AI.”

Responding to Altman’s tweet, Dutch MEP Kim van Sparrentak, who worked closely on drafting the AI rules, told Reuters today that she and her colleagues must stand firm against pressure from tech companies. “I hope we will continue to stand firm and ensure that these companies have to comply with clear commitments on transparency, safety and environmental standards,” she said. “A voluntary code of conduct is not the European way.”

Artificial Intelligence (AI) Act in its final stages

In view of various AI threats, the EU is planning a so-called Artificial Intelligence (AI) Act. The law is intended primarily to regulate the provision and use of AI by private and public actors. Among other things, the law stipulates that companies that develop so-called generative AI such as ChatGPT must disclose any copyrighted material used.

EU parliamentarians agreed on the draft law at the beginning of the month. Representatives of the Parliament, the EU Council and the Commission are currently working out the final details. In addition to discussions on regulation, the EU wants to encourage companies to make a voluntary commitment. To this end, the Commission is planning a framework agreement with the internet company Google and other companies. However, the proposal is still the subject of ongoing discussions.

With the release of ChatGPT, OpenAI sparked the current hype around generative AI. The chatbot simulates human interaction and can create texts based on a few keywords. According to experts, this also increases the risk of disinformation campaigns. OpenAI recently came under criticism for not disclosing the training data for its latest AI model, GPT-4. The company justified the non-disclosure by citing the “competitive environment and security aspects”.

A new law on dealing with artificial intelligence is being drafted in the EU. The head of OpenAI has threatened to withdraw from the European market if the rules are not relaxed.

ChatGPT provider OpenAI has threatened a possible withdrawal from Europe over the European Union’s (EU) planned regulations for artificial intelligence (AI). “The current draft of the EU AI law would be over-regulation,” said Sam Altman, head of the Microsoft-backed company, at an event in London yesterday. Although the company wants to make an effort to comply with the new rules, it would, if in doubt, be prepared to turn its back on the European market.

Today, Altman was more conciliatory. “AI should be regulated,” he said at a discussion event at the Technical University (TU) in Munich. “We have called for this.” There are also approaches in Europe that are quite good. “But we need more clarity.” One should wait and see how AI develops, and only then should the state intervene. Before the visit to Munich, the co-founder of OpenAI made a quick trip to Berlin and met with Chancellor Olaf Scholz (SPD).

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, expressed his belief that the future of artificial intelligence involves finding new methods to create AI models beyond simply training them on existing knowledge.

Altman likened the growth of artificial intelligence to the dawn of agriculture or the development of machines in the industrial era. He emphasized that people will utilize these tools to innovate and shape the future we collectively inhabit.

However, individuals in various industries, particularly those in the arts and entertainment fields, do not share Altman’s optimism regarding the increasing sophistication of AI tools. There are concerns about the use of copyrighted material to train AI models and the proliferation of AI-generated disinformation such as deepfakes.

Altman acknowledged that it was “inevitable” that AI technology would become capable of more nefarious uses and expressed concerns about potential misuse of AI, including deepfakes, especially in the context of global elections.

OpenAI was also under scrutiny for its voice assistant Sky, which some online users noted sounded similar to the voice of Scarlett Johansson. OpenAI clarified that Sky’s voice was not an imitation of Johansson’s and belonged to a different actor hired by the company.

During a panel discussion, Altman and Airbnb co-founder and CEO Brian Chesky, who have been friends for over a decade, highlighted their strong relationship, which was instrumental in Altman’s reinstatement at OpenAI after he was fired.

OpenAI, a prominent AI startup, played a pivotal role in the development of generative AI technologies, including the launch of ChatGPT in 2022, which led to a proliferation of AI tools for hyperrealistic video, humanlike music composition, and conversational chat agents.

Despite concerns about the potential implications of advancements in AI, Altman emphasized that even with the development of artificial general intelligence, these technologies would remain tools and not autonomous beings. He views the development of AI as a gradual evolution rather than a race and acknowledges the responsibility to get it right.

Sam Altman’s role in OpenAI’s Safety and Security Committee has raised concerns about its independence. As a result, Altman will no longer be part of the organization’s Safety and Security Committee, which aims to provide independent oversight of the AI models developed and deployed by the Microsoft-backed startup.

The committee was established in May 2024 to provide safety recommendations for the company’s AI models. Concerns were raised that, with Altman leading the oversight body, its members might not be able to impartially assess the safety and security of those models.

With the CEO no longer in charge, the committee now includes two OpenAI board members – former NSA chief Paul Nakasone and Quora co-founder Adam D’Angelo – as well as Nicole Seligman, the former executive vice president at Sony, and Zico Kolter, director of the machine learning department at Carnegie Mellon University’s school of computer science.

According to OpenAI’s blog post published on Monday, September 16, “The Safety and Security Committee will receive briefings from company leadership on safety evaluations for major model releases, and will, along with the full board, oversee model launches, including having the authority to postpone a release until safety concerns are addressed.”

Upon the release of its new reasoning-based AI model o1, OpenAI stated that the safety committee had “examined the safety and security criteria used to assess OpenAI o1’s suitability for launch as well as the results of safety evaluations of OpenAI o1.”

The committee also completed its 90-day review of OpenAI’s processes and safeguards and provided the following recommendations to the AI firm:

  • Establish independent governance for safety & security
  • Enhance security measures
  • Maintain transparency about OpenAI’s work
  • Collaborate with external organizations
  • Unify OpenAI’s safety frameworks for model development and monitoring

Before establishing the safety committee, both current and former employees of OpenAI had expressed concerns that the company was growing too rapidly to operate safely. Jan Leike, a former executive who left OpenAI along with chief scientist Ilya Sutskever, had posted on X that “OpenAI’s safety culture and processes have taken a backseat to shiny products.”

On May 16, 2023, Sam Altman, OpenAI’s charismatic, softly spoken, eternally optimistic billionaire CEO, and I appeared before the US Senate judiciary subcommittee meeting on AI oversight in Washington DC. At the time, AI was at the peak of its popularity, and Altman, then 38, was at the forefront of it all.

Hailing from St Louis, Missouri, Altman was the Stanford dropout who had risen to become the president of the highly successful Y Combinator startup incubator before the age of 30. A few months prior to the hearing, his company’s product ChatGPT had gained widespread attention.

Throughout the summer of 2023, Altman was treated like a celebrity, touring the world, meeting with prime ministers and presidents. US Senator Kyrsten Sinema praised him, saying, “I’ve never met anyone as smart as Sam… He’s an introvert and shy and humble… But… very good at forming relationships with people on the Hill and… can help folks in government understand AI.”

Flattering profiles at the time depicted the youthful Altman as genuine, talented, wealthy, and solely interested in advancing humanity. His frequent assertions that AI could revolutionize the global economy had world leaders eagerly anticipating it.

Senator Richard Blumenthal had summoned the two of us (and IBM’s Christina Montgomery) to Washington to discuss what should be done about AI, a “dual-use” technology with great promise but also the potential to cause great harm – from floods of misinformation to enabling the spread of new bioweapons. The focus was on AI policy and regulation. We pledged to tell the whole truth and nothing but the truth.

Altman represented one of the leading AI companies, while I was present as a scientist and author known for my skepticism about many things related to AI. I found Altman surprisingly engaging.

There were instances when he evaded questions (most notably Blumenthal’s “What are you most worried about?”, which I pressed Altman to answer more honestly), but overall, he seemed authentic, and I recall conveying this to the senators at the time. We both strongly advocated for AI regulation. However, little by little, I came to realize that I, the Senate, and ultimately the American people, had probably been manipulated.

In reality, I had always harbored some reservations about OpenAI. For example, the company’s publicity campaigns were often exaggerated and even deceptive, such as their elaborate demonstration of a robot “solving” a Rubik’s Cube that was later revealed to have special sensors inside. It received significant media attention, but ultimately led nowhere.

For years, the name OpenAI – which implied a commitment to openness about the science behind the company’s activities – had felt disingenuous, as it had become progressively less transparent over time.

The constant suggestion from the company that AGI (artificial general intelligence, AI that can at least match the cognitive abilities of any human) was just around the corner always seemed like unwarranted hype to me. However, in person, Altman was very impressive; I started to question whether I had been too critical of him before. Looking back, I realized that I had been too lenient.

I began to reconsider my opinion after receiving a tip about a small but revealing incident. During a Senate hearing, Altman portrayed himself as much more altruistic than he actually was. When Senator John Kennedy asked him, “OK. You make a lot of money. Do you?” Altman replied, “I make no… I get paid enough for health insurance. I have no equity in OpenAI,” and continued to elaborate, stating, “I’m doing this because I love it.” The senators were impressed by his response.

However, Altman wasn’t completely truthful. While he didn’t own any stock in OpenAI, he did own stock in Y Combinator, which in turn owned stock in OpenAI. This meant that Sam had an indirect stake in OpenAI, a fact acknowledged on OpenAI’s website. If that indirect stake were worth just 0.1% of the company’s value, which seems plausible, it would be worth nearly $100m.
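
As a rough check of that estimate (the 0.1% stake is the author’s hypothetical; the valuation is the roughly $86 billion figure cited elsewhere in this document), a minimal sketch:

```python
# Back-of-the-envelope check of the indirect-stake estimate.
# Assumptions: the 0.1% stake is the author's illustrative figure;
# the ~$86bn valuation is the figure cited elsewhere in this document.
valuation = 86e9        # dollars
stake_fraction = 0.001  # 0.1%
stake_value = valuation * stake_fraction
print(f"${stake_value / 1e6:.0f}m")  # -> $86m, on the order of "nearly $100m"
```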

This omission served as a warning sign. When the topic resurfaced, he could have corrected the record, but he chose not to. People were drawn to his selfless image. (He even reinforced this image in an interview with Fortune, claiming that he didn’t need equity in OpenAI because he had “enough money”.) Not long after that, I discovered that OpenAI had made a deal with a chip company in which Altman owned a stake. The selfless persona he projected began to seem insincere.

In hindsight, the discussion about money wasn’t the only thing from our time in the Senate that felt less than candid. The more significant issue was OpenAI’s stance on AI regulation. Publicly, Altman expressed support for it, but the reality was far more complex.

On one hand, perhaps a small part of Altman genuinely desired AI regulation. He often quoted Oppenheimer and acknowledged the serious risks that AI poses to humanity, likening it to nuclear weaponry. In his own words at the Senate (albeit after some prompting from me), he said, “Look, we have tried to be very clear about the magnitude of the risks here… My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world.”

Presumably, Altman wouldn’t want to be remembered poorly. Yet behind closed doors, his lobbyists persistently lobbied for weaker regulation or none at all.

A month after the Senate hearing, it was revealed that OpenAI had been working to soften the EU’s AI Act. When Altman was dismissed by OpenAI in November 2023 for being “not consistently candid” with its board, I wasn’t entirely surprised.

At the time, few people supported the board’s decision to dismiss Altman. A large number of supporters rallied behind him, treating him like a saint. The well-known journalist Kara Swisher (known to be quite friendly with Altman) blocked me on Twitter simply for suggesting that the board might have been justified.

Altman handled the media adeptly. Five days later, with the support of OpenAI’s major investor, Microsoft, and a petition from employees backing him, he was reinstated.

However, much has changed since then. In recent months, concerns about Altman’s honesty have gone from being considered rebellious to being fashionable. Journalist Edward Zitron wrote that Altman was “a false prophet – a seedy grifter that uses his remarkable ability to impress and manipulate Silicon Valley’s elite.”

Ellen Huet of Bloomberg News, on the podcast Foundering, reached the conclusion that “when [Altman] says something, you cannot be sure that he actually means it.”

Paris Marx has cautioned against “Sam Altman’s self-serving vision.” AI pioneer Geoffrey Hinton recently questioned Altman’s motives. I myself wrote an essay called the Sam Altman Playbook, analyzing how he had managed to deceive so many people for so long, using a combination of hype and apparent humility.

Many factors have contributed to this loss of faith. For some, the tipping point was Altman’s interactions earlier this year with Scarlett Johansson, who explicitly asked him not to create a chatbot with her voice.

Altman proceeded to use a different voice actor, but one whose voice sounded strikingly similar to Johansson’s, and tweeted “her” (a reference to the movie in which Johansson voiced an AI). Johansson was furious.

The ScarJo incident highlighted a larger problem: major corporations like OpenAI claim that their models cannot function without being trained on all of the world’s intellectual property, but they have not fairly compensated many of the creators, such as artists and writers. Justine Bateman described this as “the largest theft in the history of the United States.”

Although OpenAI has repeatedly emphasized the importance of developing safety measures for AI, several key staff members focused on safety have recently left, stating that the company did not fulfill its promises. Jan Leike, a former OpenAI safety researcher, criticized the company for prioritizing flashy advancements over safety, a sentiment echoed by another former employee, William Saunders.

Co-founder Ilya Sutskever departed and launched a new venture called Safe Superintelligence, while former OpenAI employee Daniel Kokotajlo also expressed concerns that safety commitments were being disregarded. While social media has had negative impacts on society, the inadvertent development of problematic AI by OpenAI could be even more detrimental, as noted by Altman himself.

The disregard for safety exhibited by OpenAI is compounded by the company’s apparent efforts to silence its employees. In May, journalist Kelsey Piper uncovered documents revealing that the company could reclaim vested stock from former employees who did not agree to refrain from speaking negatively about the company, a practice that many industry insiders found alarming.

Subsequently, numerous former OpenAI employees signed a letter at righttowarn.ai requesting whistleblower protections, prompting the company to retract its decision to enforce these contracts.

Even the company’s board members felt deceived. In May, former OpenAI board member Helen Toner stated on the TED AI Show podcast, “For years, Sam made it really difficult for the board… by withholding information, misrepresenting company events, and in some cases, outright lying to the board.”

By late May, negative publicity about OpenAI and its CEO had accumulated to the point where venture capitalist Matt Turck posted a cartoon on X: “days since the last easily avoidable OpenAI controversy: 0.”

There is a lot at stake. The way that AI is currently developing will have long-term implications. Altman’s decisions could significantly impact all of humanity, not just individual users, in enduring ways. OpenAI has acknowledged that its tools have been utilized by Russia and China to create disinformation, presumably to influence elections.

More advanced forms of AI, if developed, could pose even more serious risks. Social media has already polarized society and subtly influenced people’s beliefs; major AI companies could exacerbate these problems.

Moreover, generative AI, popularized by OpenAI, is having a substantial environmental impact in terms of electricity usage, emissions, and water consumption. As Bloomberg recently stated, “AI is already wreaking havoc on global power systems.” This impact could grow significantly as models continue to expand in size, which is the objective of all major players.

To a large extent, governments are relying on Altman’s assurances that AI will ultimately be beneficial, despite the lack of evidence so far, to justify the environmental costs.

Meanwhile, OpenAI has taken a leading role, and Altman sits on the US Department of Homeland Security’s AI safety board. His counsel should be viewed with skepticism.

Altman may have briefly attempted to attract investors for a $7 trillion investment in infrastructure related to generative AI, which might end up being a significant waste of resources that could be better spent elsewhere if, as many suspect, generative AI is not the right path to AGI.

Overestimating current AI could potentially lead to conflicts. For example, the US-China “chip war” concerning export controls, where the US is restricting the export of crucial GPU chips designed by Nvidia and manufactured in Taiwan, is affecting China’s AI progress and escalating tensions between the two nations.

The chip battle is largely based on the belief that AI will continue to advance exponentially, despite data indicating that current approaches may have reached a point of diminishing returns.

Altman may have initially had good intentions. Perhaps he genuinely aimed to protect the world from AI threats and guide AI for positive purposes. However, greed might have taken over, as is often the case.

Unfortunately, many other AI companies appear to be following the same path of hype and corner-cutting as Altman. Anthropic, formed by a group of OpenAI ex-employees concerned about the lack of focus on AI safety, seems to be competing ever more directly with OpenAI, the company its founders left.

The billion-dollar startup Perplexity also appears to be a lesson in greed, using data it should not be using. Meanwhile, Microsoft shifted from advocating “responsible AI” to rapidly releasing products with significant issues, pressuring Google to do the same. Money and power are corrupting AI, much like they corrupted social media.

We cannot rely on large privately held AI startups to self-govern in ethical and transparent ways. If we cannot trust them to govern themselves, we certainly should not allow them to govern the world.

I sincerely believe that we will not achieve trustworthy AI if we continue on the current path. Apart from the corrupting influence of power and money, there is also a significant technical issue: large language models, the fundamental technique of generative AI, are unlikely to be safe. They are inherently stubborn and opaque – essentially “black boxes” that we can never fully control.

The statistical techniques behind these models can achieve remarkable feats, such as accelerating computer programming and creating believable interactive characters resembling deceased loved ones or historical figures. However, such black boxes have never been reliable and are therefore an unsuitable basis for AI that we can entrust with our lives and infrastructure.

Nonetheless, I do not advocate abandoning AI. Developing better AI for fields like medicine, materials science, and climate science could truly revolutionize the world. Generative AI may not be the solution, but a future form of AI yet to be developed might be.

Ironically, the biggest threat to AI today could be the AI companies themselves; their unethical behavior and exaggerated promises are turning many people away. Many are ready for the government to take a more active role. According to a June survey by the Artificial Intelligence Policy Institute, 80% of American voters prefer “regulation of AI that mandates safety measures and government oversight of AI labs instead of allowing AI companies to self-regulate.”

To achieve trustworthy AI, I have long advocated for an international effort similar to Cern’s high-energy physics consortium. The time for that is now. Such an initiative, focused on AI safety and reliability rather than profit, and on developing a new set of AI techniques that belong to humanity rather than just a few greedy companies, could be transformative.

Furthermore, citizens need to voice their opinions and demand AI that benefits the majority, not just a select few. One thing I can guarantee is that we will not achieve the promised potential of AI if we leave everything in the hands of Silicon Valley. Tech leaders have been misleading for decades. Why should we expect Sam Altman, last seen driving a $4 million Koenigsegg supercar around Napa Valley, to be any different?

When did OpenAI start?

OpenAI was founded on December 11, 2015, in part as a response to fears that a handful of large tech companies would come to dominate AI.

Who are the current owners of OpenAI?

During its early stages, OpenAI received substantial support from influential figures in the industry, including contributions from Elon Musk and Peter Thiel.

As the company evolved, Elon Musk decided to step down from the board in 2018 to avoid potential conflicts with his other ventures like Tesla and SpaceX.

Due to its ambitious goals and financial requirements, OpenAI transitioned from a nonprofit to a “capped-profit” for-profit entity in 2019, with a significant $1 billion investment from Microsoft.

Ownership of OpenAI’s capped-profit entity is commonly reported as split among Microsoft (49%), other investors and employees (49%), and the original OpenAI non-profit (2%), which maintains its control and autonomy.

Other stakeholders in OpenAI include a16z, Sequoia, Tiger Global, and Founders Fund.

OpenAI Inc. functions as the overarching non-profit umbrella, while its for-profit activities are managed by OpenAI LP.

Is OpenAI a publicly traded company?

Despite its significant presence in the AI field, OpenAI is a private company and is not subject to the strict regulations and quarterly pressures faced by public companies.

However, there is considerable demand for OpenAI stock, so a public offering cannot be ruled out in the future.

Conflicts within the OpenAI Board

Elon Musk Sues OpenAI for ‘Placing Profit Above Humanity’

In late February 2024, Elon Musk, who co-founded OpenAI in 2015, filed a lawsuit against OpenAI, alleging that the company had shifted its focus from creating artificial intelligence for the benefit of humanity to pursuing profit.

Musk claims that OpenAI, which was established as a not-for-profit organization with the goal of developing artificial general intelligence, has become a closed-source subsidiary of Microsoft, focusing on maximizing profits for the company.

Musk’s lawsuit seeks to compel OpenAI to adhere to its founding agreement and return to its mission of developing AGI for the benefit of humanity.

In response to Musk’s claims, OpenAI released an open letter stating that Musk had been involved in discussions about creating a for-profit entity in 2017 and had sought majority equity and control over the board and CEO position.

Elon Musk decided to leave OpenAI and later started his own AGI competitor, xAI, as a separate venture from Tesla.

Sam Altman’s Unexpected Departure from OpenAI

On November 17, 2023, Sam Altman was unexpectedly removed from his position as CEO of OpenAI.

Mira Murati, the company’s chief technology officer, initially assumed the role of interim CEO, and Emmett Shear, the former CEO of Twitch, was then appointed interim CEO in her place.

Microsoft CEO Satya Nadella offered Altman a position to lead an internal AI division at Microsoft, which Altman accepted, and OpenAI’s president Greg Brockman also transitioned to a role at Microsoft.

However, just four days later, Sam Altman resumed his position as CEO of OpenAI, despite having accepted a role at Microsoft.

OpenAI’s co-founder and CEO, Sam Altman, recently saw his net worth reach $2 billion according to the Bloomberg Billionaires Index. However, this figure does not reflect any financial gains from the AI company he leads.

This marks the first time the index has assessed the wealth of the 38-year-old, who has become synonymous with artificial intelligence as the CEO of OpenAI, which was recently valued at $86 billion.

According to a report by Bloomberg, Altman has consistently stated that he does not hold any equity in the organization. The report also indicated that a significant portion of his observable wealth comes from a network of venture capital funds and investments in startups.

Moreover, his wealth is expected to increase with the upcoming initial public offering of Reddit, where he stands as one of the largest shareholders.

In related news, Tesla’s CEO, Elon Musk, has filed a lawsuit against OpenAI and Sam Altman, accusing them of violating contractual agreements made when Musk helped establish the ChatGPT developer in 2015.

A lawsuit submitted on Thursday in San Francisco claims that Altman, along with OpenAI’s co-founder Greg Brockman, originally approached Musk to develop an open-source model.

The lawsuit further stated that the open-source initiative promised to advance artificial intelligence technology for the “benefit of humanity.”

In the legal filing, Musk alleged that the focus on profit by the Microsoft-backed company violates that agreement.

It is important to mention that Musk co-founded OpenAI in 2015 but resigned from its board in 2018. Subsequently, in October 2022, Musk acquired Twitter for $44 billion.

OpenAI’s ChatGPT became the fastest-growing software application globally within six months of its launch in November 2022.

Moreover, ChatGPT triggered the development of competing chatbots from companies such as Microsoft, Alphabet, and various startups that capitalized on the excitement to secure billions in funding.

Since ChatGPT’s introduction, many companies have adopted it for diverse tasks such as document summarization and coding, and it has ignited a competitive race among major tech firms to release their own generative AI products.

Although OpenAI is currently valued at $157 billion, it still faces challenges ahead

Recently, OpenAI completed the most lucrative funding round in Silicon Valley’s history. The next step is to successfully navigate a highly competitive AI landscape.

Even though Sam Altman’s company has solidified its leading position in the generative AI boom by achieving a new $157 billion valuation after securing $6.6 billion in fresh capital from prominent investors, its top position is not assured.

Since the launch of ChatGPT in late 2022, it has become evident that OpenAI’s mission to create large language models that can rival human intelligence will involve substantial costs and extensive resources.

Though Altman’s company now exerts significant influence over the industry with its new $157 billion valuation, numerous competitors are vying for capital and resources, making the startup’s path to profitability more complex.

Thus, while OpenAI has a moment to celebrate, time will soon reveal how strong its competitive advantage is and whether a severe wave of consolidation is imminent in Silicon Valley’s booming industry.

While OpenAI’s recent valuation and capital influx represent enormous amounts that any founder in Silicon Valley would envy, indications suggest that Altman remains somewhat apprehensive.

As per a Financial Times report about the fundraising, Altman’s nearly nine-year-old venture urged its new investors—a group led by Thrive Capital, which includes Nvidia, SoftBank, and Microsoft—to refrain from funding rival companies, of which there are many.

Anthropic and Mistral, both valued in the billions, are aiming to challenge OpenAI. Additionally, Musk’s xAI and Safe Superintelligence (SSI), a startup founded in June by Ilya Sutskever, OpenAI’s former chief scientist, who was involved in the brief ouster of Altman, are also in the mix.

“For the main model developers, these mega-funding rounds are becoming standard as the expenses for training the largest models are soaring into the hundreds of millions of dollars,” remarked Nathan Benaich, founder and partner at Air Street Capital, an investment firm.

Several significant factors indicate that OpenAI cannot afford to be complacent.

For starters, the expenses associated with delivering groundbreaking advancements in generative AI are projected to escalate. Dario Amodei, CEO of Anthropic, noted earlier this year that he anticipates training expenses for AI models could exceed $10 billion by 2026 and potentially reach $100 billion afterward.

OpenAI itself might face training costs surpassing $3 billion annually, as previously estimated by The Information. Training GPT-4o, for instance, cost around $100 million, and this figure is expected to increase with the complexity of future AI models.

A portion of the expenses is fueled by the acquisition of powerful chips, referred to as GPUs, primarily sourced from Jensen Huang’s Nvidia, to establish clusters in data centers. These chips are crucial for supplying the computational strength necessary to operate large language models (LLMs).

The competition for talent has been intense in this current wave, as AI laboratories strive to gain an advantage over their rivals, prompting them to present ever more extravagant compensation packages.

Benaich remarked to BI, “These expenses are set to escalate as firms continue to invest heavily to compete for often slight performance improvements over their rivals. This competition lacks clear historical comparisons, largely due to the staggering capital expenditure requirements and the absence of a straightforward path to profitability.”

Although OpenAI’s newfound capital will assist in financing some of the more costly aspects of its operations, it isn’t exactly in a strong position at this moment. A report from The New York Times last week indicated that the leading AI laboratory globally is poised to finish the year with a $5 billion deficit.

Additionally, OpenAI’s rumored push for exclusivity among its investors may have potential downsides. Benaich characterized this approach as “uncommon” but also as a representation of how OpenAI views its own clout in the market.

“This is also a daring strategy that may attract unwanted scrutiny from regulatory bodies,” he added.

For experts in the industry, this situation poses questions about the long-term sustainability of such practices.

Investors foresee some degree of consolidation approaching.

As OpenAI solidifies its role as the leading player in the industry, investors expect some consolidation among startups focusing on foundational models in the upcoming year.

LLM startups require continuous access to substantial capital, but not everyone can secure the same inflow of funds as OpenAI. With Microsoft absorbing the team behind Inflection.ai and Google similarly attracting the founding team of Character.ai, investors anticipate more deals of this nature in the near future.

“This is a competition for capital as well, and ultimately only financial backers like sovereign wealth funds will be capable of providing the necessary capital for these LLM startups,” a European growth-stage venture capitalist mentioned to BI.

When funding becomes scarce, established giants, including major tech companies, might acquire smaller focused companies. These smaller firms have access to a vast array of proprietary data for training their models.

Venture capitalists also predict a more grounded approach to investing in LLM leaders at inflated valuations. “Many other firms are raising funds based on aspiration rather than substance, and I believe we will begin to witness a certain rationalization in that area,” another growth-stage VC informed BI, noting that “the overheated excitement surrounding AI is likely to temper next year.”

“You don’t require 50 foundational model enterprises — it’s more likely that you’ll end up with two or four,” he stated.

He added that those companies which endure will be the ones that effectively cater to consumer needs. “You might see Amazon, Anthropic, OpenAI, Meta, and Google, but I struggle to envision many others existing.”

OpenAI has successfully secured $6.6 billion in a significant funding round that values the startup at $157 billion, placing it in a small group of tech startups with extraordinarily high private valuations.

This deal, which roughly doubles OpenAI’s valuation from just this past February, highlights the intense expectations investors have for the generative AI surge that OpenAI catalyzed with the launch of ChatGPT in 2022.

“The new funds will enable us to strengthen our position in leading-edge AI research, enhance our computing power, and continue developing tools that assist people in tackling challenging problems,” OpenAI stated in its announcement regarding the deal on Wednesday.

The funding arrives as the nine-year-old AI startup, helmed by CEO Sam Altman, confronts rising competition from companies like Google, Meta, and other AI startups, and during a time when OpenAI is navigating its own transitions — most famously marked by a boardroom incident last year that saw Altman briefly ousted and then reinstated within five days.

Since that time, the firm has faced a series of significant leadership exits as it tries to shift from its origins as a nonprofit research entity to a producer of commercial products that can take advantage of the booming AI sector. Recently, OpenAI’s chief technology officer Mira Murati unexpectedly stepped down to “create the time and space for my own exploration.” Moreover, as recently reported by Fortune, some insiders have expressed concerns that the company’s focus on safety may have been compromised in the rush to release new products ahead of competitors.

Despite the internal upheaval, investors seemed eager to gain a stake in the startup.

OpenAI did not reveal the identities of its investors, but Thrive Capital confirmed via email to Fortune that they had invested and led this latest funding round. According to Bloomberg, which first shared news of the deal, Khosla Ventures, Altimeter Capital, Fidelity, SoftBank, and the Abu Dhabi-based MGX also joined in, along with AI chip manufacturer Nvidia and Microsoft, which had previously invested $13 billion.

OpenAI has reported that ChatGPT is used by over 250 million individuals weekly

With this funding, OpenAI solidifies its position as one of the most valuable startups globally, following TikTok parent company ByteDance, valued at $225 billion, and SpaceX, led by Elon Musk, with a valuation of $200 billion, according to CB Insights’ rankings of tech company valuations.

On Wednesday, OpenAI announced that more than 250 million people worldwide engage with ChatGPT weekly.

While the company does not share its financial results, the New York Times has reported that OpenAI’s monthly revenue reached $300 million in August and that the company anticipates generating $11.6 billion in revenue in the coming year.

With a new valuation of $157 billion after funding, investors seem to be assessing the company at 13 times its expected revenue for next year.

In comparison, Google’s parent company, Alphabet, is traded on the stock market at 5.3 times its predicted revenue for next year, while Nvidia is evaluated at approximately 16 times projected revenue.
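
As a quick sanity check of that multiple, a minimal sketch using the figures reported above (the $11.6 billion projection is the one attributed to the New York Times):

```python
# Forward revenue multiple implied by the reported figures.
valuation = 157e9           # post-money valuation, in dollars
projected_revenue = 11.6e9  # projected next-year revenue, in dollars
multiple = valuation / projected_revenue
print(round(multiple, 1))  # -> 13.5, roughly the "13 times" cited above
```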

On Wednesday, OpenAI referenced its foundational principles, emphasizing that it is “making strides toward our goal of ensuring that artificial general intelligence serves the entire human race.”

Artificial general intelligence, or AGI, remains a theoretical concept of an AI system capable of performing tasks as well as or even better than humans.

The potential risks associated with AGI were part of the rationale behind OpenAI’s establishment in 2015, as Altman, Elon Musk, and the other co-founders aimed to create a counterbalance to Google’s DeepMind, which they were concerned would develop AGI driven solely by commercial motives.

Musk, who departed from OpenAI, has criticized the organization for straying from its original purpose, even as he has ventured into his own AI enterprise, xAI.

The valuation of OpenAI has nearly doubled since earlier this year when it arranged a tender offer allowing employees to sell a portion of their shares to private investors, valuing the company at about $80 billion.
