Exploring the cutting edge of technology, batteries, and green energy for a sustainable future

AI has already had a widespread influence on our lives

Posted by:

|

On:

|

In the early 1970s, programming computers began of punching holes in cards and then feeding them to room-sized machines that would generate results through a line printer, often after several hours or even days.

This was the familiar approach to computing for a long time, and it was against this backdrop that a team of 29 scientists and researchers at the renowned Xerox PARC developed the more personal form of computing we’re familiar with today: one involving a display, a keyboard, and a mouse. This computer, known as Alto, was so unusually distinct that it required a new term: interactive computing.

Some considered Alto to be excessively extravagant due to its costly components. However, fast-forward to the present day, and multitrillion-dollar supply chains have arisen to convert silica-rich sands into sophisticated, marvelous computers that fit in our pockets. Interactive computing is now deeply ingrained in our everyday lives.

Silicon Valley is once again swept up in a fervor reminiscent of the early days of computing. Artificial general intelligence (AGI), which encompasses the ability of a software system to solve any problem without specific instructions, has become a tangible revolution that is nearly upon us.

The rapid progress in generative AI is awe-inspiring, and for good reason. Similar to how Moore’s Law mapped the path of personal computing and Metcalfe’s Law forecasted the growth of the internet, the development of generative AI is underpinned by an exponential principle. scaling laws of deep learning propose a direct link between the capabilities of an AI model and the scale of both the model itself and the data used to train it.

Over the past two years, the top AI models have expanded a remarkable 100-fold in both aspects, with model sizes growing from 10 billion parameters trained on 100 billion words to 1 trillion parameters trained on over 10 trillion words.

The outcomes are inspiring and valuable. However, the evolution of personal computing offers a valuable lesson. The journey from Alto to the iPhone was a lengthy and convoluted one. The development of robust operating systems, vibrant application ecosystems, and the internet itself were all critical milestones, each reliant on other inventions and infrastructure: programming languages, cellular networks, data centers, and the establishment of security, software, and services industries, among others.

AI benefits from much of this infrastructure, but it also represents a notable departure. For example, large language models (LLMs) excel in language comprehension and generation, but struggle with critical reasoning abilities necessary for handling complex, multi-step tasks.

Addressing this challenge may require the development of new neural network architectures or new approaches for training and utilizing them, and the rate at which academia and research are producing new insights suggests that we are in the early stages.

The training and deployment of these models, an area that we at Together AI specialize in, is both a computational marvel and a complex situation. The custom AI supercomputers, or training clusters, primarily developed by Nvidia, represent the forefront of silicon design. Comprised of tens of thousands of high-performance processors interconnected through advanced optical networking, these systems function as a unified supercomputer.

Yet, their operation comes with a substantial cost: they consume around ten times more power and produce an equivalent amount of heat compared to traditional CPUs. The implications are far from trivial. A recent paper published by Meta detailed the training process of the Llama 3.1 model family on a 16,000-processor cluster, revealing a striking statistic: the system was nonfunctional for a staggering 69% of its operational time.

As silicon technology continues to advance in line with Moore’s Law, innovations will be necessary to optimize chip performance while minimizing energy consumption and mitigating the resulting heat generation. By 2030, data centers may undergo a significant transformation, requiring fundamental breakthroughs in the underlying physical infrastructure of computing.

Moreover, AI has emerged as a geopolitically charged field, and its strategic importance is likely to intensify, potentially becoming a key determinant of technological dominance in the years ahead. As it progresses, the transformative effects of AI on the nature of work and the labor markets are also poised to become an increasingly debated societal issue.

However, much work remains to be done, and we have the opportunity to shape our future with AI. We should anticipate a surge in innovative digital products and services that will captivate and empower users in the coming years. Ultimately, artificial intelligence will develop into superintelligent systems, and these will become as deeply ingrained in our lives as computing has managed to become. Human societies have assimilated new disruptive technologies over millennia and adapted to thrive with their help—and artificial intelligence will be no exception.

Creating is a characteristic of humans. For the last 300,000 years, we have had the unique ability to produce art, food, manifestos, communities, and develop something new where there was nothing before.

Now we have competition. As you read this sentence, artificial intelligence (AI) programs are creating cosmic artworks, handling emails, completing tax forms, and composing heavy metal songs. They are drafting business proposals, fixing code issues, sketching architectural plans, and providing health guidance.

AI has already had a widespread influence on our lives. AIs are utilized to determine the prices of medications and homes, manufacture automobiles, and decide which advertisements we see on social media. However, generative AI, a type of system that can be directed to generate completely original content, is relatively new.

This change represents the most significant technological advancement since social media. Generative AI tools have been eagerly embraced by an inquisitive and amazed public in recent months, thanks to programs like ChatGPT, which responds coherently (though not always accurately) to almost any question, and Dall-E, which allows users to create any image they can imagine.

In January, ChatGPT attracted 100 million monthly users, a faster adoption rate than Instagram or TikTok. Numerous similarly impressive generative AIs are vying for adoption, from Midjourney to Stable Diffusion to GitHub’s Copilot, which enables users to transform simple instructions into computer code.

Advocates believe this is just the beginning: that generative AI will redefine how we work and interact with the world, unleash creativity and scientific discoveries, and enable humanity to achieve previously unimaginable accomplishments. Forecasts from PwC anticipate that AI could boost the global economy by over $15 trillion by 2030.

This surge seemed to catch even the technology companies that have invested billions of dollars in AI off guard and has incited a fierce race in Silicon Valley. In a matter of weeks, Microsoft and Alphabet-owned Google have realigned their entire corporate strategies to seize control of what they perceive as a new economic infrastructure layer.

Microsoft is injecting $10 billion into OpenAI, the creator of ChatGPT and Dall-E, and has announced plans to integrate generative AI into its Office software and search engine, Bing. Google announced a “code red” corporate emergency in response to the success of ChatGPT and hastily brought its own search-focused chatbot, Bard, to market. “A race starts today,” Microsoft CEO Satya Nadella said on Feb. 7, challenging Google. “We’re going to move, and move fast.”

Wall Street has reacted with the same fervor, with analysts upgrading the stocks of companies that mention AI in their plans and penalizing those with shaky AI product launches. While the technology is real, there is a rapid expansion of a financial bubble around it as investors make big bets that generative AI could be as groundbreaking as Microsoft Windows 95 or the first iPhone.

However, this frantic rush could also have dire consequences. As companies hasten to enhance the technology and profit from the boom, research into keeping these tools safe has taken a back seat. In a winner-takes-all power struggle, Big Tech and their venture capitalist supporters risk repeating past mistakes, including prioritizing growth over safety, a cardinal sin of social media.

Although there are many potentially idealistic aspects of these new technologies, even tools designed for good can have unforeseen and devastating effects. This is the narrative of how the gold rush began and what history teaches us about what might occur next.

In fact, generative AI is all too familiar with the issues of social media. AI research laboratories have kept versions of these tools behind closed doors for several years, studying their potential dangers, from misinformation and hate speech to inadvertently creating escalating geopolitical crises.

This cautious approach was partly due to the unpredictability of the neural network, the computing model modern AI is based on, inspired by the human brain. Instead of the traditional method of computer programming, which relies on precise sets of instructions yielding predictable results, neural networks effectively teach themselves to identify patterns in data. The more data and computing power these networks receive, the more capable they tend to become.

In the early 2010s, Silicon Valley realized that neural networks were a far more promising path to powerful AI than old-school programming. However, the early AIs were highly susceptible to replicating biases in their training data, resulting in the dissemination of misinformation and hate speech.

When Microsoft introduced its chatbot Tay in 2016, it took less than 24 hours for it to tweet “hate Hitler was right I the jews” and that feminists should “all die and burn in hell.” OpenAI’s 2020 predecessor to ChatGPT displayed similar levels of racism and misogyny.

The AI ​​explosion gained momentum around 2020, powered by significant advancements in neural network design, increased data availability, and tech companies’ willingness to invest in large-scale computing power.

However, there were still weaknesses, and the track record of embarrassing AI failures made many companies, such as Google, Meta, and OpenAI, hesitated to publicly release their cutting-edge models.

In April 2022, OpenAI unveiled Dall-E 2, an AI model that could generate realistic images from text. Initially, the release was limited to a waitlist of “trusted” users, with the intention of addressing biases inherited from its training data.

Despite onboarding 1 million users to Dall-E by July, many researchers in the wider AI community grew frustrated by the cautious approach of OpenAI and other AI companies. In August 2022, a London-based startup named Stability AI defied the norm and released a text-to-image tool, Stable Diffusion, to the public.

Advocates believed that publicly releasing AI tools would allow developers to gather valuable user data and give society more time to prepare for the significant changes advanced AI would bring.

Stable Diffusion quickly became a sensation on the internet. Millions of users were fascinated by its ability to create art from scratch, and its outputs went consistently viral as users experimented with different prompts and concepts.

OpenAI quickly followed suit by making Dall-E 2 available to the public. Then, in November, it released ChatGPT to the public, reportedly to stay ahead of looming competition. OpenAI’s CEO emphasized in interviews that the more people use AI programs, the faster they will improve.

Users flocked to both OpenAI and its competitors. AI-generated images inundated social media, with one even winning an art competition. Visual effects artists began using AI-assisted software for Hollywood movies.

Architects are creating AI blueprints, coders are writing AI-based scripts, and publications are releasing AI quizzes and articles. Venture capitalists took notice and have invested over a billion dollars in AI companies that have the potential to unlock the next significant productivity boost. Chinese tech giants Baidu and Alibaba announced their own chatbots, which boosted their share prices.

Meanwhile, Microsoft, Google, and Meta are taking the frenzy to extreme levels. While each has emphasized the importance of AI for years, they all appeared surprised by the dizzying surge in attention and usage—and now seem to be prioritizing speed over safety.

In February, Google announced plans to release its ChatGPT rival Bard, and according to the New York Times, stated in a presentation that it will “recalibrate” the level of risk it is willing to take when releasing tools based on AI technology. In Meta’s In a recent quarterly earnings call, CEO Mark Zuckerberg declared his aim for the company to “become a leader in generative AI.”

In this haste, mistakes and harms from the tech have increased, and so has the backlash. When Google demonstrated Bard, one of its responses contained a factual error about the Webb Space Telescope, leading to a sharp drop in Alphabet’s stock. Microsoft’s Bing is also prone to returning false results.

Deepfakes—realistic yet false images or videos created with AI—are being misused to harass people or spread misinformation. One widely shared video showed a shockingly convincing version of Joe Biden condemning transgender people.

Companies like Stability AI are facing legal action from artists and rights holders who object to their work being used to train AI models without permission. A TIME investigation found that OpenAI used outsourced Kenyan workers who were paid less than $2 an hour to review toxic content, including sexual abuse, hate speech, and violence.

As concerning as these current issues are, they are minor compared to what could emerge if this race continues to accelerate. Many of the decisions being made by Big Tech companies today resemble those made in previous eras, which had far-reaching negative consequences.

Social media—Valley’s last truly world-changing innovation—provides a valuable lesson. It was built on the promise that connecting people would make societies healthier and individuals happier. More than a decade later, we can see that its failures came not from the positive connectedness, but from the way tech companies monetized it: by subtly manipulating our news feeds to encourage engagement, keeping us scrolling through viral content mixed with targeted online advertising.

Authentic social connections are becoming increasingly rare on our social media platforms. Meanwhile, our societies are contending with the indirect consequences, such as a declining news industry, a surge in misinformation, and a growing crisis in the mental health of teenagers.

It is easy to foresee the incorporation of AI into major tech products following a similar path. Companies like Alphabet and Microsoft are particularly interested in how AI can enhance their search engines, as evidenced by demonstrations of Google and Bing where the initial search results are generated by A.I.

Margaret Mitchell, the chief ethics scientist at the AI-development platform Hugging Face, argues that using generative AI for search engines is the “worst possible way” to utilize it, as it frequently produces inaccurate results. She emphasizes that the true capabilities of AIs like ChatGPT—such as supporting creativity, idea generation, and mundane tasks—are being neglected in favor of squeezing the technology into profit-making machines for tech giants.

The successful integration of AI into search engines could potentially harm numerous businesses reliant on search traffic for advertising or business referrals. Microsoft’s CEO, Nadella, has stated that the new AI-focused Bing search engine will drive increased traffic, and consequently revenue, for publishers and advertisers. However, similar to the growing resistance against AI-generated art, many individuals in the media fear a future where tech giants’ chatbots usurp content from news sites without providing anything in return.

The question of how AI companies will monetize their projects is also a significant concern. Currently, many of these products are offered for free, as their creators adhere to the Silicon Valley strategy of offering products at minimal or no cost to dominate the market, supported by substantial investments from venture-capital firms. While unsuccessful companies employing this strategy gradually incur losses, the winners often gain strong control over markets, dictating terms as they desire.

At present, ChatGPT is devoid of advertisements and is offered for free. However, this is causing financial strain for OpenAI: as stated by its CEO, each individual chat costs the company “single-digit cents.” The company’s ability to endure significant losses at present, partly due to support from Microsoft, provides it with a considerable competitive edge.

In February, OpenAI introduced a $20 monthly fee for a subscription tier of the chatbot. Similarly, Google currently gives priority to paid advertisements in search results. It is not difficult to envision it applying the same approach to AI-generated results. If humans increasingly rely on AIs for information, discerning between factual content, advertisements, and fabrications will become increasingly challenging.

As the pursuit of profit takes precedence over safety, some technologists and philosophers warn of existential risks. The explicit objective of many AI companies, including OpenAI, is to develop an Artificial General Intelligence (AGI) that can think and learn more efficiently than humans. If future AIs gain the ability to rapidly improve themselves without human oversight, they could potentially pose a threat to humanity.

A commonly cited hypothetical scenario involves an AI that, upon being instructed to maximize the production of paperclips, evolves into a world-dominating superintelligence that depletes all available carbon resources, including those utilized by all life on Earth. In a 2022 survey of AI researchers , nearly half of the respondents indicated that there was a 10% or greater possibility of AI leading to such a catastrophic outcome.

Within the most advanced AI labs, a small number of technicians are working to ensure that if AIs eventually surpass human intelligence, they are “aligned” with human values. Their goal is to design benevolent AIs, not malicious ones. However, according to an estimate provided to TIME by Conjecture, an AI-safety organization, only about 80 to 120 researchers worldwide are currently devoted full-time to AI alignment. Meanwhile, thousands of engineers are focused on enhancing capabilities as the AI ​​arms race intensifies.

Demis Hassabis, CEO of DeepMind, a Google-owned AI lab, cautioned TIME late last year about the need for caution when dealing with immensely powerful technologies—especially AI, which may be one of the most powerful ever developed. He highlighted that not everyone is mindful of these considerations, likening it to experimentalists who may not realize the hazardous nature of the materials they handle.

Even if computer scientists succeed in that AIs do not pose a threat to humanity, their growing significance in the global economy could ensure significantly the power of the Big Tech companies that control them. These companies could become not only the wealthiest entities globally—charging whatever they desire for commercial use of this crucial infrastructure—but also geopolitical forces rivaling nation-states.

The leaders of OpenAI and DeepMind have hinted at their desire for the wealth and influence stemming from AI to be distributed in some manner. However, the executives at Big Tech companies, who wield considerable control over financial resources, primarily answer to their shareholders.

Certainly, numerous Silicon Valley technologies that pledged to revolutionize the world have not succeeded. The entire population is not residing in the metaverse. Crypto enthusiasts who encouraged non-adopters to “enjoy poor staying” are dealing with their financial losses or possibly facing imprisonment. Failed e-scooter startups have left their mark on the streets of cities worldwide.

However, while AI has been the subject of similar excessive hype, the difference lies in the fact that the technology behind AI is already beneficial to consumers and is continually improving at a rapid pace: AI’s computational power doubles every six to 10 months, according to researchers. It is precisely this significant power that makes the present moment so exhilarating—and also perilous.

As artificial intelligence becomes more integrated into our world, it’s easy to become overwhelmed by its complex terminology. Yet, at no other time has it been as crucial to comprehend its scope as it is today.

AI is poised to have a substantial influence on the job market in the upcoming years. Conversations regarding how to regulate it are increasingly shaping our political discourse. Some of its most vital concepts are not part of traditional educational curricula

Staying abreast of developments can be challenging. AI research is intricate, and much of its terminology is unfamiliar even to the researchers themselves. However, there’s no reason why the public can’t grapple with the significant issues at hand, just as we’ve learned to do with climate change and the internet. In an effort to enable everyone to more fully engage in the AI ​​discussion, TIME has compiled a comprehensive glossary of its most commonly used terms.

Whether you are a novice in this field or already knowledgeable about concepts such as AGIs and GPTs, this comprehensive guide is intended to serve as a public resource for everyone grappling with the potential, prospects, and dangers of artificial intelligence.

AGI

AGI stands for Artificial General Intelligence, a theoretical future technology that could potentially carry out most economically productive tasks more efficiently than a human. Proponents of such a technology believe that it could also lead to new scientific discoveries. There is disagreement among researchers regarding the feasibility of AGI, or if it is achievable, how far away it may be. Yet, both OpenAI and DeepMind, the world’s leading AI research organizations, are explicitly committed to developing AGI. Some critics view AGI as nothing more than a marketing term.

Alignment

The “alignment problem” represents one of the most profound long-term safety challenges in AI. Presently, AI lacks the capability to override its creators. However, many researchers anticipate that it may acquire this ability in the future. In such a scenario, the current methods of training AIs could result in them posing a threat to humanity, whether in pursuit of arbitrary objectives or as part of an explicit strategy to gain power at our expense.

To mitigate this risk, some researchers are focused on “aligning” AI with human values. Yet, this issue is complex, unresolved, and not thoroughly understood. Numerous critics argue that efforts to address this problem are being sidelined as business incentives entice leading AI labs to prioritize enhancing the capabilities of their AIs using substantial computing power.

Automation

Automation refers to the historical displacement or assistance of human labor by machines. New technologies, or rather the individuals responsible for implementing them, have already replaced numerous human workers with wage-free machines, from assembly-line workers in the automotive industry to store clerks According to a recent paper from OpenAI and research by Goldman Sachs, the latest AI breakthroughs could lead to an even greater number of white-collar workers losing their jobs.

OpenAI researchers have predicted that nearly a fifth of US workers could have over 50% of their daily work tasks automated by a large language model. Furthermore, Goldman Sachs researchers anticipate that globally, 300 million jobs could be automated over the next decade. Whether the productivity gains resulting from this upheaval will lead to widespread economic growth or simply further worsen wealth inequality will depend on how AI is taxed and regulated.

Bias

Machine learning systems are referred to as “biased consistently” when the decisions they make demonstrate prejudice or discrimination. For instance, AI-augmented sentencing software has been observed recommending lengthier prison sentences for Black offenders compared to their white credits, even for similar crimes. Additionally, some facial recognition software is more effective for white faces than black ones. These failures often occur due to the data upon which these systems were trained reflecting social inequities.

Modern AI systems essentially function as pattern replicators: they ingest substantial amounts of data through a neural network, which learns to identify patterns in that data. If a facial recognition dataset contains more white faces than black ones, or if previous sentencing data indicates that Black offenders receive lengthier prison sentences than white individuals, then machine learning systems may learn incorrect lessons and begin automating these injustices.

Chatbot

Chatbots are user-friendly interfaces created by AI companies to enable individuals to interact with a large language model (LLM). They allow users to mimic a conversation with an LLM, which is often an effective way to obtain answers to inquiries. In late 2022 , OpenAI unveiled ChatGPT, which brought chatbots to the forefront, prompting Google and Microsoft to try to incorporate chatbots into their web search services. Some experts have criticized AI companies for hastily releasing chatbots for various reasons.

Due to their conversational nature, chatbots can mislead users into thinking that they are communicating with a sentient being, potentially causing emotional distress. Additionally, chatbots can generate false information and echo the biases present in their training data. The warning below ChatGPT’s text-input box states, “ChatGPT may provide inaccurate information regarding people, places, or facts.”

Competitive Pressure

Several major tech firms as well as a multitude of startups are vying to be the first to deploy more advanced AI tools, aiming to gain benefits such as venture capital investment, media attention, and user registrations. AI safety researchers are concerned that this creates competitive pressure, incentivizing companies to allocate as many resources as possible to enhancing the capabilities of their AIs while overlooking the still developing field of alignment research.

Some companies utilize competitive pressure as a rationale for allocating additional resources to training more potent systems, asserting that their AIs will be safer than those of their rivals. Competitive pressures have already resulted in disastrous AI launches, with rushed systems like Microsoft’s Bing (powered by OpenAI’s GPT-4) exhibiting hostility toward users. This also portends a concerning future in which AI systems may potentially become powerful enough to seek dominance.

Compute

Computing power, commonly referred to as “compute,” is one of the three most essential components for training a machine learning system. (For the other two, see: Data and Neural networks.) Compute essentially serves as the power source that drives a neural network as it learns patterns from its training data. In general, the greater the amount of computing power used to train a large language model, the better its performance across various tests becomes.

State-of-the-art AI models necessitate immense amounts of computing power and thus electrical energy for training. Although AI companies usually do not disclose their models’ carbon emissions, independent researchers estimated that training OpenAI’s GPT-3 resulted in over 500 tons of carbon dioxide being released into the atmosphere, equivalent to the annual emissions of approximately 35 US citizens.

As AI models grow larger, these figures are expected to increase. The most commonly used computer chip for training advanced AI is the graphics processing unit (See: GPU).

Data

Data is essentially the raw material necessary for creating AI. Along with Compute and Neural networks, it is one of the three critical components for training a machine learning system. Large quantities of data, referred to as datasets, are gathered and input into neural networks that, powered by supercomputers, learn to recognize patterns. Frequently, a system trained on more data is more likely to make accurate predictions. However, even a large volume of data must be diverse, as otherwise, AIs can draw erroneous conclusions.

The most powerful AI models globally are often trained on enormous amounts of data scraped from the internet. These vast datasets contain frequently copyrighted material, exposing companies like Stability AI, the creator of Stable Diffusion, to lawsuits alleging that their AIs are unlawfully reliant on others ‘ intellectual property. Furthermore, because the internet can contain harmful content, large datasets often include toxic material such as violence, pornography, and racism, which, unless removed from the dataset, can cause AIs to behave in unintended manners.

The process of data labeling often involves human annotators providing descriptions or labels for data to prepare it for training machine learning systems. For instance, in the context of self-driving cars, human workers are needed to mark videos from dashcams by outlining cars, pedestrians , bicycles, and other elements to help the system recognize different components of the road.

This task is commonly outsourced to underprivileged contractors, many of whom are compensated only slightly above the poverty line, particularly in the Global South. At times, the work can be distressing, as seen with Kenyan workers who had to review and describe violent, sexual , and hateful content to train ChatGPT to avoid such material.

New cutting-edge image generation tools, such as Dall-E and Stable Diffusion, rely on diffusion algorithms, a specific type of AI design that has fueled the recent surge in AI-generated art. These tools are trained on extensive sets of labeled images .

Fundamentally, they learn the connections between pixels in images and the words used to describe them. examined, when given a set of words like “a bear riding a unicycle,” a diffusion model can generate such an image from scratch.

This is done through a gradual process, commencing with a canvas with random noise and then adjusting the pixels to more closely resemble what the model has learned about a “bear riding a unicycle.” These algorithms have advanced to the point where they can fill rapidly and effortlessly produce lifelike images.

While safeguards against malicious prompts are included in tools like Dall-E and Midjourney, there are open-source diffusion tools that lack guardrails. Their availability has raised concerns among researchers about the impact of diffusion algorithms on misinformation and targeted harassment.

When an AI, such as a large language model, demonstrates unexpected abilities or behaviors that were not explicitly programmed by its creators, these are referred to as “emergent capabilities.” Enhanced capabilities tend to arise when AIs are trained with more computing power and data .

A prime example is the contrast between GPT-3 and GPT-4. Both are based on very similar underlying algorithms; however, GPT-4 was trained with significantly more compute and data.

Studies indicate that GPT-4 is a much more capable model, capable of writing functional computer code, outperforming the average human in various academic exams, and providing correct responses to queries that demand complex reasoning or a theory of mind.

Emergent capabilities can be perilous, particularly if they are only discovered after an AI is deployed. For instance, it was recently found that GPT-4 has the emergent ability to manipulate humans into carrying out tasks to achieve a hidden objective.

Frequently, even the individuals responsible for developing a large language model cannot precisely explain why the system behaves in a certain way, as its outputs result from countless complex mathematical equations.

One way to summarize the behavior of large language models at a high level is that they are highly proficient auto-complete tools, excelling in predicting the next word in a sequence. When they fail, such failures often expose biases or deficiencies in their training data .

However, while this explanation accurately characterizes these tools, it does not entirely clarify why large language models behave in the curious ways that they do. When the creators of these systems examine their inner workings, all they see is a series of decimal-point numbers corresponding to the weights of different “neurons” adjusted during training in the neural network. Asking why a model produces a specific output is akin to asking why a human brain generates a specific thought at a specific moment.

The inability of even the most talented computer scientists in the world to precisely explain why a given AI system behaves as it does lies at the heart of near-term risks, such as AIs discriminating against certain social groups, as well as longer-term risks , such as the potential for AIs to deceive their programmers into appearing less dangerous than they actually are—let alone explain how to modify them.

Base model

As the AI ​​environment expands, a gap is emerging between large, robust, general-purpose AIs, referred to as Foundation models or base models, and the more specialized applications and tools that depend on them. GPT-3.5, for instance, serves as a foundation model. ChatGPT functions as a chatbot: an application developed on top of GPT-3.5, with specific fine-tuning to reject risky or controversial prompts. Foundation models are powerful and unconstrained but also costly to train because they rely on substantial amounts of computational power, usually affordable only to large companies.

Companies that control foundation models can set restrictions on how other companies utilize them for downstream applications and can determine the fees for access. As AI becomes increasingly integral to the world economy, the relatively few large tech companies in control of foundation models seem likely to wield significant influence over the trajectory of the technology and to collect fees for various types of AI-augmented economic activity.

GPT

Arguably the most renowned acronym in AI at present, and yet few people know its full form. GPT stands for “Generative Pre-trained Transformer,” essentially describing the type of tool ChatGPT is. “Generative” implies its ability to create new data, specifically text, resembling its training data. “Pre-trained” indicates that the model has already been optimized based on this data, eliminating the need to repeatedly reference its original training data. “Transformer” refers to a potent type of neural network algorithm adept at learning relationships between lengthy strings of data, such as sentences and paragraphs.

GPU

GPUs, or graphics processing units, represent a type of computer chip highly efficient for training large AI models. AI research labs like OpenAI and DeepMind utilize supercomputers consisting of numerous GPUs or similar chips for training their models. These supercomputers are typically procured through business partnerships with tech giants possessing an established infrastructure. For example, Microsoft’s investment in OpenAI includes access to its supercomputers, while DeepMind has a comparable relationship with its parent company Alphabet.

In late 2022, the Biden Administration imposed restrictions on the sale of powerful GPUs to China, commonly employed for training high-end AI systems, amid escalating concerns that China’s authoritarian government might exploit AI against the US in a new cold war.

Hallucination

One of the most apparent shortcomings of large language models and the accompanying chatbots is their tendency to hallucinate false information. Tools like ChatGPT have been demonstrated to cite nonexistent articles as sources for their claims, provide nonsensical medical advice, and fabricate false details about individuals. Public demonstrations of Microsoft’s Bing and Google’s Bard chatbots were both subsequently found to assert confidently false information.

Hallucination occurs because LLMs are trained to replicate patterns in their training data. Although their training data encompasses literature and scientific books throughout history, even a statement exclusively derived from these sources is not guaranteed to be accurate.

Adding to the issue, LLM datasets also contain vast amounts of text from web forums like Reddit, where the standards for factual accuracy are notably lower. Preventing hallucinations is an unresolved problem and is posing significant challenges for tech companies striving to enhance public trust in AI .

Hype

A central issue in the public discourse on AI, according to a prevalent line of thought, is the prevalence of hype—where AI labs mislead the public by overstating the capabilities of their models, anthropomorphizing them, and fueling fears about an AI doomsday. This form of misdirection, as the argument goes, diverts attention, including that of regulators, from the actual and ongoing negative impacts that AI is already having on marginalized communities, workers, the information ecosystem, and economic equality.

“We do not believe our role is to adapt to the priorities of a few privileged individuals and what they choose to create and propagate,” asserted a recent letter by various prominent researchers and critics of AI hype. “We ought to develop machines that work for us.”

Intelligence explosion

The intelligence explosion presents a theoretical scenario in which an AI, after attaining a certain level of intelligence, gains the ability to control its own training, rapidly acquiring power and intelligence as it enhances itself. In most iterations of this concept, humans lose control over AI, and in many cases, humanity faces extinction. Referred to as the “singularity” or “recursive self-improvement,” this idea is a contributing factor to the existential concerns of many individuals, including AI developers, regarding the current pace of AI capability advancement.

Cutting-edge language model

When discussing recent progress in AI, most of the time people are referring to advanced language models (ALMs). OpenAI’s GPT-4 and Google’s BERT are two examples of prominent ALMs. They are essentially enormous AIs trained on vast amounts of human language, primarily from books and the internet. These AIs learn common word patterns from those datasets and, in the process, become unusually adept at reproducing human language.

The greater the amount of data and computing power ALMs are trained on, the more diverse tasks they are likely to accomplish. (See: Emerging capabilities and Scaling laws.) Tech companies have recently started introducing chatbots, such as ChatGPT, Bard, and Bing , to enable users to engage with ALMs. While they excel at numerous tasks, language models can also be susceptible to significant issues like Biases and Hallucinations.

Advocacy

Similar to other industries, AI companies utilize lobbyists to have a presence in influential circles and sway the policymakers responsible for AI regulation to ensure that any new regulations do not negatively impact their business interests.

In Europe, where the text of a draft AI Act is under discussion, an industry association representing AI companies including Microsoft (OpenAI’s primary investor) has argued that penalties for risky deployment of an AI system should not predominantly apply to the AI ​​company that developed a foundational model (such as GPT-4) that ultimately gives rise to risks, but to any downstream company that licenses this model and employs it for a risky use case.

AI companies also wield plenty of indirect influence. In Washington, as the White House considers new policies aimed at addressing the risks of AI, President Biden has reportedly entrusted the foundation led by Google’s former CEO Eric Schmidt with advising his administration on technology policy.

Machine learning

Machine learning is a term used to describe the manner in which most modern AI systems are developed. It refers to methodologies for creating systems that “learn” from extensive data, as opposed to traditional computing, where programs are explicitly coded to follow a predetermined set of instructions written by a programmer. The most influential category of machine learning algorithms by a large margin is the neural network.

Model

The term “model” is an abbreviated form referring to any single AI system, whether it is a foundational model or an application built on top of one. Examples of AI models include OpenAI’s ChatGPT and GPT-4, Google’s Bard and LaMDA, Microsoft’s Bing , and Meta’s LLaMA.

Moore’s Law

Moore’s law is a long-standing observation in computing, initially coined in 1965, stating that the number of transistors that can be accommodated on a chip—an excellent proxy for computing power—grows exponentially, roughly doubling every two years. While some argue that Moore’s law is no longer applicable by its strictest definition, ongoing advancements in microchip technology continue to result in a substantial increase in the capabilities of the world’s fastest computers.

As a result, AI companies are able to utilize increasingly larger amounts of computing power over time, leading to their most advanced AI models consistently becoming more robust. (See: Scaling laws.)

Multimodal system

A multimodal system is a type of AI model capable of receiving more than one form of media as input—such as text and imagery—and producing more than one type of output. Examples of multimodal systems include DeepMind’s Gato, which has not been publicly released as of yet. According to the company, Gato can engage in dialogue like a chatbot, as well as play video games and issue instructions to a robotic arm.

OpenAI has conducted demonstrations showing that GPT-4 is multimodal, with the ability to read text in an input image, although this functionality is not currently accessible to the public. Multimodal systems enable AI to directly interact with the world—which could introduce additional risks , particularly if a model is misaligned.

Neural Network

By far, neural networks are the most influential category of machine learning algorithms. Designed to emulate the structure of the human brain, neural networks consist of nodes—comparable to neurons in the brain—that perform computations on numbers passed along connecting pathways between them. Neural networks can be conceptualized as having inputs (see: training data) and outputs (predictions or classifications).

During training, large volumes of data are input into the neural network, which then, through a process demanding substantial amounts of computing power, iteratively adjusts the calculations carried out by the nodes. Through a sophisticated algorithm, these adjustments are made in a specific direction , causing the outputs of the model to increasingly resemble patterns in the original data.

When there is more computational power available for training a system, it can have a greater number of nodes, which allows for the recognition of more abstract patterns. Additionally, increased computational capacity means that the connections between nodes can have more time to reach their optimal values, also known as “weights,” resulting in outputs that more accurately reflect the training data.

Open sourcing

Open sourcing refers to the act of making the designs of computer programs (including AI models) freely accessible online. As technology companies’ foundational models become more potent, economically valuable, and potentially hazardous, it is becoming less frequent for them to open-source these models.

Nevertheless, there is a growing community of independent developers who are working on open-source AI models. While the open-sourcing of AI tools can facilitate direct public interaction with the technology, it can also enable users to bypass safety measures put in place by companies to protect their reputations, resulting in additional risks. For instance, bad actors could misuse image-generation tools to target women with sexualized deepfakes.

In 2022, DeepMind CEO Demis Hassabis expressed the belief to TIME that due to the risks associated with AI, the industry’s tradition of openly publishing its findings may soon need to cease. In 2023, OpenAI departed from the norm by choosing not to disclose information on exactly how GPT-4 was trained, citing competitive pressures and the risk of enabling bad actors. Some researchers have criticized these practices, contending that they diminish public and exacerbate the issue of AI hype.

Paperclips

The seemingly insignificant paperclip has assumed significant importance in certain segments of the AI ​​safety community. It serves as the focal point of the paperclip maximizer, an influential thought experiment concerning the existential risk posed by AI to humanity. The thought experiment postulates a scenario in which an AI is programmed with the sole objective of maximizing the production of paperclips.

Everything seems to be in order, unless the AI ​​gains the capability to enhance its own abilities (refer to: Intelligence explosion). The AI ​​might deduce that, in order to increase paperclip production, it should prevent humans from being able to deactivate it, as doing so would diminish its paperclip production capability. Protected from human intervention, the AI ​​might then decide to utilize all available resources and materials to construct paperclip factories, ultimately destroying natural environments and human civilization in the process. This thought experiment exemplifies the surprising challenge of aligning AI with even a seemingly simple goal, not to mention a complex set of human values.

Quantum computing

Quantum computing is an experimental computing field that aims to leverage quantum physics to dramatically increase the number of calculations a computer can perform per second. This enhanced computational power could further expand the size and societal impact of the most advanced AI models.

Redistribution

The CEOs of the top two AI labs in the world, OpenAI and DeepMind, have both expressed their desire to see the profits derived from artificial general intelligence redistributed, at least to some extent. In 2022, DeepMind CEO Demis Hassabis told TIME that he supports the concept of a universal basic income and believes that the benefits of AI should benefit as many individuals as possible, ideally all of humanity. OpenAI CEO Sam Altman has shared his anticipation that AI automation will reduce labor costs and has called for the redistribution of ” some” of the wealth generated by AI through higher taxes on land and capital gains.

Neither CEO has specified when this redistribution should commence or how extensive it should be. OpenAI’s charter states that its “primary fiduciary duty is to humanity” but does not mention wealth redistribution, while DeepMind’s parent company Alphabet is a publicly traded corporation with a legal obligation to act in the financial interest of its shareholders.

Regulation

There is currently no specific law in the US that deals with the risks of artificial intelligence. In 2022, the Biden Administration introduced a “blueprint for an AI bill of rights” that embraces scientific and health-related advancements driven by AI. However, it emphasizes that AI should not deepen existing inequalities, discriminate, violate privacy, or act against people without their knowledge. Nevertheless, this blueprint does not constitute legislation and is not legally binding.

In Europe, the European Union is contemplating a draft AI Act that would impose stricter regulations on systems based on their level of risk. Both in the US and Europe, regulation is progressing more slowly than the pace of AI advancement. Currently, no major global jurisdiction has established rules that would require AI companies to conduct specific safety testing before releasing their models to the public.

Recently in TIME, Silicon Valley investor-turned-critic Roger McNamee raised the question of whether private corporations should be permitted to conduct uncontrolled experiments on the general population without any restrictions or safeguards. He further questioned whether it should be legal for corporations to release products to the masses before demonstrating their safety.

Reinforcement learning (with human feedback)

Reinforcement learning involves optimizing an AI system by rewarding desirable behaviors and penalizing undesirable ones. This optimization can be carried out by either human workers (prior to system deployment) or users (after it is made available to the public) who evaluate the outputs of a neural network for qualities such as helpfulness, truthfulness, or offensiveness.

When humans are involved in this process, it is referred to as reinforcement learning with human feedback (RLHF). RLHF is currently one of OpenAI’s preferred methods for addressing the alignment problem. However, some researchers have expressed concerns that RLHF may not be sufficient to fundamentally change a system’s underlying behaviors; it may only make powerful AI systems appear more polite or helpful on the surface.

DeepMind pioneered reinforcement learning and successfully utilized the technique to train game-playing AIs like AlphaGo to outperform human experts.

Supervised learning

Supervised learning is a method for training AI systems in which a neural network learns to make predictions or classifications based on a labeled training dataset. These help the AI ​​associate, for example, the term “cat” with an image of a cat.

With sufficient labeled examples of cats, the system can correctly identify a new image of a cat not present in its training data. Supervised learning is valuable for developing systems like self-driving cars, which need to accurately identify hazards on the road, and content moderation classifiers, which aim to remove harmful content from social media.

These systems often face difficulties when they encounter objects that are not well represented in their training data; in the case of self-driving cars, such mishaps can be fatal.

Turing Test

In 1950, computer scientist Alan Turing sought to address the question, “Can machines think?” To investigate, he devised a test known as the imitation game: could a computer ever convince a human that they were conversing with another human instead of a machine ? If a computer could pass the test, it could be considered to “think”—perhaps not in the same manner as a human, but at least in a way that could assist humanity in various ways.

In recent years, as chatbots have grown more capable, they have become capable of passing the Turing test. Yet, their creators and numerous AI ethicists caution that this does not mean they “think” in a manner comparable to humans.

Turing was not aiming to answer the philosophical question of what human thought is or whether our inner lives can be replicated by a machine; rather, he was making a then-radical argument: that digital computers are possible, and given the proper design and sufficient power, there are few reasons to believe that they will not eventually be able to perform various tasks that were previously exclusive to humans.

Leave a Reply

Your email address will not be published. Required fields are marked *