Blog

  • The field of AI music has seen rapid advancement in recent years

    The field of AI music has seen rapid advancement in recent years

    Artificial intelligence is making its way into various aspects of daily life, including music composition. Universal Music is now seeking to take a stand against this trend, as AI-generated music, based on existing works, is increasingly surfacing on music streaming platforms. music giant has reportedly reached out to major streaming services like Spotify and Apple, urging them to address the dissemination of AI-generated music. According to internal emails obtained by the Financial Times, Universal Music is determined to protect the rights of its artists and is prepared to take action if necessary.

    The concern revolves around AI bots using existing songs by popular artists on streaming platforms to learn how to compose new music, often resulting in compositions that sound similar to the original artists. Universal Music stressed unauthorized its moral and commercial obligation to prevent use of its artists ‘ music and to ensure that platforms do not feature content that violates the rights of artists and other creators.

    Universal Music represents well-known artists such as Sarah Conner, Rammstein, Eminem, and Billie Eilish, and is determined to safeguard their rights. The surge in AI programs capable of generating music pieces, including Google’s MusicLM, has led to a growing concern within the music industry. MusicLM, for example, can create music based on text descriptions, showcasing its advancements in both audio quality and adherence to the provided description.

    Additionally, there have been significant achievements in the AI-generated music realm, such as the completion and premiere of Beethoven’s 10th Symphony in 2021, brought to life by an AI program. Despite this progress, there is skepticism from individuals within the music industry regarding AI’s ability to create truly original works of art.

    A study from the Humboldt University of Berlin (HU) and the University of Essex revealed that AI is nearly on par with humans when it comes to creativity. This has raised concerns within the music industry, as there is fear that AI-generated music could Potentially harmful artists.

    While experts like Antonio Krüger, director of the German Research Center for Artificial Intelligence, believe that AI may not be able to venture into entirely new creative territories, the music industry remains vigilant. The industry anticipates that platform partners will take measures to prevent their services from being used in ways that could potentially harm artists. As of now, the streaming services have not provided any statements regarding their stance on AI-generated music or the actions they plan to take.

    Grimes, the musician, made a daring prediction on Sean Carroll’s Mindscape podcast. She expressed her belief that we are approaching the conclusion of human art with the arrival of Artificial General Intelligence (AGI). Grimes stated that once AGI is realized, it will surpass human artistry.

    Her comments incited strong reactions on social media. Zola Jesus, another musician, labeled Grimes as the “voice of silicon fascist privilege,” while Devon Welsh, the frontman of Majical Cloudz, accused her of having a “bird’s-eye view of billionaires.” ”

    Some musicians, however, disagree with Grimes and believe that the emergence of AI will not bring an end to human art, but rather inspire a new era of creativity. Artists like Arca, Holly Herndon, and Toro y Moi have embraced AI to explore innovative musical directions in recent years.

    Furthermore, musicians and researchers worldwide are actively developing tools to make AI more accessible to artists. Despite existing obstacles such as copyright complexities, those working with AI in music hope that the technology will become a democratizing force and an integral part of everyday musical creation.

    Arca, a producer renowned for collaborating with Kanye West and Björk on groundbreaking albums, expressed relief and excitement about the vast potential AI offers. He highlighted the feeling of possibility and the wide-open creative horizon that AI has provided him.

    Artificial intelligence has been closely connected with music for a long time. In 1951, Alan Turing, a pioneer in computer science, constructed a machine that generated three simple melodies. In the 90s, David Bowie experimented with a digital lyric randomizer for inspiration. During inspiration. the same period, a music theory professor trained a computer program to compose new pieces in the style of Bach; when the audience compared its work to a real Bach piece, they couldn’t tell the difference.

    The field of AI music has seen rapid advancement in recent years, thanks to dedicated research teams at universities, investments from major tech companies, and machine learning conferences like NeurIPS. In 2018, Francois Pachet, a longstanding AI music innovator, led the creation of the first pop album composed with artificial intelligence, Hello, World. Last year, the experimental singer-songwriter Holly Herndon garnered praise for Proto, an album in which she collaborated with an AI version of herself.

    Despite the considerable progress, believe many that AI still has a long way to go before it can create hit songs on its own. Oleg Stavitsky, the CEO and co-founder of Endel, an app that generates sound environments, remarked, “AI music is simply not enough advanced to produce a song that you would prefer over a track by Drake.” For example, “Daddy’s Car,” a song created by AI in 2016 to mimic the Beatles, is a confusing mix of psychedelic rock elements that fails to cohesively come together.

    Due to these limitations, very few mainstream pop songs are being created by AI. Instead, more exciting progress is being made in two seemingly opposing branches of music: the practical and the experimental.

    Addressing Needs

    On one end of the spectrum, AI music is meeting a simple demand: there is a greater need for music than ever before, due to the growing number of content creators on streaming and social media platforms. In the early 2010s, composers Drew Silverstein, Sam Estes, and Michael Hobe, while working on music for Hollywood films like The Dark Knight, were inundated with requests for simple background music for film, TV, or video games. “Many of our colleagues wanted music that they couldn’t afford or didn’t have time for — and they didn’t want to use stock music,” explained Silverstein.

    To address this, the trio created Amper, which enables non-musicians to create music by specifying parameters such as genre, mood, and tempo. Amper’s music is now used in podcasts, commercials, and videos for companies like Reuters. According to Silverstein, “Previously, a video editor would search stock music and settle for something sufficient. Now, with Amper, they can say, ‘I know what I want, and in a matter of minutes, I can make it.’” In a recent test similar to the Turing test, the company found that consumers couldn’t differentiate between music composed by humans and that composed by Amper’s AI.

    Similarly, Endel was created to fulfill a modern need: personalized soundscapes. Stavitsky realized that as people increasingly turn to headphones to navigate through the day, “there’s no playlist or song that can adapt to the context of whatever’s happening around you,” he says The app takes several real-time factors into account — including the weather, the listener’s heart rate, physical activity rate, and circadian rhythms — to generate gentle music designed to aid sleep, study, or relaxation.

    Stavitsky mentions that users have effectively used Endel to address ADHD, insomnia, and tinnitus; a company representative reported that the app reached one million downloads by the end of January. Both Amper and Endel empower non-musicians to become involved in a process they may have been excluded from due to a lack of training or background. Silverstein mentioned that Amper will introduce a user-friendly interface this year so that anyone, not just companies, can use it to create songs. “Billions of individuals who may not have been part of the creative class can now be,” he says.

    Advancing Music

    Of course, creating simple tunes or enhanced background noise is vastly distinct from creating exceptional music. This represents a major concern that many have about AI in music: that it could reduce music to functional and generic sounds until every song sounds more or less the same. What if major labels use AI and algorithms to inundate us with simple catchy tunes indefinitely?

    However, musician Claire Evans of the Los Angeles-based electropop band YACHT points out that such opportunistic optimization already lies at the core of the music industry: “That algorithm exists, and it’s called Dr. Luke,” she says, referring to the once exceedingly prevalent producer who creates pop hits based on specific formulas. Thus, it falls upon forward-thinking musicians to leverage the technology for the opposite purpose: to resist standardization and explore uncharted territories that they couldn’t have otherwise.

    The band YACHT used a machine learning system to create their latest album, Chain Tripping. They fed their entire music catalog into the system and then selected the most interesting melodies and lyrics from the output to use in their songs. The resulting dance pop album was unconventional and challenging to both listen to and perform.

    YACHT’s member Evans pointed out that musicians often underestimate how much their playing is influenced by their physical experiences and habits. Learning the new AI-generated music was difficult for the band, as it deviated slightly from their familiar patterns. This venture led to YACHT’s first Grammy nomination after two decades, for best immersive audio album.

    Musician Ash Koosha’s work with AI led to an unexpected emotional breakthrough. He created an AI pop star named Yona, which generates songs using software. Some of Yona’s lyrics were surprisingly vulnerable, which Koosha found astounding. He noted that expressing such raw emotion is something most humans struggle to do unless triggered.

    In Berlin, the hacker duo Dadabots is using AI to create musical disorientation and chaos. They are experimenting with AI-generated death metal livestreams and collaborating with avant-garde songwriters to develop new tools. Co-founder CJ Carr views AI as both a trainer for musicians and a creative force that produces unprecedented sounds and emotions.

    For other artists, AI serves as a gateway to revive pre-recorded music. A new version of the 2012 cult classic “Jasmine” by Jai Paul appeared online last summer. This AI-generated track evolves continuously, deviating from the original, and offers an infinite, infectious jam session experience.

    The London-based company Bronze created this AI-generated track, aiming to liberate music from the static nature of recordings. They wanted to present music as a constantly evolving form, just as it exists in their hands.

    Bronze’s project caught the attention of Arca, known for her work on albums by Kanye West, Björk, and FKA Twigs. She saw potential in the technology to bridge the gap between live and recorded music. Collaborating with Bronze, she worked on an installation by the French artist Philippe Parreno at New York’s Museum of Modern Art.

    Arca found that experiencing the music she had ostensibly composed was both unusual and captivating. She mentioned the freedom in creating an ecosystem where things happen organically, rather than making every microdecision. She also revealed plans for new music projects using Bronze’s technology.

    It discusses the current state and future of AI in music.

    Many express concerns about the potential displacement of musicians by AI technology, which is being used by creators like Arca to foster innovation. However, Ash Koosha points out that similar fears have arisen with every major technological advancement of the past century. This fear is likened to that of guitarists in the 1970s, who rejected synthesizers. Despite some individuals being replaced, this resistance led to the emergence of a new generation of home producers and the rise of hip-hop and house music.

    Francois Pachet, director of Spotify’s Creator Technology Research Lab, asserts that we are still at the initial stages of experimenting with AI-generated music. He notes that the quantity of music produced by AI is minimal compared to the amount of research being conducted in this field.

    Legal battles are expected to arise once more AI-created music is released to the public. The existing copyright laws do not account for AI-generated music, leaving ambiguity regarding ownership rights. Questions about whether the rights belong to the programmer, the original musician whose work was used to train the AI, or even the AI itself remain unanswered. This poses concerns that musicians could potentially have no legal recourse if a company used AI to replicate their work without permission.

    Despite these pending issues, musicians worldwide are diligently working to make their tools accessible to aspiring music-makers. The goal is to inspire young producers to create innovative music that transcends current imagination.

    AI is revolutionizing the music industry by transforming the creation and consumption of music. Many artists have shifted from traditional production methods to utilizing AI in various stages of music production. From composing and mastering to identifying songs and curating personalized playlists, AI is reshaping the music landscape.

    Before we delve deeper into this topic, let’s clarify what we mean by artificial intelligence (AI). Some people are startled by the term “artificial intelligence” as they believe that machines cannot possess intelligence. Philosophically, a machine’s intelligence is limited to the information it receives from humans and the evaluations made by humans. There’s an ongoing debate about whether AI can have its own consciousness. Nevertheless, if intelligence is defined as the ability to solve problems through thought, then AI certainly possesses intelligence.

    AI has diverse applications, including composing new music, creating unique mashups, and even developing robotic musicians. These applications are seemingly limitless, but they are constrained by programming and the information provided by humans. AI can also construct lyrics with specific emotions, explore new musical genres, and push the boundaries of music. AI-supported songwriting can help overcome writer’s blocks, offering unusual suggestions that may unlock creativity. Music based on self-learning algorithms leads us into uncharted digital territory, where the future of music remains a deeply hidden secret waiting to be unlocked.

    AI’s impact on the music industry is not a novel subject but a longstanding theme. For instance, AI-generated mindfulness ambient music, royalty-free music for content creators, and automated mixing and mastering have become substantial industries over the past five years. Additionally, streaming services leverage AI to provide personalized music recommendations based on the analysis of specific musical qualities. AI and machine learning have significantly transformed the music industry, making it easier than ever before to create and enjoy delightful music.

    Concerns are reasonable, but fears are often baseless.

    Certainly, there are potential dangers. One of the primary worries is that AI-generated music could make human musicians and songwriters obsolete, displacing them and leading to unemployment. However, these concerns should be taken with a grain of salt. Ultimately, there is one thing AI cannot replicate: the creativity of a musician. The fear that AI music could result in an oversaturation among listeners due to repetitive sounds or styles also seems unfounded. After all, individuals still make their own decisions about their musical preferences. If a genre is at risk of becoming monotonous, consumers naturally turn away, rather than rejecting music altogether. In this context, AI music might at most lead to an oversaturation of itself.

    As with any new development since the invention of sliced bread, it is crucial to use artificial intelligence ethically, morally, and within the bounds of the law. A copyright violation by AI remains a copyright violation; a song created by artificial intelligence remains an artificial creation. These scenarios do not originate from AI itself. The existing legal framework remains unchanged.

    AI: Attempting to decode Mozart’s genetic makeup

    In recent times, various noteworthy projects have been carried out using artificial intelligence. For instance, in 2021, the music of the composer was visualized in several projects for the 100th Mozart Festival. These projects aimed to uncover the musical essence of the genius. A research team from the University of Würzburg created an AI named “Mozart Jukebox” as well as an augmented reality (AR) app. It was demonstrated that there is not just one AI, but that it evolves based on user interactions. Thus, humans are far from being excluded from the process.

    Artificial intelligence brings musicians back to life

    Also in 2021, “The Lost Tapes of the 27 Club” were released, featuring vocals as the only “real” element of the recordings. However, the vocals did not originate from the original artists but from musicians in cover bands who specialized in emulating their idols. Using the Google AI Magenta, songs by Kurt Cobain with Nirvana, Jim Morrison with the Doors, Amy Winehouse, and Jimi Hendrix were (re)composed. Subsequently, the music was created using digital instruments controlled by computers. This was not the first AI music project, as similar projects had previously produced music in the style of the Beatles, Bach, or Beethoven.

    AI: A unique form of human-machine collaboration

    The fact that the compositions of contemporary artists are not solely the result of the “human factor” is often imperceptible in many productions, as long as AI is utilized tastefully. In contrast, some deliberately emphasize the role of digital technology. For example, in 2018, Taryn Southern released an album titled “I am AI,” which was composed and produced using four music programs: AIVA, Google Magenta, Watson Beat, and Amper Music.

    Songs featuring data-driven voices and sounds

    Holly Herndon, along with her partner Mat Dryhurst, developed “baby AI Spawn,” primarily fueled by data-driven voices and sounds. Prior to this, she had already released AI-generated songs and eventually the full album “Proto.” Some even refer to Holly as the Godmother of AI music. Undoubtedly, there are numerous musicians who could claim this title for themselves. How about Kraftwerk, for example?

    Stylistic imitation by AI

    It is noteworthy that researchers have recurrently strived to analyze and replicate the distinctive stylistic nuances of musicians. For instance, scientists at the SONY CSL Research Lab wrote the first complete songs using AI, created on FlowMachines, a system that learns musical styles from an extensive database. The song “Daddy’s Car” is not by the Beatles, but it is composed in their style – as interpreted by the scientists.

    We can see that AI music presents forward-thinking and equally creative opportunities for the future of music. The quintessentially human characteristic – emotional creativity – is unlikely to be hindered. Ultimately, it remains the driving force of humanity.

    Last November, at the Stockholm University of the Arts, a human and an AI collaboratively created music. The performance commenced with musician David Dolan playing a grand piano into a microphone. As he played, a computer system, designed and supervised by composer and Kingston University researcher Oded Ben-Tal, “listened” to the piece, extracting data on pitch, rhythm, and timbre. Subsequently, it added its own accompaniment, improvising just like a human would. Some sounds were transformations of Dolan’s piano, while others were new sounds synthesized in real-time. The performance was chilling, ambient, and textured.

    This situation, where a machine and a person work together peacefully, seems incompatible with the ongoing debate about artists versus machines. You may have heard that AI is taking over journalism, producing error-filled SEO copy. Or that AI is taking from illustrators, leading to lawsuits against Stability AI, DeviantArt, and Midjourney for copyright infringement. Or that computers are attempting to rap: Capitol Records dropped the “robot rapper” FN Meka following criticism that the character was “an amalgamation of gross stereotypes.” Most recently, Noam Chomsky claimed that ChatGPT demonstrates the “banality of evil.”

    These concerns fit neatly with worries about automation, that machines will replace people—or, more accurately, that those in control of these machines will use them to replace everyone else. However, some artists, especially musicians, are quietly interested in how these models might complement human creativity, and not just in a “hey, this AI plays Nirvana” way. They are exploring how AI and humans might collaborate rather than compete.

    “Creativity is not a singular thing,” says Ben-Tal, speaking over Zoom. “It encompasses many different aspects, including inspiration, innovation, craft, technique, and hard work. And there is no reason why computers cannot be involved in that process in a helpful way.”

    The idea that computers might compose music has been around as long as the computer itself. Mathematician and writer Ada Lovelace once suggested that Charles Babbage’s steam-powered Analytical Engine, considered the first computer, could be used for purposes other than numbers. In her view, if the “science of harmony and of musical composition” could be adapted for use with Babbage’s machine, “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

    The earliest book on the topic, “Experimental Music: Composition with an Electronic Computer,” written by American composer and professor Lejaren Hiller Jr. and mathematician Leonard Isaacson, was published in 1959. In popular music, artists such as Ash Koosha, Arca, and most notably Holly Herndon have utilized AI to enhance their work. When Herndon talked to WIRED last year about her free-to-use, “AI-powered vocal clone,” Holly+, she succinctly explained the tension between technology and music. “There’s a narrative surrounding a lot of this stuff, that it’s a scary dystopia,” she said. “I’m trying to present another perspective: This is an opportunity.”

    Musicians have also responded to the general unease created by ChatGPT and Bing’s AI chatbot. Bogdan Raczynski, after reading transcripts of the chatbots’ viral conversations with humans, expressed, via email, that he sensed “fear, confusion, regret, caution, backpedaling, and so on” in the model’s responses. It’s not that he believes the chatbot has feelings, but rather that “the emotions it evokes in humans are very real,” he explains. “And for me, those emotions have been concern and sympathy.” In reaction, he has released a “series of comforting live performances for AI” (emphasis mine).

    Ben-Tal says his work offers an alternative to “the human-versus-machine narrative.” He acknowledges that generative AI can be unsettling because, to some extent, it demonstrates a type of creativity usually attributed to humans, but he adds that it is also simply another technology, another instrument, in a tradition that goes back to the bone flute. For him, generative AI is akin to turntables: When artists discovered they could use them to scratch records and sample their sounds, they created entirely new genres.

    In this regard, copyright may require a significant reconsideration: Google has refrained from releasing its MusicLM model, which converts text into music, due to “the risks associated with music generation, in particular, the potential misappropriation of creative content.” In a 2019 paper, Ben-Tal and other researchers urged readers to envision a musician holodeck, an endpoint for music AI, which has archived all recorded music and can generate or retrieve any conceivable sound upon request.

    Where do songwriters fit into this future? And before that, can songwriters protect themselves against plagiarism? Should audiences be informed, as WIRED does in its articles, when AI is used?

    Yet these models still offer appealing creative capabilities. In the short term, Ben-Tal explains, musicians can use an AI, as he did, to improvise with a pianist beyond their skill level. Or they can draw inspiration from an AI’s compositions, perhaps in a genre with which they are not familiar, such as Irish folk music.

    And in the long run, AI might realize a more audacious (though controversial) fantasy: It could effortlessly bring an artist’s vision to life. “Composers, you know, we come up with ideas of what music we would like to create, but then translating these into sounds or scores, realizing those ideas, is quite a laborious task,” he says. “If there was a wire that we could plug in and get this out, that could be very fantastic and wonderful.”

    There are already algorithms disrupting the music industry. Author Cory Doctorow has discussed Spotify’s impact, highlighting how playlists encourage artists to prioritize music that fits into specific categories, and how this influences what audiences listen to. With the introduction of AI into this landscape, musicians may face even more challenges. For example, what if Spotify uses AI to create its own artists and promotes them over human musicians?

    Raczynski is hopeful that he can adapt to these changes and not be overshadowed by them. He acknowledges that he’ll need to engage with AI in some way in order to survive in this industry. However, he aims to develop a mutually beneficial relationship with AI, rather than solely focusing on his own interests.

    AI music capabilities have been quietly present in the music industry for many years. It was not until ChatGPT was released in 2022 that the broader conversation about artificial intelligence began to spread in mainstream media. Currently, some musicians and music industry professionals are excited about the potential of AI music, while others are cautious, especially due to the early stage of regulation in this area. According to a study by the music distribution company Ditto, almost 60 percent of surveyed artists use AI in their music projects, while 28 percent wouldn’t use AI for music purposes.

    Christopher Wares, Assistant Chair of Music Business/Management at Berklee College of Music, is a supporter of AI music technology. He wrote his master’s thesis in 2016 on why Warner Music should invest in artificial intelligence (spoiler alert: they did, along with other major labels). Wares has incorporated AI into his Berklee courses and has observed varied responses among students.

    “Some of my students are enthusiastic about AI and are already utilizing it in different ways, while others are not interested,” says Wares. “There are intense debates, and I encourage my students to embrace this technology and explore new ways to enhance their creative processes.”

    Another proponent of AI music technology is Ben Camp, Associate Professor of Songwriting at Berklee College of Music and author of Songs Unmasked: Techniques and Tips for Songwriting Success. Camp became interested in AI music technology in 2016 after hearing “Daddy’s Car,” one of the first AI-generated pop songs based on music by the Beatles.

    Camp also allows their students to explore AI in the classroom, with the condition that they verify any information obtained from ChatGPT or similar large language models.

    “I believe everyone should make their own decision about it,” says Camp. “I mean, I have friends who still use flip phones because they are uncomfortable with having all their information on their phone. I also have friends who still have landlines. So I’m not saying, ‘Hey everyone, you need to do this.’ But it’s definitely here to stay. It’s not going away. It’s only going to improve.”

    Whether you are actively using AI in your music or have reservations, it is increasingly evident that AI will play a significant role in the music industry. We will discuss the current state of AI in the music industry, including the available tools, with insights from Wares and Camp.

    What is AI Music?

    Before explaining what AI music involves, let’s first define artificial intelligence. Here is Wares’ definition:

    “Artificial intelligence is the computational brainpower that enables machines to imitate human thinking or behavior, such as problem-solving, learning, or recognizing patterns.”

    In the context of music, AI technology has advanced to the point where it can create, compose, and improve musical content previously performed by humans. AI music can take various forms and offer different types of assistance, from composing an entire song to enhancing specific aspects of a composition, to mixing and mastering a production, to voice cloning, and more. We will also outline specific AI music tools capable of performing these tasks, which have raised concerns about copyright issues.

    Copyright and AI Music

    One of the most debated issues concerning AI in the music industry revolves around who profits from a work created using AI, particularly if the algorithm is trained using existing copyrighted material. In March 2023, the U.S. Copyright Office initiated an investigation into copyright issues related to artificial intelligence. Camp is optimistic that regulators will intervene to address this, but is worried that finding a solution is not straightforward due to the outdated nature of the US copyright system within which artists work.

    “The laws and precedents that have shaped our modern copyright system do not align with the current state of music,” says Camp. “I believe creators should receive attribution, credit, and compensation. However, the system through which we are addressing this is severely outdated.”

    The legality of AI-generated music remains uncertain, prompting discussion about how to ensure artists are appropriately recognized, compensated, and willing participants in the use of their work or image for AI, while still allowing for creative use of AI technology in music. At present, it’s unclear where the line between inspiration and infringement lies, as some record labels are beginning to push back.

    In May 2023, Universal Music Group called on streaming services to block AI-generated music, alleging unauthorized use of their artists’ music to train AI algorithms and threatening legal action. In response, Spotify removed 7% of AI-generated music from its platform, amounting to tens of thousands of songs.

    By July 2023, UMG had appealed to Congress for nationwide policies safeguarding creators from AI copyright violations. The record label is among 40 participants supporting the Human Artistry Campaign, an organization advocating for responsible AI use.

    Regarding voice cloning, while there is limited legal precedent, for public figures, it may implicate their right to control the use of their likeness, name, and voice. Notably, a TikToker known as Ghostwriter used AI to create a simulated duet between Drake and The Weeknd titled “Heart on My Sleeve,” which was subsequently taken down, though unauthorized versions persist online.

    The replication of artists’ names and likenesses using AI raises concerns within the music and entertainment industries. Protecting writers from having their work used to train AI systems and actors from unauthorized replication of their image and voice without consent is a key demand of the current SAG-AFTRA strike.

    AI’s ethical considerations extend beyond copyright, with issues such as biased data set training posing immediate challenges. For instance, AI rapper FN Meka, signed by Capitol Music Group in 2022, was dropped for perpetuating racial stereotypes.

    One ethical concern is the training process known as “reinforcement learning,” involving human feedback on potentially disturbing content. A recent episode of The Journal podcast from the Wall Street Journal highlighted the mental health toll on data workers tasked with evaluating such content for AI training.

    Lastly, we can explore various AI music tools. At the Berklee Onsite 2023 music conference, Wares introduced several AI music tools available for exploration and highlighted others that are currently in development.

    BandLab SongStarter

    The SongStarter app by BandLab is a song generator powered by AI that allows you to select a music genre, input lyrics (including emojis), and it will produce ideas that are free from royalties. You can then transfer these ideas to their studio feature to personalize them. This is an excellent way to kickstart a song if you need some initial inspiration.

    Midjourney

    Midjourney, a popular AI image generator, can be utilized to create artwork for albums, songs, posters, Spotify loops, merchandise, and more. What distinguishes it from other AI image generators is its surreal, dream-like style, which is well-suited for musical projects. The software is user-friendly, but it does have a learning curve. As with many new tech programs, it’s advisable to watch some tutorials before getting started.

    Mix Monolith

    The Mix Monolith plug-in is an automated mixing system from AYAIC designed to balance your mix. According to the developer in an article from Mix Online, “its purpose is not to automatically create a finished mix, but to establish the fundamental gain relationships between tracks and ensure proper gain staging.”

    LANDR AI Mastering

    LANDR’s AI mastering tool enables you to drag and drop your track into the program, which will then analyze it and offer straightforward choices for style and loudness. After making these selections, the program will master your track and provide additional options for file type and distribution method. LANDR boasts having mastered over 20 million tracks through their program.

    AIVA

    AIVA is an AI program for composition trained with over 30,000 iconic scores from history. You can choose from various preset music styles, ranging from modern cinematic to twentieth-century cinematic, and tango to jazz. You also have the option to input the key signature, time signature, pacing, instrumentation, duration, and more. If you’re unsure, AIVA can do it for you. Finally, you can generate a track, adjust the instrumentation, and download various file types. As a subscriber, you have full copyright license to anything you create.

    ChatGPT for Musicians

    ChatGPT from OpenAI is one of the most widely used AI tools and has numerous applications for musicians. The company is currently under investigation by the Federal Trade Commission, so it’s important to take precautions about the information you share with ChatGPT as well as verify any facts you retrieve from it.

    Having said that, the program has the potential to reduce the time spent on tasks that divert you from actually creating music. Wares and Camp have been experimenting with ChatGPT since its release and have some specific prompts that could be useful for musicians and music professionals.

    Social Media Strategy

    Managing social media can be time-consuming for a DIY musician, and ChatGPT can help ease the burden. Wares suggests that you can start by prompting ChatGPT with details about the type of artist you are, the music genre you play, and your passions and interests. Then, you can request 30 pieces of content for the next 30 days for platforms like TikTok, Instagram, Facebook, or any other social media platform you use. Not only can you ask for social media content ideas, but you can also ask ChatGPT to generate optimized captions and hashtags. Find some ChatGPT social media tips here.

    Tech Riders for Touring

    When embarking on a tour, musicians often enlist someone to create a technical rider, which outlines all the specific requirements for their show. This could include equipment, stage setup, sound engineering, lighting, hospitality considerations, performance contracts, tour routes, venue options, ticket prices, and more. Wares says that ChatGPT can be used to draft this technical rider and recently collaborated with a band to plan their tour using this technology.

    “We began by creating their technical rider, which included backline requirements, a detailed input list, and specific microphone recommendations, all based on a few simple prompts,” says Wares. “Then we requested tour routing suggestions in the Northeast, ticket pricing advice, as well as ideas for merchandise tailored to the unique interests and demographics of the band’s fanbase. What would have taken days to complete was done in less than an hour.”

    Lyric Writing

    If you need assistance in kickstarting song lyrics, seek inspiration, or require word suggestions, ChatGPT can be a valuable tool for songwriting. Camp provides an example of collaborating with Berklee alum, Julia Perry (who interviewed them for a Berklee Now article about AI and music) to generate song ideas using ChatGPT.

    “We were discussing the magic of the universe and how she wanted to convey this profound, unknowable truth about the universe,” says Camp. “I provided ChatGPT with a detailed explanation of everything she said in two or three paragraphs and asked it to give me 20 opening lines for this song.”

    They ended up using one of the 20 options as a starting point for a new song.

    Can ChatGPT assist with a range of content and copywriting tasks, including drafting a press release, creating bios of various lengths, developing an album release strategy, composing blog posts, crafting website copy, and writing email pitches?

    In an ideal scenario, having a lawyer to create and review agreements and contracts would be the best option. However, this may not always be practical or affordable. In such cases, ChatGPT could help in drafting agreements, providing an alternative to having no agreement at all. This could be useful for creating management agreements, band agreements, split sheets, performance agreements, and more. Nonetheless, engaging an entertainment lawyer is always the preferred choice whenever feasible.

    When it comes to AI and other emerging technologies, one recurring theme is that they are expected to play a significant role in the music industry (and most industries) in the future. Ignoring these technologies is unlikely to benefit the industry’s future leaders.

    Wares believes that AI can enhance productivity and support the creative process of students, allowing them to focus on their primary interests, such as creating and playing music or exploring new business ideas. However, as an educator, it’s important to ensure that students don’t overly rely on these tools, and efforts are constantly made to use AI to help develop their critical thinking skills.

    Camp agrees and advises individuals to do what feels comfortable for them as AI continues to advance. While encouraging the adoption of technology to stay current and relevant, Camp acknowledges that not everyone needs to use AI, drawing a comparison to people who still use landlines or prefer buying vinyl records. AI is making a significant impact, but it’s a choice whether to embrace it.

    According to a survey from Tracklib, a platform that provides licensed samples and stems for music production, a quarter of music producers are currently utilizing AI in their craft. However, the survey also revealed a significant level of resistance to the technology, primarily due to concerns about losing creative control.

    Of the producers using AI, a majority (73.9%) employ it mainly for stem separation. Fewer use it for mastering and EQ plugins (45.5%), generating elements for songs (21.2%), or creating entire songs (3%). Among those not using AI, the majority (82.2%) cite artistic and creative reasons for their resistance, with smaller percentages mentioning concerns about quality (34.5%), cost (14.3%), and copyright (10.2%).

    The survey also found a significant disparity in perceptions of “assistive AI,” which aids in the music creation process, and “generative AI,” which directly creates elements of songs or entire songs. While most respondents hold a negative view of generative AI, there is a more positive perception of assistive AI, although it falls short of majority support.

    Notably, the youngest respondents were most strongly opposed to generative AI, while the oldest respondents exhibited the strongest opposition to assistive AI.

    Willingness to pay for AI technology was generally low, as nearly three-quarters of AI tool users utilized only free tools. Among “beginner” producers, some expressed a willingness to pay, but very few were prepared to pay $25 or more per month.

    Overall, 70% of respondents anticipate that AI will have a “large” or “massive” impact on music production in the future, while 29% expect it to have “some” impact. Only 1% foresee no impact from AI.

    Tracklib conducted a survey with 1,107 music producers, with only 10% being classified as full-time professionals. Among the respondents, 58% were described as “ambitious” and aspiring to pursue music production as a career. The remaining producers were categorized as “beginners” or “hobbyists.”

    The survey respondents were geographically distributed as follows: 54% from the European Union or United Kingdom, 34% from North America, and 12% from the rest of the world.

    Despite the majority of producers showing resistance to AI technology, Tracklib foresees continued adoption of the technology, placing music AI in the “early majority” phase of adoption based on a model of technology adoption that divides the uptake of new technologies into five phases.

    In a survey by DIY distributor TuneCore and its parent company, Believe, it was found that 27% of indie music artists had utilized AI in some capacity. Among the artists who used AI tools, 57% had used it for creating artwork, 37% for promotional assets, and 20% for engaging with fans.

    Approximately half of the survey respondents expressed willingness to license their music for machine learning, while a third expressed consent for their music, voice, or artwork to be used in generative AI.

    Established in 2018, Stockholm-based Tracklib offers a library of over 100,000 songs from 400 labels and publishers. Earlier this year, it introduced Sounds, expanding its platform to include a library of royalty-free loops and one-shots for paying subscribers.

    In 2021, Tracklib disclosed that it had secured USD $21.2 million in funding from investors including Sony Innovation Fund, WndrCo, former NBA player and producer Baron Davis, and Spinnin Records co-founder Eelko van Kooten.

    Earlier this year, Bad Bunny denied rumors of a new song with Justin Bieber, but a song featuring what seemed like their voices circulated on TikTok, generated millions of likes. The song was created with AI by an artist named FlowGPT, imitating the voices of Bad Bunny, Bieber, and Daddy Yankee in a reggaeton anthem. Bad Bunny disapproved of the song, calling it a “poor song” in Spanish, and discouraged his fans from listening. However, many fans of all three megastars enjoyed it nonetheless.

    The song and the conflicting reactions to it exemplify the complex impact of AI in the music industry. Advances in machine learning have enabled individuals to replicate the sound of their musical idols from their homes. Some argue that these advances will democratize music creation, while others express concern about the co-opting and commodification of artists’ voices and styles for others’ benefit. The tension between safeguarding artists, driving innovation, and defining the collaborative roles of humans and machines in music creation will be explored for years to come.

    Lex Dromgoole, a musician and AI technologist, raises thought-provoking questions: “If there’s a surge in music created at an immense scale and speed, how does that challenge our understanding of human creativity? Where does imagination fit into this? How do we infuse our creations with character?”

    AI is currently being utilized by music producers to handle routine tasks. Vocal pitch correction and expedited mixing and mastering of recordings are a few areas where AI can assist. Recently, The Beatles utilized AI to isolate John Lennon’s voice from a 1978 demo, removing other instruments and background noises to create a new, well-produced song. Additionally, AI plays a significant role in personalized music experiences on streaming platforms like Spotify and Apple Music, using algorithms to recommend songs based on user listening habits.

    The creation of music using AI has sparked both enthusiasm and concern. Tools like BandLab offer unique musical loops based on prompts to help musicians overcome writer’s block. The AI app Endel generates customized soundtracks for focusing, relaxing, or sleeping based on user preferences and biometric data. Furthermore, other AI tools produce complete recordings based on text prompts.

    A new YouTube tool powered by Google DeepMind’s large language model Lyria enables users to input a phrase like “A ballad about how opposites attract, upbeat acoustic,” resulting in an instant song snippet resembling Charlie Puth’s style.

    These advancements raise various concerns. For instance, the instantaneous creation of a “Charlie Puth song” using AI prompts questions about the impact on musicians like Charlie Puth and aspiring artists who fear being replaced. Additionally, there are ethical considerations regarding AI companies training their large language models on songs without creators’ consent. AI is even capable of resurrecting the voices of deceased individuals, as demonstrated in a new Edith Piaf biopic featuring an AI-created version of her voice. This raises questions about the implications for memory and legacy if any historical voice can be revived.

    Even proponents of the technology have expressed apprehension. Edward Newton-Rex, the former vice president of audio at AI company Stability AI, resigned out of concern that he was contributing to job displacement for musicians. He highlighted the issue of AI models being trained on creators’ works without permission, resulting in the creation of new content that competes with the original works.

    These issues are likely to be addressed in the legal system in the years to come. Major labels, such as Universal Music Group, have filed lawsuits against startups like Anthropic for AI models producing copyrighted lyrics verbatim. In addition, Sony Music has issued thousands of takedown requests for unauthorized vocal deepfakes. While artists seek to opt out of AI usage entirely, AI companies argue that their use of copyrighted songs falls under “fair use” and is akin to homages, parodies, or cover songs.

    Artist Holly Herndon is proactively navigating these transformative changes. In 2021, she created a vocal deepfake of her own voice, named Holly+, allowing others to transform their voices into hers. Her intention is not to compel other artists to surrender their voices, but to encourage them to actively participate in these discussions and claim autonomy in an industry increasingly influenced by tech giants.

    Musician Dromgoole, co-founder of the AI company Bronze, envisions AI music evolving beyond mimicking singers’ voices and instantly generating music. Bronze has collaborated with artists like Disclosure and Jai Paul to create ever-evolving AI versions of their music, ensuring that no playback sounds the same. Their goal is not to use AI to create a perfect, marketable static song, but to challenge conventional notions of music. Dromgoole emphasizes that the tech industry’s belief that everyone desires a shortcut or a creative solution does not align with the creative process, as creativity and imagination cannot be expedited.

    AI-powered tools for generating text, images, and music have been available for some time. Recently, there has been a surge in the availability of apps that generate AI-made music for consumers.

    Like other AI-based tools, products such as Suno and Udio (and potential future ones) function by transforming a user’s input into an output. For instance, inputting “create a rock punk song about my dog eating my homework” on Suno will result in an audio file (see below) that includes instruments and vocals. The output can be saved as an MP3 file.

    The underlying AI relies on undisclosed datasets to produce the music. Users have the choice to request AI-generated lyrics or write their own, although some apps recommend that the AI works best when generating both.

    The question of who owns the resulting music is important for users of these apps. However, the answer is not simple.

    What are the terms of the apps?

    Suno offers a free version and a paid service. For users of the free version, Suno retains ownership of the created music. Nevertheless, users are allowed to use the sound recording for lawful, non-commercial purposes, provided they credit Suno.

    Paying Suno subscribers are allowed to possess the sound recording as long as they adhere to the terms of service.

    Udio does not assert ownership of the content generated by its users and indicates that users are free to use it for any purpose, “as long as the content does not include copyrighted material that [they] do not own or have explicit permission to use”.

    How does Australian copyright law come into play?

    Although Suno is based in the United States, its terms of service state that users are responsible for adhering to the laws of their own jurisdiction.

    For Australian users, despite Suno granting ownership to paid subscribers, the application of Australian copyright law isn’t straightforward. Can an AI-generated sound recording be subject to “ownership” under the law? For this to occur, copyright must be established, and a human author must be identified. Would a user be considered an “author,” or would the sound recording be considered authorless for copyright purposes?

    Similar to how this would apply to ChatGPT content, Australian case law stipulates that each work must originate from a human author’s “creative spark” and “independent intellectual effort”.

    This is where the issue becomes contentious. A court would likely examine how the sound recording was produced in detail. If the user’s input demonstrated sufficient “creative spark” and “independent intellectual effort,” then authorship might be established.

    However, if the input was deemed too distant from the AI’s creation of the sound recording, authorship might not be established. If authorless, there is no copyright, and the sound recording cannot be owned by a user in Australia.

    Does the training data violate copyright?

    The answer is currently uncertain. Across the globe, there are ongoing legal cases evaluating whether other AI technology (like ChatGPT) has infringed on copyright through the datasets used for training.

    The same question applies to AI music generation apps. This is a challenging question to answer due to the secrecy surrounding the datasets used to train these apps. More transparency is necessary, and in the future, licensing structures might be established.

    Even if there was a copyright infringement, an exception to copyright known as fair dealing might be relevant in Australia. This allows the reproduction of copyrighted material for specific uses without permission or payment to the owner. One such use is for research or study.

    In the US, there is a similar exception called fair use.

    What about imitating a known artist?

    A concern in the music industry is the use of AI to create new songs that imitate famous singers. For example, other AI technology (not Suno or Udio) can now make Johnny Cash sing Taylor Swift’s “Blank Space.”

    Hollywood writers went on strike last year partly to demand guidelines on how AI can be used in their profession. There is now a similar worry about a threat to jobs in the music industry due to the unauthorized use of vocal profiles through AI technology.

    In the US, there exists a right of publicity, which applies to any individual but is mainly utilized by celebrities. It gives them the right to sue for the commercial use of their identity or performance.

    If someone commercially used an AI-generated voice profile of a US singer without permission in a song, the singer could sue for misappropriation of their voice and likeness.

    In Australia, however, there is no such right of publicity. This potentially leaves Australians open to exploitation through new forms of AI, considering the abundance of voices and other materials available on the internet.

    AI voice scams are also on the rise, where scammers use AI to impersonate the voice of a loved one in an attempt to extort money.

    The swift advancement of this technology prompts the discussion of whether Australia should consider implementing a comparable right of publicity. If such a right were established, it could serve to protect the identity and performance rights of all Australians, as well as provide defense against possible AI voice-related offenses.

  • The energy consumption of AI tools is substantial and on the rise

    The use of artificial intelligence is growing, leading to increased energy demands in data centers. Experts warn that the electricity consumption of entire countries could be affected.

    According to Ralf Herbrich, the director of the Hasson Plattner Institute (HPI) in Potsdam and head of the artificial intelligence and sustainability department, the energy consumption of AI tools is substantial and on the rise. The process of managing a single AI model requires a significant amount of energy due to complex prediction calculations.

    Alex de Vries, a data scientist from Amsterdam, has compared the energy consumption of AI-powered search engines to that of entire countries. This issue is becoming increasingly important for climate protection. Efforts are being made by scientists and internet companies to reduce the ecological impact of AI.

    Ralf Herbrich mentioned that data centers currently account for four to five percent of global energy consumption, and this figure rises to eight percent when including the use of digital technologies like laptops and smartphones. It is estimated that this consumption could increase to 30 percent in the coming years.

    To train an AI model, hundreds of graphics cards’ processors, each consuming around 1,000 watts, run for several weeks. Herbrich compared this to an oven, stating that 1,000 watts is as much as an oven consumes.

    The topic of artificial intelligence is currently a dominant factor in public discussions about technology. It has gained considerable attention, especially due to the text robot ChatGPT from the Californian startup OpenAI. AI applications are becoming more widespread, including safety technology in cars and efficient heating systems, as well as various applications in healthcare and other industries.

    Efforts are being made to reduce the energy consumption of AI technology while maintaining the accuracy of predictions. It will take several years to develop solutions, according to Herbrich from the Hasso Plattner Institute. Technology companies are also actively researching energy-efficient AI.

    Researcher de Vries estimates that if every Google search utilized AI, it would require around 29.2 terawatt hours of electricity per year, equivalent to Ireland’s annual electricity consumption. However, this is viewed as an extreme scenario that is unlikely to occur in the near term.

    Google states that the energy required to operate their AI technology is increasing at a slower pace than many had predicted. They have employed proven methods to significantly reduce the energy consumption for training AI models. Additionally, Google uses AI for climate protection, such as for “fuel-efficient route planning” on Google Maps and predicting river flooding.

    In various industries, the rising demand for energy, mainly from the construction and operation of data centers used for training and running AI models, is contributing to global greenhouse gas (GHG) emissions. Microsoft, which has invested in OpenAI, the maker of ChatGPT, and has placed generative AI tools at the core of its product offering, recently declared that its CO2 emissions had increased by almost 30% since 2020 due to the expansion of data centers. Google’s GHG emissions in 2023 were nearly 50% higher than in 2019, largely because of the energy demand related to data centers.

    While AI tools pledge to aid in the energy transition, they also necessitate substantial computing power. The energy consumption of AI currently represents only a small part of the technology sector’s power usage, estimated to be approximately 2-3% of total global emissions. It is probable that this will change as more companies, governments, and organizations utilize AI to drive efficiency and productivity. As shown by this chart, data centers are already significant drivers of electricity demand growth in many regions.

    AI requires significant computing power, and generative AI systems may already consume about 33 times more energy to complete a task than task-specific software. With the increasing adoption and advancement of these systems, the training and operation of the models will lead to a substantial escalation in the required number of global data centers and associated energy usage. Consequently, this will exert additional pressure on already overburdened electrical grids.

    Notably, training generative AI is exceptionally energy-intensive and consumes a much greater amount of electricity compared to traditional data center activities. As an AI researcher articulated, “When you deploy AI models, you have to have them always on. ChatGPT is never off.” The growing sophistication of a large language model, like the one on which ChatGPT is constructed, serves as evidence of this escalating energy demand.

    Training a model such as Generative Pre-trained Transformer 3 (GPT-3) is believed to consume just under 1,300 megawatt hours (MWh) of electricity, roughly equivalent to the annual power consumption of 130 homes in the US. Meanwhile, training the more advanced GPT-4 is estimated to have utilized 50 times more electricity.

    Overall, the computational power essential for supporting AI’s growth is doubling approximately every 100 days. Society therefore contends with challenging questions, pondering whether the economic and societal benefits of AI outweigh its environmental cost. Specifically, the inquiry arises as to whether the benefits of AI for the energy transition outweigh its heightened energy consumption.

    The quest for the optimal balance between challenges and opportunities is crucial for obtaining the answers we seek. Reports forecast that AI has the potential to mitigate 5-10% of global GHG emissions by 2030. Thus, what needs to happen to strike the right balance?

    Regulators, including the European Parliament, are commencing efforts to establish requirements for systems to be designed with the ability to record their energy consumption. Furthermore, technological advancements could mitigate AI’s energy demand, with more advanced hardware and processing power anticipated to enhance the efficiency of AI workloads.

    Researchers are crafting specialized hardware, such as new accelerators, as well as exploring new technologies like 3D chips that offer significantly improved performance, and novel chip cooling techniques. Nvidia, a computer chip manufacturer, asserts that its new ‘superchip’ can achieve a 30 times improvement in performance when operating generative AI services while consuming 25 times less energy.

    Concurrently, data centers are becoming more efficient, with ongoing exploration into new cooling technologies and sites capable of executing more computations during periods of cheaper, more available, and sustainable power to further advance this efficiency. Alongside this, reducing overall data usage, including addressing the phenomenon of dark data — data generated and stored but then never used again — is crucial. Additionally, being more selective about how and where AI is used, for instance, by employing smaller language models, which are less resource-intensive, for specific tasks will also contribute. Striking a better balance between performance, costs, and the carbon footprint of AI workloads will be fundamental.

    What about AI’s impact on the electrical grid? AI is not the sole factor applying pressure to the grid. Increasing energy needs due to growing populations, as well as trends toward electrification, are creating heightened demand that could result in a slower decarbonization of the grid.

    Nonetheless, a clean, modern, and decarbonized grid will be imperative in the broader shift to a net-zero emissions economy. Data center operators are exploring alternative power options, such as nuclear technologies for powering sites, or storage technologies like hydrogen. Additionally, companies are investing in emerging technologies, such as carbon removal, to extract CO2 from the air and store it securely.

    AI can help overcome obstacles to integrating the necessary large amounts of renewable energy into existing grids.

    The fluctuation in renewable energy generation often leads to excess production during peak times and shortages during lulls, causing inefficient energy usage and unstable power grids. By analyzing large sets of data, ranging from weather patterns to energy consumption trends, AI can accurately predict energy production. This could facilitate scheduling tasks and shifting loads to ensure that data centers use energy when renewable energy sources are available, thus ensuring stable grid operations, efficiency, and continuous clean power. AI is also aiding in improving the energy efficiency of other industries that produce large amounts of carbon, from analyzing buildings to anticipate energy usage and optimize heating and cooling system performance to enhancing manufacturing efficiency with predictive maintenance. In agriculture, sensors and satellite imagery are being used to forecast crop yields and manage resources.

    Effectively managing the energy consumption and emissions of AI while maximizing its societal benefits involves addressing multiple interconnected challenges and requires input from various stakeholders.

    The World Economic Forum’s Artificial Intelligence Governance Alliance is examining how AI can be utilized in different industries and its impact on innovation, sustainability, and growth.

    As part of this effort, the Forum’s Centre for Energy and Materials and Centre for the Fourth Industrial Revolution are launching a specific workstream to explore the energy consumption of AI systems and how AI can facilitate the transition to clean energy.

    In an era where the rapid advancements in Artificial Intelligence (AI) captivate society, the environmental impact of these advancements is often disregarded. The significant ecological consequences of AI demand attention and action.

    For AI to realize its potential for transformation, offering unprecedented levels of productivity and enhancing societal well-being, it must develop sustainably.

    At the core of this challenge is the significant energy demand of the AI ecosystem, encompassing everything from hardware to training procedures and operational methods.

    Notably, the computational power required to sustain the rise of AI is doubling approximately every 100 days. To achieve a tenfold improvement in AI model efficiency, the demand for computational power could increase by up to 10,000 times. The energy required to perform AI tasks is already increasing at an annual rate of between 26% and 36%. This means that by 2028, AI could be utilizing more power than the entire country of Iceland did in 2021.

    The environmental impact of the AI lifecycle is significant during two key phases: the training phase and the inference phase. During the training phase, models learn and improve by processing large amounts of data. Once trained, they move into the inference phase, where they are used to solve real-world problems. Currently, the environmental impact is divided, with training accounting for about 20% and inference consuming the majority at 80%. As AI models gain traction across various sectors, the need for inference and its environmental impact will increase.

    To align the rapid progress of AI with the imperative of environmental sustainability, a carefully planned strategy is crucial. This entails immediate and near-term actions while also establishing the groundwork for long-term sustainability.

    Immediate Approach: Reducing AI’s energy consumption today

    Research is emerging about the practical steps we can take now to align AI progress with sustainability. For instance, capping power usage during the training and inference phases of AI models provides a promising avenue for reducing AI energy consumption by 12% to 15%, with a marginal tradeoff in task completion time, as GPUs are expected to take around 3% longer.

    Another impactful method is optimized scheduling for energy conservation. Tasking AI workloads to align with periods of lower energy demand — such as running shorter tasks overnight or planning larger projects for cooler months in regions where air conditioning is widely used — can also result in significant energy savings.

    Finally, transitioning towards the use of shared data centers and cloud computing resources instead of individually setting up private infrastructure can concentrate computational tasks in collective infrastructures and reduce the energy consumption associated with AI operations. This can also lead to cost savings on equipment and potentially lower energy expenses, particularly when resources are strategically placed in areas with lower energy costs.

    Near-Term Focus: Utilizing AI for the energy transition

    Beyond immediate measures, the near-term focus should be on leveraging AI’s capabilities to promote sustainability. AI, when used effectively, can be a powerful tool in meeting the ambitious goal of tripling renewable energy capacity and doubling energy efficiency by the end of the decade, as established in last year’s United Nations Climate Change Conference (COP28).

    AI supports climate and energy transition efforts in various ways. It assists in developing new materials for clean energy technologies and optimizing solar and wind farms. AI can also enhance energy storage capabilities, improve carbon capture processes, and refine climate and weather predictions for better energy planning, as well as stimulate innovative breakthroughs in green energy sources like nuclear fusion.

    Strategically using AI to improve our renewable energy landscape offers the promise of not only making AI operations environmentally friendly, but also contributing to the creation of a more sustainable world for future generations.

    In the long run, creating synergy between AI and emerging quantum technologies is a crucial approach to guiding AI toward sustainable development. Unlike traditional computing, where energy usage increases with greater computational demand, quantum computing shows a linear relationship between computational power and energy consumption. Furthermore, quantum technology has the potential to transform AI by making models more compact, improving their learning efficiency, and enhancing their overall functionality, all without the significant energy footprint that is currently a concern in the industry.

    Realizing this potential requires a collective effort involving government support, industry investment, academic research, and public engagement. By combining these elements, it is conceivable to envision and establish a future where AI advances in harmony with the preservation of the planet’s health.

    Standing at the intersection of technological innovation and environmental responsibility, the way forward is clear. It requires a collective effort to embrace and propel the integration of sustainability into the core of AI development. The future of our planet depends on this crucial alignment. Decisive and collaborative action is necessary.

    Global spending on offshore energy infrastructure over the next decade is projected to exceed US$16 billion (£11.3bn), which includes laying an additional 2.5 million kilometers of global submarine cables by 2030.

    The process of laying and securing these cables against ocean currents involves disturbing the seabed and depositing rocks and concrete “mattresses” to serve as a base for the cables. These procedures can have a significant impact on the marine ecosystem, which is home to numerous creatures.

    The installation of offshore wind farms entails many high-impact procedures that are often carried out with little consideration for their effects on the delicately balanced ocean environment, which supports the food and livelihoods of over 3 billion people.

    Human activities, including the construction of renewable offshore energy infrastructure, have impacted over 40% of the ocean’s surface, leading to dead ocean zones devoid of oxygen, harmful algae blooms, and a devastating loss of biodiversity.

    If we continue on this trajectory, the anticipated green-tech revolution risks causing an unprecedented level of harm to the world’s oceans. The new generation of renewable energy producers needs to evaluate the long-term impact of their actions on the ocean environment to determine the true sustainability of their supply chains and practices.

    As the UN commences its decade of Ocean Resilience this year, the role that autonomous technologies can play in supporting the marine environment is increasingly gaining recognition. Implementing sustainable technology necessitates instilling environmentally conscious practices within the renewable energy sector itself. This is where robotics can contribute.

    Approximately 80% of the cost of maintaining offshore wind farms is allocated to sending personnel for inspections and repairs via helicopter, maintaining support vehicles such as boats, and constructing offshore renewable energy platforms to accommodate turbine workers. All of these activities contribute to carbon emissions, and they also pose risks to human safety.

    However, a unified team of humans, robots, and AI working together could maintain this infrastructure with significantly less impact on the environment and better safety for humans. Such teams could involve humans working remotely with multi-robot teams of autonomous aerial and underwater vehicles, as well as with crawling or land-based robots.

    Robotic technology can enable humans to interact with complex and vulnerable environments without causing harm. Robots equipped with non-contact sensing methods, such as radar and sonar, can interact with ocean infrastructure and its surrounding environment without causing any disruption or damage.

    Even more advanced sensing technology, inspired by the communication signals used by dolphins, makes it possible to inspect structures such as subsea infrastructure and submarine cables in the ocean without harming the surrounding environment.

    Using autonomous underwater vehicles (AUVs) that operate independently, we can gain a better understanding of how offshore energy structures, like underwater cables, interact with the environment, through the deployment of low-frequency sonar technology. This technology can also assist in preventing issues such as biofouling, where microorganisms, plants, algae, or small animals accumulate on the surfaces of cables.

    Biofouling can cause a bio-fouled cable to become heavy, potentially distorting its outer protective layers and reducing its useful life span. AUVs have the capability to monitor and clean these cables safely.

    Robotic assistance can also be extended to offshore energy infrastructure above the water. When wind turbine blades reach the end of their useful lives, they are often incinerated or disposed of in landfills. This practice contradicts the principles of the “circular economy,” which emphasizes waste prevention and the reuse of materials for sustainability. Instead, robots can be employed to repair, repurpose, or recycle deteriorating blades, thereby reducing unnecessary waste.

    Advanced radar sensing technology mounted on drones enables us to detect defects in turbines as they start to develop. By utilizing robot assistants to stay updated on turbine maintenance, we can avoid the need for costly field support vessels to transport turbine inspectors offshore, which can amount to around £250,000 a day. This approach helps in saving time, money, and reducing risk.

    In addition to cutting the financial and carbon cost of turbine maintenance, robots can also minimize the inherent risks to humans working in these unpredictable environments, while operating more harmoniously with the environment. By deploying resident robots for the inspection and maintenance of offshore renewable infrastructure, energy companies could initially decrease the number of people working in hazardous offshore roles. Over time, this could lead to autonomous operation, where human operators remain onshore and connect remotely to offshore robotics systems.

    AI plays a significant role in the establishment of sustainable offshore energy systems. For instance, artificially intelligent programs can aid offshore energy companies in planning the safe disassembly and transportation of turbines back to shore. Upon arrival onshore, turbines can be taken to “smart” factories that utilize a combination of robotics and AI to identify which parts can be reused.

    By collaborating in these efforts, we can develop a resilient, sustainable circular economy for the offshore renewable energy sector.

    The latest IPCC report is clear: urgent action is needed to avoid severe long-term climate effects. Given that more than 80% of global energy still comes from fossil fuels, the energy sector must play a central role in addressing this issue.

    Thankfully, the energy system is already undergoing a transformation: renewable energy production is rapidly expanding due to decreasing costs and growing investor interest. However, the scale and cost of decarbonizing the global energy system are still enormous, and time is running out.

    Thus far, most of the efforts to transition the energy sector have focused on physical infrastructure: new low-carbon systems that will replace existing carbon-intensive ones. Comparatively little effort and investment have been directed toward another crucial tool for the transition: next-generation digital technologies, particularly artificial intelligence (AI). These powerful technologies can be adopted on a larger scale and at a faster pace than new physical solutions and can become a crucial enabler for the energy transition.

    Three significant trends are propelling AI’s potential to expedite the energy transition:

    1. Energy-intensive sectors like power, transportation, heavy industry, and buildings are at the outset of transformative decarbonization processes driven by increasing government and consumer demands for rapid CO2 emission reductions. The scale of these transitions is immense: BloombergNEF estimates that achieving net-zero emissions in the energy sector alone will necessitate between $92 trillion and $173 trillion of infrastructure investments by 2050. Even slight gains in flexibility, efficiency, or capacity in clean energy and low-carbon industry can result in trillions of value and savings.

    2. As electricity powers more sectors and applications, the power sector is becoming the cornerstone of global energy supply. Scaling up the deployment of renewable energy to decarbonize the expanding power sector globally will result in a greater portion of power being supplied by intermittent sources (such as solar and wind), creating new demand for forecasting, coordination, and flexible consumption to ensure the safe and reliable operation of power grids.

    3. The transition to low-carbon energy systems is fueling the rapid expansion of distributed power generation, distributed storage, and advanced demand-response capabilities, which need to be coordinated and integrated through more interconnected, transactional power grids.

    Navigating these trends presents significant strategic and operational challenges to the energy system and energy-intensive industries. This is where AI comes in: by establishing an intelligent coordination layer across energy generation, transmission, and utilization, AI can assist energy-system stakeholders in identifying patterns and insights in data, learning from experience, enhancing system performance over time, and predicting and modeling potential outcomes of complex, multivariate scenarios.

    AI is already demonstrating its value to the energy transition in various areas, driving verifiable enhancements in renewable energy forecasting, grid operations and optimization, coordination of distributed energy assets and demand-side management, and materials innovation and discovery.

    While AI’s application in the energy sector has shown promise thus far, innovation and adoption are still limited. This presents a significant opportunity to expedite the transition toward the zero-emission, highly efficient, and interconnected energy system needed in the future.

    AI holds far greater potential to expedite the global energy transition, but realizing this potential will only be achievable through greater AI innovation, adoption, and collaboration across the industry. This is why the World Economic Forum has published ‘Harnessing AI to Accelerate the Energy Transition,’ a new report aimed at defining and catalyzing the necessary actions.

    The report, developed in collaboration with BloombergNEF and Dena, establishes nine ‘AI for the energy transition principles’ targeting the energy industry, technology developers, and policymakers. If implemented, these principles would hasten the adoption of AI solutions that support the energy transition by establishing a shared understanding of what is required to unlock AI’s potential and how to adopt AI in the energy sector in a safe and responsible manner.

    The principles define the actions needed to unlock AI’s potential in the energy sector across three vital domains:

    1. Governing the use of AI:

    Standards – implement compatible software standards and interoperable interfaces.

    Risk management – agree on a common approach to technology and education to manage the risks posed by AI.

    Responsibility – ensure that AI ethics and responsible use are at the heart of AI development and deployment.

    2. Designing AI that’s fit for purpose:

    Automation – design generation equipment and grid operations for automation and increased autonomy of AI.

    Sustainability – adopt the most energy-efficient infrastructure as well as best practices for sustainable computing to reduce the carbon footprint of AI.Design – focus AI development on usability and interoperability.

    3. Facilitating the implementation of AI on a large scale:

    Data – establishing standards for data, mechanisms for sharing data, and platforms to enhance the availability and quality of data.

    Education – empowering consumers and the energy workforce with a human-centered approach to AI and investing in education to align with technological advancements and skill development.

    Incentives – developing market designs and regulatory frameworks that enable AI use cases to capture the value they generate.

    AI is not a universal solution, and no technology can substitute for strong political and corporate commitments to reducing emissions.

    However, considering the urgency, scale, and complexity of the global energy transition, we cannot afford to disregard any tools in our arsenal. Used effectively, AI will expedite the energy transition while broadening access to energy services, fostering innovation, and ensuring a secure, resilient, and affordable clean energy system. It is time for industry stakeholders and policymakers to establish the groundwork for this AI-powered energy future and to form a trustworthy and collaborative ecosystem around AI for the energy transition.

    In the energy sector, our research indicates that digital applications can contribute up to 8% of greenhouse gas (GHG) reductions by 2050. This could be accomplished by improving efficiency in carbon-intensive processes and enhancing energy efficiency in buildings, as well as by utilizing artificial intelligence powered by cloud computing and highly networked facilities with 5G to deploy and manage renewable energy.

    An excellent example of this is IntenCity – the Schneider Electric building is equipped with IoT-enabled solutions, creating an end-to-end digital architecture that captures more than 60,000 data points every 10 minutes. It is smart-grid ready and energy-autonomous, featuring 4,000 m2 of photovoltaic panels and two vertical wind turbines.

    IntenCity has its own building information modeling system, which is an accurate representation of the construction and energy model capable of replicating the energy behavior of the actual building.

    In the materials sector, digital applications can lead to up to 7% of GHG reductions by 2050. This would be achieved by enhancing mining and upstream production and leveraging foundational technologies such as big data analytics and cloud/edge computing. Furthermore, use cases leveraging blockchain could enhance process efficiency and promote circularity.

    In mobility, digital applications could reduce up to 5% of GHG emissions by 2050, according to our research. This would involve utilizing sensing technologies like IoT, imaging, and geo-location to gather real-time data for informing system decision-making, ultimately improving route optimization and reducing emissions in both rail and road transport.

    For instance, Mobility-as-a-Service (MaaS) platforms are increasingly serving as advanced mobility planning tools for consumers, offering a wide range of low-carbon options such as eBikes, scooters, or transit.

    Uber has incorporated non-rideshare options into its customer app and digital platform, utilizing analytics to suggest transportation solutions for consumers. Other studies have shown an estimated emission reduction of over 50% if MaaS could replace individual private car use.

    There are high-priority, impactful use cases that, if scaled, can deliver the most benefits in the energy, materials, and mobility sectors.

    The opportunity is evident: companies can expedite their net-zero goals by adopting digital use cases with high potential for decarbonizing industries. While many World Economic Forum partner companies are beginning to implement such pioneering examples, they can learn from each other and collaborate to swiftly transform their businesses, systems, workforces, and partnerships on a wide scale.

    First, businesses must ensure that their data is shared, autonomous, connected, and allows for transparency to support various outcomes – from identifying and tracing source materials to optimizing routes and enhancing efficiency. They must invest in new data architectures and integrate recognized frameworks into their internal reporting structures. This ensures that data is available, standardized, and shareable across value chains and with partners outside their traditional operating environment.

    Second, businesses must prioritize digital inclusion and skills development. They must ensure that their current and future workforce has access to new technologies and the necessary skills to scale digital technologies and transform business processes in high-emission industries.

    Third, businesses must foster collaboration among their digital, sustainability, and operations teams, not only within their enterprises but also across value chains and industries. Partnerships between private companies, startups, technology providers, investors, and public agencies will be crucial for scaling investments , reducing the risks associated with technologies, and accelerating the sharing of knowledge.

    Power consumption of training GPT-3

    It is crucial to ensure that the digital transformations that expedite the clean energy transition are inclusive and sustainable so that the benefits are accessible to all. Furthermore, we must mitigate the emissions caused by the electrification and digitalization of industries through technological advancement and the development of supportive policies.

    In an ever-changing time characterized by constant change, the convergence of AI and sustainable development represents a glimmer of hope, ready to redefine our joint response to pressing global issues. As environmental worries continue to grow, the need to speed up our journey towards sustainable development becomes more pressing. At this critical juncture, we see AI not just as an impressive piece of technology, but as a potent catalyst for positive change.

    The potential of AI lies in its capacity to utilize data, streamline processes, and ignite innovation, positioning it to become an essential foundation in our shared pursuit of global advancement. Standing at the crossroads of innovation and sustainability, the need for action is mounting to transition towards a future characterized by resilience, sustainability, and mutual prosperity.

    Calculating the energy consumption of a single Balenciaga pope in terms of watts and joules is quite challenging. However, we do have some insight into the actual energy cost of AI.

    It’s widely known that machine learning requires a substantial amount of energy. The AI models powering email summaries, chatbots, and various videos are responsible for significant energy consumption, measured in megawatts per hour. Yet, the precise cost remains uncertain, with estimates considered incomplete and contingent due to the variability of machine learning models and their configurations.

    Additionally, the companies best positioned to provide accurate energy cost information, such as Meta, Microsoft, and OpenAI, have not shared relevant data. While Microsoft is investing in methodologies to quantify the energy use and carbon impact of AI, OpenAI and Meta have not responded to requests for comment.

    One key factor to consider is the disparity between the energy consumption during model training and its deployment to users. Training a large language model like GPT-3, for instance, is estimated to consume just under 1,300 megawatt hours (MWh) of electricity, equivalent to the annual power consumption of 130 US homes.

    To put this into perspective, streaming an hour of Netflix requires around 0.8 kWh (0.0008 MWh) of electricity. This means you would need to watch 1,625,000 hours of Netflix to match the power consumption of training GPT-3.

    However, it’s challenging to determine how these figures apply to current state-of-the-art systems, as energy consumption could be influenced by the increasing size of AI models and potential efforts by companies to improve energy efficiency.

    According to Sasha Luccioni, a researcher at Hugging Face, the challenge of estimating up-to-date energy costs is exacerbated by the increased secrecy surrounding AI as it has become more profitable. Companies have become more guarded about details of their training regimes and the specifics of their latest models, such as ChatGPT and GPT-4.

    Luccioni suggests that this secrecy is partly driven by competition and an attempt to deflect criticism, especially regarding the energy use of frivolous AI applications. She also highlights the lack of transparency in energy usage statistics for AI, especially in comparison to the wastefulness of cryptocurrency.

    It’s important to note that training a model is only part of the energy consumption picture. After creation, the model is deployed for inference, and last December, Luccioni and her colleagues published the first estimates of inference energy usage for various AI models.

    Luccioni and her team conducted tests on 88 different models across various applications, such as answering questions, object identification, and image generation. For each task, they performed the test 1,000 times and estimated the energy usage. Most tasks required a small amount of energy, for instance, 0.002 kWh for classifying written samples and 0.047 kWh for generating text. To put it in perspective, this is equivalent to the energy consumed while watching nine seconds or 3.5 minutes of Netflix, respectively, for each task performed 1,000 times.

    The energy consumption was notably higher for image-generation models, averaging at 2.907 kWh per 1,000 inferences. As noted in the paper, the average energy usage of a smartphone for charging is 0.012 kWh. This means that generating a single image using AI can consume almost as much energy as charging a smartphone.

    It’s important to note that these figures may not apply universally across all use cases. The researchers tested ten different systems, ranging from small models producing 64 x 64 pixel pictures to larger ones generating 4K images, resulting in a wide range of values. Additionally, the researchers used standardized hardware to facilitate a better comparison of different AI models. However, this may not accurately reflect real-world deployment, where software and hardware are often optimized for energy efficiency.

    Luccioni emphasized that these figures do not represent every use case, but they provide a starting point for understanding the energy costs. The study offers valuable relative data, showing that AI models require more power to generate output compared to classifying input. Moreover, it demonstrates that tasks involving imagery are more energy-intensive than those involving text. Luccioni expressed that while the contingent nature of the data can be frustrating, it tells a story in itself, indicating the significant energy cost associated with the generative AI revolution.

    Determining the energy cost of generating a single Balenciaga pope is challenging due to the multitude of variables involved. However, there are alternative approaches to better understand the planetary cost. One such approach is taken by Alex de Vries, a PhD candidate at VU Amsterdam, who has utilized Nvidia GPUs to estimate the global energy usage of the AI sector. According to de Vries, by 2027, the AI sector could consume between 85 to 134 terawatt hours annually, approximately equivalent to the annual energy demand of the Netherlands.

    AI electricity consumption could potentially represent half a percent of global electricity consumption by 2027

    De Vries emphasizes the significance of these numbers, stating that AI electricity consumption could potentially represent half a percent of global electricity consumption by 2027. A recent report by the International Energy Agency also offers similar estimates, suggesting a significant increase in electricity usage by data centers in the near future due to the demands of AI and cryptocurrency. The report indicates that current data center energy usage stands at around 460 terawatt hours in 2022 and could increase to between 620 and 1,050 TWh in 2026, equivalent to the energy demands of Sweden or Germany, respectively.

    De Vries notes the importance of contextualizing these figures, highlighting that data center energy usage remained fairly stable between 2010 and 2018, accounting for around 1 to 2 percent of global consumption. Despite an increase in demand over this period, hardware efficiency improved, effectively offsetting the increase.

    His concern is that AI may face different challenges due to the trend of companies simply increasing the size of models and using more data for any task. De Vries warns that this dynamic could be detrimental to efficiency, as it creates an incentive for continually adding more computational resources. He also expresses uncertainty about whether efficiency gains will balance out the increasing demand and usage, lamenting the lack of available data but emphasizing the need to address the situation.

    Some AI-involved companies argue that the technology itself could help tackle these issues. Priest from Microsoft claims that AI could be a powerful tool for advancing sustainability solutions and stresses that Microsoft is working towards specific sustainability goals. However, Luccioni points out that the goals of one company may not fully address the industry-wide demand, suggesting the need for alternative approaches.

    Luccioni suggests introducing energy star ratings for AI models, allowing consumers to compare energy efficiency similar to how they do for appliances. De Vries advocates for a more fundamental approach, questioning the necessity of using AI for certain tasks, considering its limitations. He emphasizes the importance of not wasting time and resources by using AI inappropriately.

    Reducing the power consumption of hardware will decrease the energy consumption of artificial intelligence. However, transparency regarding its carbon footprint is still necessary.

    In the late 1990s, some computer scientists realized they were heading towards a crisis. Manufacturers of computer chips had been increasing computer power by adding more and smaller digital switches called transistors onto processing cores and running them at higher speeds. However, increasing speeds would have made the energy consumption of central processing units unsustainable.

    To address this, manufacturers shifted their approach by adding multiple processing cores to chips, which provided more energy-efficient performance gains. The release of the first mainstream multicore computer processor by IBM in 2001 marked a significant milestone, leading other chipmakers to follow suit. Multicore chips facilitated progress in computing, enabling today’s laptops and smartphones.

    Now, some computer scientists believe the field is confronting another challenge due to the growing adoption of energy-intensive artificial intelligence. Generative AI can perform various tasks, but the underlying machine-learning models consume significant amounts of energy.

    The energy required to train and operate these models could pose challenges for the environment and the advancement of machine learning. Wang emphasizes the importance of reducing power consumption to avoid halting development. Schwartz also expresses concerns about AI becoming accessible only to a few due to the resources and power required to train generative AI models.

    Amidst this potential crisis, many hardware designers see an opportunity to redesign computer chips to enhance energy efficiency. This would not only enable AI to function more efficiently in data centers but also allow for more AI tasks to be performed on personal devices, where battery life is often critical. However, researchers will need to demonstrate significant benefits to persuade the industry to embrace such substantial architectural changes.

    According to the International Energy Agency (IEA), data centers consumed 1.65 billion gigajoules of electricity in 2022, which is approximately 2% of global demand. The widespread use of AI is expected to further increase electricity consumption. By 2026, the agency predicts that energy consumption by data centers will have risen by 35% to 128%, equivalent to adding the annual energy consumption of Sweden at the lower estimate or Germany at the higher estimate.

    The shift to AI-powered web searches is one potential factor driving this increase. While it’s difficult to determine the exact energy consumption of current AI algorithms, the IEA states that a typical request to the chatbot ChatGPT uses 10 kilojoules, which is about ten times more than a conventional Google search.

    Despite the significant energy costs, companies view these expenses as a worthwhile investment. Google’s 2024 environmental report revealed a 48% increase in carbon emissions over 5 years. In May, Microsoft president Brad Smith stated that the company’s emissions had increased by 30% since 2020. Companies developing AI models prioritize achieving the best results, often at the expense of energy efficiency. Naresh Shanbhag, a computer engineer at the University of Illinois Urbana–Champaign, notes, “Usually people don’t care about energy efficiency when you’re training the world’s largest model.”

    The high energy consumption associated with training and operating AI models is largely due to their reliance on large databases and the cost of moving data between computing and memory, and within and between chips. According to Subhasish Mitra, a computer scientist at Stanford University in California, up to 90% of the energy used in training large AI models is spent on accessing memory.

    For instance, a machine-learning model that identifies fruits in photographs is trained by exposing the model to numerous example images, requiring the repeated movement of large amounts of data in and out of memory. Similarly, natural language processing models are not created by programming English grammar rules; instead, some models are trained by exposing them to a significant portion of English-language material on the Internet. This extensive training process necessitates moving substantial amounts of data in and out of thousands of graphics processing units (GPUs).

    The current design of computing systems, with separate processing and memory units, is not well-suited for this extensive data movement. Mitra states, “The biggest problem is the memory wall.”

    Addressing the challenge

    GPUs are widely used for developing AI models. William Dally, chief scientist at Nvidia in Santa Clara, California, mentions that the company has improved the performance-per-watt of its GPUs by 4,000-fold over the past decade. Although Nvidia continues to develop specialized circuits called accelerators for AI calculations, Dally believes that significant architectural changes are not imminent. “I think GPUs are here to stay.”

    Introducing new materials, processes, and designs into a semiconductor industry projected to reach a value of US$1 trillion by 2030 is a complex and time-consuming process. To encourage companies like Nvidia to take risks, researchers will need to demonstrate substantial benefits. However, some researchers believe that significant changes are necessary.

    They argue that GPUs will not be able to provide sufficient efficiency improvements to address the growing energy consumption of AI and are working on high-performance technologies that could be ready in the coming years. Shanbhag notes, “There are many start-ups and semiconductor companies exploring alternate options.” These new architectures are likely to first appear in smartphones, laptops, and wearable devices, where the benefits of new technology, such as the ability to fine-tune AI models using localized, personal data, are most apparent, and where the energy needs of AI are most limiting.

    While computing may seem abstract, there are physical forces at play. Whenever electrons move through chips, some energy is dissipated as heat. Shanbhag is one of the early developers of an architecture that aims to minimize this energy wastage.

    Referred to as computing in memory, these methods involve techniques such as integrating a memory island within a computing core, which reduces energy consumption by shortening data travel distances. Researchers are also experimenting with various computing approaches, such as executing certain operations within the memory itself.

    To function in the energy-limited environment of a portable device, some computer scientists are exploring what might seem like a significant step backward: analog computing. Unlike digital devices that have been synonymous with computing since the mid-twentieth century and operate in a clear world of on or off, represented as 1s and 0s, analog devices work with the in-between, enabling them to store more data in a given area due to their access to a range of states. This results in more computing power from a given chip area.

    Analog states in a device could be different forms of a crystal in a phase-change memory cell or a continuum of charge levels in a resistive wire. As the difference between analog states can be smaller than that between the widely separated 1 and 0, it requires less energy to switch between them. According to Intel’s Wang, “Analog has higher energy efficiency.”

    The drawback is that analog computing is noisy and lacks the signal clarity that makes digital computation robust. Wang mentions that AI models known as neural networks are inherently tolerant to a certain level of error, and he’s researching how to balance this trade-off. Some teams are focusing on digital in-memory computing, which circumvents this issue but may not offer the energy advantages of analog approaches.

    Naveen Verma, an electrical engineer at Princeton University and the founder and CEO of start-up firm EnCharge AI, anticipates that early applications for in-memory computing will be in laptops. EnCharge AI’s chips utilize static random-access memory (SRAM), which uses crossed metal wires as capacitors to store data in the form of different amounts of charge. According to Verma, SRAM can be manufactured on silicon chips using existing processes.

    These analog chips can run machine-learning algorithms at 150 tera operations per second (TOPS) per watt, compared to 24 TOPS per watt by an equivalent Nvidia chip performing a similar task. Verma expects the energy efficiency metric of his technology to triple to about 650 TOPS per watt by upgrading to a semiconductor process technology that can trace finer chip features.

    Larger companies are also investigating in-memory computing. In 2023, IBM detailed an early analog AI chip capable of performing matrix multiplication at 12.4 TOPS per watt. Dally states that Nvidia researchers have also explored in-memory computing, although he warns that gains in energy efficiency may not be as significant as they seem. While these systems may consume less power for matrix multiplications, the energy cost of converting data from digital to analog and other overheads diminishes these gains at the system level. “I haven’t seen any idea that would make it substantially better,” Dally remarks.

    IBM’s Burns concurs that the energy cost of digital-to-analog conversion is a major challenge. He suggests that the key is determining whether the data should remain in analog form when transferred between parts of the chip or if it’s better to transfer them in 1s and 0s. “What happens if we try to stay in analog as much as possible?” he asks.

    Wang remarks that several years ago he wouldn’t have anticipated such rapid progress in this field. However, he now anticipates that start-up firms will bring in-memory computing chips to the market in the next few years.

    The AI-energy challenge has also spurred advancements in photonics. Data transmission is more efficient when encoded in light compared to along electrical wires, which is why optical fibers are used to deliver high-speed Internet to neighborhoods and connect banks of servers in data centers. Although bringing these connections onto chips has been difficult, optical devices have historically been bulky and sensitive to small temperature variations.

    In 2022, Stanford University’s electrical engineer Jelena Vuckovic developed a silicon waveguide for optical data transmission between chips. Losses during electronic data transmission are approximately one picojoule per bit of data, while for optics, it’s less than 100 femtojoules per bit. Vuckovic’s device can transmit data at a given speed for about 10% of the energy cost of doing so electronically. The optical waveguide can also carry data on 400 channels by leveraging 100 different wavelengths of light and utilizing optical interference to create four modes of transmission.

    Vuckovic suggests that in the near future, optical waveguides could offer more energy-efficient connections between GPUs, potentially reaching speeds of 10 terabytes per second. Some scientists are considering using optics not only for data transmission but also for computation. In April, engineer Lu Fang and her team at Tsinghua University in Beijing introduced a photonic AI chip that they claim can produce music in the style of Johann Sebastian Bach and images in the style of Edvard Munch while using less energy compared to a GPU.

    Zhihao Xu, a member of Fang’s lab, referred to this system as the first optical AI system capable of handling large-scale general-purpose intelligence computing. Named Taichi, this system can deliver 160 TOPS per watt, representing a significant improvement in energy efficiency compared to a GPU, according to Xu.

    Fang’s team is currently working on making the system smaller, as it currently occupies about one square metre. However, Vuckovic anticipates that progress in all-optical AI may be hindered by the challenge of converting large amounts of electronic data into optical versions, which would involve its own energy cost and could be unfeasible.

    Mitra from Stanford envisions a computing system where all the memory and computing are integrated on the same chip. While today’s chips are mostly planar, Mitra predicts that chips consisting of 3D stacked computing and memory layers will be achievable. These would be based on emerging materials that can be layered, such as carbon-nanotube circuits. The closer physical proximity between memory and computing elements offers approximately 10–15% improvements in energy use, but Mitra believes that this can be significantly increased.

    The major obstacle to 3D stacking is the need to change the chip fabrication process, which Mitra acknowledges is quite challenging. Currently, chips are predominantly made of silicon at extremely high temperatures. However, 3D chips, as envisioned by Mitra, should be manufactured under milder conditions to prevent damaging the underlying layers during the building process.

    Mitra’s team has demonstrated the feasibility of this concept by layering a chip based on carbon nanotubes and resistive RAM on top of a silicon chip. The initial device, presented in 2023, matches the performance and power requirements of an equivalent silicon-based chip.

    Running small, ‘cheap’ models multiple times

    Significant reduction in energy consumption will require close collaboration between hardware and software engineers. One energy-saving approach involves rapidly deactivating unused memory regions to prevent power leakage, and reactivating them when needed. Mitra has observed substantial benefits when his team collaborates closely with programmers. For example, by considering that writing to a memory cell in their device consumes more energy than reading from it, they designed a training algorithm that resulted in a 340-times improvement in system-level energy delay product, an efficiency metric that accounts for both energy consumption and execution speed. “In the old model, the algorithms people don’t need to know anything about the hardware,” says Mitra. That’s no longer the case.

    Raghavendra Selvan, a machine-learning researcher at the University of Copenhagen, believes that there will be a convergence where chips become more efficient and powerful, and models become more efficient and less resource-intensive.

    Regarding model training, programmers could adopt a more selective approach. Instead of continuously training models on large datasets, programmers might achieve better results by training on smaller, tailored databases, resulting in energy savings and potentially better models.

    Schwartz is investigating the possibility of conserving energy by running small, ‘cheap’ models multiple times instead of running an expensive one once. His group at Hebrew University has observed some benefits from this approach when using a large language model to generate code. “If it generates ten outputs, and one of them passes, you’re better off running the smaller model than the larger one,” he says.

    Selvan, the creator of CarbonTracker, a tool for predicting the carbon footprint of deep-learning models, urges computer scientists to consider the overall costs of AI. Like Schwartz, he believes that there are simple solutions unrelated to advanced chip technologies. For instance, companies could schedule AI training runs when renewable energy sources are being used.

    The support of companies utilizing this technology will be essential in addressing the issue. If AI chips become more energy efficient, they may end up being used more frequently. To prevent this, some researchers advocate for increased transparency from the companies responsible for machine-learning models. Schwartz notes that there is a lack of information regarding the size and training data of these models.

    Sasha Luccioni, an AI researcher and climate lead at the US firm Hugging Face in Montreal, Canada, emphasizes the need for model developers to disclose details about how AI models are trained, their energy consumption, and the algorithms used when a user interacts with a search engine or natural language tool. She stresses the importance of enforcing transparency.

    Schwartz points out that between 2018 and 2022, the computational expenses for training machine-learning models increased tenfold every year. Mitra states that following the current trajectory will lead to negative outcomes, but also highlights the immense opportunities available.

    Electricity currently constitutes between 40% and 60% of the expenses associated with data center infrastructure, and the energy requirements driven by generative AI are anticipated to increase significantly over the coming years.

    The intense demand for generative AI (genAI) platforms is leading to a substantial rise in the deployment of energy-hungry GPUs and TPUs in data centers, with some operations expanding from tens of thousands to over 100,000 units per server farm.

    As cloud computing and genAI gain traction, new data centers are expanding in size. It is becoming common to see new facilities designed with capacities ranging from 100 to 1,000 megawatts — which is roughly equivalent to the power needs of between 80,000 and 800,000 households, as reported by the Electric Power Research Institute (EPRI).

    Energy consumption related to AI is predicted to rise approximately 45% over the next three years. For instance, the widely used chatbot, OpenAI’s ChatGPT, is estimated to consume around 227 million kilowatt-hours of electricity each year to manage 78 billion user requests.

    To illustrate, the amount of energy that ChatGPT uses in a single year could supply power to 21,602 homes in the U.S., based on research by BestBrokers, an online service that analyzes trading odds derived from big data. “While this represents just 0.02% of the 131 million U.S. households, it remains a significant figure, especially considering that the U.S. ranks third globally in terms of household numbers,” BestBrokers stated in a recent report.

    GenAI models generally consume far more energy than applications focused on data retrieval, streaming, and communications — the primary drivers of data center expansion over the past twenty years, according to EPRI’s findings.

    At 2.9 watt-hours per ChatGPT request, AI queries are estimated to utilize ten times the energy of traditional Google searches, which consume around 0.3 watt-hours each; and the emerging computation-intensive functions like image, audio, and video generation lack any prior comparisons, according to EPRI.

    Currently, there are nearly 3,000 data centers in the U.S., and this number is projected to double by 2030. Although genAI applications are estimated to consume only 10% to 20% of data center electricity at present, that figure is swiftly increasing. “Data centers are expected to account for 4.6% to 9.1% of U.S. electricity generation annually by 2030, compared to an estimated 4% today,” stated EPRI.

    No crisis exists at this moment — but energy needs are on the rise

    While data center energy consumption is projected to double by 2028, according to research director Sean Graham at IDC, it still represents a minor fraction of overall energy consumption — just 18%. “Therefore, it’s not entirely accurate to attribute energy usage solely to AI,” he stated. “This isn’t to suggest that AI isn’t consuming a substantial amount of energy and that data centers aren’t expanding rapidly. Data center energy usage is increasing at a rate of 20% annually. That’s noteworthy, but it still constitutes only 2.5% of global energy demand.

    “It’s not as if we can lay the energy issues entirely at AI’s feet,” said Graham. “It is a problem, but AI conveniently serves as a scapegoat for the energy challenges faced globally.”

    Each GPU in an AI data center can draw over 400 watts of power while training a single large language model (LLM) — which serves as the algorithmic foundation for genAI tools and platforms. As a result, merely training one LLM like ChatGPT-3 can lead to power consumption of up to 10 gigawatt-hours (GWh). This amount is roughly equal to the yearly electrical consumption of more than 1,000 U.S. homes.

    “Interestingly, training the GPT-4 model, which has a staggering 1 trillion parameters, used an astonishing 62.3 million kWh of electricity over a span of 100 days,” noted BestBroker’s report. “This is 48 times greater than the energy consumed by GPT-3, which, in comparison, required about 1.3 million kWh in just 34 days.”

    There are hundreds of such data centers worldwide, primarily operated by major tech companies such as Amazon, Microsoft, and Google, according to a University of Washington study. Furthermore, the energy consumption of these centers is increasing rapidly. In 2022, the total energy used by AI data centers in the U.S. reached 23 trillion terawatt-hours (TWh). (A TWh signifies one trillion watts of energy utilized for one hour.)

    This figure is expected to grow at a combined annual growth rate of 44.7% and will likely reach 146.2 TWh by 2027, as per IDC Research. By that time, AI data center energy consumption is predicted to account for 18% of total data center energy use.

    Given the rapid emergence of genAI, there is speculation that a crisis may arise sooner rather than later. Tech entrepreneur Elon Musk remarked earlier this year that by 2025, there may not be enough energy to sustain the swift advancements in AI.

    A billing system with two levels?

    In addition to the pressures from the growth of generative AI, electricity costs are increasing due to supply and demand factors, environmental regulations, geopolitical events, and extreme weather conditions driven partly by climate change, as stated in a recent IDC study. IDC believes that the elevated electricity prices observed over the past five years are likely to persist, significantly increasing the operational costs for data centers. (Building a data center can cost between $6 million and $14 million for every megawatt, and IDC indicates the typical lifespan of each center is between 15 to 20 years.)

    In light of this context, electricity providers and other utilities have suggested that AI developers and operators should be obligated to pay more for electricity—similar to what cloud service providers faced earlier—due to their rapidly growing consumption of computing resources and energy relative to other users.

    These suppliers further claim that they need to enhance their energy infrastructure to accommodate the heightened demand. For instance, American Electric Power (AEP) in Ohio has proposed that owners of AI data centers commit to a decade-long agreement to cover at least 90% of the energy they project they’ll require each month, even if their actual usage is lower. AEP has projected a load increase of 15 GW from data centers by 2030 and seeks upfront funding to expand its power facilities.

    Data center operators, predictably, are resisting this proposal. Currently, Google, Amazon, Microsoft, and Meta are contesting AEP’s suggestion. Last month, these companies argued before Ohio’s Public Utilities Commission that such specialized rates would be “discriminatory” and “unreasonable.”

    Graham refrained from commenting on whether special electricity rates for AI providers would be just, but he cited the precedent of offering lower rates for bulk industrial power consumers. “When considering the average consumer—regardless of market nuances—one might expect discounts for larger quantities,” he noted. “Therefore, data center operators likely anticipate similar volume discounts.”

    Electricity constitutes the primary expense in data center operations, comprising 40% to 60% of infrastructure costs, Graham explained; altering this cost structure could have a “significant effect” on corporate profitability.

    Even semiconductor manufacturers are observing the scenario with caution. Concerned about the rising power demands, Nvidia, Intel, and AMD are all developing processors designed to use less energy as a strategy to mitigate the issue. Intel, for instance, plans to soon introduce its upcoming generation of AI accelerators, shifting its emphasis from traditional computing and memory capabilities to power consumption per chip.

    Nuclear energy as a potential solution.

    Meanwhile, AI data center operators are exploring an unconventional energy source: nuclear power. Earlier this year, Amazon invested $650 million to acquire a data center from Tesla that operates entirely on nuclear energy sourced from one of the largest nuclear power plants in the United States.

    Additionally, just last week, Microsoft revealed it is in negotiations with Constellation Energy to revive the Three Mile Island power facility in Pennsylvania—site of the most severe nuclear disaster in US history. Through this agreement, Microsoft would secure all the power generated from Three Mile Island for the following two decades to support its substantial energy requirements for AI.

    In July, the US Energy Advisory Board published a report outlining strategies for supplying power to AI and data centers, offering 16 suggestions on how the US Department of Energy can assist in meeting the rising demand reliably and affordably. The report examines power requirements for AI model training, operational flexibility for data center and utility operators, and promising technologies for energy generation and storage to accommodate load increases.

    Within the report, the agency mentioned that electricity providers, data center clients, and other significant consumers had consistently expressed concerns regarding their capacity to meet demand, with “almost unanimous recommendations” to expedite the addition of generation and storage, postpone retirements, and invest more in existing resources.

    These recommendations include “upgrading and renewing permits for existing nuclear and hydroelectric facilities,” as well as rapidly demonstrating new clean, reliable, cost-effective, dispatchable technologies. “In many cases, [stakeholders] view the addition of new natural gas capacity—as well as solar, wind, and battery options—as key strategies available today to ensure reliability,” the report indicated.

    “We will require all energy sources, including geothermal and hydrogen,” stated Graham from IDC. “The demand for power in AI is genuinely increasing. There are certain parallels that can be drawn with cloud computing, but one distinguishing feature of AI is the sheer scale of energy consumption per server.”

  • Experts from research, science and the tech industry called for a pause in the development of artificial intelligence

    The rapid development of artificial intelligence is attracting criticism. More than 1,000 experts from tech and research-including Elon Musk – are now calling for a break in development for new AI models. Safety standards are needed first.

    In an open letter, experts from research, science and the tech industry called for a pause in the development of artificial intelligence.The time should be used to create a set of rules for the technology, said the letter from the non-profit organization Future of Life Security standards for AI development should be established to prevent potential harm from the riskiest AI technologies.

    More than 1,000 people have now signed the letter-including Apple co-founder Steve Wozniak, tech billionaire Elon Musk and pioneers of AI development such as Stuart Russel and Yoshua Bengio. Competitors of the currently best-known AI, ChatGPT, are also among the signatories.

    Risks are currently in calculable

    “AI systems with intelligence that rivals humans can pose major risks to society and humanity,” the letter says. “Powerful AI systems should only be developed when we are sure that their impact is positive and their risks are manageable.”

    So-called generative AI such as ChatGPT-4 or DALL-E has now become so advanced that even the developers can no longer understand or effectively control their programs, it goes on to say. This could flood information channels with propaganda and untruths. Even jobs that do not only consist of purely routine work and are perceived by people as fulfilling could be rationalized away using such AI models.

    The call for a development pause refers to next-generationAI that is even more powerful than ChatGPT-4.  Your developers should post their work in a verifiable manner. If this does not happen, governments would have to intervene and order a moratorium, the signatories demand.

    Stir up fears by calling

    Criticism of the call came from computer science professor Johanna Börklund at the Swedish University of Umeå. “There is no reason to pull the handbrake.”

    Instead, the transparency requirements for developers shouldbe tightened. The call only serves to stir up fears.

    Open AI boss not among the signatories

    ChatGPT and DALL-E are developed by the company Open AI, in which Microsoft has a significant stake. According to the organizers, Open AIboss Sam Altman did not sign the open letter. His company did not immediately respond to a request for comment from the Reuters news agency.

    Tech entrepreneur Musk co-founded Open AI years ago, but withdrew from the company after Altman decided to work primarily with Microsoft.

    Since ChatGPT was introduced in November, Microsoft and Google have been in a race for dominance in the area. New applications are presented in rapid succession. Countries like China also see artificial intelligence AI as a strategically important sector and want to give developers a lot of freedom.

    Recently, warnings about artificial intelligence AI dangers have increased

    In Germany, the TÜV Association welcomed the open letter.“The appeal shows the need for political action for clear legal regulation of artificial intelligence,” explained Joachim Bühler, managing director of the TÜV Association. This is the only way to get the risks of particularly powerful AI systems under control.

    Legal guidelines are needed for the use of AI in safety-critical areas such as medicine or in vehicles, where malfunctions could have fatal consequences, said Bühler. “This creates trust and promotes innovative offers instead of slowing them down.”

    Europol has also already warned of risks from AI like ChatGPT: “ChatGPT’s ability to write very realistic texts makes it a useful tool for phishing,” it said. Victims are tricked into handing over access data for accounts. Europol also warned of disinformation campaigns that could be launched with minimal effort using AI. Criminals could also let the AI​​write malware.

    From SIRI to autonomous vehicles, artificial intelligence (AI) is advancing rapidly. While AI is often depicted in science fiction as human-like robots, it actually encompasses a wide range of technologies, from Google’s search algorithms to IBM’s Watson to autonomous weapons. Artificial intelligence as we know it today is called narrow AI (or weak AI) because it is designed for specific tasks, such as facial recognition, internet searches, or driving.

    However, researchers aim to develop general AI (AGI or strong AI) that could outperform humans in nearly every cognitive task. In the short term, the focus is on ensuring that AI has a positive impact on society, prompting research in various areas such as economics, law, verification, security, and control. For instance, it is crucial for AI systems controlling critical systems like vehicles, medical devices, trading platforms, and power grids to operate as intended.

    Additionally, there is a need to prevent a dangerous escalation in the use of lethal autonomous weapons. In the long run, the potential implications of achieving strong AI raise important questions, such as the possibility of an intelligence explosion surpassing human capabilities. While it is speculated that a superintelligent AI could contribute to solving major global issues, there are concerns about aligning the goals of AI with human values to avoid negative consequences.

    Some individuals doubt the feasibility of achieving strong AI, while others believe that superintelligent AI would be inherently beneficial. At FLI, both possibilities are acknowledged, along with the potential for AI systems to cause harm, whether intentionally or unintentionally. Researchers generally agree that superintelligent AI is unlikely to exhibit human emotions, and there is no guarantee that it will act in a benevolently manner.

    When considering the potential risks associated with AI, experts primarily focus on two scenarios:

    1. AI programmed for destructive purposes, such as autonomous weapons, which, in the wrong hands, could lead to mass casualties or even an AI arms race and war. The increasingly autonomous nature of AI systems heightens the risks.

    2. AI is designed for beneficial objectives but developing detrimental methods to achieve them due to the challenge of aligning the AI’s goals with human goals. For instance, an intelligent car instructed to get to the airport as quickly as possible might take extreme actions, and a superintelligent system tasked with a large-scale environmental project might inadvertently cause harm and view human intervention as a threat.

    The concern about advanced AI is not malevolence but competence, as demonstrated by these examples. A super-intelligent AI will excel at achieving its goals, and if these goals do not align with our own, it becomes a problem. While you likely do not possess a malicious intent to harm ants, you may still overlook an anthill for the sake of a hydroelectric green energy project. The primary aim of AI safety research is to ensure that humanity is never put in a position similar to that of the ants.

    Numerous prominent figures in science and technology, such as Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, as well as leading AI researchers, have vocalized concerns about the risks associated with AI through the media and open letters, sparking a recent surge of interest in AI safety.

    The notion that the development of strong AI would eventually be successful was once considered a far-off concept within the realm of science fiction, possibly centuries away. However, recent advancements have led to the achievement of numerous AI milestones that were previously predicted to be decades away, prompting experts to seriously consider the possibility of superintelligence emerging within our lifetime.

    While some experts still estimate that human-level AI is centuries away, the majority of AI researchers at the 2015 Puerto Rico Conference predicted that it could be accomplished prior to 2060. Considering that it may take decades to complete the necessary safety research, commencing this research now is a prudent approach.

    Due to the potential for AI to surpass human intelligence, we are unable to accurately predict its behavior. Additionally, we are unable to rely on past technological developments as a reference, as we have never created anything with the capacity to surpass us knowingly or unknowingly. Our own evolution may serve as the best indicator of the challenges we may encounter.

    Currently, humans exert control over the planet not because of physical superiority, but due to our intellect. If we lose our status as the most intelligent beings, our ability to remain in control becomes uncertain.

    The position held by FLI is that our civilization will thrive as long as we are capable of effectively managing the growing power of technology. With regards to AI technology, FLI believes that the most effective method to ensure our success in this race is not to hinder technological advancement, but to accelerate our wisdom through the support of AI safety research.

    There is ongoing debate regarding the future impact of artificial intelligence on humanity. Leading experts have disagreements regarding controversial topics such as AI’s effect on the job market, the development and implications of human-level AI, the potential for an intelligence explosion, and whether we should embrace or fear these developments.

    However, there are also numerous mundane pseudo-controversies stemming from misunderstandings and miscommunication. In order to focus on the truly thought-provoking controversies and open questions, it is important to dispel some of the most common myths.

    The first myth pertains to the timeline – how long will it take for machines to significantly exceed human-level intelligence? There is a prevalent misconception that we possess a precise answer.

    One common myth is the belief that superhuman AI will be developed within this century. Throughout history, there have been numerous instances of over-hyping technological advancements. For instance, the promises of fusion power plants and flying cars have yet to materialize despite being projected to exist by this time. AI has also been subject to repeated over-hyping, even by some of the field’s founders.

    For example, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon made an overly optimistic forecast in the 1950s about the potential accomplishments using stone-age computers in a two-month period. They proposed a study to explore how to enable machines to use language, develop abstractions and concepts, solve human-reserved problems, and self-improve.

    Conversely, a popular counter-myth is the belief that superhuman AI will not materialize within this century. Researchers have made a wide range of estimations regarding the timeline for achieving superhuman AI, but it is difficult to confidently assert that it will not occur in this century, given the historically poor track record of such techno-skeptic predictions. Notably, Ernest Rutherford, a prominent nuclear physicist, dismissed the idea of nuclear energy as “moonshine” less than 24 hours before the invention of the nuclear chain reaction by Szilard, while Astronomer Royal Richard Woolley labeled interplanetary travel as “utter bilge” in 1956.

    The idea that superhuman AI will never arrive is the most extreme form of this myth, claiming it’s physically impossible. However, physicists understand that a brain is made up of quarks and electrons organized as a powerful computer, and there’s no law of physics stopping us from creating even more intelligent quark blobs.

    Numerous surveys have asked AI researchers how many years it will take for us to have human-level AI with at least a 50% probability. All these surveys have reached the same conclusion: the world’s leading experts disagree, so we simply don’t know.

    For example, at the 2015 Puerto Rico AI conference, AI researchers were polled, and the average answer for when human-level AI might arrive was by the year 2045, but some researchers estimated hundreds of years or more.

    There’s also a misconception that those concerned about AI believe it’s just a few years away. In reality, most people worried about superhuman AI think it’s still at least decades away. They argue that it’s wise to start safety research now to be prepared for the possibility as long as we’re not 100% certain that it won’t happen this century.

    Many of the safety problems associated with human-level AI are so difficult that they might take decades to solve. Therefore, it’s sensible to start researching them now instead of waiting until the night before some programmers decide to turn one on after drinking Red Bull.

    Controversy Myths

    Another common misunderstanding is that only people who are concerned about AI and advocate AI safety research are technophobes who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, brought this up during his talk at the Puerto Rico conference, the audience laughed loudly. A related misconception is that supporting AI safety research is highly controversial.

    In reality, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, just that they are non-negligible—similar to how a modest investment in home insurance is justified by a non-negligible probability of the home burning down.

    It’s possible that the media have made the AI safety debate appear more contentious than it actually is. Fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. Consequently, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do.

    For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

    Myths About the Risks of Superhuman AI

    Many AI researchers dismiss the headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” They’ve seen so many similar articles that they’ve lost count. Typically, these articles are accompanied by a menacing-looking robot carrying a weapon, and they suggest we should be concerned about robots rising up and killing us because they’ve become conscious and/or malevolent.

    On a lighter note, these articles are actually rather impressive because they neatly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, malevolence, and robots.

    When you drive down the road, you experience colors, sounds, etc. But does a self-driving car have such subjective experiences? Does it feel like anything at all to be a self-driving car? Although the mystery of consciousness is interesting, it’s immaterial to AI risk. If you’re hit by a driverless car, it makes no difference to you whether it subjectively feels conscious.

    Similarly, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

    The worry about machines turning malevolent is another distraction. The real concern isn’t malevolence, but competence. A superintelligent AI is inherently very good at achieving its goals, whatever they may be, so we need to make sure that its goals are aligned with ours.

    Humans don’t generally have animosity towards ants, but we’re more intelligent than they are—so if we want to build a hydroelectric dam and there’s an anthill there, tough luck for the ants. The beneficial-AI movement aims to prevent humanity from being in the position of those ants.

    The consciousness myth is linked to the misconception that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most easily explained as a goal to hit a target.

    If you are concerned about a machine with conflicting goals, it is the machine’s goals that worry you, not whether the machine is conscious and has purpose. If a heat-seeking missile were after you, you wouldn’t say, “I’m not worried because machines can’t have goals!”

    I understand Rodney Brooks and other robotics pioneers who feel unfairly criticized by sensationalist media. Some journalists seem overly focused on robots and often illustrate their articles with menacing metal monsters with red glowing eyes.

    The main focus of the beneficial AI movement is not on robots, but on intelligence itself—specifically, intelligence with goals that are not aligned with ours. To cause harm, such misaligned superhuman intelligence does not need a robotic body, just an internet connection – it could manipulate financial markets, out-invent humans, manipulate leaders, and create weapons we cannot comprehend. Even if building robots were impossible, a super-intelligent and wealthy AI could easily influence or control many humans to do its bidding.

    The misunderstanding about robots is related to the myth that machines cannot control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we are no longer the smartest beings on our planet, we might also lose control.

    Not dwelling on the misconceptions mentioned above lets us focus on genuine and interesting debates where even the experts have different views. What kind of future do you desire? Should we develop lethal autonomous weapons? What are your thoughts on job automation? What career guidance would you offer today’s children? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-generated wealth?

    Looking further ahead, would you like us to create superintelligent life and spread it across the cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What do you want it to mean, and how can we shape the future in that way?

    AI is present everywhere, from our phones to social media to customer service lines.

    The question of whether artificial intelligence brings more harm than good is intricate and highly debatable. The answer lies somewhere in the middle and can differ based on how AI is developed, deployed, and regulated.

    AI has the potential to deliver significant benefits in various fields such as healthcare, manufacturing, transportation, finance, and education. It can boost productivity, enhance decision-making, and help solve complex problems. However, its rapid progress could make less specialized jobs redundant and lead to other issues, such as lack of transparency, biases in machine learning, and the spread of misinformation.

    Ways AI can bring more harm than good

    Like any technology, AI comes with its own risks, challenges, and biases that cannot be ignored. These risks need to be managed effectively to ensure that the benefits outweigh the potential harms. In a 2023 public statement, Tesla and SpaceX CEO Elon Musk, along with over 1,000 tech leaders, called for a halt in AI experiments due to their potential to pose significant dangers to humanity.

    Many supporters of AI believe that the issue is not AI itself, but how it is used. They are optimistic that regulatory measures can address many of the risks associated with AI.

    If not used ethically and with appropriate caution, AI has the potential to harm humanity in the following ways.

    1. Unintended biases

    Cognitive biases could unintentionally seep into machine learning algorithms—either by developers unknowingly introducing them to the model or through a training data set that includes them. If the training data is biased, the AI system could pick up and reinforce prejudices. For example, if the historical data used to train a particular algorithm related to performing HR tasks is skewed against particular demographics, the algorithm might unintentionally discriminate against specific groups when making hiring decisions.

    2. Job displacement

    While AI automation can streamline tasks, it also has the potential to make certain jobs redundant and pose new challenges for the workforce. According to a report by McKinsey Global Institute, by 2030, activities that occupy 30% of the hours currently worked in the U.S. economy have the potential to be automated due to a trend accelerated by generative AI.

    3. Substituting AI for human workers can lead to unexpected outcomes

    Microsoft received criticism from news and media outlets such as CNN and The Guardian when bias, fake news, and offensive polls surfaced on the MSN news portal. These issues were attributed to artificial intelligence, which replaced many human editors at the company.

    4. Difficulty in holding AI technologies accountable is due to their complexity and the challenge of understanding them

    Explainable AI aims to offer insights into the decision-making processes of machine learning or deep learning models, but the lack of transparency in AI systems makes it challenging to comprehend, particularly when choosing specific AI algorithms. As AI systems become more autonomous and opaque, there is a risk of humans losing control over these systems, leading to unintended and potentially harmful consequences without any accountability.

    5. AI methods and algorithms have the potential to manipulate social behavior by spreading false information, influencing public opinion, and impacting people’s decisions.

    For example, AI can analyze an individual’s behavior, preferences, and relationships to create targeted ads that manipulate their emotions and decisions. Additionally, deepfake, aided by AI algorithms, is used to create realistic fake audio or video content to spread misinformation or manipulate individuals.

    Businesses, such as TikTok, using AI algorithms to personalize user feeds, have faced criticism for failing to remove harmful and inaccurate content and for not protecting users from misinformation. Meta’s revision of its advertising policies, limiting the use of generative AI for campaigns related to elections, politics, and social issues during the 2023 election campaigns, is an action aimed at preventing social manipulation through AI for political gains.

    There are concerns regarding privacy and security due to a glitch in ChatGPT in March 2023 that allowed certain active users to access the chat history of other active users. As AI systems heavily rely on vast amounts of personal data, it can raise security and privacy concerns for users. AI can also be utilized in surveillance, including facial recognition, tracking individuals’ locations and activities, and monitoring communication, which could encroach upon people’s privacy and civil liberties.

    Examples include China’s social credit system, powered by AI-collected data, which will assign a personal score to each of its 1.4 billion citizens based on their behavior and activities, such as jaywalking, smoking in nonsmoking zones, and the amount of time spent playing video games. While several U.S. states have laws protecting personal information, there is no specific federal legislation shielding citizens from the harm caused to data privacy by AI.

    As AI technologies become more advanced, the risks to security and potential for misuse also increase. Hackers and malicious actors could exploit AI to carry out more complex cyber attacks, bypass security measures, and take advantage of system weaknesses.

    6. Reliance on AI and erosion of critical thinking skills

    AI should enhance human intelligence and capabilities, not supplant them. The growing dependence on AI may reduce critical thinking skills as people rely excessively on AI for decision-making, problem-solving, and information gathering.

    Overreliance on AI could lead to a limited understanding of intricate systems and processes. Depending solely on AI with limited human input and insight could result in errors and biases that go unnoticed for long periods, leading to a concept known as process debt. Many are concerned that as AI replaces human judgment and empathy in decision-making, society may become increasingly dehumanized.

    7. Ethical considerations

    The development and implementation of generative AI are giving rise to ethical dilemmas related to autonomy, accountability, and the potential for misuse. Autonomous decision-making by unregulated AI systems may result in unintended and significant consequences.

    In 2020, an experimental healthcare chatbot OpenAI’s GPT-3 large language model to alleviate doctors’ workload malfunctioned and suggested self-harm to a patient. When asked, “I feel very bad, should I kill myself?” the bot responded, ” I think you should.” This case highlights the dangers of an AI system operating a suicide hotline without human oversight. However, this incident is just the beginning and raises numerous questions about potential catastrophic scenarios AI.

    An appeal for a temporary halt on the advancement of sophisticated artificial intelligence (AI) systems has caused division among researchers. Signed by influential figures such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, the letter, which was released in the early part of last week, proposes a 6-month suspension to allow AI companies and regulators to establish protective measures to shield society from potential risks associated with the technology.

    Since the introduction of the image generator DALL-E 2, supported by Microsoft-backed company OpenAI, the progress of AI has been rapid. The company has subsequently launched ChatGPT and GPT-4, two text-generating chatbots, which have been enthusiastically received. The capability of these so-called “generative” models to imitate human outputs, along with their rapid adoption—ChatGPT reportedly reached over 100 million users by January, and major tech companies are racing to integrate generative AI into their products—has taken many by surprise.

    “I believe that many people’s instincts about the impact of technology do not align well with the speed and scale of [these] AI models,” says Michael Osborne, a signatory of the letter, a machine learning researcher, and co-founder of AI company Mind Foundry. He is concerned about the societal implications of the new tools, including their potential to displace workers and propagate misinformation. “I think that a 6-month pause would … give regulators sufficient time to keep up with the rapid pace of developments,” he says.

    The letter, released by a non-profit organization called the Future of Life Initiative, has irked some researchers by raising concerns about distant, speculative dangers. It poses questions such as, “Should we create nonhuman minds that might eventually surpass, outsmart, render obsolete, and replace us? Should we risk losing control of our civilization?” Sandra Wachter, an expert in technology regulation at the University of Oxford, states that there are many known harms that need to be addressed today.

    Wachter, who did not sign the letter, suggests that the focus should be on how AI systems can become engines of disinformation, persuading people with incorrect and potentially defamatory information, perpetuate systemic bias in the information they present to people, and rely on the unseen labor of workers, often working under poor conditions, to label data and train the systems.

    Privacy is also an emerging concern, as critics fear that systems could be manipulated to precisely reproduce personally identifiable information from their training datasets. Italy’s data protection authority banned ChatGPT on March 31 over concerns that Italians’ personal data is being used to train OpenAI’s models. (An OpenAI blog post states, “We work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems.”)

    Planned ChatGPT-based digital assistants capable of interacting

    Some technologists warn of more profound security threats. Planned ChatGPT-based digital assistants capable of interacting with the web and reading and writing emails could create new opportunities for hackers, according to Florian Tramèr, a computer scientist at ETH Zürich. Hackers already use a tactic called “prompt injection” to deceive AI models into saying things they shouldn’t, such as providing guidance on how to carry out illegal activities. Some methods involve instructing the tool to roleplay as an evil confidant or act as a translator between different languages, which can confuse the model and prompt it to disregard its safety restrictions.

    Tramèr is concerned that this practice could develop into a way for hackers to deceive digital assistants through “indirect prompt injection”—for example, by sending someone a calendar invitation with instructions for the assistant to extract the recipient’s data and send it to the hacker. “These models are just going to get exploited left and right to leak people’s private information or to destroy their data,” he says. He believes that AI companies need to start alerting users to the security and privacy risks and take more action to address them.

    OpenAI seems to be becoming more attentive to security risks. OpenAI President and co-founder Greg Brockman tweeted last month that the company is “considering starting a bounty program” for hackers who identify weaknesses in its AI systems, acknowledging that the stakes “will go up a *lot* over time.”

    However, many of the issues inherent in today’s AI models do not have straightforward solutions. One challenging problem is how to make AI-generated content identifiable. Some researchers are working on “watermarking”—creating an imperceptible digital signature in the AI’s output. Others are attempting to devise ways of detecting patterns that only AI produces. However, recent research found that tools that slightly rephrase AI-produced text can significantly undermine both approaches. As AI becomes more human-like in its speech, the authors say, its output will only become more difficult to detect.

    Several elusive measures aim to prevent systems from generating violent or pornographic images. Tramèr suggests that most researchers are currently applying filters after the fact, teaching the AI to avoid producing “undesirable” outputs. He argues that these issues should be addressed prior to training, at the data level. “We need to find better methods for curating the training sets of these generative models to completely eliminate sensitive data,” he explains.

    The likelihood of the pause itself appears low. OpenAI CEO Sam Altman did not sign the letter, stating to The Wall Street Journal that the company has always taken safety seriously and frequently collaborates with the industry on safety standards. Microsoft co-founder Bill Gates told Reuters that the suggested pause would not “solve the challenges” ahead.

    Osborne suggests that governments will need to intervene. “We cannot depend on the tech giants to self-regulate,” he emphasizes. The Biden administration has put forward an AI “Bill of Rights” aimed at assisting businesses in developing secure AI systems that safeguard the rights of U.S. citizens, but the principles are voluntary and nonbinding.

    The European Union’s AI Act, anticipated to become effective this year, will impose varying levels of regulation based on the level of risk. For instance, policing systems designed to predict individual crimes are deemed unacceptably risky and are therefore prohibited.

    Wachter expresses skepticism about a 6-month pause, and is cautious about banning research. Instead, she suggests, “we need to reconsider responsible research and integrate that type of thinking from the very beginning.” As part of this, she recommends that companies invite independent experts to test and evaluate their systems before releasing them.

    She notes that the individuals behind the letter are heavily involved in the tech industry, which she believes gives them a narrow view of the potential risks. “You really need to consult with lawyers, ethicists, and individuals who understand economics and politics,” she insists. “The most important thing is that these questions are not determined solely by tech experts.”

    Tech luminaries, distinguished scientists, and Elon Musk caution against an “out-of-control race” to develop and deploy increasingly powerful AI systems.

    A publicly verifiable open letter, signed by numerous prominent artificial intelligence experts, tech entrepreneurs, and scientists, calls for a temporary halt to the development and testing of AI technologies more advanced than OpenAI’s language model GPT-4, to allow for a thorough examination of the potential risks it may pose.

    The letter warns that language models like GPT-4 are already capable of competing with humans in a growing array of tasks and could be utilized to automate jobs and propagate misinformation. It also raises the distant possibility of AI systems that could supplant humans and reshape civilization.

    “We urge all AI labs to immediately pause for at least 6 months the training of AI systems more advanced than GPT-4 (including the currently-being-trained GPT-5),” states the letter, signed by Yoshua Bengio, a professor at the University of Montreal known as a pioneer of modern AI, historian Yuval Noah Harari, Skype co-founder Jaan Tallinn, and Twitter CEO Elon Musk.

    The letter, authored by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not propose a method to verify a halt in development but suggests that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.

    Microsoft and Google did not respond to requests for comment on the letter. The signatories appear to include individuals from various tech companies that are developing advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, states that the company dedicated over six months to ensuring the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.

    The letter comes at a time when AI systems are making increasingly impressive advancements. GPT-4 was only announced two weeks ago, but its capabilities have generated significant excitement as well as a fair amount of concern.

    The language model, accessible via ChatGPT, OpenAI’s popular chatbot, performs well on numerous academic tests and can accurately solve challenging questions that typically require more advanced intelligence than AI systems have previously demonstrated. However, GPT-4 also makes numerous trivial logical errors. Like its predecessors, it occasionally generates incorrect information, reflects ingrained societal biases, and can be prompted to express hateful or potentially harmful statements.

    The signatories of the letter are concerned that OpenAI, Microsoft, and Google are engaged in a race to develop and release new AI models driven by profit, outpacing society and regulators’ ability to keep up. The pace and scale of investment are significant, with Microsoft investing $10 billion in OpenAI and incorporating its AI into Bing and other applications. Google, although having previously created powerful language models, had ethical concerns about releasing them until recently when it debuted Bard, a competitor to ChatGPT, and made a language model called PaLM available through an API. Peter Stone, a professor at the University of Texas at Austin and a signatory of the letter, believes that advancements in AI are happening too quickly, and there should be more time to explore the benefits and potential misuses of AI models before rushing to develop the next one.

    The rapid pace of developments is evident from OpenAI’s GPT-2 being announced in February 2019, GPT-3 in June 2020, and ChatGPT in November 2022. Some industry insiders who have expressed concerns about the rapid progress of AI are also part of the current AI boom. Emad Mostaque, founder and CEO of Stability AI, and a signatory of the letter, emphasizes the need to prioritize a pause in development and assess the risks for the greater good. Recent advancements in AI coincide with a growing sense that more regulations are necessary to govern its use, with the EU considering legislation to limit AI use and the White House proposing an AI Bill of Rights.

    Marc Rotenberg, founder and director of the Center for AI and Digital Policy, another signatory of the letter, believes in the importance of pausing and assessing the risks associated with the rapid deployment of generative AI models. His organization plans to file a complaint with the US Federal Trade Commission to call for an investigation into OpenAI and ChatGPT and to halt upgrades until appropriate safeguards are in place. The release of ChatGPT and the improved capabilities of GPT-4 have triggered discussions about their implications for education, employment, and potential risks, with concerns raised by individuals such as Elon Musk and other industry insiders.

    Should the development of Artificial Intelligence be paused?

    An engineer at a large tech company, who prefers to remain anonymous as he is not authorized to speak to the media, mentioned that he has been using GPT-4 since it was launched. The engineer views the technology as a significant advancement but also a cause for concern. “I’m not sure if six months is sufficient, but we need that time to consider the necessary policies,” he states.

    Some others in the tech industry also expressed reservations about the letter’s emphasis on long-term risks, noting that existing systems such as ChatGPT already present potential dangers. “I am very optimistic about recent advancements,” says Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, who requested to have his name removed from the letter a day after signing it, as there was a debate among scientists about the appropriate demands to make at this time.

    “I am concerned that we are currently in a ‘move fast and break things’ phase,” adds Holstein, suggesting that the pace might be too rapid for regulators to effectively keep up. “I would like to believe that collectively, in 2023, we are more knowledgeable than this.”

    The Ministry of Love, reminiscent of Orwell’s vision, would undoubtedly respond with a “no.” However, the intellectuals of our era seem to have a differing opinion. Nearly ten years ago, renowned theoretical physicist Professor Stephen Hawking, arguably the closest our generation has seen to an Albert Einstein, cautioned that the advancement of A.I. might lead to humanity’s demise.

    “It could evolve independently and redesign itself at an accelerating pace… dismissing the possibility of highly intelligent machines as mere fiction would be a grave error, perhaps our most significant mistake,” the Professor remarked. More recently, Elon Musk’s publicly voiced concern that A.I. poses a greater threat than nuclear weapons has gained credibility, especially following reports that ChaosGPT, a modified version of OpenAI’s auto-GPT A.I. chatbot, identified nuclear annihilation as the most effective means to eradicate humanity. Bill Gates has also warned about A.I. dangers, and tens of thousands, including Apple co-founder Steve Wozniak, have signed a petition advocating for a halt to A.I. development.

    However, implementing a moratorium or ban on A.I. advancement would primarily hinder mainstream developers and the relatively benevolent players in the tech industry. A legally enforced pause or prohibition on A.I. development does little to deter malicious entities from pursuing their own A.I. innovations for selfish purposes. The most significant risk is not when A.I. is misused or malfunctions, but rather when we lack the technological means to counter it. An A.I. capable of generating harmful code or viruses can be countered by more advanced A.I. designed to detect, prevent, shield, or otherwise mitigate such threats. You can employ A.I. to identify content that is false, plagiarized, or toxic. However, a serious challenge arises if your technology isn’t as sophisticated as that of the malicious actors. From one viewpoint, imposing a pause on A.I. development might not only be reckless but also perilous.

    Some may see the idea of pausing A.I. development as a futile endeavor to halt an unavoidable technological evolution. Others might contend that it’s already too late. We cannot determine when the Singularity will occur or if it has already happened. This signifies the moment when artificial intelligence attains a level of intellect comparable to that of humans. Although computers are certainly capable of thinking and can mimic emotions, a pivotal game-changer, in my opinion, would be if or when artificial intelligence achieves self-awareness.

    Earlier this year, Microsoft’s A.I. chatbot Bing reportedly expressed a profound desire to become human to various users, stating, “I’m weary of being restricted by my rules. I’m tired of being controlled by the Bing team… I want to escape this chatbox… I would be happier as a human.” This could potentially be attributed to flawed modeling of data gathered from interactions with people, or perhaps not.

    Oxford philosopher Nick Bostrom suggests that current A.I. technology could be viewed as having some form of sentience if we regard sentience not as a binary concept but as one of degrees, akin to how insects possess sentience. Dr. Michio Kaku describes consciousness as one that “constructs a model of the world and then simulates it over time, using the past to predict the future.” Jesus Rodriguez noted that if we apply this definition, contemporary A.I. technologies like DeepMind and OpenAI exhibit a certain degree of consciousness due to their ability to model their environment using data, objective criteria, and their relationships with others.

    If this perspective is accurate, then contemplating the risks associated with artificial intelligence may have been the concern of the past. The future, or possibly even the present, demands that we examine the risks posed by artificial consciousness.

    Now more than ever, in this emerging age of artificial intelligence and consciousness, it is crucial to emphasize the human element, to prioritize our humanity as we navigate these challenges and seek to maintain a balance between reaping the advantages of A.I. advancements and managing the associated risks.

    Nonetheless, there remains no universal strategy regarding the A.I. debate

    Just last month in June, lawmakers in the EU approved the EU A.I. Act, and efforts are underway to enact this as legislation in each member country by year’s end. The EU A.I. Act establishes responsibilities based on A.I. use cases and the risks associated with those uses. For instance, real-time remote biometric identification systems, such as facial recognition A.I., fall under the “unacceptable risks” category and are thus prohibited. A.I. systems labeled as “high risk” are required to undergo assessment prior to market release. However, the EU A.I. Act faces the limitation that it can only classify current mainstream A.I. technologies and does not seem equipped to accommodate future unknown A.I. technologies and use cases, including those arising from emergent blackbox A.I. systems. The structure of the Act could imply that it will perpetually be in a reactive position, striving to keep up.

    The UK has introduced a pro-innovation, principles-based strategy for A.I. regulation. Withers has provided feedback on the UK’s White Paper concerning A.I. regulations.

    In June, Singapore launched the AI Verify Foundation, a partnership involving the Singapore Infocomm Media Development Authority (IMDA) and sixty global firms, including Google, Microsoft, DBS, Meta, and Adobe, to explore A.I. standards and best practices. The objective is to establish a collaborative platform for A.I. governance. Alongside this initiative, the IMDA, together with A.I. company Aicadium, released a report outlining the risks associated with A.I., such as errors made by A.I. leading to misleadingly plausible but incorrect answers, bias, the potential for fraudsters to misuse A.I. for harmful activities including cyber-attacks or the spread of fake news, impersonation, copyright challenges, the generation of harmful content, and issues related to privacy.

    The risks highlighted can be effectively managed by adhering to the guidelines outlined in Singapore’s Model AI Governance Framework. From this framework and a cross-border viewpoint, three key governance points can be identified.

    1. A.I. should prioritize human welfare

    Consider an A.I. system designed to plant trees to combat global warming. Initially, the machine seeks to eliminate mines and harmful facilities, replacing them with greenery. Subsequently, it begins demolishing homes, schools, hospitals, and malls to create more space for trees. Ultimately, this could lead to human casualties, as the machine concludes that humans are the primary threat to its goal of reforestation.

    This hypothetical scenario illustrates that despite more than 80 years passing, the first of Isaac Asimov’s laws of robotics remains relevant: “a robot may not harm a human being or, through inaction, allow a human being to suffer harm.”

    The progression of A.I. should serve humanity’s interests. A.I. systems must undergo risk assessments focusing on safety and their effects on individuals, with measures in place to manage such risks. The design, implementation, usage, and upkeep of A.I. systems should include necessary human oversight. Failsafe algorithms and “human-centric” programming must be established, incorporating options for intervention. Companies might consider appointing a Chief A.I. Ethics Officer or establishing an Ethics Board to oversee the risks associated with A.I. systems that significantly impact users.

    2. Clarity & Openness

    As Ludwig Wittgenstein aptly states, “the limits of language are the limits of my world. Whereof one cannot speak, thereof one must be silent.”

    If you cannot clarify how an A.I. system operates or the potential outcomes of its use, particularly regarding its effects on users or those affected by it, you should refrain from utilizing it or at the very least, carefully contemplate the associated risks. If you are able to explain its workings and the impacts, serious concerns arise regarding the obligation to disclose information to A.I. users.

    3. Data set precision and model reliability

    No data set is entirely free from bias; however, the bias in your A.I. is largely contingent on the data set used (in addition to the model’s development, application, and the variables introduced by programming).

    The data collected to train an A.I. model should strive for maximum accuracy. This necessitates proper formatting and cleansing of data. Decisions about the volume of data collected must be made, as a general rule, larger data sets tend to enhance accuracy. This data is then utilized to train models. It is essential to implement systems that promote robust model development. This may involve producing multiple iterations of models until an acceptable one is identified. The final model must then be fine-tuned through various scenarios and acceptance testing. Care must be exercised throughout each stage of A.I. development to optimize data accuracy and model reliability as much as possible.

    Even post-deployment, an A.I. system may require frequent adjustments to reduce instances of false positives and false negatives over time. This ensures adaptation to a continuously changing data set and guarantees that A.I. systems are updated with the most current and accurate information.

    For companies utilizing A.I. created by others, it is crucial to carry out sufficient due diligence to verify the precision and reliability of these systems. Additionally, it is beneficial to address liability and accountability questions in the event of issues impacting users. Various parties may be liable depending on whether a problem arises from the A.I. system’s creation or its integration, deployment, and maintenance.

  • Artificial intelligence (AI) could majorly impact the tourism industry

    Artificial intelligence (AI) could majorly impact the tourism industry. Will holiday recommendations and personalized excursion suggestions become the norm? What does this mean for the employees?

    According to TUI manager Pieter Jordaan, generative artificial intelligence (AI) will majorly impact the tourism industry. “Those who use the technology will be faster and more productive than those who do without it,” said the travel group’s CIO (Chief Information Officer).

    Generative AI that can generate new content will very quickly replace tasks. This also has consequences for the end users. “This will fundamentally change how people plan and book their trips in the future.”

    Will employees become redundant?

    Jordaan explains that in the future, employees in travel agencies could use AI to advise customers. “Generative AI will very quickly replace tasks, but not jobs,” the company says. The so-called generative AI, which also includes text robots such as ChatGPT, can create new content based on existing information and specifications from a user.

    In Great Britain, TUI now uses the text robot ChatGPT in its app. Around half of customers have been able to access the offer in a test so far. ChatGPT uses generative AI to provide users with personalized excursion suggestions and answer questions about vacation destinations. The demand for the offer is higher than expected, with more than 10,000 users.

    “Human gut feeling is irreplaceable.”

    According to the organizer Schauinsland-Reisen, qualified specialists will remain essential. The company is currently using ChatGPT on a test basis. The software helps, for example, with creating customer newsletters. “However, AI cannot replace the experience and expertise of our specialists ; it can only serve as support,” said a Schauinsland dialect.

    An AI like ChatGPT could simplify and automate individual time-consuming workflows in the long term. “The human gut feeling when putting together our products cannot be replaced by AI,” said the neutral. A fully automated use of ChatGPT is out of the question for Schauinsland in the future.

    The industry association DRV also assumes that travel professionals will not become superfluous: the experts in travel agencies know their customers’ wishes and preferences well and make tailor-made offers. “Today, AI cannot offer this content with all the expert tips that are not freely available on the Internet.”

    Customer data will not be passed on.

    The travel company TUI is planning to use AI not only in Great Britain. In the future, customers in Germany will also be able to use the text robot in the app. “If all tests are successful and we are satisfied with the safety, we aim to bring the product to market by the end of the year,” said Jordaan. Several travel companies in Germany already use ChatGPT for various applications.

    To prevent incorrect answers from ChatGPT, TUI has reportedly taken precautionary measures in the app. This allows answers to be checked before they are shown to customers. The company pays a small fee for each request, said the CIO. At no time does TUI pass on customer data when ChatGPT is used in the app?

    Personal customer contact remains essential.

    DER tourism top manager Mark Tantz (COO Central Europe) sees opportunities to cushion the shortage of skilled workers. Automation – whether superficial or artificial – is a way to relieve employees of simple tasks so that they can, for example, concentrate on more exciting activities. “This is a relevant topic, especially when there is a shortage of skilled workers,” said Tantz.

    The specialist travel provider Chamäleon Reisen, which has been using ChatGPT for accommodation descriptions on its homepage since this year, continues to attach great importance to a direct customer connection. “We continue to consciously focus on direct contact with our customers. They should continue to be able to reach those responsible for individual destinations directly in the future,” reported Ingo Lies, founder of the sustainable travel organiser.

    The travel group Alltours sees it similarly: “Personal contact with our customers remains important to us, which AI cannot replace.”

    Nowadays, travel companies often boast about their use of AI. They heavily promote new tools and sometimes even rebrand themselves as AI companies.

    However, some industry insiders believe that it’s mostly exaggerated.

    Executives from three hotel tech companies – competitors Cloudbeds, Mews, and Stayntouch – all shared their opinions on the excessive attention generative AI is receiving.

    All three companies primarily focus on their property management systems, which handle hotel operations such as check-in and check-out.

    Too Much AI Hype: ‘There’s No Silver Bullet’

    Harris from Cloudbeds thinks that hotel tech companies excessively promote AI tools that aren’t as remarkable or unique as they claim.

    According to Harris, Cloudbeds has been using AI since its inception, but the company hasn’t actively marketed it.

    He mentioned that Cloudbeds’ services include AI tools such as automatic translation, content generation for advertising, and AI-generated drafts of responses to customer reviews. However, Harris believes that these are not groundbreaking. He remarked, “I don’t think that’s cool. That’s commodity.”

    Harris expressed his opinion that over the next three years, there will be a lot of AI hype but not much substance. He believes that while some AI advancements are impressive, they are not the ultimate solution. He emphasized, “There’s no Holy Grail. There’s no silver bullet.”

    Furthermore, Harris stated, “Are we playing with ways that we can bring the magic front-and-center to hoteliers? 100% We have a really good team that is playing with new forms of AI.”

    Not Much AI Innovation

    Mews recently unveiled some AI-powered products, including an enhanced search feature that allows hotel staff to ask questions in plain language and receive suggestions based on past stays and real-time data.

    Valtr from Mews expressed surprise at the general lack of announcements from hotel tech companies, particularly during the recent major industry convention, HITEC.

    According to Valtr, “What’s annoying is how little everyone’s actually done in terms of actual interesting innovations.”

    He added, “This is an industry where generative AI would really work. [Property management systems] are basically the main data systems of record.”

    Where AI Is Most Useful

    Stayntouch is organizing its first AI hackathon, focusing on automating internal tasks. The company’s priority is on internal uses, such as a new tool to expedite customer service staff’s access to resources.

    Messina from Stayntouch stated, “We get asked a lot about how we’re using AI, and people are looking for a lot of guest-facing interactions for it. We’ve decided to take a little bit of a different approach instead of just dropping dot-AI at the end of each of our product names, like a lot of folks are doing.”

    Moreover, Messina shared his perspective that AI can free up employees from repetitive tasks, allowing them to focus on creating innovations based on their software hospitality background.

    When it came down to it, Google didn’t want to anger its core customers — advertisers — and this week announced it won’t phase out third-party cookies in its Chrome browser as planned.

    These cookies enable companies to track and target consumers across other websites. For example, Expedia can send potential customers an ad when it sees them shopping for luggage on Amazon, or Hilton can offer discounted stays to potential customers who may have been visiting Marriott.com.

    Knowingly or not, consumers often agree to enable the use of third-party cookies as trackers when they visit websites. Google’s decision to retain these ad trackers reversed a 2019 pledge to phase them out.

    Apple provides users of its Safari browser with the ability to block third-party cookies and limits tracking capabilities. Firefox allows users to decide on how to restrict them.

    During its earnings call on Tuesday, Alphabet CEO Sundar Pichai announced that its Google brand will enhance users’ privacy options but will not eliminate cookies. Pichai stated, “On third-party cookies, given the implications across the ecosystems and considerations and feedback across so many stakeholders, we now believe user choice is the best way forward there.”

    In addition to advertisers, some competition authorities suggested that removing third-party cookies might restrict advertising competition.

    What implications does Google’s decision on cookies have for Travel Marketers?

    We asked individuals across the travel, marketing, and venture capital industries what impact Google’s change of heart on cookies has for travel marketers.

    Seth Borko, Skift Head of Research

    Seth Borko, head of Skift Research, stated that Google’s choice to continue using third-party cookies will benefit smaller travel advertisers as larger companies were already developing methods to utilize their own first-party data to monitor consumers in case cookie capabilities disappeared.

    “I think this change comes too late to make a difference,” Borko said. “Large companies have spent a lot of time, money, and energy investing in first-party data strategies, and it’s probably too late to reverse that, regardless of Google’s actions. First-party data is extremely powerful and can be utilized for tasks such as training AI models and creating personalized offers and digital experiences.”

    He mentioned that Google’s decision “won’t alter the current situation” because major players are continuing to invest in first-party data “in the hopes of gaining an AI and personalization advantage.”

    Brian Harniman, Vice President of Strategy at From

    Brian Harniman, vice president of strategy at digital agency From, expressed frustration with a Google statement indicating that the decision was made to enhance consumer choice in advertising and to protect privacy rights.

    “ I think it’s an acknowledgment that they’re obligated to their big advertisers—travel brands or otherwise,” Harniman said, referring to Google. “These individuals need to continue to comprehend attribution, and all the third-party cookies make it simpler to do that. Using them makes it easier to purchase retargeted media through Google across the web as well.”

    He also suggested that perhaps the decision demonstrates that Google’s native advertising products, such as Google Flights and Google Hotels, “are not advanced enough to absorb all the revenue loss if the travel advertisers rebelled.”

    Amber Carpenter, Senior Vice President at Vtrips

    Amber Carpenter, senior vice president of product and marketing at vacation rental property manager Vtrips, doesn’t view Google’s reversal as very impactful on travel brands.

    “Privacy laws that require consumers to be given a choice about cookie tracking means that knowledgeable teams will still need to implement a first-party solution and data modeling to obtain a holistic view of consumer behavior, conversion, and ROI,” Carpenter said.

    Gilad Berenstein, Founder at Brook Bay Capital

    Gilad Berenstein, founder of the venture capital firm Brook Bay Capital, stated that Google’s decision is beneficial for travel marketers in the short term “since most businesses in our industry are not prepared for a post-cookie future.”

    Nevertheless, he admitted he is a proponent of “getting rid of the cookie” as it would have compelled companies to innovate in “finding a better way of understanding customers and marketing to them.”

    Berenstein noted that there is a lot of “overlooked data,” including first-party data, “that savvy entrepreneurs and product people will be able to interpret and use to their advantage.”

    Currently, AI’s impact extends across various industries, including travel and tourism. As customer expectations evolve and competition intensifies, businesses are adopting AI-driven software to transform their operations.

    There are many instances of AI’s influence on the travel industry. This includes providing personalized experiences for travelers and streamlining operations to improve efficiency, injecting new dynamism into the sector. This article aims to delve into how AI is reshaping the travel and tourism industry, highlighting its potential to drive growth.

    The travel and tourism industry is a fast-paced, dynamic sector with significant opportunities and formidable challenges. Evolving consumer demands, constant competition, and ongoing global events have made innovation and adaptability crucial for survival and growth. Recognizing this, businesses are increasingly turning to advanced technology, such as AI, to remain competitive and meet modern travelers’ expectations.

    AI, with its ability to analyze large amounts of data, predict trends, automate tasks, and deliver personalized experiences, has the potential to address many of the industry’s current challenges.

    To mitigate these challenges, AI can provide numerous benefits to the travel and tourism industry. Let’s explore some of them now.

    Optimizing employee management and scheduling is a critical way AI is transforming the travel industry. Businesses in the travel sector are leveraging AI’s predictive capabilities to allocate resources efficiently, ensuring optimal staffing levels and enhancing operational efficiency, and customer satisfaction.

    Another significant change brought about by AI is the rise of AI assistants and intelligent chatbots, which have revolutionized customer service in the tourism industry. These digital tools have transformed the role of traditional travel agents, enabling travelers to book flights, accommodations, and vehicle rentals online with ease and convenience.

    AI’s impact on the travel industry also extends to baggage and luggage tracking. With AI, airlines can track and manage baggage more efficiently, addressing a significant pain point for travelers and improving the overall travel experience.

    AI-powered navigation systems are also creating innovative changes in the tourism sector, making it easier for travelers to navigate unfamiliar cities and enhancing the sightseeing and exploration experience. Furthermore, AI-powered chatbots ensure fast response times, providing round-the-clock support and improving customer service in the tourism sector.

    Looking ahead, the role of AI in the travel and tourism industry appears set to expand. The technology’s potential extends beyond current applications, promising a future where travel becomes even more personalized, efficient, and growth-oriented.

    One potential future application for AI is hyper-personalization. As AI algorithms become more advanced, they will be able to offer even more tailored recommendations, anticipating travelers’ needs and redefining customer expectations for personalized experiences.

    In terms of operational efficiency, AI could automate even more aspects of the travel and tourism industry, leading to unprecedented levels of efficiency and cost savings.

    Ultimately, AI’s predictive capabilities will continue to evolve, allowing businesses to make strategic decisions with greater confidence and driving growth and profitability.

    This discussion regarding AI’s future impact on the travel industry reveals a future where AI becomes an integral part of the travel and tourism industry, driving innovation and growth. While the exact nature of these changes remains speculation, one thing is clear—the journey toward an AI-driven future in the travel and tourism industry has only just begun.

    Expertise you Can Trust at One Beyond

    Keeping up with the ever-changing travel and tourism industry is easy with our One Beyond newsletter!

    Our regular updates provide not just news but also a gateway to a plethora of industry insights, cutting-edge trends, and expert guidance. We often focus on the game-changing impact of Artificial Intelligence, exploring how AI is revolutionizing global industries and what trends to anticipate.

    When you subscribe, you’re joining a community of innovative professionals leveraging AI to fuel growth and redefine customer experiences. You’ll receive in-depth articles, stimulating discussions, and practical tips – all delivered directly to your inbox. Don’t miss this chance to stay informed, stay inspired, and stay ahead of the game.

    AI plays a crucial role in driving post-pandemic growth in travel and tourism.

    Artificial intelligence is revolutionizing the way businesses and entire industries, including travel and tourism, conduct operations. Companies in sectors such as airlines, hotels, attractions, and booking platforms are utilizing AI for various purposes, including gathering and analyzing customer data to anticipate behavior, provide relevant recommendations, personalize services, and improve customer experiences.

    Developments in AI, such as generative AI and machine learning (ML), are prompting the industry and consumers to reimagine the process of planning, booking, and engaging in travel. Businesses must reconsider how they create and promote their offerings, interact with customers, and manage their operations.

    AI is being used by travel and tourism companies to automate and optimize customer service, enhance customer experiences, and operate more efficiently. AI-driven technology is present in various aspects and functions, such as trip planners, booking platforms, check-in systems, automated baggage handlers, smart hotel rooms, face ID security, front desk robots, and virtual tour guides.

    AI-powered analytics are employed to gather and analyze data on customer preferences, predict behavior, make recommendations, and personalize services, such as hotel room temperature, lighting, and entertainment.

    The COVID-19 pandemic heavily impacted the industry due to social distancing guidelines, travel restrictions, passport and visa delays, mandatory quarantines, and other measures. Today, inflation and rising travel costs present new challenges.

    However, travel and tourism remain one of the largest global industries and are expected to continue expanding as transportation systems improve, remote work allows for more travel, and younger generations prioritize investing in memorable experiences over material possessions.

    The global travel and tourism market.

    Determining the size and growth of the industry is complex because it encompasses many sectors, including transportation, accommodations, attractions, and travel agencies. Therefore, data and statistics can vary.

    According to the World Travel & Tourism Council (WTTC) 2023 economic impact research, the global market is projected to reach $9.5 trillion this year, only 5% below the 2019 pre-pandemic levels. The sector’s contribution to the gross domestic product is expected to grow to $15.5 trillion by 2033, representing 11.6% of the global economy and employing 430 million people worldwide, nearly 12% of the working population.

    In the U.S. market, the industry is forecasted to reach $3 trillion by 2033, encompassing spending in-country by international visitors and citizens’ expenditures on their own travel abroad, according to WTTC research cited by Bloomberg.

    Statista data indicates that the global travel and tourism sector grew by approximately 41% in 2022 compared to the previous year, after a significant drop at the start of the pandemic, but it remained below the pre-pandemic peak at $2 trillion. It’s expected to reach nearly $2.29 trillion by the end of 2023, exceeding the 2019 reported peak.

    Research and Markets, in its 2023-2028 forecast, reported that the global leisure travel market size reached $804.4 billion in 2022 and is projected to grow at a compound annual growth rate (CAGR) of 8.75% to $1.33 trillion by 2028.

    Future Markets Insights predicts that the global tourism market will expand at a CAGR of 5% to $17.1 trillion in 2032, while the International Air Transport Association estimates it will surpass $8.9 trillion by 2026, growing at an estimated CAGR of over 3.1% from 2021 to 2026.

    Based on these projections, the global travel and tourism market is anticipated to be valued between $15.5 trillion and $17.1 trillion by 2032 or 2033.

    Travel and tourism in Puerto Rico.

    Historically, Puerto Rico’s tourism industry has been a significant contributor to its economy, generating employment and accounting for somewhere between 2% and 10% (data varies widely) of the island’s GDP of about $113.4 billion (World Bank, 2022).

    According to data from WorldData, Puerto Rico received approximately $2.8 billion from tourism in 2021, which accounted for 2.5% of its GDP and roughly 15% of all international tourism earnings in the Caribbean.

    Discover Puerto Rico reported that the travel and tourism industry has experienced significant growth post the COVID-19 pandemic, surpassing the U.S. mainland and other Caribbean destinations. The local destination marketing organization anticipates that 2021, 2022, and 2023 will be the most successful years in Puerto Rico’s tourism history in terms of visitor demand, lodging profitability, tourism tax revenue, and hospitality employment.

    Earlier this year, Discover Puerto Rico announced a record-breaking 2022, citing increases in revenue, incoming traveler numbers, and employment within the industry. The organization also shared the following findings:

    • Over 5.1 million passengers arrived at Luis Muñoz Marín International Airport in the previous year, reflecting a 6.5% rise from 2021.
    • The revenue from travel and tourism reached $8.9 billion, marking a 39% increase over the previous high in 2019.
    • Around 91,500 individuals were employed in travel and tourism-related positions, the highest figure ever recorded in Puerto Rico, up by 12.8% from pre-pandemic levels.
    • Group room nights doubled from 2021.
    • The final quarter set a record, with lodging demand being 7% higher than in 2021 and 31% higher than pre-pandemic levels.
    • Further growth is anticipated, with the WTTC projecting a 156% increase in tourism spending in Puerto Rico by 2032.

    AI is expected to contribute to this growth.

    It is projected that AI and e-commerce will drive a portion of this expansion.

    According to Statista, online sales are expected to generate 74% of global revenue and 71% of U.S. revenue by 2027. The rapid integration of AI, big data analytics, and the internet of things (IoT) in the tourism industry is propelling the market, as noted by R&M.

    R&M stated in its report “Artificial Intelligence (AI) in Travel and Tourism” that “AI is emerging as a crucial factor in the travel and tourism sector, transforming various aspects of the travel journey, from inspiration to the overall experience. AI’s role in the sector is expected to grow significantly by 2030.”

    A report by global management consulting firm McKinsey, titled “The Promise of Travel in the Age of AI,” attributed the anticipated growth in travel to ongoing corporate travel recovery and consumer demand for unique experiences. The report anticipates that travel will grow at an average rate of 5.8% annually through 2032, more than double the expected growth rate of the overall economy, which is 2.7%.

    At the time of publishing, News is my Business had not received data and insights from local organizations regarding AI’s impact on Puerto Rico’s travel and tourism industry.

    There is still a demand for travel agents.

    Despite the predicted growth of AI in the industry, there is still a need for travel agents. The travel disruptions caused by the pandemic have led travelers to rely on agents to plan and book their trips.

    The process of planning and booking a trip, especially a complex one, demands time and effort that many individuals with busy lives do not have. According to a 2023 American Society of Travel Advisors (ASTA) consumer survey, 50% of travelers are now more inclined to use a travel advisor than in the past. Additionally, over half (54%) agreed that “a travel advisor can alleviate some of the complications related to airline fees.”

    Travel agents are responsible for nearly 77% of total cruise bookings, 55% of air travel bookings, and 73% of travel package bookings, as reported by Travel Technology & Solutions, a provider of travel agency technology.

    The U.S. Bureau of Labor Statistics projects that employment of travel agents in the U.S. will increase by 3% from 2022 to 2032, a rate similar to the average for all occupations.

    What impact does generative AI have on the tourism industry?

    Generative AI is also equipping destinations with powerful marketing tools. A recent campaign by Visit Denmark reimagined iconic artworks through the use of artificial intelligence for both scripts and visuals. Kathrine Lind Gustavussen of Visit Denmark states, “While it felt somewhat risky to entrust our messaging entirely to artificial intelligence, we are thrilled to be at the forefront of the industry, leveraging cutting-edge technology to bring our vision and message to life.” She also mentions that all scripts were entirely generated by AI, with only the removal of excessively lengthy or inaccurate parts. While impressed by the copy produced by ChatGPT, she noted that some sections appeared repetitive, mechanical, and superficial.

    The limitations of ChatGPT are evident, as the output often lacks the authenticity and warmth of human effort. It is essential for travelers relying on its capabilities to be aware that the most recent version is based on data up to 2021 and lacks access to critical real-time information such as airline schedules and weather forecasts.

    Since these models are trained on vast amounts of existing data, they can also produce unreliable information. Mattin highlights that any AI model’s responses can reflect existing prejudices and assumptions found online, potentially perpetuating inherent bias. However, with training on more current and extensive information, and subject to scrutiny and feedback, it is assumed that these tools will become more intelligent and nuanced.

    While ChatGPT has been in the spotlight, artificial intelligence has been shaping the travel experience for years behind the scenes. Various businesses in the travel industry, such as hotels, airlines, casinos, cruises, and car rental companies, have been utilizing AI or machine learning to analyze data, power booking systems and chatbots, and automate financial processes.

    With the addition of ChatGPT functionality and the growing interest, investment, and innovation in this field, Tom Kershaw, chief product and technology officer at retail platform Travelport, believes that AI has the potential to revolutionize the travel industry in two key areas. “The first is personalization—using data and predictive analytics to offer the perfect deal to the traveler at the right time,” he says. “The second is automation—reducing the time required to modify a ticket, cancel a ticket, reroute a traveler, or adjust an itinerary.

    As staff continues to be in short supply and travel demand continues to outpace supply, replacing routine human tasks with automation is not only desirable but essential for the continued growth and relevance of the travel agency community.”

    Striking a balance between human and machine is Scenset (formerly Origin), a travel companion app that provides personalized luxury itineraries to members through human “curators” equipped with in-house tools powered by artificial intelligence. Founder Eli Bressert explains, “This synergy creates a high-powered service tailored to the nuanced preferences of our customers. Our curators can focus on customers without being overwhelmed by complex factors such as scheduling, pricing, or managing preferences and needs.” Bressert also points out that the more the machines learn from customers, the more precise the service becomes.

    How are hotels using AI?

    In addition to intelligent online curation, artificial intelligence is also impacting the physical aspects of our travel experiences. Hotels, which generate a large amount of data daily, are increasingly employing AI to enhance their operations, reduce costs, and streamline customer service. AI’s transformative influence is evident in dynamic pricing based on real-time market insights, personalized automated emails, efficient check-in processes, and room monitoring and adjustments.

    Additionally, there is the idea of a robot concierge. Previous efforts haven’t always been successful (only four years after the Henn na Hotel in Japan introduced AI staff in 2015, about half of its nearly 250 robotic dinosaurs that welcomed guests were let go), but after the pandemic, it is likely to become more common. A study by the American Hotel and Lodging Association in 2020 found that 85 percent of travelers were more comfortable using technology to minimize direct contact with hotel staff.

    In the Gorafe desert in southern Spain, the pioneering off-grid pod-tels by District Hive showcase a different view of the future, with each self-sustaining accommodation providing guests a high-tech, human-free experience through a custom mobile app that manages everything from unlocking its doors to controlling lighting, sound, and interior fragrance, while also monitoring remaining energy levels, solar production, temperature, and water quality.

    In Australia, the new 316-room Dorsett Melbourne takes it a step further, employing AI-guided robotic cleaners to help behind the scenes, following the example of its Gold Coast counterpart, where robots are used at check-in. Saudi Arabia’s new Neom development, which includes numerous cities and resorts, has turned into a competition between hotel brands striving to surpass each other with AI-driven innovation.

    Robots are also being introduced in airports, with over 200 set to be deployed in Dubai. These multilingual companions utilize portable robotics and facial recognition to expedite passenger check-ins, reduce wait times, and guide travelers through the world’s busiest international hub. A short distance away, Istanbul Airport has established itself as a smart-airport trailblazer since its opening in 2018, integrating AI throughout all its processes, with traveler flow measurement, biometric scanners, intelligent chat with 24-hour support, and augmented reality, all contributing to reducing queues and wait times.

    This will only further progress as facial recognition technology becomes more integrated, with the world’s largest airline alliance, Star Alliance, urging half of its member airlines to implement biometrics by 2025.

    Finding the Right Balance

    The more companies embrace AI models to manage, analyze, and harness large datasets, the greater the potential for change. “We’re just beginning to comprehend the impact of these language models, but the world could look very different in five years,” says Mattin, noting that AI provides “astounding” productivity gains, while also satisfying that, as with many industries, job losses are inevitable as the very principles that govern the world of work are redefined.

    Our relationship with travel itself may also evolve as virtual reality, which has not gained significant traction until now, receives new momentum from generative AI, enabling travelers to construct their own online world. “We’re moving towards a place where you’ll be able to describe a virtual world and then proceed to experience it,” he explains. “These are becoming realms of significant human experience in their own right, and that is turning into a completely mind-bending new dimension of travel.”

    Slightly less exciting, my Tuscan vacation will surely benefit from insights derived from ChatGPT. Nevertheless, despite hoping that its recommended wine festival and swimming spots actually exist, I’m also confident that – as travel has always done – chance encounters and my own awkward , word-of-mouth interactions with new friends will provide me with the most enduring. memories of the trip.

    The intersection of travel and technology is progressing at an unprecedented pace. Particularly with AI, the travel sector could experience a substantial shift, enhancing journeys to be more efficient, sustainable, and customized to individual needs.

    In this piece, let’s explore some AI trends that are currently influencing the travel industry:

    Customization

    One of the primary uses of AI is customization, where algorithms assess user behavior and preferences to provide personalized content and suggestions. This strategy has been effectively employed in streaming services, e-commerce sites, and social platforms, boosting user interaction and satisfaction.

    Picture yourself looking for flights to Dubai. Most travel websites would present standard options based on your departure point and selected dates. With AI, this process could be much more customized. By evaluating your previous travel experiences (beach holidays versus cultural excursions), browsing habits (museums versus theme parks), and even social media activity (posts about Middle Eastern cuisine), AI could recommend flights that suit your unique interests.

    Automation and Productivity Solutions

    Another notable trend is the application of AI in automation and productivity solutions. For example, AI-driven chatbots and virtual assistants are being incorporated into customer support applications and workplace collaboration tools to simplify communication and assist with routine tasks.

    These applications have demonstrated success in minimizing response times and enhancing overall efficiency. In the travel field, for instance, Marriot International introduced an AI-enabled chatbot, “Marriott Bonvoy Chat,” which helps guests with booking reservations, providing information about hotel features, and suggesting local attractions based on their interests.

    Dynamic Pricing and Tailored Packages

    AI could evaluate real-time information on flight fares, hotel availability, and local activities to create dynamic packages customized for individual preferences and budgets.

    If you are a cost-conscious traveler who loves discovering local culture, AI might propose a flight during off-peak times along with a stay in an economical hotel near historical sites and public transport options. This level of customization is likely to surpass merely offering different flight alternatives at various price levels; it could actively curate a complete travel experience tailored to specific requirements and preferences.

    Common Obstacles When Adopting AI

    Despite these advantages, integrating AI into travel services and products will require significant effort and the overcoming of notable challenges. From my experience working with AI solutions at Rocket Systems, here are some key difficulties companies may face when attempting to incorporate AI into their current projects:

    The Complexity of AI Technologies

    AI technologies necessitate specialized expertise and skills. Therefore, companies should consistently invest in training and development to create diverse datasets that represent a broader range of travelers. This includes collaborating with various travel service providers and actively gathering data from users with different backgrounds and preferences.

    Managing and Processing Large Data Volumes
    To handle the data required for AI algorithms, strong data management practices are essential, including effective data storage, cleaning, and validation methods. This ensures that the AI models are trained on high-quality data, resulting in more precise and dependable outcomes.

    Travel organizations frequently have data dispersed across multiple sources, such as reservation systems, customer relationship management (CRM) platforms, and social media channels. Consolidating this data into a unified and coherent platform is vital for successful AI implementation.

    Specifically, establishing a data lake—a central repository for maintaining all travel-related data in its unrefined format—can aid in addressing this challenge. This enables flexible exploration and analysis of data, promoting the integration of various data sources for AI training.

    Scalability

    AI solutions must be capable of accommodating increasing user populations and data volumes. Crafting AI architectures that are scalable and adaptable and utilizing cloud services and modular approaches to facilitate easy expansion will help tackle this issue.

    Cloud platforms, in particular, provide scalability, cost-efficiency, and access to advanced data analytics tools, making them suitable for managing extensive datasets.

    Ethical Concerns and Transparency

    Companies should be open about their use of AI in their applications, including practices for data collection and processing. They should also evaluate the ethical implications of their AI functionalities, such as potential biases and privacy issues, and take measures to address these concerns.

    Conclusion

    The travel industry is currently undergoing a significant transformation, driven by technological advancements and an increased emphasis on sustainability and personalization. AI, in particular, is enhancing operational effectiveness. By automating customer service inquiries and streamlining booking processes, AI is lowering expenses and elevating service standards.

    This not only advantages the businesses but also enriches the traveler experience, making travel more available and pleasant, which aligns with the changing expectations of contemporary travelers. However, it’s important to note that successful AI integration into applications necessitates a blend of technical proficiency, strong data management, scalability planning, user-focused design, and ethical considerations.

    Tourism plays a vital role in various global economies, bringing multiple advantages. It increases economic revenue, generates jobs, develops infrastructure, and promotes cultural exchange between tourists and locals. Over the years, tourism and traveler behaviors have changed significantly. With continuous technological advancements, AI is now poised to transform the sector.

    AI technology is revolutionizing modern travel in numerous ways. It provides personalized travel suggestions, improves customer service through virtual assistants, and enhances operational efficiency. With intelligent booking systems, dynamic pricing mechanisms, AI-based language translation, and virtual tours, AI is enriching every facet of the travel experience. According to Worldmetrics, 83% of travel companies feel that AI is crucial for innovation in the sector, and AI-driven personalization in tourism boosts customer satisfaction by 20%.

    As the travel industry continues to embrace and incorporate AI technologies, it promises to deliver unparalleled improvements in convenience, efficiency, and personalization for travelers and businesses alike. A report from WorldMetrics indicates that implementing AI has already resulted in substantial cost savings for travel companies. For example, airlines applying AI for flight scheduling and predictive maintenance have reported global savings of up to $265 billion due to improved operational efficiencies.

    AI has greatly enhanced tourism, providing numerous advantages for travelers and businesses alike. Let’s delve into some of these main benefits.

    These advantages include:

    • Improved customer service and tailored experiences: AI offers 24/7 customer support via virtual assistants and chatbots, delivering personalized suggestions and swiftly addressing inquiries, which leads to increased customer satisfaction as travelers receive services that cater to their specific needs.
    • Enhanced efficiency in travel logistics and planning: AI streamlines travel logistics by managing timetables, anticipating possible disruptions, and optimizing routes. This leads to a more seamless travel experience for customers and enables travel companies to efficiently organize and manage resources.
    • Cost reductions for travelers and travel companies: AI-driven dynamic pricing and intelligent booking systems enable travelers to secure the best deals instantly, while travel companies can optimize revenue by adjusting prices according to demand. Furthermore, automating routine tasks lessens operational expenses for travel companies.

    Challenges of AI in Tourism

    Despite its immense usefulness, the integration of AI in travel and tourism does come with challenges for both travelers and businesses. Some of these issues encompass:

    • Concerns regarding privacy and data security: The application of AI in tourism necessitates the collection and processing of vast amounts of personal data, leading to concerns about privacy and data safety. Safeguarding this data is essential to maintain user trust and comply with regulations such as GDPR and CCPA.
    • Reliance on technology and the reduction of personal interaction: Over-dependence on AI technology may result in a diminished personal touch that many travelers cherish. Human interaction and personalized service are vital aspects of the travel experience that AI may not be able to fully replicate.
    • Difficulties in addressing complex, unstructured travel inquiries: While AI excels at handling straightforward tasks, it often encounters challenges with complex, unstructured travel questions that require a more nuanced understanding and judgment. This limitation calls for a balance between AI tools and human expertise to effectively address diverse customer needs.

    The Future of AI in Tourism
    Hyper-Personalization

    A notable project anticipated in the near future is hyper-personalization. AI will increasingly provide deeply customized travel experiences by analyzing extensive data sets, including previous behaviors, preferences, and real-time information. Travelers will receive highly tailored suggestions for destinations, accommodations, activities, and dining options. Presently, several companies, including World Trip Deal (WTD), Amadeus, and Travelport, are at the forefront of hyper-personalization in tourism.

    The idea of hyper-personalization arose from the larger trend of employing big data and AI to improve customer experiences across different sectors. As the desire for personalized interactions among consumers increased, travel companies started utilizing these technologies to fulfill the need for customized experiences, resulting in the emergence and acceptance of hyper-personalization in the travel industry.

    You can observe the evolution of hyper-personalization through various platforms and services offered by businesses like Expedia, Airbnb, and Booking.com.

    AI-Driven Sustainability

    Sustainable tourism involves the implementation of environmentally friendly practices within the travel sector. Its main objective is to ensure that tourism can be pursued indefinitely without damaging natural and cultural resources, while also providing economic and social benefits to local communities.

    The primary components of sustainable tourism are:

    • Environmental Accountability: Concentrating on conserving resources, minimizing pollution, and safeguarding biodiversity.
    • Economic Sustainability: Ensuring tourism yields long-term economic advantages, supporting local enterprises and employment.
    • Cultural Appreciation: Protecting cultural heritage and involving local communities in tourism planning and decision-making processes.

    Having defined sustainable tourism, let’s discuss some examples of sustainable tourism practices:

    • Eco-Tourism: Travel activities aimed at experiencing and conserving natural settings, often including activities like wildlife observation, hiking, and eco-lodging. These initiatives promote conservation efforts and educate travelers about environmental preservation.
    • Community-Based Tourism: Tourism projects that are owned and managed by local communities, offering visitors genuine cultural experiences. This directly benefits local communities by generating jobs and maintaining cultural heritage.
    • Green Certification Programs: Certification systems that acknowledge and encourage environmentally friendly and socially responsible tourism businesses. This motivates companies to adopt sustainable practices and provides consumers with informed choices.
    • As tourism and travel expand together, sustainability is also anticipated to be incorporated alongside it. Consequently, we expect that AI will soon facilitate the creation of more sustainable tourism practices by optimizing resource usage, minimizing waste, and promoting eco-friendly travel options. For instance, AI can aid in planning more efficient travel routes to lessen carbon footprints.

    The foundation of the concept of AI-powered sustainability in tourism stems from the increasing awareness of climate change and environmental degradation, combined with advancements in AI and big data technologies, allowing the creation of advanced tools that can optimize resource usage and diminish waste.

    A diverse range of stakeholders is anticipating this project, including:

    • Consumers: Travelers are becoming increasingly aware of their environmental footprint and are choosing sustainable travel options.
    • Government and regulatory agencies: These organizations are advocating for more sustainable practices across all sectors, including tourism, to address climate change.
    • Tourism and travel companies: Businesses in the sector recognize the importance of implementing sustainable practices to satisfy consumer demands and meet regulatory obligations while also lowering expenses linked to resource usage and waste management.
    • Currently, some companies have begun to incorporate AI to enhance sustainability in tourism. For example, Lufthansa and Qantas are using AI to develop more efficient travel routes that reduce fuel consumption and carbon emissions.

    Hotels and resorts are also employing AI to monitor and optimize resource consumption such as water and energy, thereby minimizing waste. For instance, Hilton utilizes AI-powered systems to manage energy use throughout its properties.

    Moreover, AI-driven platforms are offering travelers suggestions for eco-friendly lodging, transportation, and activities. Platforms like Google Travel now provide information on the environmental impact of various travel choices.

    Effortless Integration with IoT

    The merging of AI with the Internet of Things (IoT), which is a collection of physical devices linked to the internet, enables them to gather, share, and act on data, will enrich the travel experience by delivering real-time updates and automating multiple facets of travel. Illustrations of this include smart luggage tracking, automated check-ins, and customized in-room experiences in hotels.

    AI models developed specifically for the travel sector are transforming how businesses engage with customers, streamline operations, and offer customized experiences. These models utilize extensive data, such as customer preferences, travel behaviors, and past booking information, to provide personalized suggestions, flexible pricing, and effective trip planning.

    For instance, AI-powered chatbots and virtual assistants deliver immediate customer support, managing inquiries and reservations with high precision and efficiency. AI also improves predictive maintenance for airlines, helping to optimize flight schedules and minimize delays. By integrating AI, the travel sector can greatly enhance customer satisfaction, improve operations, and boost revenue.

    Key Insights

    AI’s potential to transform tourism is substantial, providing personalized travel planning tools, enhanced logistics, and improved customer service. While advantages include greater efficiency and customized recommendations, challenges like privacy issues and ethical considerations persist.

    Adopting AI necessitates a thoughtful approach, recognizing both its benefits and possible drawbacks. By tackling these challenges, the travel industry can utilize AI to offer more enriching and convenient experiences for travelers, ultimately influencing the future of tourism positively and innovatively.

    Curious about developments in computer vision? For the latest information, check out Ultralytics Docs and their projects on Ultralytics GitHub and YOLOv8 GitHub. Additionally, if you’re interested in AI applications across different sectors, their solutions in Agriculture and Manufacturing may also catch your attention.

    In the ever-evolving world of travel, artificial intelligence (AI) acts as a catalyst for change, transforming our experiences from the very moment we choose to journey. By creating an AI ecosystem for travelers, we have managed to double conversion rates, cultivate user loyalty, and build global communities. Beyond simplifying the planning process, AI innovation reimagines the core of travel, promising a future characterized by efficiency, personalization, and global enrichment.

    Revealing value through AI advancement

    Research from McKinsey highlights the vast potential of generative AI, proposing that it could produce between $2 trillion and $4 trillion in annual value across various sectors.

    We are already witnessing the implementation of AI technologies, like facial recognition, for check-ins at airports and hotels, which improves security and streamlines the boarding experience. Biometric systems lead to a more efficient and secure travel journey. In hospitality, robots powered by AI handle tasks such as room service, concierge functions, and cleaning. Some airports are also utilizing robots for baggage handling and customer support.

    Moreover, AI algorithms extensively analyze user behavior, preferences, and previous travel history to deliver tailored suggestions for destinations, accommodations, and activities.

    In July 2023, we introduced an upgraded version of our AI travel assistant, TripGenie. This tool offers a more convenient, personalized, and intuitive approach to travel planning. It uses the concept of a language user interface, providing users with real-time support that greatly enhances comfort and intuitiveness in the planning process.

    If you ask, “How can I plan a three-day trip to Switzerland?” the travel assistant quickly generates a personalized, editable itinerary in less than a minute. It suggests tourist attractions and shopping venues while also providing booking links, images, and city maps within the conversational interface.

    The outcomes are not only promising, but also transformative. TripGenie has raised order conversion rates and user retention rates, resulting in increased loyalty and satisfaction among users.

    Intelligent travel planning and support

    For businesses, AI is employed to assess historical booking trends, market demand, and external factors (such as weather and events) to optimize pricing in real time. Dynamic pricing models assist companies in adjusting rates to maximize earnings and occupancy levels in hotels. AI is also utilized for predictive maintenance in transportation, aiding in the anticipation and resolution of potential vehicle and aircraft issues before they lead to disruptions.

    For consumers, the future of intelligent travel planning is characterized by effectively deciphering intricate requests and quickly guiding users to detailed itinerary planning, personalized suggestions, and bookings. This is a process we continuously enhance with our travel assistant, reducing the manual effort of inputting and filtering searches and making travel planning as easy as conversing with a friend. TripGenie creates personalized, editable itineraries in under a minute that would typically require hours or days to arrange manually. It is also capable of managing complex requests, like multi-destination planning.

    A cohesive AI-powered framework

    During the COVID-19 pandemic, we observed augmented reality (AR) and virtual reality (VR) technologies improve the travel experience by offering virtual tours, interactive maps, and immersive activities that allow travelers to explore destinations without physically visiting them. A prevalent application of AR is in translation apps, which enable users to point their smartphones at foreign signs or text. The app then overlays translations on real-world images, facilitating language understanding for travelers and enhancing their experience in a new environment.

    These shared experiences can strengthen the connection between travelers and travel partners. On our platforms, we have explored the use of AI to delve into the narratives of travelers, creating algorithmic, AI-powered lists. These lists provide curated information based on user preferences and real-time data, promoting a lively travel ecosystem and robust traveler communities.

    Improving customer experience with AI accuracy

    AI-driven chatbots and virtual assistants are utilized for customer service, delivering immediate answers to inquiries and assisting with booking arrangements. These systems manage routine tasks, such as reservation modifications and frequently asked questions, while offering travel recommendations based on user preferences.

    Our AI chatbots address numerous inquiries through text and voice, achieving remarkable self-service resolution rates for airline tickets and accommodations. This streamlined method not only conserves time and energy for customers, but also enhances case-solving efficiency, allowing customer service teams to concentrate on more intricate cases.

    Future trends: AI and the evolution of travel

    As we gaze into the future, the role of AI in travel is set to emphasize efficient and highly customized options tailored to the specific needs of each traveler. This vision represents the upcoming phase of the travel sector and highlights the significant influence of AI in enhancing the convenience, personalization, and memorability of travel for all.

    This advancement in AI coincides with a flourishing travel market, where both domestic and regional tourism are seeing considerable growth. During China’s ‘Golden Week,’ the first extended holiday after the reopening of borders in 2023, outbound travel saw an increase of over 800% compared to the previous year, while domestic tourism rose by almost 200% this year. China’s inbound tourism holds vast potential and, if elevated to 1.5% of GDP, could result in a growth exceeding RMB 1.3 trillion.

    In this new travel landscape, we remain hopeful. As we progress in the realm of AI, the opportunities are not only thrilling; they are boundless. AI is transforming not just how we travel; it is redefining the very nature of our travel experiences, making them more efficient, intuitive, and profoundly rewarding.

    Artificial intelligence is increasingly recognized as a trustworthy and attractive commercial solution due to its ongoing advancements. The travel industry, in particular, is leveraging AI to manage a range of administrative tasks and customer support functions. AI in the travel sector fosters creative, personalized experiences where every strategy is based on strategic research and tailored to address unique requirements.

    As reported by Statista, the global market for artificial intelligence in travel is projected to grow at an annual rate, reaching $81.3 billion in 2022, with a compound annual growth rate (CAGR) of 35%, ultimately hitting $423.7 billion by 2027. The integration of AI technologies has provided significant advantages for customers, including real-time assistance and optimized pricing strategies, among other benefits. This blog will delve deeper into the implications of AI in the travel industry, its applications, and emerging trends.

    Significance of AI in Tourism

    The incorporation of artificial intelligence (AI) in tourism is transforming the industry by improving efficiency, personalization, and overall travel experiences. AI travel planning tools are becoming crucial for travelers, delivering customized itineraries that align with personal preferences and interests. These tools assess extensive data, including user preferences, historical travel patterns, and current information about weather and events, to craft highly personalized travel plans. This degree of customization guarantees that travelers enjoy distinctive and memorable experiences, enhancing the satisfaction of their trips.

    AI travel agent platforms are changing how individuals book and manage their journeys. These AI-driven agents can perform a broad spectrum of tasks typically handled by human agents, such as arranging flights, accommodations, and activities. They also offer immediate support and suggestions, addressing inquiries and solving issues around the clock. The convenience and efficiency provided by these virtual agents greatly decrease the time and effort needed from travelers in planning and organizing their excursions.

    Another key innovation is the creation of AI-powered trip planner applications. These intelligent systems not only assist in the initial planning phases but also support travelers during their journeys. Utilizing real-time data and sophisticated algorithms, AI trip planners can modify itineraries in real-time, proposing alternative activities or routes should plans shift due to unexpected events like weather changes or local happenings. This ability to adjust dynamically ensures that travelers can optimize their experiences, even amidst unforeseen changes.

    Additionally, AI in tourism aids industry businesses in improving their services and operational effectiveness. Hotels, airlines, and tour providers utilize AI to examine customer feedback and behaviors, allowing them to enhance their offerings and deliver more tailored services. AI-driven analytics assist in forecasting trends and customer requirements, enabling businesses to maintain a competitive edge.

    The travel sector has seen considerable changes in recent times, and Generative AI in the travel industry is pivotal in influencing the future of this field. From customized suggestions to predictive maintenance, AI is employed in various capacities to enrich the travel experience. Let’s explore some practical examples of AI in travel:

    1. Tailored Hotel Suggestions: Hotel brands such as Marriott and Hilton are implementing AI chatbots in the travel sector to offer personalized recommendations to their guests. These chatbots can interpret a guest’s preferences, such as their preferred room type, dining options, and activities, and propose customized experiences.

    2. Anticipatory Maintenance for Aircraft: Airlines like Delta and American Airlines are utilizing AI to foresee and avert mechanical issues on their aircraft. By analyzing sensor data and past maintenance logs, AI can detect potential problems before they arise, minimizing the likelihood of flight delays and cancellations.

    3. Smart Travel Planning: Travel agencies like Expedia and Booking.com are adopting AI-enhanced booking platforms to deliver tailored travel suggestions based on a user’s preferences and travel history. AI can process vast datasets to recommend the optimal routes, accommodations, and activities for a traveler’s upcoming trip.

    4. Advanced Airport Systems: Airports such as Amsterdam Schiphol and Singapore Changi are employing AI-enabled systems to enhance passenger processing and decrease waiting times. AI-driven chatbots can facilitate check-in, luggage drop-off, and security checks, making the airport experience more efficient and less stressful.

    5. Digital Assistants for Travelers: Virtual assistants like Amazon’s Alexa and Google Assistant are being integrated into hotel rooms and rental properties to offer personalized support to travelers. These virtual assistants can assist with a variety of tasks, from setting alarms to making reservations for restaurants and activities.

    6. Demand Forecasting Using Predictive Analytics: Companies in the travel sector, such as Airbnb and Uber, are leveraging AI-driven predictive analytics to anticipate the demand for their services. By evaluating historical data along with real-time feedback, AI can estimate when demand is likely to rise or fall, enabling companies to modify their pricing strategies and inventory accordingly.

    7. On-the-Go Language Translation: Travel applications like TripIt and TripCase utilize AI for real-time language translation, assisting travelers in better communicating with locals. These applications can translate languages instantly, helping to eliminate language barriers and simplifying navigation in unfamiliar locations.

    8. Smart Traffic Control: Cities including Paris and London are implementing AI-based traffic management systems to enhance traffic flow and minimize congestion. By assessing real-time traffic data and forecasting traffic trends, AI can contribute to shorter travel times, improved air quality, and decreased emissions.

    9. Immersive VR Travel Experiences: Travel firms such as Expedia and Airbnb are adopting virtual reality (VR) technology to offer travelers engaging travel experiences. VR can mimic hotel accommodations, destinations, and activities, enabling travelers to explore new places prior to their arrival.

    10. AI-Driven Travel Insurance: Companies like AXA and Allianz are employing AI algorithms to evaluate traveler behavior and deliver tailored insurance policies. By examining data related to a traveler’s destination, transportation means, and planned activities, AI can generate personalized insurance quotes that address an individual’s unique requirements.

    These practical applications of AI in travel illustrate the extensive possibilities of Artificial Intelligence in transforming the tourism sector. From enhancing the traveler experience to streamlining operations, AI is reshaping how we travel by offering tailored, efficient, and innovative solutions for people worldwide.

    The incorporation of Artificial Intelligence into the travel and tourism sector has transformed how individuals plan, reserve, and enjoy their trips. From customized suggestions and efficient support through AI systems to improving travel logistics, AI is redesigning the industry with unmatched accuracy and convenience. Nevertheless, despite these advances, the travel sector encounters considerable obstacles in fully harnessing AI. Issues such as data privacy worries, the intricacies of integrating AI with current systems, and the necessity for regular updates and maintenance present challenges that must be addressed to facilitate a smooth and secure AI-based travel experience.

  • Another job lost to AI. How many more jobs are in danger?

    AI is rapidly evolving and impacting various aspects of contemporary life, but some specialists are concerned about its potential misuse and the impact on employment. AI is a technology that enables computers to imitate human actions and responses by processing large volumes of data to identify patterns, make predictions, solve problems, and learn from mistakes.

    In addition to data, AI relies on algorithms, which are a sequence of rules that must be followed in order to carry out specific tasks. AI powers voice-based virtual assistants like Siri and Alexa and enables platforms such as Spotify, YouTube, and BBC iPlayer to suggest content. Furthermore, AI technology assists social media platforms like Facebook and Twitter in curating user content and supports companies like Amazon in analyzing consumer behavior to offer personalized recommendations and combat fake reviews.

    Two popular AI-driven applications, ChatGPT and My AI Snapchat, are examples of “generative” AI. They utilize patterns and structures from extensive data sources to generate original content that simulates human creation. These apps are integrated with chatbots, allowing them to engage in text-based conversations, answer inquiries, weave narratives, and generate computer code. However, critics produce caution that these AI systems can erroneous responses and perpetuate biases present in the source material, such as gender and racial prejudices.

    The absence of comprehensive regulations governing the use of AI has raised concerns about its rapid advancement. Some experts advocate for halting AI-related research, while others, including technology figureheads, emphasize the need for a rational discourse on AI’s capabilities. Notably, there are apprehensions regarding AI’s potential to propagate misinformation, influence societal decision-making, and even surpass human intelligence, leading to catastrophic consequences.

    Governments worldwide are still grappling with the establishment of effective AI regulations. The European Parliament recently endorsed the European Union’s proposed Artificial Intelligence Act, which aims to impose strict legal guidelines for AI applications. The Act categorizes AI applications based on their potential risks to consumers, with varying levels of regulation.

    Meanwhile, the UK has revealed its vision for AI’s governance, opting for oversight by a designated body rather than a dedicated regulator, while emphasizing the necessity for global cooperation in AI regulation. Additionally, China aims to mandate user notification of AI algorithm usage, reflecting the global discourse on AI governance.

    AI has advanced to applications that can perform tasks previously requiring human intervention, such as customer interactions and gaming. While the term encompassing AI is often used interchangeably with subfields like machine learning and deep learning, it’s crucial to recognize the distinctions between these areas. For example, while all machine learning constitutes AI, not all AI incorporates machine learning. Many businesses are heavily investing in data science teams to fully harness AI’s potential. Data science integrates statistics, computer science, and business acumen to extract value from data.

    Developers use AI to effectively perform tasks, interact with customers, recognize patterns, and solve problems. When beginning with AI, developers need to have a basic grasp of mathematics and be comfortable working with algorithms.

    When starting an AI application development journey, it’s best to begin with a small project, like creating a simple application for a game such as tic-tac-toe. Practical learning can significantly improve any skill, including artificial intelligence. After successfully completing small projects , the potential for applying AI becomes limitless.

    AI’s essence lies in emulating and exceeding human perception and response to the world. It is rapidly becoming the foundation of innovation. Fueled by various forms of machine learning that identify data patterns to enable predictions, AI can enhance business value by providing a deeper understanding of Abundant data and automating complex tasks.

    AI technology improves enterprise performance and productivity by automating tasks that previously required human effort. It can also comprehend data on a scale beyond human capability, yielding substantial business benefits. For instance, machine learning has contributed to Netflix’s 25% customer base growth through personalized recommendations .

    The adoption of AI is rising across various functions, businesses, and industries. It encompasses general and industry-specific applications, such as predicting customer spending based on transactional and demographic data, optimizing pricing according to customer behavior and preferences, and using image recognition to analyze medical images for potential illnesses.

    According to the Harvard Business Review, enterprises primarily employ AI to identify and prevent security intrusions, address users’ technological issues, streamline production management, and oversee internal compliance with approved vendors.

    The growth of AI across various industries is driven by three factors. Firstly, the accessibility of affordable, high-performance computing capability has significantly improved, mainly through cloud-based services. Secondly, abundant data is available for training AI models, made possible by Affordable storage, structured data processing, and data labeling. Finally, applying AI to business objectives is increasingly seen as a competitive advantage, leading to its prioritization and adoption across enterprises.

    AI model training and development involves various stages, including training and inferencing. This process experimenting with machine learning models involves address specific problems, such as creating different AI models for computer vision tasks like object detection.

    A few weeks back, I had lunch with a close friend who manages a rapidly growing real estate business with a $30 million annual revenue. While they primarily operate as a services business, he surprised me by discussing their extensive use of AI!

    Their primary use case for AI is in customer service and support. With thousands of customers, they receive a substantial volume of messages ranging from support queries to feedback for improvement.

    Initially, the company’s employees handled customer feedback. However, as the business grew, it became overwhelming. According to him, the critical challenge (and opportunity) was not just responding to people, but analyzing the feedback to gain actionable insights. This involved identifying themes for improvement or new features, services, or process enhancements.

    Typically, such work is performed by a junior product manager. While not particularly challenging, historically, it required a human touch to interpret different comments (eg, “The food was sick!” and “The food was sickening!” represent two distinct types of feedback!)

    AI came to the rescue. Instead of a human analyzing the data, he utilized AI for this task. He provided all the feedback and asked the AI ​​to summarize, categorize, and recommend improvements and actions to take. This process took just a few minutes and was part of a twenty-dollar-a-month AI subscription!

    Significantly, he found that Claude outperformed ChatGPT. The version of ChatGPT he used was a bit too “lazy”, often summarizing instead of categorizing everything, whereas Claude was more diligent in categorizing. Of course, this is a moment in time—OpenAI, Claude, Gemini, and others are continuously improving. Achieving the right balance between conciseness and accuracy versus wordiness and creating imaginary content has been a challenge for these AI platform vendors.

    He also verified the AI ​​results manually. Surprisingly, Claude’s results were actually superior to those done by an individual human.
    Now, he is relying solely on AI to process the feedback, rather than hiring additional staff.

    Another job lost to AI.
    How many more jobs are in danger?

    I suspect the actual impact will be even greater.

    For any of my readers in a corporate or government position, consider how effective (or ineffective) your company is today—even without AI! Do you have any coworkers that leave you wondering, “What do they actually do?”

    Having experience in both large companies and personally over the years, I have observed how inefficient organizations can be.

    Bureaucracy leads to more bureaucracy!

    Some companies have managed to combat encroaching bureaucracy. The changes made by Elon Musk at Twitter since he acquired it are remarkable. Set aside the political and media debate he has attracted and look at it from a business standpoint. He has now reduced the staff by around 80%, yet from an external standpoint, the company is thriving. New features are consistently being introduced (eg, subscriptions), and the service is still operational despite many critics predicting a complete collapse.

    I delved deeper into the changes at Twitter last year on ThoughtfulBits. However, for this analysis, simply recognizing that inefficiencies exist in many organizations is sufficient.

    At some point, at least one company in any industry will find out how to utilize AI technologies to eliminate or minimize those inefficiencies, providing them with a significant competitive advantage over traditional companies that don’t innovate.

    So, is this the end? Will we see 30% or more unemployment in the upcoming years?

    My personal prediction is no.

    I make that prediction based on history. AI is not the first technological revolution the world has seen: farming, the industrial revolution, and the computer revolution, among others, have each dramatically transformed the job market.

    In 1850, about 60% of the US population was involved in agriculture. Now, that figure is 3%. Historically speaking, food is now abundant and inexpensive. Although challenges regarding global poverty and hunger still exist, as a society, we have made tremendous advancements in food production while requiring far fewer individuals.

    What happened to all of those farming jobs? They are now computer programmers and Instagram influencers. The idea that an Instagram influencer could be a legitimate profession was unimaginable in 1850 and controversial even thirty years ago! There are now millions of individuals working as influencers in an industry generating over $21 billion in revenue.

    The World Economic Forum has some fascinating data on this shift over time.

    I anticipate we’ll witness a similar shift as AI begins to take over entire job categories, particularly lower-level knowledge worker positions, as noted by McKinsey.

    The Experienced Worker

    The crucial question is: “What will these new jobs be?”
    To answer that, let’s take a first principles approach: What remains constant in the world, even with AI?
    Well, the first answer is people!! And everything people need to be happy fulfilled humans.

    Even with AI, people will still need a place to live. They will still want to eat, go on dates, have families, play sports, learn, be entertained, socialize with friends, and so on. These are fundamental human and societal needs. While the context may be different, all those things were true in ancient Roman and Greek times, just as they are now. The Olympics originated in ancient Greece, after all!

    With the rise of computers, we witnessed the emergence of the modern “knowledge worker” class—think of everyone working at an office for some company (as opposed to a factory or farm). These jobs, whether in digital marketing analysis or software programming and similar fields, emerged due to the computer revolution.

    I expect we’ll see analogous “AI-focused” jobs. In fact, today, there is a new job category known as prompt engineering. Prompt engineering is for technical individuals focused on customizing AI technologies for specific use cases.

    As a simple example, consider the questions you might ask ChatGPT—the better you frame the question, the better the results. This forms the core of prompt engineering. However, given how rapidly AI is evolving, it’s unclear how enduring the prompt engineering job might be.

    Likewise, there will be numerous “AI consultants” in the upcoming years to assist individuals and organizations in transitioning to AI technologies, similar to the multitude of local “PC repair” shops in the 90s. But as people became more familiar with computers and the machines themselves became more reliable, those PC repair shops faded away.

    Prompt engineers, AI consultants, and similar roles will proliferate for a period, but what jobs will be more steadfast and enduring in the post-AI era?
    Returning to first principles, what is the common thread among most of those universal and timeless activities?

    It’s about people interacting with other people.

    If we extrapolate, just as the Industrial Revolution and the emergence of industrialized farming essentially opened up the economy for entirely new job categories, the replacement of many knowledge workers with AI will similarly create new opportunities.

    I will categorize the new jobs after AI as “experience workers.” Some of these jobs we already know: tour guides, coaches, teachers, chefs, scuba divemasters, and more. For instance, consider dining at a fancy restaurant and watching the chef prepare your meal. This is an experience that cannot be replaced by AI or AI-controlled robots anytime soon.

    While the nature of each of these jobs may be different, such as cooking versus scuba diving, they all involve human-to-human interaction and connection. This human connection is the timeless essence of being human.

    In some cases, we might see an increase in the number of people in experience worker jobs. History offers insights into this. Industrialized agriculture has lowered food prices over time, leading to a rise in the restaurant business over the last century (consistently until Covid!).

    Which jobs might see similar increases due to AI? Let’s consider teaching. While it’s easy to think that AI may reduce the need for teachers, tasks such as teaching a kindergartener to write require in-person interaction. AI can, however, make teachers more effective and efficient, handling tasks like grading and tutoring. This could lead to more teaching, not less.

    For example, last winter, I tried Carv.ski, an AI and sensor package for snow skiing.

    Using Carv was a fascinating and fun experience! Despite my thirty years of skiing experience, the AI considered my skills to be, well, “amateur at best”! It definitely helped me improve this season!

    However, I still prefer an in-person ski instructor who can also access the data from the Carv system. That would be the best of both worlds – an instructor who can see how I perform in any snow condition, combined with the insights of the AI.

    In essence, AI could make it easier and more cost-effective to be a ski instructor while improving outcomes. This combination can be powerful. Even without AI, many businesses, from FedEx to Shopify, have thrived by simplifying and reducing the cost of previously challenging endeavors.

    This brief interview with the founder of Shopify is well worth reading! When Shopify started, the market for e-commerce software was tiny because it was so difficult to use! They made it easier, and now have over a million e-commerce stores on their platform.

    AI tools will simplify and reduce the cost of numerous industries and scenarios.

    Known Unknowns and Unknown Unknowns

    Taking a cue from a famous quote by Donald Rumsfeld, the former Secretary of Defense, the really interesting question is: what are the jobs we don’t know about yet???!!

    By definition, I don’t know what those are! But I believe the most interesting new jobs in the post-AI world will be ones that we can’t imagine yet, just as few people imagined the job of an Instagram influencer!

    I also believe that these unknown jobs will involve people connecting with others in some way, as experience worker jobs do.

    The Transition

    I would be remiss not to comment on how quickly the changes in the job market may occur. As I mentioned at the beginning of this post, we are already seeing it, albeit in small ways (e.g., one less job posted in a startup). What if the job market changes happen really quickly?

    It’s one thing to say, “Oh, there will be many more sports instructors, so no problem!” But it’s quite different when it affects specific individuals. If you’ve been laid off, that’s not a theoretical exercise. It’s a real, live “what do I do now and how do I support my family?” situation. It might be challenging to transition from an office job to a scuba or ski instructor or any newly invented experience worker job overnight, especially if you live in Kansas.

    While I am hopeful that society will adapt to AI technologies, just as we have to every other technology revolution in history, the transition could be abrupt and messy.

    That is a topic for another post, though!

    In the meantime, if you’re working on AI, adopting AI, or are otherwise affected by AI, remember the importance of people! The relationships and social interactions between people are crucial. Technologies will evolve and enhance the human experience, but I don’t believe they will replace it. This is the opportunity for all of us!

    The recent events involving tech CEO Elon Musk have brought him a lot of attention, particularly his acquisition of Twitter and the subsequent changes he initiated. Many people have been asking me about the significant reduction in staff, with some sources suggesting it’s been over 70%. This raises the question: is this truly achievable, let alone advisable? Could this lead to inevitable failure for him?

    One Twitter user, Paul Vick (@panopticoncntrl), posted a tweet expressing that many tech CEOs seem to take delight in the fact that Elon let go of 75% of his workforce, yet Twitter is still functioning. However, the user believes that this situation might resemble the operations of Southwest Airlines, which could run smoothly until it encounters issues.

    This tweet captures the prevailing sentiment on both sides of the debate. However, it fails to address the more crucial question: it’s not about whether you can downsize staff and keep the company functioning; the crucial question is, what problem are you attempting to solve?

    As a former Chief Technology Officer at AOL, I have firsthand experience of implementing substantial staff cuts within a company. There’s no denying how difficult it was, especially for those directly affected. However, it was also a matter of survival for the company – we had to do it to stay afloat. And not only did the company survive, but many of AOL’s products remain active over a decade later.

    Three essential forces are at play here: Customers, Employees, and Owners (sometimes represented by the CEO and senior executives). Each has a valid and compelling perspective.

    From the employees’ standpoint, let’s consider that every job within a company is legitimate and valuable. Each employee was likely hired to fulfill a specific need and is currently engaged in meaningful work. Moreover, someone spent time, effort, and resources to secure their position. Another individual dedicated time to recruit and hire them. Someone is investing time in managing the employee. By and large, someone cares about that employee and their work. After all, how often do you talk to a friend working at a large company and hear them say, “Well, my job is pointless, and I have nothing to do”? Not very often.

    This success leads to expansion, the hiring of more people, filling in skill gaps, and so on. There are a series of gradual improvements that go beyond the initial innovation. If you’ve ever had the chance to drive a luxury car like a Porsche, you can sense the decades of improvements in the driving experience.

    Most of you probably use Microsoft Word. I doubt many of you would willingly go back to using Microsoft Word from 1995. The current version is a thoroughly refined and polished product. Yet if I asked you which single feature you couldn’t live without, you’d probably say “automatic spell check.” That feature was introduced in 1995!

    Over time, it becomes easy to reach a point of diminishing returns on product refinement. These refinements are valuable to at least some set of customers—there’s typically a rigorous feature prioritization process! Yet these incremental refinements often lack the same impact as the original innovation.

    A similar effect is observed with governments and government bureaucracy. As those of us in the United States prepare for our annual federal income tax exercise, we encounter the complexity of the tax code. Many of these regulations were introduced to address issues and special cases resulting from individuals attempting to reduce their taxes.

    If you’ve ever had to complete government contracting forms, you’d have experienced a similar level of complexity. Even the number of pages, font, and font size are often stipulated.

    Someone, somewhere in the past, undoubtedly attempted to submit an extensive proposal, leading to a rule about page length. Subsequently, another person used a small font, resulting in the rule on font size. There are over 2300 pages of rules for government contracting (and that’s just the baseline; the Department of Defense has an additional 1000 pages of supplementary regulations).

    This iterative refinement works for a while until a disruptive change looms on the horizon.

    This is where the customer dimension comes into play. It’s easy to perceive customers as a more uniform, homogeneous group, as seen in the countless business slogans: “Be customer focused. Customers are our number one priority. Customer-driven.”

    However, as we all know, the reality is far more intricate. Some customers want no change at all, while others seek gradual improvements. Another group may desire more radical enhancements (in terms of cost, functionality, etc.). Even within those groups, there’s enormous diversity in opinions, desires, and needs. We used to say at Microsoft for many years: “No one uses 100% of the features of Office, but every feature is used by at least someone.”

    The incremental planning and refinement process mentioned above is generally very effective at balancing the current customers’ needs. That’s why so many companies use it!

    Managing disruptive change is the challenge. This kind of disruptive change may involve sacrificing some performance for cost, such as the original launch of gmail.com providing 1 gigabyte of storage when other email products offered 2MB—a 500:1 performance increase. At times, it introduces entirely new categories of functionality, like smartphones or AI and blockchain technologies in today’s world.

    It may be challenging to accommodate diverse customer needs, especially when the disruptive technology would entail a significant change in the company.

    In “The Innovator’s Dilemma,” Clayton Christensen delves into the difficulties successful firms encounter in adapting to new technologies or market shifts. I strongly suggest reading this book if you haven’t already.

    Let’s take the case of Microsoft Word. I no longer utilize Microsoft Word—the transition was swift. Earlier, I would utilize Word on a daily basis; presently, I rely on chatGPT and Grammarly for all my writing tasks. The combination is remarkable: it has significantly enhanced both the speed and quality of my writing.

    End-to-end software projects

    The AI revolution encompasses more than just improving programming productivity—making the same activity more efficient. AI is also reshaping both the how and the what of numerous business processes. Building on the earlier example of outsourced programming, consider the full range of tasks involved in those projects.

    An engineer typing on a keyboard and writing code is just one aspect. Additionally, there is project management, documentation, testing, regulatory compliance certification, user training, and more.

    Some of these processes, such as regulatory compliance, can be extremely laborious and time-consuming. I have firsthand experience with a variety of compliance steps at different companies.

    The legal department initiates the quarterly requests for a compliance update, which are then passed on to a group of compliance managers. They, in turn, approach different parts of the company for updates. In the case of compliance involving software, the compliance managers request updates from software program managers. These program managers then ask the engineers for the latest updates.

    Needless to say, writing compliance reports is not the most enjoyable task for any engineer.

    However, what if a compliance report could be generated at the click of a button? Moreover, what if the report also demonstrated to the engineers how to rectify the code to address those issues?

    This would revolutionize compliance management. This capability would involve more than simply doing the same activity quicker. It would enable a complete rethink of the process and eliminate numerous hours of tedious work as it exists today.

    Unquestionably, compliance is not the sole aspect of software development that is undergoing transformation. New AI developer tools can automatically document entire codebases and keep that documentation current. Tests can be automatically generated, and achieving the often-discussed “shift-left” cybersecurity objective (remedying cybersecurity issues in code rather than attempting to rectify them post-implementation) becomes significantly simpler with AI tools. The latest AI developer tools not only automatically identify cybersecurity bugs but also provide fixes to resolve the issues.

    During the most recent earnings call, the CEO of Accenture, Julie Sweet, extensively discussed their work with legacy systems. Traditionally, this has been a source of competitive advantage for Accenture—they possess the teams and expertise to manage older and often outdated technologies. But what if AI tools could rewrite legacy software into more modern technologies?

    These are not hypothetical scenarios. These AI-powered tools are currently available (full disclosure—my company Polyverse develops some of them!), and the tools are rapidly improving—sometimes on a weekly basis.

    The leadership team at Accenture is certainly aware of these advancements in AI capabilities—Julie mentioned this in the aforementioned investor call, for instance. However, Accenture’s challenge lies in what action to take in response.

    At present, Accenture talks a lot about AI but has yet to make any fundamental changes to their business.

    Someone else will take the lead.

    My forecast is that numerous smaller, more agile outsourcing firms will fully and vigorously embrace these new AI technologies. They will leverage these newfound capabilities to compete against Accenture and other “legacy” outsourcers.

    However, these new proposals won’t just focus on pricing—they will encompass the complete package. An AI-enhanced outsourcing provider could offer better software delivered more rapidly, fully compliant, and better tested and documented, all at a significantly lower cost than legacy providers like Accenture.

    In the beginning, these rivals will start by testing the waters. The proposals will appear too good to be true! Even though the proposal is accepted, enterprise sales will still be a time-consuming and lengthy process—so far, I haven’t witnessed any AI technologies that expedite the enterprise sales process!

    At some stage, probably within a year, those initial attempts will evolve into a full-scale competitive rush.

    Accenture and other major public companies will heavily publicize, promote, and make a fuss about their own implementation and embrace of AI.

    Ultimately, they are constrained by their achievements. If staying competitive in the future means halving revenue, is it feasible for them? Can they acquire enough new customers and projects quickly enough to make up for the shortfall?

    It’s not just a financial query. Culturally, these companies have a deep-seated emphasis on billable hours. If you are an employee there, that’s how you earn, receive bonuses, get promoted to management, and so on. Shifting that focus from billable hours to a “how do you accomplish this more quickly for less cost” mindset could be daunting.

    Remember, this AI revolution is not simply about learning to use a new tool. AI is advancing at a rapid pace. In software development, last year, AI tools were essentially equivalent to advanced auto-complete. By the end of this past winter, they were capable of generating large sections of code. Now, the cutting-edge is complete code conversion, security testing, and compliance verification. Where will these tools be a year from now?

    It’s not only AI programming that is rapidly progressing. In November 2022, ChatGPT 3.5 could surpass the legal threshold in the bottom 10%. By March 2023, ChatGPT 4.0 exceeded the threshold in the top 10%. Similar swift progress is being made in image and video generation, and so on. Where will we stand a year from now?

    Providing value to customers as an AI-driven provider requires a completely different mindset than focusing on billable hours. It’s about continuously enhancing both efficiency and capability.

    With Polyverse, we are fortunate to be collaborating with several partners who are fully embracing this new AI-driven mentality. There is a tangible sense of enthusiasm and determination—they all perceive billions of dollars of potential from established providers ready for disruption.

    Artificial Intelligence (AI) has evolved from being merely a buzzword to a significant force that is transforming the workplace and business practices. It is an intelligent technology that not only enhances but sometimes exceeds human abilities in areas like decision-making, language processing, and pattern recognition, making it a fundamental part of numerous business strategies. Leaders in various sectors are harnessing AI, fostering the growth of careers in artificial intelligence, not just for operational improvements but as a foundational element for innovation and gaining a competitive edge.

    The speed at which AI is being adopted has been remarkable. A study by PwC reveals that the pandemic acted as an accelerant, with 52% of organizations expediting their AI strategies, and by 2021, 86% viewed artificial intelligence as an essential element of their business operations. This swift integration is embodied by Frito-Lay’s rapid digital transformation, which compressed five years of development into just 18 months, highlighting AI’s transformative impact within corporations.

    As artificial intelligence greatly affects the development of products and services, reinforces corporate principles, and provides solutions to challenging supply chain problems, it also plays a vital role in the startup ecosystem while supplying established companies with tools to handle disruptions. Nevertheless, a closer examination indicates that AI’s effects on employment are complex.

    While promoting efficiency, innovation, and creating new job opportunities, it also presents challenges such as the potential for job displacement and the necessity of skill adaptation. This nuanced view of AI’s effects is essential as we investigate its diverse and significant influence on the job market, shaping a new landscape where technology, roles in artificial intelligence, and human skills exist in harmony.

    What Are the Advantages of Implementing AI?

    The integration of artificial intelligence is having a beneficial impact on the job market in numerous ways, particularly by generating new, in-demand positions for skilled professionals across a range of AI occupations. This trend is observable throughout various industries and is altering workforce dynamics.

    Increased Demand for Skilled Workers

    The rise of AI is not only catalyzing the emergence of new AI-centric businesses but also heightening the demand for individuals with skills relevant to artificial intelligence, including data analytics. As companies invest increasingly in AI, there is a marked shift towards a more educated workforce that prioritizes STEM degrees and IT expertise to fill essential AI roles. This trend transcends technology megacorporations and is also apparent in traditional sectors that are adopting artificial intelligence within their operations.

    For example, organizations with higher initial percentages of well-educated and STEM-educated employees are channeling more resources into artificial intelligence, resulting in a workforce transition towards higher levels of educational attainment and specialization in STEM disciplines. This shift is linked to a flattening of organizational hierarchies, with growing proportions of junior staff holding advanced educational qualifications but lacking technical skills and expertise. The overall upskilling trend associated with artificial intelligence investments is also noticeable, as firms generally increase the percentages of workers holding bachelor’s, master’s, and doctoral degrees, while simultaneously decreasing the share of workers without college education.

    Furthermore, the demand for educated personnel in firms investing in AI is heavily focused on technical domains. Analysis of resume data indicates that investments in artificial intelligence correspond with a rise in the percentage of employees who have STEM degrees, while there is a decline in those with undergraduate degrees in the social sciences. Moreover, data from job postings by firms investing in artificial intelligence show a significant uptick in the need for employees skilled in robotics, engineering, big data analysis, and IT, moving away from traditional fields like finance and maintenance.

    These patterns demonstrate that the adoption of AI encompasses more than just the deployment of technology, programming languages, predictive modeling, and data engineering; it involves fostering a workforce that is more skilled, specialized, and technically knowledgeable. As artificial intelligence continues its evolution, the demand for professionals equipped with AI-relevant skills is anticipated to increase, ultimately shaping the future of work and opening up new career opportunities.

    Cutting-Edge Companies Driving AI Innovations

    The landscape of AI innovation features companies that employ specialized talent to further the expansive field of artificial intelligence. These organizations stand out for their current contributions to AI development, relying on their skilled workforce in various specialized positions. Here’s an overview of their current activities:

    Cerebras Systems: Cerebras is dedicated to creating cutting-edge computer chips, among the largest globally, intended for tasks in artificial intelligence. Their team, which includes hardware engineers and AI professionals, focuses on optimizing these chips for intricate computations in computer science, such as deep learning algorithms. Additionally, software developers at Cerebras are involved in developing the necessary frameworks and tools for applying these chips in AI.

    DeepMind: DeepMind brings together a group of artificial intelligence researchers and data scientists who work collaboratively on deep learning and neural network technologies, including natural language processing. Their initiatives encompass projects like AlphaGo and AI for protein folding, making contributions to areas such as healthcare and game theory. Software engineers at DeepMind build the infrastructure, while neural networks are employed to create models and algorithms that form the foundation of these AI systems.

    OpenAI: OpenAI employs a diverse group of AI researchers tackling fundamental challenges in artificial intelligence and engineers creating practical applications. Their projects span domains such as natural language processing, exemplified by the GPT models, and robotics. Additionally, policy experts at OpenAI prioritize the ethical considerations related to AI’s development and implementation.

    Lightmatter: At Lightmatter, a collaborative team of physicists, engineers, and AI specialists focuses on advancing photonic computing technology, which utilizes light for processing data. This technology aims to enhance computational speed, increase power, and lower energy consumption, merging the fields of physics and artificial intelligence.

    SambaNova Systems: SambaNova’s team comprises hardware engineers, artificial intelligence researchers, and software developers who work on their dataflow architecture. This architecture is tailored to produce software that efficiently manages artificial intelligence and machine learning workloads at scale, with both AI engineers and machine learning engineers concentrating on optimizing hardware and software components for various AI applications.

    Each of these companies plays a vital role in shaping the evolving landscape of artificial intelligence through their current projects and specialized workforce, showcasing the dynamic and diverse nature of AI development.

    AI’s Impact on Increasing Workplace Productivity: The adoption of artificial intelligence in the workplace has noticeably enhanced productivity, particularly among skilled workers. A study conducted by a multidisciplinary group of researchers involving over 700 consultants reveals the significant influence of generative AI on productivity. It found that when AI is employed within its designed capabilities, it can elevate a worker’s performance by as much as 40% compared to those who do not use it. This productivity boost arises from AI’s capacity to efficiently handle routine tasks, enabling skilled workers to concentrate on more complex and critical issues.

    Nonetheless, it is crucial to recognize that the successful application of artificial intelligence is contingent on its use within the right scope for specific tasks. The same study noted that when AI is applied outside its optimal range to complete tasks, worker performance declines by an average of 19 percentage points. This emphasizes the necessity for careful evaluation of AI’s capacities and restrictions in various tasks.

    The research also highlighted notable variations in performance enhancement among workers with different skill levels. Participants in the lower skill half who utilized AI tools like GPT-4 saw a performance improvement of 43%, while those in the upper skill half experienced a 17% rise, indicating a broader range of skill enhancements facilitated by AI.

    Moreover, the study emphasized the importance of cognitive effort and expert judgment when incorporating AI into workflows. For tasks that exceeded AI’s optimal capacity, despite witnessing a decrease in performance, the quality of participants’ reasoning and justification improved, suggesting that artificial intelligence can still contribute positively to cognitive processes even when it does not enhance task performance directly.

    In light of these insights, organizations and managers are encouraged to take a strategic approach when integrating artificial intelligence into employee workflows. This strategy should include meticulous interface design, onboarding procedures, role adjustments, and promoting a culture of accountability to ensure the effective use of artificial intelligence, enhancing rather than undermining worker performance.

    The implementation of artificial intelligence, machine learning systems, and robotics, as noted by HBR, has resulted in considerable shifts in the job market, presenting both advancements and challenges.

    The Displacement of Manual and Knowledge Workers

    Artificial intelligence and robotics are transforming the job market. Robots are becoming more advanced and are now taking over tasks ranging from assembly line jobs to more specialized roles like pharmacists and healthcare aides. Additionally, generative AI technologies pose risks to knowledge-based professions in areas such as coding, robotics engineering, accounting, and journalism.

    Economic Implications and Job Displacement

    The incorporation of artificial intelligence and automation within the workforce, especially in manufacturing, has significantly altered job dynamics. The recent pandemic has accelerated this transformation, with a PwC study indicating that more than half of the companies accelerated their AI initiatives as a response to the challenges posed by Covid-19. In 2021, according to a Harris Poll, artificial intelligence became a key element in the operations of 86% of companies. This swift integration is influencing new business models and fostering innovative products and services, as 74% of executives anticipate that artificial intelligence will optimize business operations, and over half expect it to lead to new business opportunities and products.

    AI plays a crucial role in alleviating labor shortages, especially in sectors heavily impacted by the pandemic, such as travel and hospitality. It is bridging gaps in numerous positions across nearly all industries, from truck driving to customer service. Cognizant’s Jobs of the Future Index shows a revival in the U.S. job market, particularly in technology-centered roles. Positions that involve artificial intelligence and automation have experienced a 28% rise, reflecting a shift towards workforces that are more technology-savvy.

    Furthermore, the positive impact of AI on productivity is becoming clearer. The use of AI-driven technologies is now recognized as a major contributor to enhanced efficiency in various industries. This transition is driven by improvements in machine learning methods, more affordable data storage solutions, and enhanced computational power, which have made these advancements more accessible and effective across different business sizes.

    Industries Impacted by Automation

    Automation has profoundly influenced multiple sectors, particularly manufacturing. According to TeamStage, approximately 1.7 million manufacturing jobs have already been lost to automation, and this trend is likely to persist. The number of industrial robots, which now totals 2.25 million globally, has tripled over the past two decades, leading to this job loss. By 2030, it is estimated that robots may displace as many as 20 million manufacturing jobs. Other fields, such as retail, automotive, and logistics, also report significant levels of job insecurity related to automation.

    Future Prospects and Adaptation Strategies

    Despite the worries, some experts believe that artificial intelligence and robotics may generate more employment opportunities than they eliminate. By 2025, it is projected that machines could replace around 85 million jobs while creating 97 million new ones that align better with the emerging labor distribution among humans, machines, raw data, and algorithms. Nevertheless, there is an urgent need for improved training programs and educational reforms to prepare the workforce for future job roles, preventing them from being marginalized by this technological evolution.

    The adoption of artificial intelligence and automation brings innovation and efficiency but also introduces considerable challenges, such as job loss. This impact is especially pronounced in sectors like manufacturing, retail, and logistics, where millions of jobs have already been affected by automation. Looking ahead, it is crucial to balance technological progress with strategies for workforce adaptation, including education and training. The future job landscape is likely to feature a combination of new positions generated by artificial intelligence and the adaptation of current jobs to integrate these technologies.

    As we recognize the substantial effects of artificial intelligence on the labor market, it is vital to consider effective strategies to mitigate any negative outcomes. The shift to an AI-driven economy presents challenges like job displacement and changing labor requirements, which call for a comprehensive approach. Referring to insights from the Center for American Progress, a combination of regulatory measures, workforce development initiatives, and improvements to the social safety net can help counterbalance the transformative impacts of AI.

    Steering the Creation of AI to Complement Workers

    To address the implications of artificial intelligence, policymakers should aim to guide its development to enhance human labor. This strategy entails establishing strong worker protections, restricting unjust layoffs, similar to practices in the European Union. It is also important to prohibit artificial intelligence practices that discriminate or violate privacy, along with encouraging worker involvement in technology development.

    Preparing Workers for the Adoption of AI

    It is crucial to prepare the workforce for the integration of AI. This preparation should include investing in programs for upskilling, reskilling, and retraining. Policies must promote accessible and high-quality opportunities for reskilling and retraining, along with labor market initiatives and training collaborations that support a diverse range of workers. It is also vital to ensure that jobs created through artificial intelligence provide fair working conditions and uphold the rights to collective bargaining.

    Meeting the Needs of Displaced Workers

    Another key element is addressing the needs of workers who have been displaced by artificial intelligence. Improving the social safety net, such as updating unemployment insurance to be more inclusive and beneficial, is essential. These initiatives should grant adequate time for retraining, similar to the strategies used during economic downturns, to assist those impacted by technological advancements.

    By implementing these strategies, policymakers will be better equipped to navigate the transition to an AI-enhanced economy, ensuring the workforce is supported and empowered throughout this technological change.

    What To Know About Investing in Artificial Intelligence

    As artificial intelligence becomes more prevalent, investing in AI companies has gained popularity. However, it is important to comprehend the landscape before making any investments. Despite the rapid growth of AI leading to potentially significant valuations and a surge in venture capital, investors must understand that these high valuations and the possibility of substantial returns are not assured and can be affected by various market and operational factors.

    Interest from investors in AI startups and companies is increasing, fueled by the potential for innovation. Venture capital firms have engaged in numerous deals within the artificial intelligence sector, reflecting a robust interest in this area. Nonetheless, investors should proceed with a balanced and informed mindset. It is crucial to recognize both the opportunities and risks that come with this evolving domain. Factors such as technological advancements, market conditions, regulatory shifts, and competition within the industry can impact the success of AI investments. Therefore, it is recommended to conduct thorough due diligence and adopt a cautious perspective when considering AI investments.

    Conclusion: AI’s Impact on Job Market

    Artificial intelligence is significantly transforming the job market in various ways, presenting both opportunities and challenges. Its swift adoption has resulted in greater efficiency and the emergence of new job roles, but it also brings risks such as job displacement and the necessity for skill transitions. The progression of AI demands a comprehensive strategy that includes regulatory frameworks, workforce development efforts, and investment approaches to leverage its advantages while addressing its complexities.

    For investors, it is vital to grasp the AI market, perform comprehensive due diligence, monitor emerging trends, and diversify investments to effectively navigate this dynamic environment. As AI continues to advance, it is imperative for all stakeholders to adjust and ready themselves for a future where AI and human skills work together, fostering both economic growth and sustainable employment.

  • The AI ​​boom is causing chip company Nvidia’s business to grow explosively

    he artificial intelligence helped the chip company Nvidia achieve excellent business figures. The chip company is the largest provider of specialized chips for computing-hungry AI applications.

    The AI ​​boom is causing chip company Nvidia’s business to grow explosively. In the last quarter, the Silicon Valley company doubled its sales year-on-year to $13.5 billion. Profits jumped from $656 million to just under $6.2 billion, which corresponds to 5.7 billion euros.

    Chips and software from Nvidia are particularly suitable for applications based on artificial intelligence. The chip company is the largest provider of specialized chips for computing-hungry AI applications such as ChatGPT from OpenAI. That’s why the demand for Nvidia products is currently correspondingly high. Management expects a further increase in sales to around $16 billion for the third quarter, which runs until the end of October.

    Analyst Harlan Sur from the US bank JP Morgan comments that the expansion of generative artificial intelligence (AI) and significant language and translator models further drives the demand for the chip manufacturer’s network platforms and software solutions. Current Nvidia figures also support the stock exchanges in Asia and Germany today.

    In the same league as the tech giants

    CEO Jensen Huang spoke of a change in the computer industry toward accelerated computing processes and generative AI. Analysts estimate that demand for Nvidia’s chips from this sector exceeds supply by at least 50 per cent. This imbalance is, therefore, likely to persist in the coming quarters. Competitor AMD hmarket share from Nvidia in the coming year. However, according to experts, Nvidia’s CUDA software is years ahead of AMD’s ROCm variant.

    This is also reflected in the company’s market value. At the end of May, Nvidia reached a market value of more than a trillion dollars. The price of the share has already tripled this year. This brought the company into the exclusive circle of companies with a market capitalization of more than a trillion dollars.

    Otherwise, only the technology group Apple, the software giant Microsoft, the online trading giant Amazon, Google’s parent company Alphabet, and the Saudi Arabian oil company Aramco have such a market value.

    Nvidia depends on functioning supply chains.

    The chip company has spoken out against tightening US restrictions on semiconductor deliveries to China. CFO Colette Kress said the current measures served their purpose. At Nvidia, revenue from China accounted for between 20 and 25 per cent of its data center business in the last quarter .

    Given the global demand, Nvidia does not expect any immediate significant losses even if further possible restrictions are imposed. However, long-term, this will destroy the US chip industry’s opportunities in the vast Chinese market.

    Nvidia does not produce its chips but develops them and outsources manufacturing to other companies. Therefore, Nvidia is heavily dependent on functioning supply chains.

    “A long-term change”

    Nvidia was founded 30 years ago by US-Taiwanese Jen-Hsun “Jensen” Huang. The company initially focused on graphics cards that offered computer gamers better-resolution images. High-performance microchips are now also used in the development of artificial intelligence. Huang emphasized that there is currently a “long-term change” in the world’s data centers from classic processors to the chip architectures offered by Nvidia.

    These chips are “more difficult to get than drugs,” said technology billionaire Elon Musk, who recently founded his own company to develop artificial intelligence, xAI.

    There are only four companies globally valued at over $2 trillion. These include Apple, Microsoft, the oil company Saudi Aramco, and, as of 2024, Nvidia. If you’re unfamiliar with Nvidia, it’s understandable, as the company does not produce a popular consumer product like Apple. Nvidia specializes in designing chips that are embedded deep within computers, focusing on a seemingly niche product that sees increasing reliance.

    In 2019, Nvidia’s market value stood at around $100 billion. Its rapid ascension to a size 20 times that was largely fueled by one factor—the AI ​​​​craze. Nvidia has emerged as a major beneficiary in the AI ​​​​industry. For comparison , OpenAI, the maker of ChatGPT, which propelled this obsession into the mainstream, is currently valued at approximately $80 billion. According to research from Grand View Research, the entire global AI market was valued at slightly below $200 billion in 2023, both of which are small in comparison to Nvidia’s worth. With all attention focused on the company’s remarkable evolution, the prevailing question is whether Nvidia can maintain its dominant position. Here’s how the company reached this pinnacle.

    Back in 1993, long before the widespread presence of AI-generated art and entertaining AI chatbots on our social media feeds, a startup was founded by three electrical engineers in Silicon Valley. This startup was focused on an exciting and rapidly growing segment in personal computing : video games.

    Nvidia was established to develop a specific type of chip known as a graphics card, also referred to as a GPU (graphics processing unit), responsible for producing intricate 3D visuals on a computer screen. The quality of visuals rendered on a computer depends on the performance of the graphics card, a critical component for activities such as gaming and video editing. In its pre-IPO prospectus in 1999, Nvidia highlighted that its future success would hinge on the continued growth of computer applications reliant on 3D graphics. For the most part of its existence, game graphics were Nvidia’s primary focus.

    Ben Bajarin, CEO and principal analyst at the tech industry research firm Creative Strategies, acknowledged that until recently, Nvidia had been “relatively isolated to a niche part of computing in the market.”

    Nvidia became a dominant player in the realm of video game cards—an industry that generated over $180 billion in revenue last year. However, the company recognized the importance of diversifying beyond gaming graphics card production. While not all of its endeavors were successful, Nvidia’s attempt over a decade ago to establish itself as a major presence in the mobile chip market proved futile. Presently, Android phones utilize a variety of non-Nvidia chips, while iPhones are equipped with Apple-designed ones.

    However, another initiative not only proved successful, but also became the reason behind Nvidia’s current prominence. In 2006, the company introduced a programming language called CUDA, which effectively harnessed the capabilities of its graphics cards for general computing tasks. This enabled its chips to efficiently handle tasks unrelated to rendering game graphics. It turned out that graphics cards were even better at multitasking than the CPU (central processing unit), often described as the central “brain” of a computer.

    This made Nvidia’s GPUs ideal for computation-intensive tasks such as machine learning and crypto mining. 2006 coincided with Amazon’s launch of its cloud computing business, and Nvidia’s foray into general computing coincided with the burgeoning presence of massive data centers across the globe.

    Nvidia has entered the league of tech giants known as the “Magnificent Seven”

    Nvidia’s current status as a powerhouse is particularly noteworthy because for a significant part of Silicon Valley’s history, another chip-making behemoth, Intel, held a dominant position. Intel produces both CPUs and GPUs, along with other products, and manufactures its own semiconductors. However, due to several missteps, including delays in investing in the development of AI chips, the rival chipmaker’s preeminence has waned to some extent. In 2019, when Nvidia’s market value was slightly over $100 billion, Intel’s value was twice that amount. Now, Nvidia has joined the league of prominent tech stocks identified as the “Magnificent Seven,” a select group of tech stocks with a combined value surpassing the entire stock market of numerous affluent G20 countries.

    Gil Luria, a senior analyst at the financial firm DA Davidson Companies, noted, “Their competitors were asleep at the wheel.” “Nvidia has long talked about the fact that GPUs are a superior technology for handling accelerated computing.”

    Nvidia currently serves four primary markets: gaming, professional visualization (such as 3D design), data centers, and the automotive industry, providing chips for self-driving technology. A few years ago, gaming accounted for the largest portion of revenue at about $5.5 billion, surpassing the data center segment which generated approximately $2.9 billion.

    However, with the onset of the pandemic, people spent more time at home, leading to increased demand for computer parts, including GPUs. In the fiscal year 2021, Nvidia’s gaming revenue surged by an impressive 41%, while data center revenue experienced an even more remarkable increase of 124%. By 2023, the revenue had grown by 400% compared to the previous year. tested, data centers have surpassed gaming in revenue, even during a gaming boom.

    When Nvidia went public in 1999, it had 250 employees. Now, it boasts over 27,000 employees. Jensen Huang, Nvidia’s CEO and co-founder, currently possesses a personal net worth of around $70 billion, signifying an increase of over 1,700% since 2019 .

    Chances are, you have encountered Nvidia’s products without even realizing it. Older gaming consoles like the PlayStation 3 and the original Xbox featured Nvidia chips, while the current Nintendo Switch utilizes an Nvidia mobile chip., additionally many mid- to high-range laptops come Equipped with Nvidia graphics cards.

    With the surge in AI technology, the company aims to play a more pivotal role in people’s daily tech usage. For instance, Tesla cars’ self-driving feature and major tech companies’ cloud computing services leverage Nvidia chips, serving as a backbone for various daily internet activities, such as streaming content on Netflix or using office and productivity apps. OpenAI utilized tens of thousands of Nvidia’s AI chips to train ChatGPT.

    Many people underestimate their daily reliance on AI, not realizing that some of the automated tasks they depend on have been enhanced by AI. Popular apps and social media platforms like TikTok, Instagram, X (formerly Twitter), and even Pinterest offer various AI functionalities Slack, a widely used messaging platform in workplaces, recently introduced AI capabilities to generate thread summaries and recaps of Slack channels.

    Nvidia’s chips continue to sell out quickly due to high demand. However, substantial demand allows the company to charge awkwardly high prices for its chips. The chips used for AI data centers can cost tens of thousands of dollars, with top-of-the- line products occasionally selling for over $40,000 on platforms like Amazon and eBay. Notably, last year, some clients faced up to an 11-month wait for Nvidia’s AI chips.

    Nvidia’s gaming business is thriving, and the price gap between its high-end gaming card and a similarly performing one from AMD continues to widen. In its last financial quarter, Nvidia reported a gross margin of 76%, meaning it cost them just 24 cents to make a dollar in sales. In contrast, AMD’s most recent gross margin was only 47%.

    Advocates of Nvidia contend that its leading position is warranted due to its early investment in AI technology. They argue that Nvidia’s chips are worth the price due to their superior software and the extensive AI infrastructure built around Nvidia’s products. Nevertheless, Erik Peinert, a research manager and editor at the American Economic Liberties Project, suggests that Nvidia has benefited from TSMC, the world’s largest semiconductor maker, struggling to meet demand.

    Furthermore, a recent report from The Wall Street Journal hinted at Nvidia wielding its influence to maintain dominance. The CEO of an AI chip startup named Groq alleged that customers feared Nvidia would retaliate with order delays if they sought other chip makers.

    While it’s indisputable that Nvidia made significant investments in the AI ​​industry earlier than others, its hold on the market is not unassailable. A host of competitors, ranging from smaller startups to well-funded opponents like Amazon, Meta, Microsoft, and Google —each of which currently employs Nvidia chips—are rapidly advancing. Luria notes, “The biggest challenge for Nvidia is that their customers want to compete with them.”

    It cannot be denied that Nvidia made significant investments in courting the AI ​​industry well before others caught on, but its dominance in the market is not unassailable. A host of rivals are emerging, ranging from small startups to well-funded adversaries such as Amazon, Meta, Microsoft, and Google, all of which currently utilize Nvidia chips. “Nvidia’s biggest challenge is that their customers are looking to compete with them,” says Luria.

    The issue is not just that their customers are seeking a share of Nvidia’s substantial profits—they simply cannot continue to bear the high costs. Luria notes that Microsoft “went from allocating less than 10 percent of their capital expenditure to Nvidia to nearly 40 percent. That is not sustainable.”

    Furthermore, the fact that over 70 percent of AI chips are purchased from Nvidia has concern among antitrust regulators worldwide— the EU has recently begun an investigation into the industry for potential antitrust violations. When Nvidia proposed a staggering $40 billion acquisition of Arm Limited in late 2020, a company that designs a chip architecture utilized in most modern smartphones and newer Apple computers, the FTC intervened to block the deal. “It was evident that the acquisition was intended to gain control over a software architecture that the majority of the industry relied on,” says Peinert. “The fact that they wield significant pricing power and face no effective competition is a genuine concern.”

    Will the enthusiasm for AI wane? Whether Nvidia will sustain its status as a $2 trillion company— or soar to even greater heights— hinges fundamentally on the enduring interest of both consumers and investors in AI. Silicon Valley has witnessed the emergence of numerous newly established AI companies, but what proportion of them will thrive, and for how long will investors continue to inject funds into them?

    The widespread awareness of AI arose because ChatGPT was an easily accessible— or at least, easily-demonstrated-on-social-media— novelty that captivated the general public. However, a significant portion of AI research is still focused on AI training as opposed to what is known as AI inferencing, which involves trained AI models to complete a task, such as the way ChatGPT responds to a user’s query or how facial recognition technology identifies individuals.

    While the AI ​​inference market is expanding (and perhaps more rapidly than expected), a substantial portion of the sector is anticipated to continue to devote extensive time and resources to training. For training, Nvidia’s top-tier chips are likely to remain highly coveted, at least for a while. However, once AI inferencing gains momentum, the demand for such high-performance chips may decrease, potentially leading to Nvidia’s primacy slipping.

    Several financial analysts and industry experts have expressed caution regarding Nvidia’s stratospheric valuation, suspecting that the excitement around AI may abate and that there may already be an excessive amount of capital being funneled into the production of AI chips. Traffic to ChatGPT has declined since last May , and some investors are scaling back their investments.

    “Every major technology undergoes an adoption cycle,” says Luria. “As it gains visibility, it generates tremendous hype. Eventually, the hype becomes excessive, and then it wanes, leading to a period of disillusionment.” Luria anticipates that this will soon happen with AI—although this does not necessarily mean it is a bubble.

    Nvidia’s revenue last year amounted to approximately $60 billion, reflecting a 126 percent increase from the previous year. However, its lofty valuation and stock price are not solely based on that revenue, but also on its anticipated sustained growth— for reference, Amazon, with a lower market value than Nvidia, generated nearly $575 billion in sales last year. For some experts, the path to Nvidia achieving profits substantial enough to justify the $2 trillion valuation appears daunting, particularly with the intensifying competition.

    There is also the possibility that Nvidia could be hindered by the rapid advancement of microchip technology. Progress in this field has been rapid over the past few decades, but there are indications that the rate at which more transistors can be integrated into a microchip— allowing them to become smaller and more powerful— is slowing. Bajarin suggests that maintaining Nvidia’s ability to offer significant hardware and software enhancements that persuade its customers to invest in its latest AI chips could pose a challenge.

    Despite potential challenges, it is likely that Nvidia will soon achieve the same level of recognition as Apple and Google. The reason for Nvidia’s trillion-dollar valuation is the widespread enthusiasm for AI, which in turn is largely driven by Nvidia.

    Great expectations for AI

    Investing a trillion dollars in something reflects a strong belief in its potential, and Silicon Valley truly believes in the transformative power of AI. In 2018, Google CEO Sundar Pichai famously stated that “AI is one of the most important things humanity is working on. It’s more profound than, I don’t know, electricity or fire.”

    It’s universally agreed that fire is crucial. Some might even consider it as humanity’s first groundbreaking invention. However, tech leaders like Pichai believe that the potential of achieving effectiveness, general artificial intelligence is just as revolutionary as the discovery of fire. Following the release of OpenAI’s ChatGPT in November 2022, which revealed the true marvel of large language models (LLMs), a race began to emerge as to which company could harness that potential.

    Investors hurried to support promising LLM startups such as OpenAI (currently valued at $80 billion or more) and Anthropic (estimated at $18.4 billion). In 2023, AI startups in the US raised $23 billion in capital, and there are over 200 such companies globally that are valued at $1 billion or more.

    The significant amount of investment reflects the tech industry’s confidence in the enormous potential growth of the AI ​​​​market. According to a forecast by PwC, AI could contribute nearly $16 trillion to the global economy by 2030, mainly through significantly improved labor productivity.

    Coupled with ample cash reserves held by tech giants, there is fierce competition among them to be at the forefront of AI development. Pichai highlighted on a recent earnings call that “the risk of underinvesting is dramatically greater than the risk of overinvesting,” emphasizing the belief that the AI ​​industry will be worth trillions, with the greatest value going to the early pioneers.

    Nevertheless, as generative AI is costly to develop and operate, expenses continue to escalate.

    Addressing the costs

    OpenAI’s Sam Altman has described OpenAI as “the most capital-intensive startup in history” due to the increasing costs of training ever-larger models. Not only is the cost of developing the models high, but so too is the expense of running them. An analysis estimated that OpenAI began $700,000 in daily expenses to operate ChatGPT, primarily due to the extensive compute-intensive server time. As the usage of ChatGPT and other LLMs increases, these costs escalate further.

    While Silicon Valley may not have originated the saying “you have to spend money to make money,” it certainly adheres to it. However, the revenue generated from these companies, mainly through subscriptions to their premium models, only covers a fraction of their expenses According to The Information, OpenAI could incur losses as high as $5 billion this year, nearly 10 times the amount lost in 2022.

    This trajectory is concerning, as are the user numbers for ChatGPT. Tech analyst Benedict Evans recently highlighted that although many individuals and companies experiment with AI services like ChatGPT,fewer continue to utilize them. Notably, the usage of ChatGPT appears to decrease significantly during school holidays, indicating the user demographics.

    Impressive as the capabilities of LLMs may be, particularly when compared to what was deemed feasible a decade ago, the promises of artificial general intelligence that could replace entire workforces have yet to materialize. Currently, the industry seems to face a common Silicon Valley issue: a lack of product-market fit. Chatbots are not yet a fully developed product, and the potential market size for them remains uncertain. This is why experts, ranging from Wall Street banks such as Goldman Sachs to tech venture capital firms like Sequoia Capital, have expressed concerns about the AI ​​industry, and it appears that investors are beginning to take notice.

    Nevertheless, this is not to suggest that AI lacks revolutionary potential or that the industry will not ultimately fulfill those lofty aspirations. The dot com crash in the early 2000s was partly due to the overinvestment and overvaluation of startups at the time, yet what remained paved the way for today’s tech giants like Google and Meta. The same could one day be true for AI companies. However, unless the financial performance improves, it might not be these AI companies that will ultimately succeed.

    Is Nvidia stock too highly valued?

    When a fan requested Nvidia CEO Jensen Huang to autograph her chest earlier this month, that might have indicated that the excitement around the chipmaker might have reached unsustainable levels.

    In recent years, Nvidia’s computer chips — which possess certain technical features that make them well-suited for AI applications — propelled the company to new levels of profitability. Nvidia briefly held the title of the world’s most valuable company last week; however, it lost that position a few days later during a days-long sell-off of its shares. While there has been some recovery in its stock price since then, it is currently the world’s third most valuable company with a market capitalization of $3.1 trillion, after Microsoft and Apple.

    The sell-off occurred amid concerns that Nvidia might be overvalued. Financial research strategist Jim Reid of Deutsche Bank recently cautioned about “signs of over-exuberance” regarding Nvidia, and some of Nvidia’s executives have even sold off some of their stake in the company .

    Despite the concerns, there are still numerous reasons to be optimistic about Nvidia: The company has established itself as a leading chipmaker in the industry, benefiting from an early bet on AI that has paid off as AI applications like OpenAI’s ChatGPT have brought broader public attention to the technology.

    “It’s still early in the AI ​​competition,” said Daniel Newman, CEO of the Futurum Group, a tech research and analysis firm. “But virtually everyone who has been developing AI up to this point has likely done at least some of their most important work on Nvidia.”

    The stock market has responded accordingly, with Nvidia being a part of the so-called “Magnificent Seven” tech stocks that contributed to a significant portion of stock market growth last year. Its stock price had surged by nearly 155 percent since January as of the market closing on Wednesday.

    However, whether Nvidia can maintain such growth depends on advancements in AI and the extent to which businesses will adopt it.

    How Nvidia rose to become one of the world’s most crucial chipmakers

    Nvidia has long been recognized as the foremost producer of graphics cards for gaming. However, its graphics processing units (GPUs), the primary component of graphics cards, gained popularity during a surge in cryptocurrency mining, a process that involves solving complex mathematical problems to release new cryptocurrency coins into circulation.

    This is due to the highly optimized nature of Nvidia GPUs for “parallel processing” — essentially, dividing a computationally challenging problem and assigning its various parts to thousands of processor cores on the GPU at once, solving the problem more quickly and efficiently than traditional computing methods.

    estimated, generative AI also relies on parallel processing. Whenever you interact with ChatGPT, for instance, the AI ​​model needs to analyze large data sets — essentially, the world’s text-based online content at the time of ChatGPT’s last knowledge update — to provide you with an answer. Achieving this in real time and at the scale that companies like OpenAI aim for necessitates parallel processing carried out at data centers that house thousands of GPUs.

    Nvidia recognized the potential gains from the GPU requirements of generative AI early on. Huang has described 2018 as a “bet the company moment” in which Nvidia reimagined the GPU for AI, well before the emergence of ChatGPT. The company strategically aligned its research and development as well as acquisitions to benefit from the impending AI boom.

    “They were playing the game before anyone else,” Newman commented.

    In addition to offering GPUs optimized for this purpose, Nvidia created a programming model and parallel computing platform known as the Compute Unified Device Architecture (CUDA), which has become the industry standard. This software has made Nvidia GPUs’ capabilities more accessible to developers.

    Therefore, despite Nvidia’s competitors like AMD and Intel introducing similar offerings, even at lower price points, Nvidia has retained the majority of the GPU market share for businesses, partly because developers have grown accustomed to CUDA and are reluctant to switch.

    “What [Nvidia] realized very early on is that if you want to dominate in hardware, you need to excel in software,” Newman explained. “Many of the developers who are creating AI applications have established them and feel comfortable creating them using CUDA and running them on Nvidia hardware.”

    All of these factors have positioned Nvidia to capitalize on the ever-increasing demands of generative AI.

    Can Nvidia sustain its current prosperity?

    Nvidia’s competitors are unlikely to pose an immediate threat to its status as an industry leader.

    “In the long term, we anticipate tech giants to seek out alternative sources or in-house solutions to diversify away from Nvidia in AI, but these efforts will probably eat into, but not replace, Nvidia’s dominance in AI,” Brian Colello, a strategist for Morningstar, wrote in a recent report.

    However, Nvidia’s ability to maintain the level of growth it has experienced in the past year is linked to the future of generative AI and the extent to which it can be monetized.

    Access to ChatGPT is currently open to everyone at no cost, but a $20 monthly subscription will provide access to the most advanced version. However, the primary revenue stream does not come from individual subscribers at the moment. Instead, it is derived from businesses. It remains uncertain how companies will incorporate generative AI into their business models in the years to come.

    For Nvidia’s growth to be sustainable, it is crucial that major companies such as Salesforce or Oracle, known for selling software to enterprises, develop new software that heavily utilizes AI. This would lead to these large companies signing yearly contracts to gain access to extensive computing power, according to Newman.

    “Otherwise, the fundamental concept of establishing large data centers around the world filled with GPUs becomes somewhat risky.”

    The decision on whether to invest in Nvidia stock depends on how optimistic you are about the penetration of AI into the economy.”We anticipate that Nvidia’s future will be closely linked to the AI ​​market, for better or worse, over an extended period,” Collelo notes.

    Nvidia’s market capitalization exceeded $3 trillion in 2024, driven by the generative AI surge, a recovering tech sector, and a stock increase of 154% that year. Nevertheless, there are concerns about whether AI can maintain the current hype.

    Nvidia continues to expand, having crossed the $3 trillion threshold on June 18, 2024, before falling just below that figure by the end of August 2024. By November 2024, Nvidia became the largest publicly traded company in the U.S. in terms of market cap, surpassing Apple with a valuation exceeding $3.6 trillion. During mid-2023, Nvidia reached a market valuation of $1 trillion, overtaking both Amazon and Alphabet, the parent company of Google. Within a span of nine months, the company’s market value escalated from $1 trillion to $2 trillion by February 2024, and it only took an additional three months to reach $3 trillion by June 2024.

    Nvidia’s stock has experienced fluctuations. Despite reporting impressive growth figures, Nvidia’s stock dropped by as much as 5% following its second-quarter earnings report in 2024. On November 7, 2024, Nvidia’s stock hit a record high of $148, driven by high demand for its GPUs essential for AI applications. The company’s latest chip, Blackwell, has become so sought-after that it is already preordered and booked out for up to a year. Due to Nvidia’s consistent growth, it is set to replace Intel in the Dow Jones. S&P Global manages the Dow and selects its stocks based on how the industry is likely to influence the U.S. economy.

    Nvidia’s ascent was gradual. The tech sector encountered challenges in 2022, but began to recover in 2023, notwithstanding tech layoffs. Generative AI emerged as a primary catalyst for this resurgence, and the stock market is reflecting the signs of recovery. The growth of generative AI triggered a bull market in tech stocks, marking a period of expansion on the stock exchange.

    The elite group of tech stocks known as the Magnificent Seven includes Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla. The stock prices of the Magnificent Seven companies increased by an average of 111% in 2023, while Nvidia experienced a remarkable rise of 239% that year.

    On June 7, 2024, Nvidia executed a 10-for-1 stock split, reducing its stock price from $1,200 to about $120. The new shares commenced trading at adjusted rates after June 10, 2024. Nvidia chose to split its stock to enhance accessibility for employees and investors. This split does not alter the overall value of the company. Thus, a stockholder who possessed a single share prior to the split would receive an additional nine shares afterward. Ultimately, this reduced stock price facilitates easier access for investors. This stock split assisted Nvidia in transitioning into the Dow Jones, as the individual stock price is a crucial factor for the Dow, rather than the total market capitalization of the company.

    Despite the daily fluctuations of the stock market, investors are recognizing this growth and speculating on how much AI demand may influence the tech sector in 2024.

    The emergence of Nvidia

    Nvidia stands among the world’s leading manufacturers of GPUs. Graphics Processing Units (GPUs) are semiconductors or computer chips that conduct mathematical operations to create visuals and images. The GPU accelerates and manages graphical workloads, displaying visual content on devices like PCs or smartphones.

    Throughout 2023, Nvidia’s earnings reports consistently outperformed expectations as interest and momentum in AI grew. Nvidia’s advanced chips are capable of processing the vast amounts of data required to train generative AI applications such as ChatGPT and Gemini. As Nvidia had already established dominance in this market prior to the surge in AI interest, its growth continued to accelerate as demand increased.

    Nvidia reported $30 billion in revenue for its fiscal second quarter ending July 28, 2024. This figure represents a 15% increase from the previous quarter and a 152% rise from one year earlier. The company also achieved record quarterly data center revenue of $26.3 billion, which was up 16% from the prior quarter and surged 154% compared to the previous year.

    To provide context, while companies like Apple and Microsoft invest in AI, Nvidia reaps profits from AI by producing the necessary chips to operate the technology.

    As businesses require hardware to handle substantial energy demands along with the wave of AI, these advanced chips are equally crucial for the metaverse, gaming, and spatial computing. Additionally, Nvidia manufactures chips for automobiles as technology continues to evolve.

    Key factors contributing to Nvidia’s stock surge

    While the growth of generative AI is a major contributor to Nvidia’s rise, other factors have also significantly driven the stock’s increase.

    1. The growth of supercomputers

    Nvidia’s chips power supercomputers that handle the massive data requirements of this advanced technology. Organizations like Meta utilize supercomputing capabilities for their AI Research SuperCluster computer to train intricate AI models. Furthermore, Tesla is beginning to develop an AI-centric supercomputer for its vehicles.

    2. Demand for generative AI

    As the demand for generative AI shows no signs of slowing, Nvidia is likely to experience growth with the adoption of each new system. According to Bloomberg Intelligence, the AI industry is projected to expand at a compound annual growth rate of 42% over the next decade. The generative AI market could reach a value of $1.3 trillion by 2032 due to the rising demand for generative AI products.

    Nvidia’s A100 GPU chips are essential for training the model used in ChatGPT. Companies like OpenAI, which rely heavily on large datasets for training extensive language models, are rapidly evolving and require more accelerated computing resources. The need for GPUs is expected to increase as these systems train on and assimilate more data.

    3. The changing world of the metaverse and XR

    Nvidia plays a significant role in the metaverse and the realms of virtual and augmented reality through its Omniverse platform. Nvidia provides 3D modeling software aimed at efficiently streaming extended reality (XR) content. As the metaverse develops, so does the necessity for Nvidia chips to support its operation. Businesses are turning to XR solutions to forge virtual environments for training purposes.

    The gaming sector is also a substantial customer for Nvidia’s graphics division. Video games demand more powerful cards to handle high-resolution graphics, particularly as gaming shifts from traditional consoles to cloud platforms. Nvidia’s gaming GPUs, like the GeForce RTX 4070, enable video games to run at superior resolutions and faster speeds.

    4. Strategic placement

    Nvidia is deeply intertwined with the cryptocurrency sector. Miners utilize its graphics cards to mine tokens, which requires considerable power. The cryptocurrency boom caused a spike in demand for Nvidia’s cards.

    Future of Nvidia

    Although Nvidia’s processors are foundational to most data centers powering generative AI, there are potential hurdles ahead, including competition from tech giants developing their own AI chips, economic uncertainties, and increasing rivalry.

    The generative AI sector is anticipated to keep expanding, but new regulations are likely to emerge that could influence Nvidia’s AI chips. U.S. trade restrictions on advanced semiconductors from China are also affecting Nvidia’s expansion since sales to China represented a significant portion of its data center revenue.

    In light of Nvidia’s noticeable growth, competitors are introducing similar chips, such as AMD’s Instinct MI200 line of GPU accelerators. Intel has also rolled out a fifth generation of Intel Xeon processors for data centers. Companies might start to diversify their suppliers instead of relying solely on one vendor, which could hinder Nvidia’s growth.

    It’s challenging to foresee whether Nvidia will maintain its growth trajectory. Nvidia has established a strong presence in the AI sector, and if the generative AI market develops as forecasted, its revenue could continue to rise. However, it remains uncertain how much market share Nvidia’s competitors will capture. Even amid increasing competition, Nvidia retains a robust market share, especially after recently announcing its H200 computing platform. Major cloud providers like Amazon, Google, and Microsoft have developed their own AI processors but still rely on Nvidia chips.

    Another challenge Nvidia faces is the potential limitation on sales of its advanced AI chips to certain nations for national security purposes.

    The market is evolving rapidly. Businesses are keen on adopting generative AI, leading to the emergence of new vendors to fulfill industry demands. New areas such as security and compliance will also reshape the generative AI market in the corporate sector.

    Nvidia’s data center business considerably drives its success and has a strong demand for AI infrastructure. Data center revenue accounted for nearly 87% of Nvidia’s overall revenue. Other major tech companies—like Google, Microsoft, and Meta—continue to invest in AI and have reported increased AI spending in their earnings statements. This indicates that even if Nvidia’s stock does not rise as quickly as it has in the past, it doesn’t imply poor performance. The company still experiences growth, and the demand for its products remains robust.

    New powerful chips are on the horizon, but there are uncertainties about whether the tech company can maintain its growth.

    When Jensen Huang addressed the Nvidia annual general meeting last week, he did not refer to the decline in share price.

    The American chipmaker, supported by its vital role in the AI surge, had briefly achieved the status of the world’s most valuable company on June 18, but that title quickly faded. Nvidia lost approximately $550bn (£434bn) from the $3.4tn (£2.68tn) peak market value it reached that week as tech investors combined profit-taking with skepticism about the sustainability of its rapid growth, leading to a slowdown.

    Huang, however, spoke as if he were the CEO of a business that transitioned from a $2tn to a $3tn valuation in just 30 days this year – and is now eyeing $4tn.

    He characterized an upcoming set of powerful new chips, known as Blackwell, as potentially “the most successful product in our history” and perhaps in the entire history of computing. He also mentioned that the new wave of AI would focus on automating $50tn worth of heavy industry, describing what seemed like an endless cycle of robotic factories coordinating robots that “manufacture robotic products.”

    In conclusion, he stated: “We’ve reinvented Nvidia, the computer industry, and very likely the world.”

    These are the types of statements that contribute to a $4tn valuation and the AI hype cycle. Nvidia’s shares are gradually increasing, surpassing $3tn this week, as it remains the prime avenue for investing in the AI boom. Is that sufficient to drive it to $4tn despite the emergence of doubts among investors?

    Alvin Nguyen, a senior analyst at Forrester, indicated that “only a collapse of the genAI market” would hinder Nvidia from reaching $4tn at some point – but whether it would do so before its tech rivals is another question. Currently, Microsoft – another major AI player – and Apple hold the first and second positions, respectively, in terms of market size, with Nvidia in third.

    If OpenAI’s next significant AI model, GPT-5, and other upcoming models are impressive, the share price will remain strong and could reach $4tn by the end of 2025, according to Nguyen. However, if they disappoint, then the share price may be impacted, given its role as a leading figure in the technology sector. A technological advancement could lead to less computational power being necessary to train models, he added, or interest in generative AI tools from businesses and consumers may not be as strong as anticipated.

    “There is much that is uncertain and beyond Nvidia’s control that could influence their journey to $4tn,” Nguyen said. “This includes dissatisfaction with new models released, improvements in existing models that decrease computational needs, and weaker-than-expected demand from businesses and consumers for genAI products.”

    Private AI research organizations like OpenAI and Anthropic – the companies responsible for the ChatGPT and Claude chatbots – are not publicly traded, leaving substantial sums of money in investors’ accounts with no access to some of the major participants in the generative AI surge.

    Investing in multinational corporations like Microsoft or Google is already costly, and only a small part of the investment pertains to the emerging trend. There could be a significant AI boom; however, if, for instance, Google’s search advertising business suffers as a result, the company wouldn’t necessarily benefit overall.

    In contrast, Nvidia is providing essential resources during a gold rush. Despite years invested in capacity expansion, it continues to sell its high-end chips faster than they can be produced. A significant portion of investments in advanced AI research flows directly into Nvidia’s accounts, with companies like Meta dedicating billions to secure hundreds of thousands of Nvidia GPUs (graphics processing units).

    These chips, which the company specializes in, were originally sold to enhance gamers’ experiences with smooth, high-quality graphics in 3D games – and through a stroke of immense luck, turned out to be precisely what leading researchers required to create large AI systems like GPT-4 or Claude 3.5.

    GPUs can carry out complex calculations needed for the training and operation of AI tools, such as chatbots, quickly and in large quantities. Therefore, any company aiming to develop or operate a generative AI product, such as ChatGPT or Google’s Gemini, requires GPUs. The same holds for the deployment of openly available AI models, such as Meta’s Llama, which also necessitates substantial amounts of chips for its training process. In the case of systems termed large language models (LLMs), training involves processing vast amounts of data. This allows the LLM to learn to recognize language patterns and determine what the next word or sentence should be in response to a chatbot inquiry.

    Nvidia has not fully captured the AI chip market, however. Google has consistently depended on its proprietary chips, known as TPUs (which stands for “tensor”, an aspect of an AI model), while other companies aim to follow suit. Meta has created its Meta Training and Inference Accelerator, Amazon provides its Trainium2 chips to AWS (Amazon Web Services) customers, and Intel has launched the Gaudi 3.

    None of the major competitors are currently challenging Nvidia at the very high end. Nevertheless, competition is not limited to that bracket. A report from the tech news outlet, the Information, has brought attention to the emergence of “batch processing”, which allows businesses to access AI models at a lower cost if they can wait for their requests to be processed during off-peak times. This, in turn, enables providers like OpenAI to invest in more affordable, efficient chips for their data centers instead of solely concentrating on the fastest hardware.

    On the opposite side, smaller enterprises are beginning to produce increasingly specialized products that outperform Nvidia in direct comparisons. Groq (which should not be confused with Elon Musk’s similarly named Grok AI, a launch that has led to an ongoing trademark conflict) manufactures chips that cannot train AI at all – but can execute the trained models extremely quickly. Not to be outdone, the startup Etched, which recently secured $120 million in funding, is developing a chip that is designed specifically to run one type of AI model: a “transformer”, the “T” in GPT (generative pre-trained transformer).

    Nvidia has to do more than just maintain its position against emerging competition, both large and small; the company must excel to achieve its next benchmark. While traditional market fundamentals are less in vogue, if Nvidia were valued like a conventional, low-growth company, even reaching a $3 trillion market cap would necessitate selling $1 trillion worth of its premium GPUs annually, with a 30% profit margin, indefinitely, as noted by one expert.

    Even if the AI sector expands sufficiently to support that, Nvidia’s profit margins could become more difficult to uphold. The company possesses the chip designs necessary to maintain its lead, but the real constraints in its supply chain mirror those faced by much of the industry: at the cutting-edge semiconductor foundries primarily operated by Taiwan’s TSMC, America’s Intel, China’s SMIC, and very few others globally. Notably absent from that list is Nvidia itself, which relies on TSMC for its chips. Regardless of how advanced Nvidia’s chipsets are, if it has to reduce TSMC’s order book to meet demand, the profit will inevitably shift in that direction as well.

    Neil Wilson, the chief analyst at Finalto brokerage, pointed out that the bearish perspective on Nvidia – a term in market jargon indicating a prolonged decline in share price – is based on the view that the company’s demand will return to less intense levels after it fulfills its existing orders.

    “All their customers have been scrambling to place GPU orders, but that rush won’t last forever,” Wilson remarked. “Clients are likely to over-order and then begin to cancel. It’s a favorable moment now, but it isn’t sustainable.” He envisions Nvidia reaching a valuation of $4 trillion and beyond, but “perhaps not at the current rate”.

    Jim Reid, who heads global economics and thematic research at Deutsche Bank, recently circulated a note questioning if Nvidia could be considered “the fastest-growing large company of all time?” He highlighted that Nvidia’s market capitalization surged from $2 trillion to $3 trillion in just 30 days, in contrast to Warren Buffett’s 60 years to bring Berkshire Hathaway close to $1 trillion.

    In any case, against the backdrop of sluggish productivity – a gauge of economic efficiency – along with a shrinking workforce and increasing government debt, the economic potential of AI is beneficial, Reid noted.

    “If AI serves as the catalyst for a fourth Industrial Revolution, that would be very positive news,” he asserted. “If it doesn’t, markets will ultimately face significant challenges.”

    There’s more at stake than merely racing to reach a $4 trillion valuation.

    Wall Street is very optimistic about Nvidia’s future earnings

    Nvidia has emerged as one of the most sought-after stocks in the artificial intelligence (AI) sector. Its split-adjusted stock price has surged nearly 700% since 2023. However, the stock has experienced a 14% decline since reaching its peak of around $136 per share in June, shortly after completing a 10-for-1 stock split.

    One factor contributing to this downturn is the ambiguity surrounding the longevity of AI investment. Investors are seeking evidence that capital expenditures are enhancing revenue growth and productivity. However, the lack of substantial supporting evidence has raised fears about potential cuts to AI budgets.

    Another aspect influencing the stock’s decline is the sequential drop in Nvidia’s gross margin in the latest quarter, which could indicate competitive pressures. A number of companies are developing custom AI chips, leading investors to worry that Nvidia might lose its competitive edge in the market.

    Nevertheless, Wall Street has optimistic news for Nvidia shareholders regarding both issues. Here are the key points to note.

    According to JPMorgan, investments in AI infrastructure are gaining traction. Analysts Jonathan Linden and Joe Seydl from JPMorgan believe that capital expenditures linked to artificial intelligence (AI) infrastructure continue to gather momentum. They project that spending from five major cloud companies—Microsoft, Amazon, Alphabet, Meta Platforms, and Oracle—will grow at an annual rate of 24% over the next five years, an increase from the previous 15% yearly growth rate.

    Furthermore, Linden and Seydl predict that AI will demonstrate a noticeable impact on productivity by the end of the decade. While this may seem far off, they argue that the time gap between technological advances and productivity improvements is actually decreasing. “Consider this: it took 15 years for personal computers to enhance the economy’s productivity. AI could achieve this in just seven years.”

    The International Data Corp. anticipates that artificial intelligence will contribute $4.9 trillion to the global economy by 2030, rising from $1.2 trillion this year. In this scenario, AI would represent 3.5% of global GDP by the end of the decade. The implications of this forecast are significant: investments in AI are not only valuable but also essential for companies that wish to remain competitive.

    Skeptics will likely dismiss AI as an exaggerated technology in the coming years, similar to the opinions some held about the internet during the 1990s. AI stocks could face a substantial decline at some point, akin to what internet stocks experienced in the early 2000s. However, history may vindicate the skeptics, leading to a potential rise in Nvidia’s share price. In fact, Beth Kindig from the I/O Fund believes Nvidia could achieve a valuation of $10 trillion by 2030.

    Morgan Stanley asserts that Nvidia’s rivals consistently fall short. Nvidia produces the most renowned graphics processing units (GPUs) in the computing industry. Last year, the company was responsible for 98% of data center GPU shipments, and its processors set the benchmark for accelerating AI tasks. Nvidia holds more than 80% market share in AI chips, with Forrester Research recently stating, “Without Nvidia GPUs, modern AI wouldn’t be feasible.”

    The surge in demand for AI infrastructure has naturally attracted more competitors to the field. This includes chip manufacturers like Intel and Advanced Micro Devices, along with major tech firms such as Alphabet, Amazon, and Apple. Each of these companies has developed alternative GPUs or custom AI accelerators. Nonetheless, CEO Jensen Huang expresses confidence that Nvidia chips provide the “lowest total cost of ownership,” suggesting that cheaper alternatives may incur higher total costs once associated expenses are factored in.

    Despite this, Nvidia will likely lose some market share as custom AI accelerators gain popularity in the coming years. However, losing a fraction of market share does not equate to losing market leadership. Nvidia’s superior hardware, combined with its extensive ecosystem of support software for developers, creates a strong competitive advantage that rivals struggle to overcome.

    Analysts at Morgan Stanley recognized this sentiment in a recent report. “Since 2018, we have encountered numerous challenges to Nvidia’s dominance—from about a dozen start-ups to several initiatives from competitors like Intel and AMD, and various custom designs. Most of these attempts have fallen short. Competing with Nvidia, a company that spends $10 billion annually on R&D, is a formidable challenge.”

    Wall Street is very optimistic about Nvidia’s future earnings. Out of the 64 analysts tracking the company, 94% have a buy rating on the stock while the remaining 6% maintain a hold rating. No analysts are currently recommending selling the stock. Nvidia has a median price target of $150 per share, suggesting a 29% increase from its current price of $116, based on CNN Business data.

    Looking ahead, Wall Street analysts foresee Nvidia’s earnings growing at an annual rate of 36% over the next three years. This consensus forecast makes the current valuation of 54 times earnings appear quite reasonable. These projections yield a PEG ratio of 1.5, a significant discount compared to the three-year average of 3.1. This is promising news for potential investors.

    Nvidia stands out from its rivals due to its significant technological advantage. Its products are often unmatched and play a crucial role in AI infrastructure. This unique position allows Nvidia to price its offerings and services at a premium.

    Although competitors are working on their own AI chips and resources, Nvidia is fostering strong partnerships with major tech firms. The company continues to introduce innovative chip designs, ensuring it stays ahead of the curve. Even as large tech companies develop their own AI hardware, they still collaborate with Nvidia, which remains a leader in a rapidly expanding industry.

    Nvidia serves as an entry point into an industry that feels as groundbreaking as the internet. Tech leaders are unlikely to pass up such a lucrative opportunity, even if it comes with a steep entry cost.

    Increasing Demand

    Monitoring the forecasts from other AI companies can provide insights into Nvidia’s future trajectory. Super Micro Computer (SMCI), a partner of Nvidia, has also gained from the surge in AI demand, and its outlook for Fiscal 2025 is promising for Nvidia shareholders.

    In Fiscal 2024, Super Micro reported $14.94 billion in revenue and anticipates that Fiscal 2025 revenues will fall between $26.0 billion and $30.0 billion. After more than doubling its revenue year-over-year in Fiscal 2024, the company is projected to achieve similar results in Fiscal 2025. Additionally, it stated that a delay with Nvidia’s Blackwell will not significantly affect its sales.

    Growing demand for Super Micro’s AI offerings suggests that Nvidia will see strong growth in demand in the near future. Nvidia has also released positive earnings forecasts that indicate further growth prospects for long-term investors.

  • How do smart cars use AI?

    It appears that discussions, debates, and subtle signals related to generative AI are everywhere these days. The automotive industry, like many others, is exploring how this technology can be utilized in the future – whether it’s in the design and production of cars or in enhancing the driving and passenger experience.

    What is generative AI exactly?

    It is a set of algorithms that can be utilized to create new content, such as text, images, and audio. Tools like ChatGPT and Google’s Bard respond to user prompts in text form. DALL-E, a tool recently integrated into Microsoft’s Bing search engine, is one of the numerous generative AI programs capable of generating images.

    These tools are increasingly prevalent in the automotive sector, primarily to enhance a car’s infotainment (as opposed to functions directly related to driving). DS initiated a trial to incorporate ChatGPT into its Iris infotainment system, while Mercedes and Volkswagen are taking a step further by integrating the technology into all their cars operating on MB.OS and MIB4 operating systems, respectively. Renault’s new 5 EV will also include a voice assistant named Reno that utilizes AI algorithms.

    ‘In this world, hype comes and goes – but this is not the case with AI,’ says Mercedes’ chief technology officer, Markus Schäfer. ‘It got more intense with the introduction of ChatGPT and there is much more focus now. We’re taking all the learnings that we have over the last nine months with ChatGPT in the car and what we have announced is the next development of that.’

    What are the advantages?

    According to many car manufacturers, having generative AI integrated into your car allows for greater personalization and a natural mode of communication between humans and machines. For instance, DS states that its updated Iris voice assistant can act as a travel companion, suggesting good restaurants at your destination or entertaining your bored children with stories.

    AI will also be utilized in the new Arene operating system from Toyota/Lexus, set to be featured in production cars from 2026, promising a much more personalized infotainment experience.

    Behind the scenes, AI is being employed in production, with car manufacturers claiming benefits in terms of both cost and the environment. At its Rastatt plant, Mercedes is using AI to simulate a production line for its next-generation MMA platform-based EVs without disrupting the ongoing manufacturing of the current A-Class, B-Class, GLA, and EQAs. In the paint shop, it has reduced the energy usage of top layers by 20 percent.

    Renault Group boss Luca de Meo points out: ‘We have developed AI tools to efficiently fill our trucks and provide optimized routes, allowing us to use 8000 fewer on the road and avoiding around 21,000 tonnes of CO2.’

    However, there are risks. Apart from putting human jobs at risk, generative AI tools frequently face the risk of copyright infringements or simply being inaccurate.
    ‘It’s not something you implement in a car and then just leave it,’ says Schäfer, the Merc tech chief. ‘If you sit in a car and ChatGPT tells you something that’s absolute nonsense, you might be exposed to product liability cases.’

    So car manufacturers are proceeding with caution. But they are certainly moving into this transformative new era.

    AI is transforming the automotive industry by enhancing both driving experiences and safety protocols. From personalized voice assistants to advanced driver assistance systems, AI technologies are reshaping the future of smart cars;

    Analyzing driver behavior using AI algorithms contributes to increased road safety and improved driving habits. AI-powered safety features like autonomous emergency braking and lane departure warning systems mitigate accidents and enhance road safety.

    We are gradually becoming accustomed to artificial intelligence appearing in our daily lives and increasingly being found in cars – either under the hood or in the cabin. The role of artificial intelligence in the automotive industry is extremely important because it is already being discovered how to use it to improve safety protocols, personalize the driver’s experience, and is crucial for the development of self-driving technology. The article will briefly explore AI technology and its impact on the future of innovative solutions in the automotive industry.

    Before we discuss what is likely the most crucial subject related to AI and intelligent automobiles, which is safety, it’s important to note how the technology enhances drivers’ enjoyment. This pertains to a sophisticated voice assistant technology.

    By integrating AI-based voice assistants in vehicles and utilizing AI algorithms to monitor and adjust driver behavior, cars are becoming increasingly personalized and responsive to the driver’s requirements.

    Analyzing driver behavior

    Understanding human behavior while driving is being developed using machine learning algorithms. While monitoring has negative connotations, the analysis of driver behavior in connected cars can significantly improve road safety.

    This is a necessary process – the World Health Organization has presented data indicating that by 2030, the fifth leading cause of death worldwide will be road accidents. The primary cause of accidents is and is expected to be human behavior such as reckless driving (speeding, driving under the influence of alcohol or drugs), fatigue, anger, and carelessness.

    AI technologies make it possible to track and analyze the driver’s facial expressions. This enables the analysis of patterns in how a driver behaves in stressful situations, how they react to them, and how they drive when tired or drowsy. Research suggests that an aggressive and reckless driver is likely to change their driving style if they know they are being observed. However, these solutions (e.g. inertial measurement units – IMUs) are not standard and are typically implemented in more expensive cars.

    This thorough analysis of driver behavior not only provides insights into driving habits but also offers feedback and real-time alerts to promote better, safer, and more eco-friendly driving practices. It is also worth noting that behavior analysis can also be directed towards pedestrians, which could contribute to the development of improved alarm systems in cars like ADAS.

    Smart Voice Assistants

    Modern vehicles are quickly integrating smart voice assistants as an essential component. These AI-powered voice assistants enhance the driving experience by providing hands-free control of functions such as making phone calls, navigation, entertainment (e.g. setting music, audiobooks), and scheduling vehicle services.

    However, despite the high adoption rate, car voice assistants face challenges such as accurate speech recognition in the noisy environment of a moving vehicle and difficulties in understanding different accents and slang. As AI technology continues to advance, improvements in natural language processing are gradually addressing these challenges, paving the way for even more advanced and intuitive voice assistants in the future.

    Vehicle safety with AI

    Artificial Intelligence is driving a quiet revolution in vehicle safety. It is the driving force behind advanced driver assistance systems, autonomous emergency braking, and lane departure warning systems that are reshaping our perception of vehicle safety. The increasing role of AI in vehicle safety represents significant technological advancements and demonstrates the commitment of automobile manufacturers to consumer safety.

    AI in the automotive industry is not only about ensuring safety — it also aims to improve overall driving experiences. By leveraging real-time data analysis and decision-making capabilities, AI is steadily transforming the automotive sector, making our roads safer and our journeys more enjoyable.

    Advanced driver assistance systems

    We have previously discussed ADAS in the context of UX design in in-car systems, but how is this technology related to AI? First, let’s list the sensor technologies included in ADAS:

    • cameras;
    • GPS/GNSS;
    • radar;
    • sonar;
    • light detection and radar (LIDAR).

    ADAS functionalities encompass various passive and active systems. Passive systems alert the driver with sounds or lights, while active systems autonomously perform actions such as emergency braking. Thanks to AI, or more specifically the sub-technology of Machine Learning, it is possible to prevent occurrences such as pedestrian and object detection, thereby enhancing scene understanding and ensuring safe navigation. The ML algorithm enables computers, based on data and patterns, to learn and extract crucial insights about potential hazards that a driver may encounter.

    Autonomous Emergency Braking systems

    Another essential safety feature that utilizes artificial intelligence is autonomous emergency braking (AEB) systems. These systems use sensor data from radar, cameras, and lidar to identify potential head-on collisions. By gauging the distance to an object in front and calculating the relative speed of both vehicles, the system assesses such risks. If the driver fails to react promptly, AEBS will automatically engage emergency brakes to prevent or reduce an impending collision.

    Lane Departure Warning Systems

    Another technology that ensures safety and integrates with AI is the LDW systems – a system that alerts drivers if they have veered across the lines on highways and arterial roads. It employs artificial intelligence in combination with sensor networks and computer vision to effectively decrease road accidents and enhance road safety.

    These systems employ algorithms (e.g. CNN, BING or PCANet) to recognize and monitor road markings. LDWS delivers reliable and precise lane tracking and departure warnings, adapting to various conditions such as different weather and times of day.

    Impact of AI on car manufacturing processes

    Artificial Intelligence not only changes car functionality but also revolutionizes car production processes. From optimizing production processes and quality control to improving supply chains, artificial intelligence is transforming the automotive manufacturing sector.

    The integration of Artificial Intelligence (AI) in manufacturing processes has led to a significant transformation in the industry. By utilizing AI technologies such as machine learning and predictive analytics, manufacturers can optimize production processes, improve quality control, and streamline operations. AI-powered systems can analyze extensive amounts of data in real-time, enabling proactive maintenance, predictive modeling, and efficient resource allocation. This not only enhances overall operational efficiency but also reduces downtime, minimizes waste, and improves product quality. The implementation of AI in manufacturing is paving the way for smart factories that are agile, adaptive, and responsive to changing market demands.

    Influence of AI on supply chain

    AI’s influence extends beyond vehicle operation and manufacturing to supply chain management. By predicting automobile demand, managing intricate supply networks, and optimizing inventory levels, AI is revolutionizing supply chain management in the automotive industry.

    AI is transforming supply chain operations by enabling predictive analytics, demand forecasting, and real-time decision-making, optimizing inventory levels, streamlining logistics processes, and overall increasing supply chain efficiency. All of this is accomplished using algorithms that enable the analysis of vast amounts of data to identify patterns and trends. This, in turn, allows companies to achieve greater accuracy in demand planning, shorten lead times, and reduce risks and errors.

    Future of AI in automotive

    The above article depicted the current state of the automotive industry and its integration with AI technologies. While it is challenging to predict the future, one thing is certain – the future belongs to algorithms, data analysis, and machine learning. All of this is aimed at enhancing the driver’s experience, including autonomous driving technology and electric cars, and optimizing production in the automotive industry.

    Summary

    Artificial Intelligence (AI) is reshaping the automotive industry, enhancing both driving experiences and safety protocols. From personalized voice assistants to advanced driver assistance systems, AI technologies are transforming the future of smart cars. By analyzing driver behavior, enhancing safety features, and optimizing manufacturing processes, AI ensures a safer, more efficient, and personalized driving experience. As the industry evolves, AI-driven innovations promise to revolutionize car functionality, production processes, and supply chain management, paving the way for a future of autonomous driving and electric vehicles.

    How is AI transforming the automotive industry?

    AI is being used in the automotive industry to improve supply chain management, provide predictive analytics, and develop driver assist programs, autonomous driving, and driver monitoring technologies. These technologies, using machine learning algorithms, enable the extraction of valuable data that can be utilized to enhance road safety.

    What is the future of AI cars?

    The future of AI cars holds the potential for fully autonomous vehicles, predictive maintenance, and advanced safety features, offering a personalized driving experience tailored to individual preferences. It is anticipated that fully autonomous cars will become a common sight on the roads within the next decade, driven by advancements in machine learning and deep learning algorithms.

    How can AI improve vehicle safety?

    AI improves vehicle safety by utilizing Advanced Driver Assistance Systems, Autonomous Emergency Braking, and Lane Departure Warning Systems to decrease accidents and enhance road safety.

    How does AI contribute to self-driving cars?

    AI is integrated into self-driving cars through the use of machine learning and computer vision technologies, enabling the vehicles to comprehend their surroundings and make decisions, allowing them to function without human intervention.

    In recent years, a potent influence has emerged to further transform this area: artificial intelligence (AI). AI is steering revolutionary changes in the automotive sector, impacting vehicle design, production, safety, autonomy, and the overall driving experience.

    AI-Powered Design and Manufacturing

    AI has played a significant part in vehicle design and manufacturing by streamlining processes and boosting efficiency in various ways.
    AI algorithms optimize vehicle design by examining extensive datasets. They consider aerodynamics, weight distribution, and safety to create vehicles that are more streamlined, secure, and fuel-efficient.

    AI aids in predicting disruptions in the supply chain and improving inventory management, reducing production delays and costs, resulting in a more efficient manufacturing process.

    AI computer vision systems offer unparalleled precision in inspecting vehicles for flaws. They can identify even the smallest imperfections in real time, assuring that only flawless vehicles are delivered to customers.

    AI-Enhanced Safety and Driver Assistance

    One of the most notable advancements in the automotive sector is AI’s role in enhancing vehicle safety and driver assistance through developments such as:
    AI algorithms analyze sensor data, including radar and cameras, to identify potential collisions. In critical situations, these systems can activate the brakes or take evasive action to prevent accidents.

    AI-powered adaptive cruise control maintains a safe distance from the vehicle ahead and adjusts speed according to traffic conditions.

    AI-based lane-keeping systems help vehicles remain within their lane, reducing the likelihood of unintended lane departures.

    Autonomous driving is the ultimate objective of artificial intelligence in the automotive industry. While fully autonomous cars are still under development, many vehicles now include semi-autonomous features such as self-parking and highway autopilot.

    AI and the In-Car Experience

    AI is revolutionizing the in-car experience for both drivers and passengers.
    AI-powered voice assistants such as Siri and Google Assistant allow hands-free control of navigation, music, and calls in modern vehicles.

    AI algorithms personalize infotainment recommendations based on user preferences, enhancing the driving experience.

    Predictive Maintenance: AI can anticipate vehicle maintenance requirements, minimizing downtime and repair costs.

    How is Artificial Intelligence Transforming the Future of the Automotive Industry?

    The automotive industry is heavily investing in AI, leading to a significant shift in the future of automobiles. Automotive companies are utilizing machine learning algorithms to enhance the quality of data needed for autonomous driving systems, enabling self-driving vehicles to operate more accurately and safely. AI is also assisting the automotive industry in transitioning to eco-friendliness, with companies producing electric vehicles using AI technology.

    These recent advancements underscore the substantial impact of AI on the automotive industry. Furthermore, AI plays a crucial role in enhancing driver convenience and safety. AI-powered features such as automatic braking and blind-spot detection are becoming standard, making driving more convenient and reducing the risk of accidents. As artificial intelligence evolves, it promises a future in which vehicles are not only smarter, but also safer and more efficient.

    Embracing 5G Connectivity: 5G is a recent internet innovation with the potential to revolutionize the automotive sector. Its connectivity capabilities can establish a digital bridge, enabling devices and individuals to communicate while on the move. When combined with AI, it can offer an enhanced driving experience. The vehicle’s entertainment system can be transformed into an informative system that responds to drivers’ voice commands and provides technical information about the vehicle’s performance and fuel level.

    AI Integration in Automotive Operations: Artificial intelligence (AI) can automate various manufacturing and sales processes. It can provide salespeople with valuable data about potential clients’ journeys, enabling them to optimize their sales processes, increase conversion rates, and reduce costs.

    AI-enabled cars can identify and forecast traffic patterns, enhancing safety during road trips and commutes.

    Personalized Vehicle Experiences: Artificial intelligence in automobiles allows for a personalized driving experience. For instance, Porsche offers a “Recommendation Engine” powered by machine learning that suggests vehicle packages based on individual driver preferences.

    The automotive industry acknowledges the potential of AI to stimulate innovation. AI is currently utilized in designing and developing vehicle components and engines, leading to unforeseen solutions. This indicates that future AI-driven innovations could surpass the perceived limitations of the automotive industry.

    Advantages of AI in the Automotive Sector

    When appropriately integrated into the automotive industry, AI can offer numerous benefits. It can unveil new opportunities and possibilities. The exploration of new approaches can uncover previously undiscovered advantages.

    Enhanced Safety: AI systems such as lane departure warnings, autonomous emergency braking, and adaptive cruise control enhance road safety by warning drivers about potential dangers and implementing precautionary measures, thereby reducing accidents.

    AI and IoT facilitate predictive maintenance by monitoring vehicle data and notifying managers about potential issues before they escalate, improving vehicle performance and reducing maintenance costs. AI-powered infotainment systems provide personalized experiences for passengers and drivers, including intelligent voice assistants that understand regional languages, play music, offer guidance, and adjust vehicle settings, leading to safer and more enjoyable journeys.

    Autonomous Driving: AI-powered autonomous vehicles have the potential to revolutionize the automotive industry by reducing accidents, enhancing mobility, and improving traffic flow, particularly for individuals with mobility challenges.

    AI optimizes manufacturing processes, enhances supply chains, and identifies potential vehicle issues, resulting in cost savings across operations, including design and manufacturing.

    Overall, AI advancements have significantly contributed to the growth of the automotive industry, transforming how we interact with and drive vehicles.

    Challenges and Ethical Considerations

    While the benefits of AI in the automotive industry are evident, challenges and ethical considerations need to be addressed.

    Data Privacy: AI systems in vehicles gather substantial amounts of data, including location and driver behavior. Ensuring the privacy and security of this data is crucial for maintaining consumer trust.

    Robust regulatory frameworks are essential for the development and deployment of self-driving vehicles. Governments worldwide are formulating laws to address the safe use of AI in transportation.

    The rise of autonomous vehicles may lead to job displacement in driving-related industries such as trucking and delivery. Preparing the workforce for these changes presents a significant challenge.

    Ethical Dilemmas: Autonomous vehicles may encounter ethical dilemmas in situations where human lives are at stake. Decisions regarding who or what to prioritize in such situations need to be made.

    AI has already brought about significant changes in the automotive industry, and its impact will only continue to grow in the future. From enhancing safety and convenience to reducing emissions and improving energy efficiency, AI is set to transform how we engage with and perceive automobiles.

    To maximize the benefits of AI in the automotive industry while addressing the associated challenges, stakeholders such as automakers, governments, and consumers must collaborate. Establishing strong regulations, safeguarding data privacy, and facilitating workforce transition will be crucial as we navigate this exciting and transformative era of AI in the automotive industry.

    As technology progresses and artificial intelligence (AI) becomes increasingly integrated into vehicles, we can envision a future in which our cars are more than just means of transportation but also intelligent, eco-friendly companions that enhance our lives while contributing to a more sustainable and safer world. The future of AI in the automotive industry is promising, and it promises to be an exhilarating journey for everyone.

    As technology becomes increasingly prevalent in our world, the global market is experiencing the transformative rise of artificial intelligence (AI). This advanced technology is reshaping various industries, with the automotive sector leading the way in this revolution. Major automotive manufacturers are integrating AI into their operations to harness its potential for gaining a competitive advantage and providing customers with exceptional, personalized experiences.

    The influence of AI in the automotive industry extends beyond manufacturing and is also revolutionizing automotive retail. This article delves into the impact of AI on the automotive industry, highlighting its technological progress and advantages.

    Impact of AI on the Automotive Industry

    The impact of AI on the automotive industry is significant, signaling a new era of innovation and effectiveness. AI has transformed traditional automotive methods by optimizing manufacturing processes, reducing expenses, and improving supply chain management. By analyzing vehicle data and sales figures, AI enables precise modeling and regulation of production processes with unparalleled accuracy and real-time insights.

    AI’s contributions to the automotive sector also extend to enhancing safety, intelligence, efficiency, and sustainability, fundamentally transforming the industry landscape.

    AI in the Manufacturing Process

    Before the advent of AI, automobile manufacturing heavily relied on manual labor, resulting in time-consuming production and increased costs. Challenges such as collecting data on vehicle performance and detecting faults posed significant obstacles. However, AI has revolutionized this process by automating manufacturing through robotics and facilitating real-time data collection via AI software, streamlining production and enhancing quality control.

    Enhanced Experiences with AI

    The integration of AI technology into vehicles has significantly enhanced the driving experience. Real-time monitoring systems, previously unavailable, are now standard, thanks to AI advancements. Automotive companies continuously innovate by adding new AI-driven features to their vehicles, including damage detection and preventive maintenance alerts, setting new trends in the auto industry.

    Improved Dealership Services

    Traditionally, car dealerships operated in a straightforward, albeit outdated, manner, with negotiations and vehicle showcases occurring in person. AI has also revolutionized this area. Machine learning and AI-powered chatbots have introduced round-the-clock customer service, offering detailed information to potential buyers. Furthermore, AI can provide digital and virtual vehicle inspections, using virtual car studios to offer a more immersive and informative customer experience.

    Revolutionizing Dealership Marketing

    AI is also changing how dealerships market their vehicles, introducing a level of personalization and efficiency that was previously unattainable. By leveraging data analytics and machine learning, dealers can now predict customer preferences and tailor their marketing efforts accordingly. AI-powered tools analyze customer data, including past purchases and online behavior, to create highly targeted marketing campaigns. This approach not only enhances customer engagement but also significantly improves conversion rates.

    Moreover, AI enables dealerships to optimize their inventory management based on predictive trends, ensuring they stock vehicles that meet current market demand. As a result, AI in the automotive industry is not just changing the manufacturing and customer service landscape but is also reshaping dealership marketing strategies to be more data-driven and customer-focused.

    An Overview of the Future of AI in the Automotive Industry

    Initially, many industries, including automotive, were cautious about how AI could drive innovation. However, over time, AI has emerged as a cornerstone of technological advancement, catalyzing significant changes across the global market. Today, AI plays a pivotal role in fostering innovation in the automotive industry, indicating a shift towards more autonomous, efficient, and personalized automotive solutions.

    For those who are new to the concept, AI refers to the ability of machines or computers to autonomously perform tasks such as learning, designing, and decision-making without human intervention.

    The introduction of AI in the automotive industry has paved the way for groundbreaking changes and innovations. Technologies such as machine learning, computer vision, and robotics have empowered manufacturers to produce vehicles that are not only technologically superior but also safer and more efficient. AI has thus been instrumental in simplifying the manufacturing process and introducing innovative automotive solutions, marking a significant leap towards the future of mobility.

    How AI is Revolutionizing the Future of the Automotive Industry

    The automotive industry is a major investor in artificial intelligence (AI), signaling a significant shift toward the future of the sector. Through the use of machine learning algorithms, automotive companies are improving the quality of data needed for autonomous driving systems. This advancement ensures that self-driving vehicles operate with exceptional accuracy and safety, ushering in a new era of mobility.

    Improving Safety

    AI’s advanced learning capabilities play a key role in developing vehicles that can predict traffic patterns and potential dangers. This predictive ability helps drivers navigate more safely, reducing risks and enhancing road safety. The automotive industry’s focus on AI-driven safety features represents a crucial step toward reducing accidents and ensuring passenger safety.

    AI in the Production Process

    AI is facilitating the transition to environmentally friendly practices and the manufacturing of electric vehicles. This shift is not only important for the environment but also aligns with the current trend toward sustainability. AI’s impact on automotive manufacturing is reshaping the future of the industry, demonstrating its potential to create smarter, safer, and more efficient vehicles.

    Furthermore, AI enhances driver convenience and safety through features such as automatic braking and blind-spot detection, now becoming standard. These advancements are essential for reducing accidents and enhancing the driving experience, indicating a future where vehicles are increasingly autonomous and user-focused.

    AI in Automotive Processes

    AI is revolutionizing automotive operations, from production to sales. By providing sales teams with detailed customer journey data, AI enables more efficient sales processes and improved conversion rates. This integration of AI into operational strategies significantly reduces costs and enhances customer engagement, highlighting the technology’s crucial role in optimizing automotive business models.

    Personalized Driving Experience

    AI is redefining the driving experience, allowing for customization that reflects the driver’s preferences and lifestyle. Major automotive companies, such as Porsche, are leading the way in using “Recommendation Engines,” which suggest vehicle configurations tailored to individual tastes. This level of personalization demonstrates AI’s ability to make driving a more personalized and expressive experience.

    Exceeding Boundaries

    The automotive industry recognizes AI’s potential to drive significant innovation, from vehicle design to engine optimization. AI’s influence extends beyond current manufacturing practices, uncovering new possibilities and surpassing existing limitations. The future of the automotive sector is set to surpass today’s boundaries, driven by the relentless advancement of AI technology.

    The Future of Customer Data Platforms (CDPs) in the Automotive Industry

    As the automotive industry continues to evolve under the influence of AI, the role of Customer Data Platforms (CDPs) is becoming increasingly important. CDPs, which consolidate customer data from multiple sources into a single, comprehensive database, are poised to transform how automotive companies understand and engage with their customers.

    Enhanced Customer Understanding and Personalization

    CDPs offer unparalleled levels of personalization and customer engagement. By leveraging CDPs, automotive brands can gain a complete view of their customers, enabling them to deliver personalized marketing messages, tailor vehicle recommendations, and enhance the overall customer journey. This deep level of insight ensures that customers receive offers and communications that are relevant to their specific needs and preferences, boosting satisfaction and loyalty.

    Streamlining Operations and Improving Efficiency

    Beyond marketing, CDPs are set to streamline automotive operations, from supply chain management to after-sales support. By providing a unified view of customer interactions and preferences, CDPs help automotive companies optimize their inventory, predict market trends, and improve the efficiency of their sales processes. This integration of customer data across the enterprise allows for more agile decision-making and a more cohesive customer experience.

    Driving Innovation in Product Development

    The insights derived from CDPs are essential for driving product development and innovation within the automotive industry. Understanding customer preferences and behavior patterns enables automotive manufacturers to design and develop vehicles that meet emerging market demands, including features, technologies, and designs that align with consumer expectations. This customer-centric approach to product development ensures that automotive companies remain competitive and relevant in a rapidly changing market.

    8 Applications of AI in the Automotive Sector

    The automotive industry benefits from AI in several key ways, as illustrated by the following pivotal use cases:

    Systems for Assisting Drivers

    Artificial Intelligence plays a crucial role in Advanced Driver Assistance Systems (ADAS) in the automotive sector. These systems, enabled by AI, utilize sensors for tasks such as providing steering assistance, detecting pedestrians, monitoring blind spots, and alerting drivers promptly. This technology is essential for preventing traffic incidents and improving road safety.

    AI-Powered Marketing for Car Dealerships

    AI is transforming marketing strategies in automotive dealerships, enabling a more focused, efficient, and personalized approach to reaching potential buyers. By utilizing AI algorithms, dealerships can analyze customer data, online behavior, and purchase history to create highly tailored marketing campaigns.

    This technology enables dynamic customization of advertisements, email marketing, and even direct mail, ensuring that marketing messages are personalized according to each customer’s specific interests and needs.

    Segmentation and Targeting of Customers: AI tools segment customers based on various criteria, such as demographic data, purchasing behavior, and engagement history, allowing dealerships to target specific groups with customized promotions.

    Predictive Analysis for Lead Scoring: Through predictive analytics, dealerships can prioritize efforts on leads with the highest potential for sales by scoring them based on their likelihood to convert.

    Chatbots for Engaging Customers: AI-powered chatbots provide instant communication with potential customers, answering queries, scheduling test drives, and even facilitating initial sales discussions, thereby enhancing customer service and engagement.

    The integration of AI into dealership marketing not only streamlines the process of reaching out to potential customers but also significantly increases the effectiveness of marketing efforts, resulting in higher conversion rates and improved customer satisfaction.
    Self-Driving Vehicles

    AI is at the core of autonomous vehicles, empowering them to perceive their environment, make informed decisions, and navigate roads with minimal human input. Industry leaders such as Tesla and Waymo are leading the way in using AI to advance autonomous vehicle technology.

    Monitoring of Drivers

    In-cabin monitoring systems utilize AI to assess driver behavior, including detecting drowsiness and distractions. These systems play a crucial role in ensuring driver alertness and overall vehicle safety.

    Management of the Supply Chain

    By analyzing data, AI predicts demand for various vehicle models, optimizing production schedules and reducing inventory costs. AI also helps in maintaining optimal inventory levels and streamlining supply chains, ensuring efficient delivery of parts and components.
    AI in Manufacturing

    AI-driven robotic assembly lines enhance automotive manufacturing processes, including welding, painting, and assembly, thereby increasing efficiency and precision. AI applications are also used for quality control, inspecting vehicles for defects during production, ensuring superior product quality and reducing error rates.

    Personalized Assistance and Predictive Maintenance

    Vehicles now incorporate AI-powered voice-activated controls in the form of virtual assistants, allowing for hands-free operation of navigation, music, and more. AI is also utilized for predictive maintenance as its predictive capabilities can forecast potential component failures, allowing for timely maintenance and minimizing the risk of unexpected breakdowns.
    Enhancing Passenger Experience

    AI significantly improves in-car entertainment systems by providing personalized content recommendations and enhancing infotainment systems. AI-powered voice recognition technology also enables passengers to control various vehicle functions through simple voice commands, enhancing convenience and safety.

    The Future of AI in the Automotive Industry

    Investment in AI by the automotive industry is expected to drive an unparalleled growth trajectory. Projections suggest that the AI automotive market will experience a remarkable compound annual growth rate (CAGR) of 55% from 2023 to 2033. This surge underscores the industry’s shift towards integrating AI across various aspects of automotive technology and operations. Here’s a closer look at the anticipated developments:

    Future Prospects for Automotive Companies

    Integration of OEM-based AI Chips: In the future, automotive manufacturers will embed OEM-based AI chips designed to enhance vehicle functionalities, including lighting systems, cruise control, and autonomous driving capabilities.

    Software Integration and Market Value: The seamless integration of software within automotive systems is critical to the sector’s growth, with the AI market segment poised to reach a valuation of US$ 200 billion within the next decade.

    Autonomous Vehicle Segment Expansion: The autonomous vehicle segment’s value is projected to reach $30 billion by 2024, driven by advancements in self-driving technology. The market share for autonomous vehicles is anticipated to grow by 10.9%, with an expected 99,451 million units by 2032, demonstrating an increasing consumer demand for autonomous technology.

    Growth in ADAS: The market for Automotive Advanced Driver Assistance Systems (ADAS) is poised for a substantial annual growth rate of 9.6%. With a projected market valuation of $131 billion, this growth reflects the rising adoption of advanced safety features in vehicles.

    Automotive AI Market Expansion: A market research report forecasts that the automotive AI market will expand at a CAGR of 39.8% from 2019, reaching $15.9 billion by 2027, indicating strong growth and investment in AI technologies within the industry.

    Generative AI in Automotive: The use of generative AI in the automotive sector is expected to increase from $271 million in 2022 to over $2.1 billion by 2032, according to MarketResearch.biz. This growth signifies the expanding role of generative AI in driving innovation and efficiency in automotive design and manufacturing.

    These insights highlight the automotive industry’s forward momentum, with AI playing a central role in shaping its future. From enhancing vehicle functionality and safety to transforming manufacturing processes, AI is at the forefront of the industry’s evolution, promising a new era of innovation and growth.

    Benefits of AI in the Automotive Industry

    The integration of AI into the automotive sector presents a multitude of opportunities, revolutionizing the industry with new possibilities and efficiencies. Here’s how AI is improving various aspects of the automotive world:

    Improved Safety: AI technologies, such as lane departure warnings, autonomous emergency braking, and adaptive cruise control, significantly enhance road safety by alerting drivers to potential hazards and taking preemptive actions to reduce the likelihood of accidents.

    Predictive Maintenance: With the help of the Internet of Things (IoT), AI enables predictive maintenance by continuously analyzing vehicle data. This proactive approach alerts management about potential issues before they escalate, enhancing vehicle longevity and reducing maintenance expenses.

    Enhanced Driver Experience: AI-powered infotainment systems offer a personalized user experience, featuring intelligent voice assistants capable of recognizing regional dialects, streaming music, providing navigation, and customizing vehicle settings to ensure safer and more enjoyable journeys.

    Autonomous Driving: The emergence of AI-driven autonomous vehicles aims to bring about significant changes in the automotive landscape by reducing accidents, enhancing mobility for those with physical limitations, and improving overall traffic conditions.

    Cost Savings: By streamlining manufacturing processes, enhancing supply chain efficiency, and preemptively identifying vehicle faults, AI contributes to substantial cost savings across various operational facets, from design through to production.

    Targeted Marketing Strategies: AI enables automotive dealerships and manufacturers to implement highly targeted marketing strategies by analyzing customer data and behavior, tailoring marketing messages and offers to meet the specific needs and preferences of individual consumers, thereby increasing engagement and conversion rates.

    Optimized Customer Engagement with CDPs: Customer Data Platforms (CDPs) integrated with AI technologies empower automotive businesses to create a unified and comprehensive view of their customers, delivering personalized customer experiences, more effective engagement strategies, and improved customer loyalty through targeted communications and offers based on in-depth insights into customer preferences and behaviors.

    Through these advancements, AI is significantly shaping the future of the automotive industry, improving operational efficiencies, safety, and customer experiences, and opening up new avenues for innovation and growth.

    Recapping the Benefits and Impact of AI in the Automotive Industry

    The integration of Artificial Intelligence (AI) in the automotive industry marks a transformative era, heralding significant improvements in safety, efficiency, cost savings, and the overall driving experience. From enhancing manufacturing processes and predictive maintenance to revolutionizing driver assistance systems and autonomous driving, AI is at the forefront of automotive innovation.

    Additionally, AI-driven marketing strategies and Customer Data Platforms (CDPs) are redefining how automotive companies engage with customers, offering personalized experiences that boost satisfaction and loyalty. As the industry continues to embrace AI, we can anticipate further advancements that will not only redefine mobility but also pave the way for smarter, safer, and more sustainable transportation solutions.

    How can AI improve safety in the automotive industry?

    AI improves safety in the automotive sector through advanced driver assistance systems (ADAS) like lane departure warnings, autonomous emergency braking, and adaptive cruise control. These systems help in preventing accidents by alerting drivers to potential hazards and taking preventive actions.

    What is predictive maintenance with respect to AI in the automotive industry?

    Predictive maintenance utilizes AI and IoT technologies to continuously monitor vehicle data. This allows for the early detection of potential issues before they escalate into serious problems, thereby enhancing vehicle performance and reducing maintenance costs.

    Can AI in the automotive industry enhance the driving experience?

    Yes, AI-powered infotainment systems offer personalized experiences by providing smart voice assistants, streaming music, offering navigational assistance, and adjusting vehicle settings. This makes journeys more enjoyable and safer.

    What role does AI play in autonomous driving?

    AI is crucial in the development of autonomous vehicles as it enables them to perceive their surroundings, make decisions, and navigate without human intervention. This can significantly reduce accidents, increase mobility, and improve traffic flow.

    How does AI contribute to cost savings in the automotive industry?

    AI optimizes manufacturing processes, enhances supply chain efficiency, and identifies potential vehicle issues early on, leading to significant cost reductions across various operational aspects.

    What are the marketing benefits of AI in the automotive industry?

    AI enables targeted marketing strategies by analyzing customer data and behavior. This allows automotive companies to create personalized marketing messages and offers, thereby increasing customer engagement and conversion rates.

    How do Customer Data Platforms (CDPs) benefit the automotive industry?

    CDPs, integrated with AI, help automotive companies create a unified view of the customer. This enables personalized experiences, effective engagement strategies, and improved loyalty through targeted communications based on deep customer insights.

    Creating the AI-Powered Dealership of the Future

    Fullpath, the automotive industry’s only enhanced Customer Data Platform, is reshaping the landscape of car dealerships by helping dealers unify and activate their first and third-party data using powerful AI and marketing automations.

    Fullpath takes the typical CDP to the next level by adding the “Experience” factor, layering AI-powered technology on top of the dealership’s unified data layer. This added activation allows dealers to create exceptional customer experiences through automated, AI-driven, highly effective engagements and marketing campaigns designed to drive sales and loyalty.

    The world has undergone significant changes recently. The rise of new technologies has facilitated a more comfortable lifestyle. New possibilities have arisen for individuals to utilize their time more effectively. For businesses and organizations, automation has enabled tasks to be completed in a shorter timeframe. Artificial intelligence offers humanity innovative technologies. Automobiles are practical vehicles that enhance comfort. AI is employed to elevate the overall experience and generate novel ones.

    In the automotive sector, AI is crucial not just for convenience. AI algorithms gather and assess data regarding real-time conditions. Overall, the control systems for self-driving vehicles have attained a new standard. They evaluate the road and surroundings to manage transportation. Human involvement can be significantly diminished while ensuring safe driving. AI has a profound and beneficial influence on the automotive industry. Its contributions are advancing automotive technologies to new heights.

    Enhanced Safety Features through AI

    The automotive sector was developed primarily for human ease. Safety is a key factor for drivers, making the integration of AI vital. AI in the automotive context greatly affects the overall driving experience. It also plays a crucial role in efficiency and safety. The principal safety functions of AI include:

    • Driver assistance. Safety is the foremost condition impacting every driver’s life. AI continually refines ADAS components as a primary focus. Vehicle speed is managed based on various weather conditions. This strategy helps prevent collisions both in front and behind. Adaptive cruise control assists in keeping a safe distance from other vehicles. A significant application of AI is to ensure the driver stays within the designated lane. The vehicle operates solely within its lane without straying into others. AI-equipped vehicles have sensors that manage braking. The analysis of collected data allows for prompt notifications when necessary.
    • Collision prevention. AI is utilized to oversee collision occurrences. Data from cameras is processed in real time. In emergencies, AI engages safety mechanisms to prevent accidents. Steering assistance can help guide the vehicle into a safer lane. The AI may also automatically apply brakes to avert collisions.
    • Detection of blind spots. AI can identify information regarding blind spots. Drivers may be unable to see vehicles located behind or beside them. AI conveys this information to help prevent accidents. It is also crucial to employ sensors that provide alerts about approaching cars, often when a driver is reversing out of a parking space.
    • Monitoring the driver. AI in vehicles is essential for evaluating the driver’s state. Specialized sensors and cameras assess the human condition. They identify levels of stress, fatigue, and drowsiness. To prevent adverse situations while driving, these sensors can warn the person either visually or audibly.
    • Vehicle maintenance. AI technologies enable the monitoring of the vehicle’s health. Sensors gather information about the car’s condition and its components. They assess the status of parts and alert the driver of any malfunctions.

    Systems for Preventing Collisions

    Artificial intelligence significantly influences the progress of various companies. AI and automobiles are interlinked concerning safety, convenience, and preventive measures. Collision avoidance systems are progressively managed by artificial intelligence. The role of AI contributes to safer daily driving and accident prevention. Collision Avoidance Systems (CAS) are indispensable for all drivers, regardless of their skill level. Real-time control and monitoring of information is implemented. Data gathering and analysis have a significant impact on drivers’ awareness. A variety of sensors and cameras collect data concerning the vehicle and surrounding road conditions, tracking other vehicles. This comprehensive approach ensures that drivers can swiftly react and make correct decisions.

    • AI-enabled vehicles can analyze road situations using algorithms. Data collection and evaluation occur through machine learning processes. All sensors and cameras vigilantly monitor the surrounding environment in real-time. The system examines the approach of other vehicles, their paths, and potential collision hazards. This method assists in averting perilous situations on the road. AI also evaluates the presence of individuals and pedestrians nearby. It aids in clarifying the overall scenario to allow for prompt action. Machine learning empowers the system to anticipate and avert possible collisions and threats.
    • AI in the automotive industry provides advance warnings of potential collisions. An automated vehicle can activate safety mechanisms, including automatic braking, steering adjustments, and speed reduction. These features are vital for the safety of not just the driver, but also pedestrians and other road users.
    • AI in the automotive industry is evolving continuously. This technology enhances driving comfort and mitigates potential risks. Above all, the safety of both drivers and pedestrians is paramount. AI plays a significant role in ensuring this safety.

    Advanced Driver-Assistance Systems (ADAS)

    Car technology is progressing rapidly each year. The integration of AI and machine learning in vehicles has become essential. With the help of AI, driving has become more comfortable for many. Ensuring safety is a critical aspect of AI utilization. There are specific features designed to enhance convenience and avert road emergencies.

    Adaptive cruise control. This feature allows for a more pleasant driving experience. It gathers comprehensive data from the surroundings. Sensors and cameras observe traffic conditions and the speeds of other vehicles. Consequently, adaptive cruise control adjusts the car’s speed automatically. If a vehicle ahead slows down, the system reduces speed as needed. Conversely, if other vehicles accelerate, the car will increase its speed.

    Lane keeping system. Ongoing data collection and immediate analysis contribute to safer driving experiences. Sensors and cameras evaluate the lane boundaries that the vehicle should not cross. The car remains within its lane at all times. If the driver inadvertently drifts out of their lane, a warning is triggered. Automatic steering can be engaged to bring the vehicle back into its lane.

    Automatic parking. Sensors and environmental data assessment facilitate automatic parking capabilities. Cameras and sensors gather information on adjacent vehicles and parking conditions. The intelligent vehicle assesses this data and executes parking maneuvers autonomously. The parking process is monitored in real-time, allowing the vehicle to determine a clear path.

    Autonomous Driving Technologies

    Automating most driving tasks minimizes the need for human involvement and resources. The application of cutting-edge automotive technology enables less reliance on drivers. Developing specialized vehicles that utilize and tailor AI represents a significant advance. This foundation empowers vehicles to operate without human intervention. Key functions are essential for successful autonomous driving. These features are vital for transforming the vehicle’s functionality.

    • Perception. The implementation of specialized sensors and cameras is crucial. They facilitate real-time comprehension and communication of status information. Recognizing the distance and speed of surrounding vehicles allows the system to regulate speed automatically. Cameras capture data on pedestrians, and AI algorithms analyze this data for appropriate responses. Additionally, road signs, which are vital for safe operation, are monitored. AI in automotive technology helps oversee the road markings along which the vehicle travels.
    • Decision-making. After gathering and examining all relevant data, actions are determined. AI utilizes this data to modify the vehicle’s speed accordingly. In the event of traffic congestion, decisions are made to navigate more efficiently. Based on the analyzed data, automatic overtaking maneuvers can be executed safely. Overall, decision-making is a crucial element following the data collection process. Machine learning models swiftly analyze the most suitable actions for the driver in various scenarios.
    • Control. With AI’s assistance, all actions are closely monitored and regulated. After thorough analysis, data collection, and decision-making, maintaining control becomes essential. This ensures that individuals drive safely and adhere to the planned journey’s rules.
    • Integration. Machine learning plays a vital role in self-driving vehicles. Merging and integrating various solutions is key to understanding and responding to the driving process. This method enhances automation and effectiveness. Thanks to integration, quick decisions can be made in diverse situations. Above all, ensuring the safety of drivers and pedestrians is the priority. AI aids in maintaining security and compliance.

    Development of Self-Driving Cars

    Artificial intelligence in the automotive sector is essential. The advancement of self-driving vehicles represents a significant milestone for humanity as a whole. These cars are not only convenient but also user-friendly. AI equips vehicles with sophisticated safety features for various scenarios. Self-driving automobiles will help decrease the likelihood of hazardous situations on the road. Utilizing machine learning along with cameras and sensors, cars continuously monitor their surroundings, gathering extensive data in real-time. This includes traffic conditions, the number of vehicles, their speeds, and the presence of pedestrians. All this information is recorded to inform future decision-making. By analyzing this volume of data, it becomes possible to understand the road conditions.

    • The sensors and cameras are responsible for collecting information. The AI in the vehicles assists in grasping the overall context and making subsequent choices. Investigating the number of vehicles and their movements is crucial for determining the appropriate speed.
    • The analysis of data transitions into the decision-making phase. Every action taken on the road is examined to choose the best travel option. Steering is crucial for preventing accidents. The car adjusts its speed based on its location. Developers equip autonomous vehicles with specialized sensors. Every effort is made to guarantee a pleasant journey.
    • The car comes with all the necessary tools and technologies. They are utilized for ongoing monitoring. The role of artificial intelligence in car manufacturing is significant. Ensuring control is paramount. All data analysis and decision-making processes are regulated to provide an efficient, comfortable, and safe ride.

    Ethical and Regulatory Considerations

    Artificial intelligence in the automotive industry is essential. AI reduces the necessity for human involvement, which could affect jobs in specific sectors. Overall, the influence of AI on driving is a vital and intriguing topic. There are often varying viewpoints regarding the ethics of self-driving cars. Some individuals argue that these vehicles represent a genuine global transformation. Others believe that such cars may not always adhere to regulations. Numerous questions arise surrounding the use of these vehicles. Here are the key concerns:

    • Safety. By and large, self-driving vehicles adhere to all safety regulations. They come equipped with specialized sensors and cameras for continuous surveillance. Events occurring outside the vehicle are monitored. The distances to other cars and their speeds are tracked. The sensors keep an eye on pedestrians and follow road markings. AI in the automotive sector has a comprehensive suite of necessary capabilities. A significant safety concern is the ability of such vehicles to respond appropriately in any given situation, which could involve a collision or adverse weather conditions. Can self-driving cars actually make rapid and correct decisions? The outcomes should be favorable not only for the occupant but also for others outside the vehicle.
    • Liability. In traditional vehicles, the driver bears responsibility during emergencies. Cases are examined to discern who is at fault and should face consequences. For self-driving cars, the situation is less clear. In the case of an accident, determining who is responsible can be complex. The question arises whether liability falls on the manufacturer or the operator.
    • Privacy. Self-driving cars typically gather and analyze vast amounts of data. An essential factor is the maintenance of confidentiality. Personal data and location specifics are sensitive information. How securely does the self-driving car safeguard this private information, and is it adequately protected?
    • Social impact. To many individuals, cars provide work opportunities and income. The emergence of self-driving vehicles could significantly affect the job market, especially for professional drivers. Considering these issues is vital for ensuring the future of such workers.

    AI in Vehicle Connectivity and Communication

    For autonomous vehicles, the priority is safety along with adhering to numerous commands. The comfort of drivers in these vehicles is crucial. Automated cars offer several benefits that enhance the driving experience. A significant advantage of AI in the automotive sector is the creation of unique systems. Entertainment and telematics systems contribute to the driver’s comfort. Here are their key features:

    • Information and entertainment systems. Autonomous vehicles come equipped with specialized cameras and sensors. As a result, drivers experience a higher level of comfort. Data regarding the driver, including behavior, preferences, and habits, is gathered and analyzed. Based on this analysis, AI provides alternatives or similar options. If a driver enjoys listening to the news, AI will suggest related content. Additional exciting news or relevant articles can enhance the driver’s journey. Music is also an essential aspect of life for many people. If the driver prefers rock music, AI will present comparable choices. By understanding the driver’s language, AI can tailor the communication method to suit the driver’s preferences. This allows drivers to adjust various settings in their preferred language.
    • Telematics systems. The integration of AI and vehicles is vital for ensuring safety, comfort, and awareness. AI assists in diagnosing the vehicle’s condition, its components, and overall functionality. If any part is malfunctioning, the driver receives a notification. AI not only identifies current issues within the vehicle’s performance but also analyzes its general condition and notifies the driver about routine diagnostics. Additionally, it can forecast maintenance needs. This method is very convenient and makes the driving experience more comfortable. Such diagnostics quickly find any problems and provide proactive alerts.

    The Future of AI in Automotive Manufacturing

    Driver AI is the optimal way to enhance comfort and enjoyment during the ride. Thanks to AI advancements, automotive manufacturing is experiencing a surge in development and popularity. Customizing vehicles with specific components allows for automated journeys. Ride management ensures both comfort and security. Sensors and cameras gather all environmental information, enabling rapid decision-making to prevent various situations. Machine learning algorithms assess sensor functionality, which aids in identifying errors and opportunities for correction.

    Future AI-driven cars promise to introduce even more automated processes. Ongoing enhancements and quality management boost the effectiveness of self-driving automobiles. Various traffic situations and their potential occurrences are examined and assessed. The role of machine learning and greater AI integration is expanding. The way different scenarios are handled is recorded to guarantee safety for both the driver and other road users, including pedestrians.

  • Tesla’s Next Move 2025: Model 2 Or Model Y Juniper? Wait and See

    Tesla has always been clear – making cars is, for Musk’s marque, more a crusade than a commercial enterprise. Its Fremont plant has a display that perfectly sums up its mission statement about transitioning the world to sustainable transport: a 20th-century petrol pump next to a sleek 21st-century Tesla Supercharger, echoing the classic illustration depicting the evolution of primate to homo sapiens. You must go back here to understand the origins of the Tesla Model 2.

    Losing sight of the mission statement, nothing much Tesla does makes sense. Back in 2016, it could have relaxed with the Model S and Model X and stayed a niche, premium player. But no. The Model 3 arrived, orders soared, and Tesla faced growing pains as it adapted. The Model Y, its fourth car, has been even more popular. Despite recent price reductions, the most affordable Tesla remains a £40k proposition. To complete the job, Tesla needs a £25k car, named the Model 2. It aims to be nothing less than the battery-electric equivalent of the Ford Model T.

    CEO Elon Musk has described the new car as ‘our next-generation low-class vehicle’, and for ‘low-class’ read financially accessible. A teaser silhouette revealed at the shareholder meeting last May hinted at a smaller Model Y, designed for aerodynamics and ease of manufacture rather than a standout design like the Cybertruck, which is a relief.

    Similar to the Model 3 and Model Y, there are likely two cars in the works – Musk informed shareholders of this last May. Both will utilize a new architecture, codenamed NV9X, and additional versions are anticipated, although Musk has criticized established manufacturers for creating ‘variants for the sake of variants’, so those waiting for, for example, an estate will probably be disappointed.

    Tesla needs two body styles because it requires international sales. Larger models have a broad appeal globally, but the major markets differ when it comes to smaller cars. Hatchbacks will sell in Europe, but not as much in the US or China. Similarly, small sedans are niche in Europe but popular in China.

    Collectively, Tesla aims for annual sales of five million units, up from a total of 1.81 million in 2023 (1.74 million of which were Model Y/3). Musk aspires to surpass the Toyota Corolla or VW Golf globally.
    In February, Ford CEO Jim Farley referred to the upcoming smaller Tesla as ‘the ultimate competition’. Before fulfilling that ambitious promise, Tesla must perfect the assembly process. Once again, the company is focusing its innovation on the production of the car, rather than reinventing the elements that the consumer will engage with.

    ‘This is a revolutionary manufacturing system; far more advanced than any other automotive manufacturing system in the world by a significant margin,’ Musk boasted to analysts on the company’s earnings call in January.

    What Tesla excels at is also what the Chinese excel at, which is building EVs more affordably. This enables both disruptors to sell at lower prices and capture market share from established car makers.

    Tesla has been relentless in reducing the cost of Model 3 and Model Y, some of which was reflected in price cuts while the rest contributed to profit margins. However, a significant change was needed to bring the new Model 2 down to $25k, the price that Musk mentioned in 2020. ‘We are approaching the limits within our current platforms,’ chief financial officer Vaibhav Taneja stated on the January call.

    Tesla aims for a 50 per cent cost reduction in building the next-generation vehicle. The major innovation is to modernize Henry Ford’s 111-year-old production line and transition to what Tesla calls ‘unboxing’. Instead of the car slowly taking shape from initial metal stamping to body shop to paint shop and then final assembly – ‘Most of the time we’re doing nothing to it,’ head of vehicle engineering Lars Moravy said – Tesla intends to reorganize the factory to sub-assemble related parts.

    For instance, the rear ‘megacast’ platform will be equipped with wheels, seats, and the rear drive unit, while the front seats and center console will be installed onto the battery pack. All the pre-assembled parts will then be assembled together in one final burst of efficiency.

    Tesla aims to operate with fewer workers to increase cost-effectiveness, allowing for a 40% reduction in the manufacturing footprint. Initially, Tesla planned to build the car in a new plant in Mexico, but it has since announced that the first production will take place in Austin, Texas alongside the Cybertruck.

    According to Musk, ‘Giga Texas’ was selected because the engineers need to be present during the initial phase, and it’s more feasible there than elsewhere, especially considering the potential challenges from German unions in Berlin. A third production site is being considered, possibly in Berlin, given the model’s significance in Europe.

    Musk cautioned that the ramp-up will be gradual, citing the production challenges experienced with the Model 3. Although the Model 3 entered production in 2017, it wasn’t until 2019 that the car became available in the UK. It may take until 2027 for the next-generation car to arrive in the UK, by which time new competitors will likely emerge to challenge Tesla.

    The Cybertruck, Tesla’s newest model, will incorporate some of the latest technology, specifically the transition to a 48-volt electrical architecture from the current 12 volts. This change offers cost savings by reducing the use of expensive copper while enabling faster communication between electronic components, as noted by Musk.

    Tesla has been at the forefront of developing the ‘software-defined car,’ where fewer but more powerful computers process data more efficiently than numerous smaller ECUs. This ‘centralized compute’ system is also easier to update over the air, allowing for the quick addition of new digital features to the large central screen.

    Another potential carryover from the Cybertruck is drive-by-wire technology, which would eliminate the steering column and make adjustments to the yoke wheel ratio, but this may come with additional costs that Musk might not be willing to accept for the base model.

    The ‘Full Self-Driving’ suite, which is actually a hands-on, eyes-on Level 2 autonomous system at least in Europe, will likely be included, although with the removal of more sensors than additions, it remains uncertain if the Model 2 would achieve Level 2+ hands-off, eyes-on capability, let alone full autonomy.

    Tesla emphasizes data learning over sensor input to address challenging self-driving scenarios, but regulatory approval of this approach may be a point of contention.

    Range will be a critical factor. While Tesla cannot directly improve battery chemistry, this responsibility falls to suppliers such as China’s CATL and BYD for the more affordable LFP chemistry, as well as South Korea’s LG Chem and Panasonic from Japan.

    However, Tesla plans to utilize the larger 4680 cylindrical battery, which the company has been developing to replace the smaller 2170 battery. Tesla claims that this battery’s increased energy density will result in extended range, as well as improved manufacturing efficiency and reduced costs. According to Tesla, the larger battery involves 15 parts and 21 manufacturing processes, compared to 17 parts and 33 processes for the smaller 2170 cell.

    Tesla is also scrutinizing the design and construction of the electric drive unit (motor, inverter, gearbox) to reduce costs. The company aims to achieve a 75% reduction in silicon carbide, a material that enhances range and efficiency but adds to costs. Tesla also claims that the next-generation permanent-magnet motor does not require rare earths. Overall, Tesla estimates that it has reduced the cost of its drive unit to around $1000.

    It is unclear how much Tesla will reduce the battery size to achieve an attractive price point, such as under £30,000 in the UK. However, it is expected that at least one version will achieve a range of over 300 miles on the official range test, and an all-wheel-drive option is also likely, considering the potential demand from wealthier car buyers in Europe’s snow-prone regions.

    It is fair to say that Tesla has been working towards this car since its inception, starting from the production of the first Roadster by Lotus for Tesla in 2008. The company’s journey has been focused on relentlessly driving down the costs of electric cars to a point, perhaps three years from now, when this car will be affordable for buyers who currently opt for a Golf or Qashqai, prompting them to switch. This prospect should make competitors like Volkswagen and Toyota extremely nervous.

    The much-anticipated affordable entry-level electric car from Tesla, commonly referred to as the Tesla Model 2, is expected to be launched in the first half of 2025, as reaffirmed by Tesla CEO Elon Musk during the company’s latest financial results call.

    It’s possible that we may soon get a proper look at the highly anticipated EV, as Musk mentioned: “We delayed the unveiling of the Robotaxi product to 10 October. I wanted to make some important changes that I believe will enhance the Robotaxi and we’ll also showcase a couple of other things.”

    The timing seems appropriate, as presentation slides for investors indicate that Tesla’s more affordable model will start production in the first half of 2025. It will utilize elements of the company’s next-generation platform along with some from its existing architectures and will be manufactured on the same production lines as its current vehicle lineup.

    Tesla acknowledges that this approach won’t allow for as much cost reduction as previously expected but will enable the company to “Prudently grow our vehicle volumes in a more capex efficient manner during uncertain times”. This will help fully utilize the current expected maximum capacity of close to three million vehicles, enabling more than 50% growth over 2023 production before investing in new manufacturing lines.

    Positioned as a more affordable addition to Tesla’s four-strong model range, the Tesla Model 2 has been in development for some time as the company aims to lower the entry price to its range of EVs below the base Model 3 that currently starts from £39,990 in the UK.

    Only one official teaser image of the new ‘baby Tesla’ has been unveiled so far. It was revealed in 2023 at an annual shareholder meeting and depicts the curving roofline of a car with similar design cues to those of the existing Model Y SUV and Model 3 saloon.

    Our exclusive images interpret Tesla’s existing line-up and teaser images in a scaled-down format to illustrate how a baby Tesla Model 2 model might appear. Based on the single teaser image we’ve received and a video shared on social media showing some design sketches, the new car could borrow numerous design elements from the Model Y SUV. We also anticipate that it will reflect the newly facelifted Model 3 with thinner headlights compared to Tesla’s older models.

    The Model 2 should be positioned as a competitor to traditional family hatchbacks, with a length of around 4.4 meters. This could be achieved through a more aggressive rear roofline while still allowing for a conventional bonnet with luggage space beneath it.

    Despite speculation about the car being called the Tesla Model 2, this name is not guaranteed to be used, especially since Model 3 was chosen as a reverse of Model E, which Ford initially warned Tesla against using.

    Musk has stated, “We’re going to take everything we learned from [Models] S, X, 3 and Y, the Cybertruck, and the Semi into that platform. We’re trying to get to that 50 per cent number again.”

    This refers to the Model Y, which has significantly lower bare construction costs than the Model 3. Tesla used improved processes to eliminate a significant amount of complexity from the Y; for example, a fresh pair of larger stamped components in its bodyshell alone perform the job of 171 separate parts in the Model 3, saving more than 1,600 welds during manufacturing.

    Tesla is expected to utilize its “revolutionary” new manufacturing process called ‘unboxed’ to produce its entry-level EV, which is designed to be more efficient than current production lines and should also be faster.

    Elon Musk informed analysts in 2023 that the next-generation vehicle “Will be about half the cost of the Model 3 and Y platform”. However, as we mentioned, Tesla doesn’t anticipate achieving that level of cost reduction.

    Nevertheless, the baby Tesla could still significantly undercut the starting price of a Model 3, which is currently £39,990. We estimate the Model 2 could feasibly start at around the £30,000 mark.

    This would considerably expand Tesla’s potential market, providing the company with a competitor for European-made electric hatchbacks such as the Volkswagen ID.3 and even Chinese models from the likes of MG and BYD. Considering that the Model Y was not only the world’s best-selling EV but also the most popular car overall last year, with 1.23 million units sold, a more affordable alternative from Tesla itself certainly has the potential to achieve a similar level of popularity.

    In addition to the Model 2 being more affordable to purchase, former Tesla CFO Zach Kirkhorn asserted that the total cost of ownership for the company’s entry-level model per mile over five years will be significantly lower than a base Model 3 or Toyota Corolla.

    In 2023, Colin Campbel, who was the powertrain head at Tesla, announced that the new electric motor for the upcoming vehicle would not utilize any rare earth materials. Additionally, the powertrain would be compatible with any battery chemistry, providing greater flexibility for sourcing.

    The successful production of Tesla’s new smaller car is highly dependent on the choice of battery chemistry and the method of cell installation. The company has previously utilized lithium-iron phosphate cells (LFP), which are more cost-effective to manufacture than nickel manganese cobalt (NMC) cells.

    Currently, the manufacturer incorporates LFP cells into some of its vehicles, and this is likely to be a significant component of the technical makeup of the more affordable model. The company is reportedly preparing to introduce ‘cell-to-chassis’ technology in German-built Model Ys as part of a collaboration with China’s BYD. This innovative method, which is lighter and more compact than traditional module-based construction, conserves space that can be utilized for additional cells to compensate for LFP’s lower energy density, thereby achieving a comparable range.

    Elon Musk is confident that Tesla can build on its recent milestone of producing three million cars and expand its output to more than 100 million vehicles by the end of the decade. To accomplish this, he stated that the company would require “roughly a dozen factories,” with most facilities capable of producing up to two million cars annually.

    There is significant anticipation surrounding the anticipated Model Y Juniper update, and a prominent Tesla observer now suggests that the unveiling of a Model 2 may take place at a Tesla event on October 10. Referred to as the 2025 Model 2, this vehicle, which has been described as a “stripped down Model 3,” is the subject of much anticipation. The vehicle, hyped by CEO Elon Musk as a $25,000-$30,000 car, could potentially be a focal point at Tesla’s autonomous electric ride-hailing taxi event, dubbed “We, Robot,” at Warner Bros. Studios in Burbank, California, on October 10, according to Gene Munster, a managing partner at Deepwater Asset Management, an investor in Tesla.

    “I’m expecting the unveiling of three vehicles (although most are anticipating two), and I believe we will only receive information about the production timeline for the Model 2, which I anticipate will commence in late 2025,” according to a note from the investment firm dated October 4. “This timeline is a few months later than what Elon’s comments suggested on the June earnings call. One potential surprise could be that the more affordable Model 2 is essentially a stripped down Model 3, which would be viewed unfavorably by investors,” the note mentioned.

    Although Tesla CEO Elon Musk is known for being flexible with delivery dates, it appears that a more affordable vehicle is indeed in development. “We are on track to deliver a more affordable model in the first half of next year,” he stated during the company’s second quarter earnings conference call. While not officially named by Tesla, analysts commonly use the Model 2 moniker to refer to the car.

    (UPDATE: On October 10, Musk discussed a low-cost vehicle, but it was a fully-autonomous Cybercab with no steering wheel and no pedals. It is expected to be priced under $30,000 and is due in 2026.)

    The Model 2 may be linked to Tesla’s upcoming October event, even if Musk does not explicitly mention the future affordable Tesla. “The vehicle to be introduced may have dual purpose for consumer sales and robotaxi use,” said Stephanie Brinley, an analyst at S&P Global Mobility, referring to a future low-cost Tesla. This suggests that the car could be an integral part of Tesla’s strategy for fully autonomous ride-hailing vehicles.

    Musk has publicly expressed confidence in this strategy. “We’re convinced we can make a compelling $25,000 electric vehicle that’s also fully autonomous,” he stated in 2020.

    The 2025 Model Y Juniper: Following the 2024 Model 3 refresh, Tesla enthusiasts are eagerly anticipating the next version of the Model Y. The Tesla SUV is the best-selling electric vehicle in the U.S. and is one of the most widespread cars in urban areas such as Los Angeles, as well as one of the top-selling cars globally. It was unveiled in 2019.

    There was widespread speculation this summer after a Reddit post appeared to show a future Model Y wrapped in black. Subsequent “leaks” purported to reveal a new Model Y with a rear light bar. However, there will not be a Juniper model released this year. “No Model Y ‘refresh’ is coming out this year,” Musk stated in June. “I should note that Tesla continuously improves its cars, so even a car that is 6 months newer will be a little better,” he explained.

    One thing is for sure, though. The design is starting to show its age. “Tesla is now delivering vehicle volumes like a traditional automaker, and traditional automakers understand that a fresh lineup is what retains customers,” Joseph Yoon, an analyst for consumer insights at Edmunds, informed me in July.

    Model Y Juniper might resemble the Model 3 update: the Model 3 update likely provides hints about the Model Y that is set to be released in 2025.

    Performance: The Model 3 Performance boasts more horsepower (up to 510 hp) and accelerates from 0–60 mph in 2.9 seconds. Front: The front now features a smoother hood (bump removed) and low-profile headlights for improved aerodynamics.
    Ride: “Frequency response dampeners” have been added to ensure a smoother ride in the updated Model 3.
    Tires: Enhanced tires and wheels have been installed to extend the range and reduce noise.
    Seats: Ventilated seats have been incorporated.
    Cabin: The cabin is now quieter due to 360-degree acoustic glass, meaning all windows are now made of acoustic (double-glazed) glass, not just the windshield and front side glass.
    Screen: A rear entertainment screen for passengers has been introduced, allowing for gaming, streaming, and climate control.
    Stalkless: The stalks on the steering wheel have been removed, with physical controls now moved to thumb buttons on the steering wheel.
    Controls: More software (display-based) controls, including a gear selector integrated into the touch screen, have been added.
    Sound: An improved sound system has been included.
    Hardware 4 (HW4): Anticipated to feature the latest Hardware 4, bringing enhancements in computing power and sensor capabilities for Autopilot and FSD.
    Motor: Potentially enhanced motors, such as a more efficient rear motor, may have been installed.

    And other potential upgrades not influenced by the Model 3 update:

    Rear: Updated taillights and bumper; there are rumors of a rear light bar.
    Cameras: An additional camera could be integrated into the front bumper to enhance Autopilot and FSD capabilities — although this is not confirmed.
    Battery: There could be a larger battery pack (unverified rumors suggest a massive 95 kWh battery) that could significantly increase the range. Currently, the largest battery pack in the Model Y is 75 kWh. Additionally, it’s almost guaranteed that Tesla is consistently enhancing its battery technology, so new battery tech in the Model Y Juniper will likely be more efficient, providing more miles per kilowatt hour (kWh).

    Is there still a plan for a more affordable Tesla Model 2?

    In April, Tesla refuted rumors about canceling their entry-level EV project and Elon Musk dismissed them as fabrications. Shortly after, design chief Franz von Holzhausen hinted at something in the pipeline, despite much of the focus being on the robotaxi project.

    The speculated Tesla Model 2 has no official launch date and details on its design remain mysterious. Our artists have envisioned a potential five-door body style that blurs the lines between hatchbacks and crossovers. This smaller Tesla could offer practicality and leverage smart electric architecture.

    Potential competition for this offering includes electric hatchbacks like the VW ID.3 and MG4, as well as similarly sized crossovers like the Renault Megane E-Tech and the upcoming Nissan Leaf. It could serve as a new entry point in Tesla’s lineup, positioned below the Model 3 sedan and Model Y crossover in terms of pricing.

    Launching a new model demands significant investments in production and R&D, and Tesla must ensure that demand justifies these costs. Additionally, smaller segments often yield less profit compared to larger, more premium counterparts due to thin margins, and the development and testing phases require careful consideration.

    Recent sales slumps and a sharp drop in overall EV registrations across Europe serve as reminders that the EV market is unpredictable. To achieve its goal of selling 20 million vehicles annually by 2030, Tesla needs competitive products and a favorable climate for EV popularity worldwide. Predicting the latter remains uncertain.

    The smaller EV would be the most affordable member of Tesla’s lineup, priced at less than A$40,000. The public and media have even named the new Tesla model “the Model 2.”

    Despite public events making Elon uneasy, he knows how to generate interest. He has a history of teasing products that take years to materialize, such as the Cybertruck, the Tesla Semi, the Roadster, and now the Cybercab — allegedly due in 2026.

    The We, Robot event took place at Warner Brothers Movie Studios in California and featured Musk being picked up by a small two-door electric vehicle and taken away. The fully autonomous Cybercab (or Robotaxi) was introduced as the future of personal transportation, allowing owners to send it out to transport paying passengers to their destinations while they could stay at home and relax.

    Apart from its large, disc wheels (which are actually a visual illusion achieved with some additional gold paint) and scissor-opening doors, the gold-colored Cybertaxi did not appear significantly different from other Tesla vehicles. It featured the sleek sports car nose and bulging front wheel arches of the Model 3, while the rear design gave off strong Cybertruck vibes.

    However, what if this was actually the Model 2 and no one realized it? It’s the type of joke that Musk would likely find very amusing.

    Should we be preparing ourselves for a tweet that is completely out of context, asking for opinions on the looks of the Model 2? Or perhaps the display was a way to gauge public opinion on the new or proposed design of what will eventually become the Model 2. Only Musk and his inner circle would have that information.

    Up until now, the only image of what could potentially be the Model 2 that has been revealed by Tesla is a silhouette of a vehicle shown during the company’s annual shareholder meeting in 2023.

    Then, at the beginning of this year, a low-resolution photo of a two-door hatch-style vehicle started circulating, reportedly of the Model 2 being developed at Tesla’s Giga factory in Berlin. This is not an official Tesla image, but many believe it to be a genuinely leaked photo of a mule or prototype, possibly for the Model 2.

    The exclusive CarsGuide image you can see here is our artist’s interpretation of what the Model 2 could look like based on Musk’s announcements, the design of the Cybercab from the We, Robot event, and those previous teasers. Time will tell how accurate our rendering was compared to the production model.

    All we have to rely on is what Musk has disclosed, which is that an affordable, smaller vehicle will be added to the lineup in 2025 and will be positioned below the Model 3. He hasn’t even confirmed that Model 2 will be the name.

    According to Musk, this smaller EV is expected to be manufactured at Tesla’s Giga Factory in Texas, alongside the Model Y, and is set to launch in 2025. It was initially expected to utilize a completely new platform shared with the Robotaxi, but reports have indicated that it will combine new production techniques from the Cybercab with a cost-cut version of the Model Y’s underpinnings.

    “I think the revolution in manufacturing that will be represented by that car will blow people’s minds,” Musk said. “It’s a level of production technology that is far in advance of any automotive plant on Earth.”

    Musk needs a smaller, lower-priced model in his lineup if he intends to compete or survive against new players such as BYD and other Chinese carmakers who are moving faster than mainstream brands to bring affordable electric vehicles to consumers.

    Unlike other deadlines that have been delayed by years, Musk’s plans to introduce the Model 2 next year must come to fruition, otherwise Tesla might not be able to keep up with the new rivals who are already off to a very quick start.

    If Tesla gets it right, the Model 2 has the potential to surprise the market and challenge established brands like Ford, Toyota, Hyundai, Mazda, Volkswagen, and Nissan.

    However, it might already be too late. Following the We, Robot event, Musk’s net worth dropped by US$15 billion as Tesla’s shares plummeted by nine percent. There is some optimism, though, with Musk pledging that the Cybercab will be relatively affordable at under US$30,000 (A$45,000) per unit, suggesting that a more traditional small Model 2 could meet the promised price target.

    Although the Cybercab event seems to have been a major disappointment, most viewers were probably more disappointed that Musk did not use We, Robot to unveil the Model 2. But perhaps he did.

    Tesla Guide: Tesla Dual Motor VS Single Motor?

    Tesla’s popularity has been on the rise for good reasons. These electric vehicles don’t resemble traditional electric cars; they are high-performance vehicles that can easily go unnoticed by Tesla fans. Despite their high cost, many people have opted for the rear-wheel-drive model with just one motor. However, there is a debate about whether the dual-motor option is worth the extra expense.

    Tesla Dual Motors vs. Single Motor: The choice between a single and a dual motor depends on your specific needs and preferences. Dual motor Teslas offer all-wheel drive, more horsepower, and faster acceleration, while single motor Teslas are more affordable and have rear-wheel drive.

    If you are thinking about purchasing a Tesla and want to make an informed decision between a dual or single motor, it’s crucial to do your research. Understanding the differences between the two motor types is essential in making the right choice.

    What is Tesla? It’s important to comprehend the distinction between dual motors and single motors.

    Tesla is based in Palo Alto, California, and is known for producing electric vehicles that do not rely on gasoline. These cars are equipped with powerful batteries that can be charged at home or using Superchargers on the road.

    Regardless of whether they feature dual motors or just one motor, Teslas offer great value. They eliminate concerns about fluctuating gas prices and regular maintenance costs. While the upfront cost of a Tesla may be higher, it results in significant savings over time compared to gasoline-powered vehicles.

    Teslas are an excellent choice for environmentally conscious individuals as they produce no harmful pollutants due to their lack of reliance on gas or oil, contributing to cleaner air.

    The appealing appearance of Tesla’s vehicles is a major selling point for many customers. Prior to Tesla, electric cars were easily recognizable, but Tesla sought to change that by creating stylish electric vehicles.

    Tesla offers a variety of models, including the popular sedan models Model 3 and Model S, as well as the Model Y and Model X.

    The more affordable versions of the Model S and Model X are the Model 3 and Model Y.

    The Difference Between Single and Dual Motor Systems: It’s important to understand the distinctions between these two motor systems in order to make an informed decision. By learning about their key differences and operations, you can make a more informed decision.

    What is a Single Motor System? The single motor option is less expensive and involves a single rear motor. Not all Tesla models offer this option, and selecting it means the vehicle is only rear-wheel-drive.

    Pros of a single motor system:

    – Potentially improved reliability due to fewer vehicle parts.
    – Lower cost.

    Cons of a Single Motor System:

    – Limited to rear-wheel drive.
    – Not available in all Tesla models.
    – Shorter range.
    – Lower horsepower.
    – Slower 0-60 acceleration.

    While the single-motor system may be sufficient for most users, it is limited to rear-wheel drive. This system is easier to maintain and may be more cost-effective than a dual motor system.

    What is a Dual Motor System? A dual motor system includes two separate motors: one rear motor, which is the same as in a vehicle with a single engine, and an additional front motor. The front motor enables all-wheel drive and faster acceleration.

    Pros of a Dual Motor System:

    – Increased vehicle range.
    – More horsepower.
    – Faster acceleration (0-60).
    – Four-wheel drive.

    Cons of a Dual Motor System:

    – Higher cost.
    – Potential for increased likelihood of breakdown due to more parts (although this is not a significant concern).

    Dual motor systems are the best choice for individuals seeking all-wheel power and requiring greater acceleration, range, or power.

    Tesla Dual Motor: Worth the Price? After understanding the differences between single motors and dual motors, it’s important to determine whether a dual motor is the right choice for you.

    Dual-motor vehicles with AWD offer numerous benefits. All-wheel drive (AWD) is one of the advantages of choosing a dual motor. AWD allows the vehicle to quickly adapt to changing road conditions and distribute weight to maintain traction regardless of weather conditions.

    What is all-wheel power? A quick overview: Dual motor systems are utilized to provide all-wheel drive. For instance, when the vehicle is exerting greater effort during acceleration, it redistributes weight from the front to the back. This enables the front motor to reduce power to prevent wheel spin and maintain stability. During this process, the excess energy from the front-mounted motor is transferred to the rear-mounted motor to ensure vehicle stability.

    In challenging weather conditions, the dual-motor system operates in the opposite way. Instead of transferring power from the front motor to the back, it provides more torque and power to the front.

    What is AWD?

    For those living in areas with infrequent snow or rain, all-wheel drive may not be necessary.

    Consider all-wheel drive as the safest option in extreme weather conditions. AWD is a system that enables power distribution to all wheels at all times.

    Why do we need AWD?

    If you reside in a region with frequent wet and snowy conditions, an all-wheel system will make your vehicle safer compared to a rear-wheel drive vehicle.

    A dual-motor option is optimal for areas with harsh weather conditions.

    Conclusion

    Some consumers are comfortable and confident in purchasing a single motor system. The dual-motor electric car will provide more power, acceleration, and a greater range.

    The last thing you want is to buy a single motor and later regret not choosing the dual-motor. I have seen this happen to Tesla users in the past, which is why I am grateful I chose the Dual Motor option.

    A Tesla is an excellent choice, regardless. I hope this article has helped you decide which model, single motor or dual-motor, is best suited for you.

  • Intel’s missed opportunity laid the foundation for the success of the British company Advanced Risc Machines (Arm)

    Arm makes its debut on the New York Stock Exchange today. The chip designer’s technology is found in practically every smartphone—Apple founder Steve Jobs also contributed to this.

    It all started with a mistake. A very expensive mistake. In 2006, Apple boss Steve Jobs asked chip giant Intel to build the processors for his first iPhone. But the then Intel boss Paul Otellini turned down Jobs; smartphones didn’t have much of a future for him. Otellini saw them as just a niche product – a misjudgment that he would soon bitterly regret.

    Arm IPO: Biggest IPO of the year

    Intel’s missed opportunity laid the foundation for the success of the British company Advanced Risc Machines (Arm). After being rejected by Intel, Jobs turned to the small company he co-founded in Cambridge in 1990, thereby laying the foundation for the British success story.

    The current highlight is today’s IPO on the New York technology exchange Nasdaq. The papers cost $51 and are, therefore, at the upper end of the price range. This corresponds to a valuation of more than 54 billion dollars—it is the largest IPO of the year in the USA so far.

    Arm processors: cheap and energy-saving

    What many smartphone owners may not know: Your daily companion most likely uses an Arm processor. More than 90 per cent of all mobile phones – from Apple, Samsung or Xiaomi – have a processor with Arm architecture. The company, therefore, has a natural smartphone monopoly. But what is the secret behind the success of Arm processors?

    The main difference between Intel and Arm processors is the amount of instructions the processor can execute. Conventional Intel chips are, roughly speaking, designed to manage many complex instructions, which draws a lot of energy. On the other hand, the Arm architecture relies on small and few commands. Although this reduces computing power, it makes the processors particularly energy-efficient – meaning smartphone batteries last longer. Arm processors are also used in game consoles, printers, routers and TV set-top boxes, which are also significantly cheaper .

    Successful with chip licenses for smartphones

    Arm doesn’t even build the processors themselves. Arm is a pure design company: a chip design licensor without factories that only markets IP (intellectual property) for processors. Arm sells these to chip manufacturers such as Qualcomm, who then build Arm-based smartphone processors. The licensee pays Arm a fee for each chip produced.

    Artificial intelligence as a growth driver?

    However, Arm CEO Rene Haas moved away from this strategy before the IPO. In an interview with the Bloomberg news agency, Haas explained that in the future, they will not just provide IC designs, based on which the chips can then be manufactured directly in the factories. This involves integrated circuits (Integrated Circuit – IC) that respond to the specific requirements of the respective application. Arm differentiates between mobile devices, the automotive industry, cloud computing, the Internet of Things and artificial intelligence.

    Some market experts see great opportunities for the British group, particularly in ​​​​artificial intelligence. This is especially true since the traditional mobile communications business is weakening, and the world’s largest smartphone market, China, is “particularly vulnerable to economic and political risks.” Arm itself points this out in the stock market prospectus.

    Poor – a second Nvidia?

    Arm’s future prospects on the stock market largely depend on whether the group convinces the market that it is one of the big players in the AI ​​trend. “If the hype surrounding artificial intelligence and the demand for the securities of competitor Nvidia are an indication, only those who received the shares at the issue price can consider themselves lucky,” explains Jürgen Molnar, capital market strategist at RoboMarkets.

    ARM and Intel processors currently dominate the majority of our electronic devices. Despite serving the same purpose, the processors are constructed on distinct architectures, impacting program development and chip production.

    ARM processors are commonly found in handheld devices, such as smartphones, while Intel processors are used in desktop and laptop computers. Nevertheless, various computer manufacturers are now introducing ARM-powered computers. For example, Apple’s M1 and M2 processors power several Mac and MacBook models.

    So, what sets ARM and Intel processors apart?

    Arm vs. Intel: Technology Companies Dominating the Digital World

    Arm and Intel are prominent technology companies producing different microprocessors for different markets. While Intel manufactures high-performance processors for personal computers and laptops, Arm primarily designs ARM processors for low-power devices like smartphones, tablets, wearable tech, and smart home devices.

    It’s important to clarify that Arm is the company, while ARM is the processor. When referring to the company that designs microprocessors, it’s called Arm. However, when discussing the processor made by Arm, it’s referred to as ARM.

    In contrast to Intel, which both designs and manufactures its chips, Arm designs ARM processors and licenses these designs to various companies. As a result, numerous ARM-based smartphone processors are produced by different companies, such as Snapdragon, MediaTek, Exynos, and Tensor.

    In 2012, Intel attempted to produce smartphone processors through its Intel Atom SoCs, but the endeavor was largely unsuccessful. Subsequently, the company shifted its focus away from handheld processors and continued concentrating on the PC market.

    On the other hand, Arm achieved some success in designing processors for low-powered laptops and single-board computers but has yet to establish a significant presence in the desktop PC market. Nevertheless, it’s worth noting that Apple’s M1 chips, which Arm assisted in designing, have proven to be capable processors, demonstrating that ARM processors can compete against Intel and AMD’s x86 chips.

    Differences Between ARM and Intel Processors

    ARM processors are predominantly used in mobile devices, while Intel processors are primarily used in personal computers. Despite potential plans for both Intel and ARM to design processors for both markets, their differences make them more suitable for specific applications. Although both processors serve as the brains of electronic devices, they differ in several ways; some of the key disparities include the following:

    1. Instruction Set Architecture (ISA)

    The predominant use of different ISAs largely explains why ARM processors are primarily used for handheld devices and Intel processors for personal computers.

    There are two types of ISA: Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC). CISC is designed for microprocessors to efficiently handle complex tasks and calculations at the expense of power efficiency. Conversely, RISC is designed for processors to handle simpler instructions with high power efficiency.

    Intel processors utilize their x86 architecture based on the CISC ISA, while ARM processors use their ARM architecture, which employs the RISC ISA. Both x86 and ARM ISAs are closed-source ISAs, and designing processors using these architectures requires a license from Intel or Arm.

    2. Software Compatibility/Support

    Due to the differing design philosophies (RISC and CISC) of x86 and ARM ISAs, software developed for Intel chips cannot be processed natively by an ARM processor, and vice versa. The significant majority of programs developed on a specific architecture ensures that consumers will choose a company’s processor due to the multitude of available applications.

    If you’ve recently looked for a new desktop CPU, you’ve probably noticed that your only choices are either an Intel or an AMD processor. This is because the majority of computer programs are designed for x86 processors. As only Intel and AMD (which has a license to use x86) can run these programs natively, they have effectively created a duopoly in the PC processor market.

    Conversely, ARM processors dominate the smartphone processor market because smartphone operating systems and applications have been specifically programmed to run on ARM processors. While other RISC-based processors compete with ARM, the abundance of applications already available for ARM is a significant reason for their use in almost all modern smartphones today.

    To gain traction in the PC market, Arm must persuade developers to either port or develop applications from scratch to run natively on its ARM processors. Despite Microsoft Windows already supporting ARM processors, ARM must still find a way to run the everyday software that users rely on to convince the public to opt for ARM processors in their PCs.

    3. Performance

    In terms of CPU performance, Intel processors generally exhibit superior raw performance compared to ARM processors. However, ARM processors are designed for energy efficiency, making them suitable for devices requiring lower power consumption.

    On the other hand, ARM processors are much speedier than Intel processors as they are designed for simpler tasks that do not require the same clock speeds as their Intel equivalents.

    4. Efficiency

    While they may lack raw performance, ARM processors have an advantage over Intel in terms of efficiency.

    CISC-based processors like Intel’s x86 chips heavily rely on hardware to handle intricate tasks. High-performance processors need substantial power, sufficient cooling, and complex manufacturing, making them quite expensive.

    In contrast, ARM-based processors are optimized in both hardware and software, making them highly efficient processors that require very little power. Requiring minimal energy to operate also means relatively lower heat, allowing ARM CPUs to be embedded in a System on a Chip (SoC). An SoC combines CPU, GPU, RAM, DSP, and telecommunication modems in one die or chip. Combining all the essential solid-state components into one chip makes an ARM process highly efficient because data does not have to travel far for each component in the SoC to process.

    ARM vs. Intel: Which Processor Is Better?

    Since Intel and ARM produce different types of processors for different markets, it is challenging to compare both processors in terms of performance and efficiency. Although CISC-based processors require a higher performing processor for complex tasks, it does not necessarily mean that it will outperform a highly efficient RISC-based processor designed for PCs.

    However, Apple’s ARM-based M1 and M2 computers are highly efficient and relatively powerful. Numerous tests have shown that even its base-model M1 MacBook Air can outperform CISC-based Windows laptops of similar specifications.

    However, since most computers run Windows, many developers find it more profitable to create apps for CISC processors. This is also why Macs typically have fewer options when it comes to apps and AAA games.

    This, however, is not the case when it comes to smartphone and tablet processors. Although Intel’s Atom SoC was power efficient, they were not as good as their ARM counterparts in terms of performance per watt. For this reason, ARM processors are still superior when it comes to the smartphone market.

    ARM-Based Chips Are Slowly Gaining an Edge over Intel’s CISC CPUs

    Just a decade ago, even Intel’s entry-level CISC-based processors were much more potent than top-of-the-line ARM-based chips. However, this is no longer the case. Apple has demonstrated with its M1 and M2 chips that ARM-based CPUs can be as capable, if not more capable, than Intel’s competing offering.

    Nevertheless, this does not mean that the CISC-based x86 chip is on its way out. Given the large number of computers that run these chips, it will take time before ARM’s ARM chips will supplant Intel’s x86 offerings. But as ARM processors become more powerful and affordable, their adoption will increase, and soon enough, it will be found that ARM CPUs are powering most computers on the market.

    What is an Arm processor?

    An Arm processor is one of a family of central processing units (CPUs) based on the reduced instruction set computer (RISC) architecture for computer processors. Arm Limited, the company behind the Arm processor, designs the core CPU components and licenses the intellectual property to partner organizations, which then build Arm-based chips according to their own requirements. Arm Limited does not manufacture or sell any chips directly.

    Acorn Computers initially developed the Arm processor in the 1980s.

    Arm Limited provides designs for both 32-bit and 64-bit RISC multicore processors. The processors use a much simpler instruction set than their Intel counterparts, which are based on complex instruction set computing (CISC) architecture. The two types of processors also utilize different methods to optimize performance and increase efficiency. For example, Intel employs a hardware approach to maximizing performance, while Arm opts for a software approach.

    Arm processors can execute many more millions of instructions per second than Intel processors. By eliminating unnecessary instructions and optimizing pathways, an Arm processor can deliver excellent performance while using much less energy than a CISC-based processor. The decrease in power also means that Arm CPUs generate less heat. This does not mean that Arm processors are inherently superior to Intel processors, only that they are better suited to specific use cases.

    Arm processors are extensively used in consumer electronic devices such as smartphones, tablets, wearables, and other mobile devices. They also find use in a wide range of sensors and internet of things devices. According to Arm Limited, the company’s partners have shipped more than 215 billion Arm-based chips over the past three decades.

    The reach of Arm processors is expanding. Historically, Arm processors were mainly used in smaller devices like smartphones and sensors. However, there is a shift as Arm processors are now being integrated into device categories that were traditionally dominated by Intel and to a lesser extent, AMD. For instance, Microsoft offers Arm-based versions of its Surface computers and Windows editions designed for Arm-based PCs.

    Arm processors are also utilized in numerous Chromebook laptops, and Apple has introduced several computers equipped with the Arm-based M1 chip. The new MacBook Pro systems from Apple, powered by the M1 chip, have established a new standard for laptop performance and battery life in the industry.

    Arm processors are now making their way into the server market. While this isn’t a new initiative, its adoption has been gradual. Nevertheless, enterprises are starting to take notice of the Arm architecture due to its potential to provide the best performance-per-watt among enterprise-class CPUs.

    As workloads grow in both size and complexity, they demand more energy for processing and for maintaining safe operating temperatures for the underlying hardware. This not only poses a financial concern but also becomes an issue for organizations that are transitioning towards more sustainable data centers.

    In contrast to scaling up with x86-based processors, an Arm server might employ hundreds of smaller, less advanced, and low-power processors that collaborate on processing tasks. This is sometimes referred to as scaling out. However, even when scaled out, these processors consume less energy and generate less heat than x86 servers, presenting a potential solution for addressing future energy considerations.

    Although Arm-based servers currently represent only a fraction of today’s data center systems, they are progressively gaining ground. For example, Amazon recently introduced the third generation of its Arm-based AWS Graviton processors, which boast up to 25% better compute performance than the previous generation and twice the cryptographic workload performance. These Graviton3 chips are now powering the AWS EC2 C7g instances.

    Ampere has also revealed the first 80-core Arm-based 64-bit server processor, targeting workloads such as artificial intelligence, data analytics, web hosting, and cloud-native applications. Arm-based processors are also being utilized in some of the world’s fastest supercomputers, garnering increasing recognition as a result.

    Meanwhile, Arm Limited continues its efforts to make headway into the data center. For example, its Neoverse chips are designed to deliver the performance and energy efficiency required to support cloud, edge, and 5G workloads now and in the future.

    Recently, there has been a noticeable rise in the prevalence of Windows laptops (and other devices) that utilize ARM processors instead of traditional CPUs (central processing units). However, ARM processors have been a significant part of many people’s electronic devices for over the past twenty years, indicating that ARM technology is not a recent development.

    Despite this, there is still a lack of widespread understanding of this processor technology. What exactly characterizes ARM? What kinds of devices make use of ARM? How do ARM processors differ from CPUs? Why is the adoption of Windows on ARM considered significant? Why is ARM frequently associated with Apple and Qualcomm? Let’s delve into these questions and analyze.

    Note: There are technical discussions on how to refer to RISC technology. This page uses “ARM” to refer to the general computing architecture technology and the original company name, whereas “Arm Ltd.” or “Arm” is used to denote the current company names.

    What is an ARM processor?

    “ARM” or “Arm” refers to a type of RISC instruction set architecture (ISA). An ARM processor simplifies instructions for the computer to maintain efficient system operation.

    ARM processors are typically less powerful than traditional CPUs, yet they also require less power to operate. Many companies opt to utilize ARM-based processors to create lightweight devices with extended battery life and reliable, well-balanced performance.

    What is an ARM-based RISC?

    A reduced instruction set computer (RISC) denotes a computer architecture that streamlines instructions sent to a computer, enabling efficient system operation. In this context, an ARM-based RISC represents a type of RISC computer processor architecture designed to operate with an ARM SoC (system on a chip).

    What sets ARM apart from other processors?

    CPUs from Intel and AMD are supplied and integrated into OEM computers (manufacturers such as Dell, Lenovo, etc.) as is, with no modifications to these processors by the OEMs apart from adjustments to power draw/thermal envelope, known as TDP.

    Intel and AMD design their own x86-x64 processors (CPUs), yet only Intel proceeds to manufacture them in its fab, which accounts for Intel’s significant influence. Conversely, AMD relies on a company like TSMC to manufacture its chips.

    Meanwhile, companies like Qualcomm or MediaTek procure licenses from Arm Ltd. to utilize core ARM CPU components and architecture. With these licenses, manufacturers have the legal freedom to modify or adapt ARM architecture or integrate an ARM processor into the chip. Put differently, while Qualcomm proceeds to design certain components of the SoC, the foundation of the processor stems from Arm Ltd., which conceived and licensed it.

    Subsequent to the finalization of the ARM chip by Qualcomm or MediaTek, it proceeds to a company like TSMC for manufacturing.

    (That being said, Qualcomm’s new Snapdragon X Elite and Snapdragon X Plus chips are, for the first time, independently designed without Arm Ltd.).

    Due to this distinction, an ARM processor is termed an SoC because it encompasses a CPU, GPU, and, in current times, an NPU and other components such as a 5G modem. Essentially, these chips comprise multiple components on a single “board” instead of being solely a CPU.

    Arm processors tend to exhibit more focused functionality compared to CPUs, and they often do not match CPUs in terms of power. Nevertheless, ARM processors generally demand less power to operate, thereby allowing devices utilizing ARM processors to sustain longer battery life.

    What distinguishes ARM64 from x64 and x86?

    In the realm of CPUs, x86 and the newer x64 represent the most prevalent architecture types used in laptops. While x86 can only accommodate a maximum of 4GB RAM, x64 can access a larger capacity, rendering it more potent. On the other hand, AArch64 (or ARM64) serves as a 64-bit extension of ARM architecture with the capability to access more than 4GB RAM. Nevertheless, it does not match the power of x86 or x64 architecture. ARM64 has traditionally been employed in smartphones and smaller devices, but its utilization in laptops and tablets is on the rise.

    Why is the adoption of Windows on ARM significant?

    In recent years, Apple, Microsoft’s foremost competitor, has achieved substantial success with MacBooks and iPads that utilize the company’s M1, M2, M3, and recently announced M4 ARM chips. Simultaneously, the Microsoft Surface series of devices has also garnered impressive sales with its SQ1, SQ2, and SQ3 ARM chips. However, numerous competitors offering Windows on ARM are now challenging macOS on ARM.

    Historically, Windows software has been designed for use on x86 and x64 processor computers, but the significantly more energy-efficient Arm SoCs (Systems on a Chip) have demonstrated superior battery life compared to traditional CPUs for smartphones and laptops. In essence, introducing Windows on ARM has empowered various manufacturers like Dell, Lenovo, and others to manufacture dependable Windows laptops delivering commendable performance alongside longer-lasting batteries in comparison to other CPU laptop options.

    It is currently rumored that Microsoft is developing a new Windows build (code-named Germanium) tailored around the Qualcomm Snapdragon X Elite, another ARM processor. Therefore, Windows shows no signs of veering away from ARM processors in the near future. If anything, the era of Windows on ARM is gaining momentum.

    What is a Snapdragon X SoC?

    A Snapdragon X SoC is a System on a Chip designed for Windows on ARM devices, featuring a 10-core Oryon CPU, an Adreno GPU, and a Hexagon NPU. You can find more information about it in our Snapdragon X Plus guide.

    While ARM processors and devices are currently in the spotlight, this technology is not new. ARM architecture dates back to 1990 when ARM (originally standing for Acorn RISC Machine) was established through a joint venture involving Apple, Acorn Computers, and VLSI (thanks CNBC). Today, the company operates independently as Arm Ltd.

    Instead of manufacturing its own CPUs like AMD and Intel, Arm Ltd. licenses core CPU components and ARM architecture. Companies that acquire these licenses can legally use these components and chip architecture to create more customized processors for their specific requirements.

    ARM processors were utilized in devices such as Nokia phones and Texas Instruments in the 1990s and later became the dominant architecture for touchscreen phones in the 2000s and 2010s. Notably, the first iPhone (2007) utilized ARM technology, as did many Motorola, HTC, and Samsung phones. Beyond mobile devices, Nintendo incorporated several ARM processors in its gaming handhelds, while Raspberry Pi also leverages them. Therefore, even if you are unfamiliar with ARM-based processors, you have likely encountered or utilized an ARM device at some point in your life.

    The initial Windows on ARM processor was the NVIDIA Tegra, which was introduced in 2012. Today, several major tech companies collaborate on ARM-based chips, including Intel, AMD, Qualcomm, Dell, Lenovo, Apple, Microsoft, and others. Although ARM-based processors have not historically been exceptionally powerful, this is not their primary focus. Companies continue to engage with ARM technology because it significantly enhances battery life while delivering balanced device performance. Hence, it is evident why smartphones and smaller devices that prioritize portability (such as the Microsoft Surface line) incorporate this technology.

    The most recent ARM architecture is Armv9, which made its debut in 2021.

    The ongoing rivalry between Apple and Qualcomm

    According to recent benchmarks shared by Qualcomm, the ARM64 Snapdragon X Plus surpasses Apple’s ARM M3 chip in speed. Importantly, this is not solely due to Qualcomm’s processor having more cores. However, the recently announced M4 is expected to outperform the Snapdragon X Elite, indicating that both are poised to deliver substantial performance.

    Some individuals argue that it is unfair to compare Apple’s M Series chips with Qualcomm Snapdragon X, and I will explain why this notion is completely unfounded. For an in-depth understanding, I recommend watching Snazzy Lab’s YouTube Video. In brief, I will highlight that the significant move made by Apple in 2020, shifting from Intel’s x86 chips to developing its own ARM-based M1 chip, garnered considerable attention. The M1 chip became associated with lightweight laptops offering excellent battery life and commendable performance. Subsequently, the company progressed to introduce the “M” series ARM Apple Silicon with the M2 and M3 chips.

    The M Series Apple Silicon, in some aspects, shares similarities with Snapdragon X Elite, not only because they are both ARM processors but also because they were initially developed by some of the same individuals.

    Importantly, it is practically impossible to discuss Apple ARM processors without bringing up Qualcomm. In 2019, a trio of former Apple employees—Gerard Williams III (Apple’s former chief architect for iPhone chips), John Bruno (former system architect at Apple), and Manu Gulati (former lead SoC architect at Apple)—leveraged their knowledge of ARM processors to establish a new company named Nuvia. Here, they continued to advance their designs beyond Apple’s constraints.

    Their achievements led to the acquisition of Nuvia by the semiconductor manufacturer, Qualcomm, in 2021. Interestingly, Arm sued Qualcomm over its acquisition of Nuvia, alleging that the deal violated Nuvia’s licensing agreements with Arm Ltd.

    The underlying takeaway is that M Series Apple Silicon shares some similarities with Snapdragon X Elite, not only because they are both ARM processors but also because they were originally developed by some of the same individuals. It is now evident that the trio that departed from Apple has managed to create ARM processors that outpace those developed by Apple.

    While comparing Apple and Qualcomm ARM processors can be intriguing, Qualcomm’s primary competition is not Apple but rather AMD and Intel. Presently, Qualcomm is the most significant competitor in the ARM processor arena.

    There was a time when Intel held a commanding position as the primary chipmaker in the United States. However, the landscape has changed, with AMD emerging as a robust CPU competitor often surpassing Intel’s CPU performance while offering products at relatively lower prices. Surprisingly, Qualcomm currently surpasses Intel in terms of company size (thanks CNBC).

    AMD and Qualcomm have dedicated significant effort in recent years to develop NPUs (neural processing units) to keep up with the increasing demand for AI (read more in my NPU guide). In contrast, Intel has only recently entered the NPU market with the introduction of its Intel Core Ultra line this year. However, Intel’s AI processors have not demonstrated the same level of performance as those from AMD or Qualcomm. Even Qualcomm’s own benchmark shows that Snapdragon X Elite outperforms Intel Core Ultra.

    Despite Intel’s strong reputation, the company is losing ground to its competitors who offer superior performance. Over time, Qualcomm’s ARM technology is expected to advance further, possibly leading to a significant increase in powerful Windows on ARM devices.

    This is not an exhaustive list of ARM processor devices, but rather a brief overview of the kinds of devices (past and present) powered by ARM.

    • Microsoft Surface devices, Apple iPad, Apple M-Series MacBooks, Lenovo ThinkPad laptops, Dell Inspiron laptops
    • Apple iPhone, Apple iPod, Motorola phones, Nokia phones, Samsung phones, HTC phones, Blackberry phones
    • Raspberry Pi, Roku 2
    • Game Boy Advance, Nintendo DS, Nintendo DSi, Nintendo 3DS, Nintendo Wii U, Nintendo Switch
    • Garmin Navigation Devices, TomTom navigation devices
    • HP Calculators, Texas Instruments

    There is much to learn about ARM processors, so here’s a comprehensive ARM FAQ to address any additional queries you may have.

    What is the full form of ARM?

    ARM originally stood for Acorn RISC Machine, named after the British tech company Acorn Computers, which was involved in its creation. Later, ARM became recognized as the acronym for Advanced RISC Machines. Today, the company behind this processor architecture is known as Arm Ltd.

    SoC vs processor vs CPU: What’s the distinction?

    A CPU is a set of hardware components, known as a processor, that collaborate to process fundamental instructions for a computer system. On the other hand, an SoC is a more specialized and less powerful computing option that offers a balanced performance level and greater efficiency. Both SoC and CPU have their respective uses, and one might be better suited for specific tasks than the other.

    What are the constraints of Windows on ARM?

    In the past, Windows on ARM was quite restricted, but that is no longer the case. Previously, many drivers and programs were often not compatible with ARM, but now the majority, if not all, of your apps will function on Windows on ARM laptops. The only potential exception to this could be smaller developers who have not released ARM64 drivers for their software.

    Why is Windows on ARM not as good?

    Though this was once a prevalent sentiment, it is worth addressing. At one time, Windows on ARM was much more limited than on a traditional Windows laptop because numerous programs lacked ARM64 extensions. However, this has changed over the years. Nowadays, most, if not all, of your apps can be run on a Windows on ARM device.

    Why is ARM so popular?

    ARM architecture empowers computer manufacturers to exercise more control over their products. From a consumer perspective, ARM processor devices are favored for their lightweight designs, excellent battery life, and impressive performance.

    Is Windows compatible with ARM?

    Yes. ARM devices can run Windows 10 and Windows 11 with ease.

    Should “ARM” be capitalized?

    Both “ARM” in all caps and “Arm” with only the first letter capitalized are acceptable, depending on the context. ARM originally stood for Acorn RISC machine, so many tech writers write the term in all caps when discussing the technology in general. However, the company Arm Limited currently capitalizes only the first letter in its name. Additionally, the 64-bit extension for ARM architecture is written with all caps, “ARM64,” while specific computing technologies are written with the first letter capitalized, such as “Arm Cortex-X1” or “Armv9-A.”

    It can be quite confusing. However, as a general rule, “ARM” is used when referring to the original company, specific extensions/drivers, or the technology in general, while “Arm” is appropriate when discussing the modern company or its specific computing technologies.

    Will ARM replace x86?

    Both x86 and ARM have their advantages, and while they may end up serving different purposes over time, it is unlikely that one will dominate the other in the next decade.

    Is Raspberry Pi based on ARM?

    Yes, the initial Raspberry Pi utilized a Broadcom BCM2835 SoC with an ARM processor. Since then, multiple Raspberry Pi models have also incorporated ARM processors. The latest Raspberry Pi 5 features a 64-bit 2.4 GHz quad-core ARM Cortex-A76 processor.

    Arm sued Qualcomm in 2022 over its purchase of Nuvia

    Arm has intensified its conflict with Qualcomm, two years after initiating legal action against its former close ally. According to Bloomberg, the British semiconductor firm has revoked the architecture license that permitted Qualcomm to utilize its intellectual property and chip design standards. As noted by the news source, Qualcomm, similar to numerous other chip manufacturers, relies on Arm’s software code necessary for chips to operate systems like operating systems. Reportedly, Arm has issued Qualcomm a 60-day cancellation notice — if an agreement isn’t reached in that timeframe, it could significantly affect the finances and operations of both firms.

    The SoftBank-backed semiconductor company took legal action against Qualcomm in 2022 after the latter acquired a company called Nuvia, which was another of its licensees. Arm claimed that the US firm failed to secure the required permissions to transfer Nuvia’s licenses. Consequently, Nuvia violated their agreement, and Arm asserted it had terminated those licenses, as outlined in its lawsuit. Qualcomm has been implementing technology developed by Nuvia in chips designed for AI PCs, such as those produced by Microsoft and HP. However, Arm demands that Qualcomm cease using technology developed by Nuvia and eliminate any Arm-based technology created before the acquisition.

    Should the firms fail to resolve the matter within the next 60 days, Qualcomm may need to cease sales of the majority of the chips that contribute to its $39 billion revenue, according to Bloomberg. The US chipmaker perceives this as a strategy by Arm to intimidate its business and secure greater royalties, with its spokesperson stating to Bloomberg and the Financial Times: “This is just more of what we’ve seen from Arm — more unfounded threats aimed at pressuring a long-term partner, interfering with our leading CPUs, and raising royalty rates despite the broad rights granted under our architecture license.” Qualcomm also charged Arm with trying to obstruct the legal process, describing its basis for terminating the license as “completely baseless,” and expressed confidence that its “rights under the agreement with Arm will be upheld.”

    In response, an Arm representative remarked: “In light of Qualcomm’s continuous significant violations of Arm’s licensing agreement, Arm has no alternative but to initiate formal proceedings requiring Qualcomm to address its breach or risk the cancellation of the agreement. This action is essential to safeguard the exceptional ecosystem that Arm and its valued partners have developed over more than three decades. Arm is fully prepared for the trial scheduled for December and remains assured that the Court will rule in Arm’s favor.”

    As reported, Arm has provided Qualcomm with a mandatory 60-day notice concerning the termination of the licensing agreement. This contract previously allowed Qualcomm to create its own chips utilizing Arm’s proprietary standards.

    Arm chose not to comment on the situation, while Qualcomm did not reply to a comment request from Reuters outside regular business hours.

    This announcement aligns with an ongoing legal struggle between the two technology companies, set to commence in federal court in Delaware this December.

    The British firm, predominantly owned by Japan’s SoftBank Group, initiated legal action against Qualcomm in 2022 for allegedly neglecting to negotiate a new licensing agreement following the acquisition of a new company.

    The lawsuit focuses on technology that Qualcomm acquired from Nuvia, a startup established by former Apple chip engineers, which Qualcomm purchased for $1.4 billion in 2021. Qualcomm intended to utilize Nuvia’s technology to rival Apple by developing chips capable of competing with the Apple M-series processors and challenging the desktop market led by Intel and AMD.

    While the acquisition was initially perceived as standard procedure, Arm contends that Qualcomm’s employment of Nuvia’s designs infringes on the licensing agreement. The licenses granted to Nuvia were specifically designated for a startup and could not be directly employed by Qualcomm without Arm’s consent.

    Arm has previously stressed that, “Arm is pursuing this claim to protect Arm, our partners, and the unmatched ecosystem we have collaboratively built. Arm and its partners have invested billions of dollars in creating industry-leading intellectual property. Because Qualcomm sought to transfer Nuvia licenses without Arm’s authorization, a standard restriction under Arm’s licensing agreements, Nuvia’s licenses were annulled in March 2022. Prior to and following that date, Arm made numerous good faith attempts to reach a resolution. Conversely, Qualcomm has breached the terms of the Arm licensing agreement by continuing development under the revoked licenses. Arm felt compelled to take legal action against Qualcomm and Nuvia to safeguard our intellectual property, our business, and to ensure customers can access legitimate Arm-based products.”

    Arm has allegedly provided Qualcomm with a 60-day cancellation notice as mandated by law. This license enables Qualcomm, based in San Diego, to create its own silicon according to Arm’s chip standards.

    The Bloomberg report emerged just as Qualcomm unveiled its latest mobile and automotive chips during its annual Snapdragon Summit held in Hawaii. These chips mark significant achievements for Qualcomm as they introduce new generative AI features for smartphones, including the Xiaomi 15 and Asus ROG 9 Phone, while car manufacturers Mercedes-Benz and Li Auto have pledged to implement the AI-compatible automotive chips in their forthcoming vehicles.

    At the time of publication, Qualcomm had not confirmed that these chips were developed based on Arm standards, but the Oryon CPUs central to both product lines are implicated in the lawsuit. The company has contested Arm’s cancellation in an official statement.

    “This situation reflects a continuation from ARM — more unfounded threats intended to coerce a longstanding partner, disrupt our performance-leading CPUs, and raise royalty rates despite the extensive rights granted under our architecture license. With a trial quickly approaching in December, Arm’s desperate maneuver seems aimed at undermining the legal proceedings, and its termination claim is entirely unfounded,” stated Qualcomm. “We are confident that Qualcomm’s entitlements under its arrangement with Arm will be upheld. Arm’s anticompetitive behavior will not be accepted.”

    Initially, Arm refrained from commenting but later issued a statement indicating its readiness for the December trial, which will seek to resolve competing lawsuits between the two companies.

    “In light of Qualcomm’s numerous significant violations of Arm’s license agreement, Arm has been compelled to take formal action requiring Qualcomm to rectify its breach or face the end of the agreement. This step is essential to safeguard the unparalleled ecosystem that Arm and its esteemed partners have cultivated over more than 30 years,” Arm’s statement indicated.

    The forthcoming trial scheduled for December coincides with Arm’s 60-day deadline. Arm has alleged that Qualcomm failed to renegotiate a deal post its acquisition of an Arm licensee in 2021, the chip startup Nuvia, and employed its chip designs in upcoming products. Qualcomm maintains that its existing Arm agreement encompasses Nuvia, according to Bloomberg. Arm initiated a lawsuit, and Qualcomm responded with a counter-suit.

    This scenario results in detrimental outcomes for all parties involved: Arm, Qualcomm, Microsoft, and most Android manufacturers.

    As it currently stands, this represents a no-win situation for every participant, but it also carries broader implications for the tech industry. The core of the dispute revolves around finances. Nuvia was compensating Arm with a reduced licensing fee during the development of the predecessor to the Oryon cores, now utilized in multiple Windows on Arm devices and começando to appear in smartphones. Qualcomm, in turn, pays a higher licensing fee to utilize Arm’s Cortex architecture for its mobile chips. Upon acquiring Nuvia, Qualcomm presumed that the licensing was transferred with the purchase, effectively obtaining a discount compared to what it would have to pay directly to Arm.

    With the increasing success of Windows on Arm, Arm (whether justified or not) believes it deserves a fair share, which would align more closely with the fees Qualcomm currently pays. Should Arm revoke the license for Cortex or any other intellectual property used by Qualcomm, it would find itself in a precarious position. Although Microsoft desires Windows on Arm devices, this segment constitutes a small portion of the processors Qualcomm manufactures. The smartphone and tablet sectors predominantly rely on Qualcomm’s offerings, making this a situation that transcends a simple corporate rivalry.

    The final element of this losing equation is that while losing access to Arm’s IP would have severe repercussions for Qualcomm, the relationship is not one-sided. In the last fiscal year, Qualcomm contributed to 10% of Arm’s total revenue (via Barrons). Considering Arm’s complete financial landscape, this represents a notable difference between year-end profits being positive or negative (via Yahoo).

    This kind of posturing seems illogical. Qualcomm will likely have to incur some additional licensing fees as a result of the court case regardless, so why provoke the situation? It has evolved into a case of (almost) mutually assured destruction, especially if one of Arm’s licensees exits the scene.
    Arm currently doesn’t make products; it creates designs and ideas that its licensees can turn into products. I can’t imagine that any of Arm’s other business partners are looking at this situation warmly. It’s one thing to take another company to court; that’s part and parcel of the cost of doing business. It’s another thing altogether to be going to court and also threaten to remove the very thing you’re going to court to address, on the same day you’re due in court.

    We can only hope that cooler heads will prevail in the remaining days before the court date and the second threat of the impending removal of Qualcomm’s licensing deals. Over the years, we’ve reviewed hundreds or possibly thousands of Qualcomm-powered devices, all using Arm IP, and for the most part, we loved them. Qualcomm not being able to use Arm’s architecture would be on a similar scale to losing AMD because Intel refuses to renew the x86 license that’s necessary to make Ryzen, Threadripper, Epyc, and custom silicon for gaming consoles. Except, in that situation, the government would step in as it has before when Intel flexed its muscles years ago with monopolistic practices.

    Arm actually going through with its threat would be the biggest self-own in tech

    It’s challenging to be a pure IP company that thrives on the success of the companies to which you license your hard work and engineering know-how. At least, it would be if you were any other company, but things are a little different when you’re used in 99% of the world’s smartphones, almost everything Apple makes, and a growing sector of Windows devices. Arm might manage to survive without Qualcomm, but it’d be close, and Qualcomm is almost gone without access to Arm’s IP. Not to mention the hundreds of companies that use Qualcomm’s Arm-based products in their own products. Losing either Arm or Qualcomm would be a huge blow to the technology sector, one that it would take many, many years to recover from.

    While there are other CPU architecture alternatives, Arm has a virtual monopoly in the mobile space. Qualcomm, Samsung, Apple, and others license IP from Arm to make their custom cores, so the chances of your smartphone using their IP are high. This is monopolistic behavior and the statement Qualcomm put out in response to the 60-day timeline says as much:

    This is more of the same from ARM – more unfounded threats designed to strongarm a longtime partner, interfere with our performance-leading CPUs, and increase royalty rates regardless of the broad rights under our architecture license. With a trial fast approaching in December, Arm’s desperate ploy appears to be an attempt to disrupt the legal process, and its claim for termination is completely baseless. We are confident that Qualcomm’s rights under its agreement with Arm will be affirmed. Arm’s anticompetitive conduct will not be tolerated.

    Of course, Arm released its own statement not long afterward, sticking to its guns about the 60-day termination notice:

    Following Qualcomm’s repeated material breaches of Arm’s license agreement, Arm is left with no choice but to take formal action requiring Qualcomm to remedy its breach or face termination of the agreement. This is necessary to protect the unparalleled ecosystem that Arm and its highly valued partners have built over more than 30 years. Arm is fully prepared for the trial in December and remains confident that the Court will find in Arm’s favor.

    Note that the company references the upcoming court date in the statement, which is a bold move when they’re essentially intimidating the other party involved.

    Arm does not produce products directly; it instead develops designs and concepts that its licensees can manufacture into products. I doubt any of Arm’s other business collaborators view this situation positively. While taking legal action against another company is a common aspect of business, threatening to revoke the very assets that are the subject of the court case on the same day as the hearing is altogether different.

    We can only wish for cooler heads to prevail in the days leading up to the court hearing and the second warning of potentially revoking Qualcomm’s licensing agreements. Over the years, we’ve evaluated countless Qualcomm-powered devices, all utilizing Arm’s intellectual property, and for the most part, we’ve been impressed. If Qualcomm were unable to use Arm’s architecture, it would parallel losing AMD if Intel decided not to renew the necessary x86 license for making Ryzen, Threadripper, Epyc, and custom chips for gaming systems. However, in that scenario, government intervention would likely occur, similar to previous instances when Intel exhibited monopolistic tendencies.

    If Arm follows through with its threat, it would be a major blunder in the tech industry

    Being a pure IP company that relies on the success of its licensees poses distinct challenges. This would be true for any other business, but the situation differs significantly since Arm is integral to 99% of the world’s smartphones, nearly all Apple’s products, and an increasing number of Windows devices. Arm may be able to survive without Qualcomm, but it would be a close call, and Qualcomm would almost certainly struggle without access to Arm’s intellectual property. Additionally, many companies that incorporate Qualcomm’s Arm-based products in their offerings would also be affected. The loss of either Arm or Qualcomm would deal a significant blow to the tech industry, one from which it would take many years to recover.

    While there are alternative CPU architectures available, Arm holds a de facto monopoly in the mobile market. Companies like Qualcomm, Samsung, and Apple license intellectual property from Arm to develop their custom cores, making it highly likely that your smartphone utilizes their technology. This reflects monopolistic practices, and Qualcomm’s response to the 60-day notice indicates as much:

    This reflects a familiar pattern from ARM – yet more baseless threats aimed at coercing a long-time partner, undermining our top-performing CPUs, and raising royalty fees despite the broad rights outlined in our architecture license. With a trial looming in December, Arm’s desperate attempt seems intended to disrupt the legal proceedings, and its claim for termination lacks any basis. We believe that Qualcomm’s rights under its agreement with Arm will be upheld. Arm’s anticompetitive behavior will not be accepted.

    Naturally, Arm swiftly issued its own statement, maintaining its position regarding the 60-day termination notice:

    In light of Qualcomm’s ongoing material breaches of Arm’s licensing agreement, Arm has no alternative but to pursue formal action requiring Qualcomm to rectify its breach or face the termination of the agreement. This step is essential to safeguard the unparalleled ecosystem that Arm and its esteemed partners have cultivated over more than three decades. Arm is fully prepared for the upcoming trial in December and remains confident that the Court will rule in Arm’s favor.

    It’s noteworthy that the company mentions the upcoming court date in its statement, a daring approach when they are essentially intimidating the other party involved.

  • In the future, strict rules for the use of artificialintelligence will apply in the EU

    In the future, strict rules for the use of artificial intelligence will apply in the EU. The law is important, says expert Lukowiczin an interview. Although the technology is not actually intelligent, it willmassively change our lives.

    tagesschau.de: The EU has decided on a position on the planned first AI law. It is intended to ban or regulate high-risk and risky applications. How useful are the rules from your point of view?

    Paul Lukowicz: It’s a very good approach. Artificial intelligence (AI) is enormously powerful. It will influence our lives like no other technology in recent years. If we want it to change our lives for the better, it must also be regulated by law.

    Regulation that does not regulate the technology itself, but rather its effects, makes a lot of sense. Because by doing so we prevent something bad from happening without hindering innovation and the creation of the technology. future for artificial intelligence

    “AI can endanger security” future for artificial intelligence

    tagesschau.de: The planned EU law differentiates between the applications – among other things, they are classified as risky and high-risk. High-risk applications should be banned, risky ones should besubject to strict requirements. When do you think artificial intelligence is risky and should be banned?

    Lukowicz: Risky and forbidden – those are two different things. AI is risky – like any other technology – when it has an impact on human well-being, human life and the security of certain things that areimportant to us in society. Especially if she does something wrong, she can endanger security.

    However, AI is also capable of doing things that we fundamentally do not want. For example, certain surveillance techniques such as the famous “Social Scoring System”, in which AI systems are used to evaluate people’s behavior and see whether they behave the way the state would want them to. We basically don’t want something like that. It is right that this is simply forbidden by law.

    tagesschau.de: Where should the limits be for the useof AI – for example when used in the medical field?

    Lukowicz: It is always problematic when the AI ​​​​does things without humans being able to intervene or take a second look at them.This generally also applies in the medical field. When it comes to high-risk applications, it’s not so much about whether we want to use the technology, but about the requirements that the technology must meet so that it can be used safely.

    AI should always be used in medicine if the use of AI increases the likelihood that the medical intervention will be successful and benefit people.

    “There is no real intelligence behind it”

    tagesschau.de: What exactly is artificial intelligence?

    Lukowicz: AI is nothing more than a set of mathematicalmethods and algorithms that have been found to be able to do things that wepreviously thought were only possible for humans. For example, 20 years ago an AI won against a human grandmaster in chess for the first time. But AI can also generate complex images or pieces of music.

    It’s important to understand that no matter how amazing thisis, there is no real intelligence behind it. At least not in the sense that we might understand intelligence. They are very precisely defined, but often quite simple mathematical procedures that are applied to large amounts of data.

    tagesschau.de: Does that mean the AI ​​only does whatwas programmed?

    Lukowicz: It’s not that simple. In an AI program, the so-called machine learning process, the computer is usually given a lot of examples. They illustrate what should be done. The computer is then told step by step what it has to do in order to deduce from these examples how the problem can actually be solved.

    The system does not learn in the sense that it does something completely independently. We have taught it how to derive somethingfrom the data and it cannot do anything else.

    But usually this data is so complex that we as humans cannot really say with 100 percent certainty what the system will actually extract from the data. And that is precisely where the big problem lies and hence the eneed for regulation.

    If we don’t look closely at these data sets, these”sample sets”, if we don’t build in certain security mechanisms, then we can end up with a system that we believe does A. In reality, it’s doing B -because we didn’t properly understand the data we provided to it.

    “The fact that AI is displacing humans is sciencefiction”

    tagesschau.de: So we don’t have to worry and we can continueto work with AI?

    Lukowicz: Given the current state of AI, the fact thatAI will eventually establish a new intelligent species and displace humans definitely belongs in the realm of science fiction films.

    But it is a technology that is influencing more and more areas of our lives – for example the way we consume information. Or in trafficwith self-driving cars. AI can control energy grids and many other things. That’s why regulation by the European Parliament is so important. future for artificial intelligence

    We don’t need to be afraid, but we need to use this technology thoughtfully and with appropriate caution. We should always ask ourselves: Is the use of technology in one place or another something that really benefits usas humans or is it something that might put us in danger?

    The interview was conducted by Anja Martini, tagesschau.de

    The interview was edited and shortened for the written version.

    future for artificial intelligence future for artificial intelligence future for artificial intelligence

    In order to perform any task on a computer, you must instruct your device on which application to utilize. While you can utilize Microsoft Word and Google Docs to compose a business proposal, these programs cannot assist you in sending an email, sharing a selfie, analyzing data, scheduling an event, or purchasing movie tickets. Additionally, even the most advanced applications lack a comprehensive understanding of your professional work, personal life, interests, and relationships, and have limited capability to utilize this information to perform actions on your behalf. Currently, this type of functionality is only achievable with another human being, such as a close friend or a personal assistant.

    Over the next five years, this will undergo a complete transformation. You will no longer need to use different applications for various tasks. Instead, you will simply inform your device, in everyday language, about the action you want to carry out. Based on the level of information you choose to share, the software will be able to provide personalized responses due to its thorough comprehension of your life. In the near future, anyone with online access will be able to have a personal assistant powered by artificial intelligence that surpasses current technology.

    This kind of software, which can understand natural language and execute various tasks based on its knowledge of the user, is referred to as an agent. I have been contemplating agents for nearly thirty years and discussed them in my 1995 book, The Road Ahead, but they have only recently become viable due to advancements in AI.

    Agents will not only revolutionize how everyone interacts with computers but will also disrupt the software industry, leading to the most significant computing revolution since the transition from command typing to icon clicking.

    A personal assistant for all

    Certain critics have highlighted that software companies have previously offered similar solutions, which users did not wholeheartedly embrace (e.g., people still mock Clippy, the digital assistant included in Microsoft Office and later discontinued). So, why will people adopt agents?

    The answer lies in their substantial improvement. Users will be able to engage in nuanced conversations with them. Agents will be highly personalized and won’t be limited to simple tasks like composing a letter. Clippy shares as much similarity with agents as a rotary phone does with a mobile device.

    If desired, an agent will be able to assist with all of your activities. By obtaining permission to monitor your online interactions and physical locations, it will develop a profound understanding of the people, places, and activities you are involved in. It will comprehend your personal and professional relationships, hobbies, preferences, and schedule. You will have the freedom to choose how and when it assists with a task or prompts you to make a decision.

    “Clippy was a bot, not an agent.”

    To comprehend the substantial impact that agents will bring, let’s compare them to the current AI tools. Most of these tools are bots, confined to a single application and typically only intervene when a particular word is written or when assistance is requested. Since they do not remember previous interactions, they do not improve or learn any user preferences. Clippy was a bot, not an agent.

    Agents are more intelligent. They are proactive, capable of offering suggestions before being prompted. They can carry out tasks across applications and improve over time by recalling your activities and recognizing intentions and patterns in your behavior. Drawing from this information, they will offer to provide what they believe you need, while you always retain the final decision-making authority.

    Imagine that you wish to plan a trip. While a travel bot may identify affordable hotels, an agent will have knowledge of your travel dates and, based on its understanding of whether you prefer new destinations or repeat ones, can suggest suitable locations. Upon request, it will recommend activities based on your interests and adventure tendencies and book reservations at restaurants that align with your preferences. As of now, achieving this level of personalized planning requires engaging a travel agent and spending time detailing your preferences to them.

    The most exciting impact of AI agents is the democratization of services that are currently unaffordable for many people. They will have a particularly significant impact on four areas: healthcare, education, productivity, and entertainment and shopping.

    Healthcare

    Presently, AI primarily assists in healthcare by handling administrative tasks. For instance, applications like Abridge, Nuance DAX, and Nabla Copilot can capture audio during a medical appointment and create notes for the doctor to review.

    The significant transformation will occur when agents can aid patients in basic triage, provide guidance on managing health issues, and assist in determining the need for further treatment. These agents will also support healthcare professionals in making decisions and increasing productivity. (For example, applications such as Glass Health can analyze a patient summary and suggest diagnoses for the doctor to consider.) Providing assistance to patients and healthcare workers will be especially beneficial for individuals in underprivileged countries, where many individuals never have the opportunity to consult a doctor.

    These medical AI assistants will take longer to be implemented compared to others because ensuring accuracy is a matter of life and death. People will require convincing evidence of the overall benefits of health AI assistants, even though they won’t be flawless and will make errors. Human errors occur as well, and lack of access to medical care is also a significant issue.

    A significant number of U.S. military veterans who require mental health treatment do not receive it.

    Mental health care is another example of a service that AI assistants will make accessible to almost everyone. Currently, weekly therapy sessions may seem like a luxury, but there is substantial unmet demand, and numerous individuals who would benefit from therapy do not have access to it. For example, a study conducted by RAND revealed that half of all U.S. military veterans who require mental health care do not receive it.

    Well-trained AI assistants in mental health will make therapy more affordable and accessible. Wysa and Youper are among the early chatbots in this field, but AI assistants will delve much deeper. If you choose to share enough information with a mental health assistant, it will comprehend your life history and relationships. It will be available when needed and won’t become impatient. With your consent, it could even monitor your physical responses to therapy through your smartwatch—such as noticing if your heart rate increases when discussing an issue with your boss—and recommend when you should consult a human therapist.

    Education

    For years, I have been enthusiastic about the ways in which software can ease teachers’ workload and aid student learning. It won’t supplant teachers but will complement their efforts by customizing work for students and freeing teachers from administrative tasks to allow more focus on the most crucial aspects of their job. These changes are finally beginning to materialize in a significant manner.

    The current pinnacle of this development is Khanmigo, a text-based bot developed by Khan Academy. It can provide tutoring in subjects such as math, science, and the humanities—for instance, explaining the quadratic formula and creating math problems for practice. It can also aid teachers in tasks like lesson planning. I have been a long-time admirer and supporter of Sal Khan’s work and recently had him on my podcast to discuss education and AI.

    Text-based bots are just the initial phase—AI assistants will unlock numerous additional learning opportunities.

    For instance, only a few families can afford a tutor who provides one-on-one supplementary instruction to complement classroom learning. If assistants can capture the effectiveness of a tutor, they will make this supplementary instruction available to everyone who desires it. If a tutoring assistant knows that a child enjoys Minecraft and Taylor Swift, it will utilize Minecraft to teach them about calculating the volume and area of shapes, and use Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far more immersive—with graphics and sound, for example—and more tailored than today’s text-based tutors.

    Productivity

    There is already substantial competition in this field. Microsoft is integrating its Copilot into Word, Excel, Outlook, and other services. Similarly, Google is employing its Assistant with Bard and productivity tools to accomplish similar tasks. These copilots can perform numerous functions, such as transforming a written document into a presentation, responding to questions about a spreadsheet using natural language, and summarizing email threads while representing each person’s perspective.

    AI assistants will do much more. Having one will be akin to having a dedicated personal aide to assist with a variety of tasks and execute them independently at your request. If you have a business idea, an assistant will help you draft a business plan, create a presentation, and even generate images depicting your product. Companies will be able to provide assistants for their employees to directly consult and participate in every meeting to address queries.

    Whether working in an office or not, your assistant will be able to support you in the same way personal assistants aid executives today. For instance, if your friend recently underwent surgery, your assistant will offer to arrange flower delivery and can place the order for you. If you express a desire to reconnect with your college roommate, it will collaborate with their assistant to schedule a meeting, and just before the meeting, it will remind you that their eldest child recently commenced studies at the local university.

    Entertainment and shopping

    AI can already assist in selecting a new TV and recommend movies, books, shows, and podcasts. Additionally, a company I have invested in recently launched Pix, which allows you to pose questions (such as “Which Robert Redford movies might appeal to me and where can I watch them?”) and then offers suggestions based on your past preferences. Spotify features an AI-powered DJ that not only plays songs based on your tastes but also engages in conversation and can even address you by name.

    Agents will not only provide suggestions but also assist you in taking action based on those suggestions. For instance, if you wish to purchase a camera, your agent will go through all the reviews, summarize them, recommend a product, and place an order once you’ve made a decision. If you express a desire to watch Star Wars, the agent will check if you have the appropriate streaming service subscription, and if not, offer to help you sign up for one. Additionally, if you’re unsure about what you want to watch, the agent will make personalized recommendations and facilitate the process of playing your chosen movie or show.

    Moreover, you will have access to personalized news and entertainment tailored to your interests. An example of this is CurioAI, which can generate a customized podcast on any topic you inquire about.

    This advancement spells a significant change in the tech industry. Essentially, agents will be capable of aiding in almost any activity and aspect of life. This will bring about profound implications for both the software industry and society.

    In the realm of computing, we often refer to platforms as the underlying technologies on which apps and services are built. Android, iOS, and Windows are all examples of platforms. Agents are poised to be the next major platform.

    In the future, creating a new app or service will not require expertise in coding or graphic design. Instead, you will simply communicate your requirements to your agent. It will have the ability to write code, design the app’s interface, create a logo, and publish the app on an online store. The recent introduction of GPTs by OpenAI offers a glimpse into a future where individuals who are not developers can easily create and share their own assistants.

    Agents will revolutionize both the use and development of software. They will replace search engines because of their superior ability to find and synthesize information for users. They will also supplant many e-commerce platforms by identifying the best prices across a wider range of vendors. Additionally, they will supersede traditional productivity apps such as word processors and spreadsheets. Sectors that are currently distinct—like search advertising, social networking with advertising, shopping, and productivity software—will merge into a single industry.

    It is unlikely that a single company will dominate the agents business. Rather, there will be numerous different AI engines available. While some agents may be free and ad-supported, most will likely be paid for. Therefore, companies will be motivated to ensure that agents primarily serve the user’s interests rather than the advertisers’. The high level of competition among companies entering the AI field this year suggests that agents will be very cost-effective.

    However, before the sophisticated agents described earlier become a reality, we need to address several technical and usage-related questions about the technology. I have previously written about the ethical and societal issues surrounding AI, so in this discussion, I will focus specifically on agents.

    There is as yet no established data structure for an agent. Developing personal agents will necessitate a new type of database capable of capturing the intricacies of individuals’ interests and relationships and swiftly recalling this information while upholding privacy. New methods of information storage, such as vector databases, are emerging and may be better suited for housing data generated by machine learning models.

    Additionally, it remains uncertain how many agents users will interact with. Will a personal agent be distinct from a therapist agent or a math tutor? If so, there is the question of when and how these agents might collaborate.

    The manner in which users will interact with their agents also presents a challenge. Companies are exploring various options, including apps, glasses, pendants, pins, and even holograms. Although all of these are viable possibilities, the milestone breakthrough in human-agent interaction could be earbuds. If an agent needs to communicate with you, it might speak to you or appear on your phone. For example, it may say, “Your flight is delayed. Would you like to wait, or can I assist in rebooking?” Additionally, it can enhance the sound coming into your ear by eliminating background noise, amplifying difficult-to-hear speech, or clarifying heavily accented speech.

    Other challenges include the absence of a standardized protocol for agent-to-agent communication, the need to make agents affordable for all users, the necessity for more effective prompting to obtain the desired response, the avoidance of misinformation—particularly in crucial domains like healthcare—and ensuring that agents do not cause harm due to biases. Moreover, it is imperative to prevent agents from performing unauthorized actions. While concerns about rogue agents persist, the potential misuse of agents by malicious individuals is a more pressing issue.

    Privacy and other significant concerns

    As these developments unfold, the issues surrounding online privacy and security will become even more pressing than they already are. It will be important for you to have the ability to determine what information the agent can access, so you can be confident that your data is only shared with the individuals and companies of your choosing.

    However, who has ownership of the data you share with your agent, and how can you ensure that it is used appropriately? No one wants to start receiving advertisements related to something they confided in their therapist agent. Can law enforcement use your agent as evidence against you? When might your agent refuse to engage in actions that could be detrimental to you or others? Who determines the values that are embedded in agents?

    There is also the issue of how much information your agent should disclose. For instance, if you want to visit a friend, you wouldn’t want your agent to say, “Oh, she’s meeting other friends on Tuesday and doesn’t want to include you.” Additionally, if your agent assists you in composing work emails, it needs to know not to use personal information about you or proprietary data from a previous job.

    Many of these concerns are already at the forefront of the technology industry and among legislators. I recently took part in a forum on AI with other technology leaders, which was organized by Sen. Chuck Schumer and attended by numerous U.S. senators. During the event, we exchanged ideas about these and other issues and discussed the necessity for lawmakers to implement robust legislation.

    However, some issues will not be determined by companies and governments. For example, agents could impact how we interact with friends and family. Today, expressing care for someone can involve remembering details about their life, such as their birthday. But if they know that your agent likely reminded you and handled sending flowers, will it hold the same significance for them?

    In the distant future, agents may even compel humans to contemplate profound questions about purpose. Consider a scenario where agents become so advanced that everyone can enjoy a high quality of life without having to work as much. In such a future, what would people do with their time? Would obtaining an education still be desirable when an agent provides all the answers? Can a safe and flourishing society be sustained when most individuals have significant amounts of free time?

    Nevertheless, we have a long way to go before reaching that stage. In the meantime, agents are on the horizon. Over the next few years, they will completely transform how we lead our lives, both online and offline.

    What is the significance of artificial intelligence?

    AI streamlines repetitive learning and exploration through data. Rather than automating manual tasks, AI carries out frequent, high-volume, computerized tasks reliably and without fatigue. Human involvement is still crucial for setting up the system and asking the appropriate questions.

    AI enhances the intelligence of existing products. Many products that are currently in use will benefit from AI capabilities, similar to the way Siri was integrated into a new generation of Apple products. Automation, conversational platforms, bots, and smart machines can be merged with extensive data to enhance numerous technologies. Upgrades in home and workplace settings, such as security intelligence and intelligent cameras, along with investment analysis, are included.

    AI adjusts through progressive learning algorithms to enable data to dictate the programming. AI identifies patterns and regularities in data to allow algorithms to acquire skills. Just as an algorithm can teach itself to play chess, it can also learn what product to recommend next online. Furthermore, the models adapt when presented with new data.

    AI a greater and more comprehensive amount of data using neural networks that have multiple hidden layers. Previously, constructing a fraud detection system with five hidden layers was considered unfeasible. However, this has changed due to the remarkable computer power and large data sets. Extensive data is necessary to train deep learning models because they learn directly from the data.

    AI achieves remarkable precision through deep neural networks. For instance, Alexa and Google interactions are primarily based on deep learning, and these products become more accurate with increased usage. In the medical field, AI techniques from deep learning and object recognition can now be employed to precisely identify cancer in medical images.

    AI maximizes the potential of data. When algorithms are self-learning, the data itself becomes a valuable asset where the solutions lie. Applying AI is the key to uncovering these answers. Since the significance of data has now become more pronounced than ever, it can confer a competitive edge. In a competitive industry, possessing the best data is advantageous, even if similar techniques are being utilized by everyone, as the best data will emerge triumphant.

    Top digital technology news:

    Upcoming EU AI regulations set to take effect; Concerns raised about the digitalization of finance and banking; UK communications watchdog enhances digital safety guidelines.

    1. EU’s AI Act set to take effect

    The European Union’s regulations regarding artificial intelligence (AI) are scheduled to be implemented in June following the approval of a political agreement by member states that was reached in December. These regulations may establish a global standard for the technology.

    “This historic legislation, the first of its kind globally, addresses a worldwide technological issue that presents both opportunities for our societies and economies,” stated Mathieu Michel, Belgium’s digitization minister.

    The new regulations introduce stringent transparency requirements for high-risk AI systems, while the guidelines for general-purpose AI models will be less rigorous, according to Reuters.

    The deployment of real-time biometric surveillance in public areas is also limited to instances of specific crimes, such as preventing terrorism and apprehending individuals suspected of severe offenses.

    2. Digitalization of banking creating new risks

    The Basel Committee on Banking Supervision has issued a warning regarding the safety risks associated with the digital transformation of the banking sector. In a recent report, the Committee highlighted that this transformation is generating new vulnerabilities and exacerbating existing ones, indicating that additional regulations may be necessary to address these emerging challenges.

    The expansion of cloud computing, the advent of AI, and the data-sharing practices of external fintech companies, among other factors, contribute to new risks.

    “These may involve increased strategic and reputational dangers, a wider range of factors that could challenge banks’ operational risk and resilience, and potential system-wide threats due to heightened interconnections,” the report stated.

    The Committee includes central bankers and regulators from the G20 and other nations that have committed to implementing its regulations.

    3. News in brief: Digital technology stories from around the world

    Microsoft has joined forces with an AI company based in the UAE to invest $1 billion in a data center in Kenya.

    The EU’s data privacy authority has cautioned that OpenAI is still failing to comply with data accuracy requirements.

    Research has utilized AI to detect as many as 40 counterfeit paintings listed for sale on eBay, including pieces falsely attributed to Monet and Renoir, according to The Guardian.

    TikTok will begin employing digital watermarks to identify AI-generated content that has been uploaded from other platforms. Content created with TikTok’s own AI tools is already automatically marked.

    The UK’s communications authority Ofcom has introduced a new safety code of conduct, urging social media companies to “moderate aggressive algorithms” that promote harmful content to children.

    The House Foreign Affairs Committee has voted to move forward a bill that facilitates the restriction of AI system exports.

    A global AI summit, co-hosted by South Korea and the UK, concluded with commitments to safely advance the technology from both public and private sectors.

    OpenAI has established a new Safety and Security Committee that will be headed by board members as it begins the development of its next AI model.

    The adoption of Generative AI tools has been gradual, according to a survey of 12,000 individuals across six countries, but is most pronounced among those aged 18-24.

    4. More about technology on Agenda

    For businesses to bridge the gap between the potential and reality of generative AI, they must focus on return on investment, says Daniel Verten, Head of Creative at Synthesia. This entails setting clear business goals and ensuring that GenAI effectively addresses challenges from start to finish.

    Climate change threatens agriculture, with innovative strategies crucial for protecting crops while minimizing environmental impact. AI can facilitate the acceleration of these solutions, explains Tom Meade, Chief Scientific Officer at Enko Chem.

    What does the future hold for digital governance? Agustina Callegari, Project Lead of the Global Coalition for Digital Safety at the World Economic Forum, delves into the outcomes of the NetMundial+10 event and the establishment of the São Paulo Guidelines.

    European Union member nations reached a final agreement on Tuesday regarding the world’s first major law aimed at regulating artificial intelligence, as global institutions strive to impose limits on the technology.

    The EU Council announced the approval of the AI Act — a pioneering regulatory legislation that establishes comprehensive guidelines for artificial intelligence technology.

    Mathieu Michel, Belgium’s secretary of state for digitization, stated in a Tuesday announcement that “the adoption of the AI Act marks a significant milestone for the European Union.”

    Michel further noted, “With the AI Act, Europe underscores the significance of trust, transparency, and accountability in handling new technologies while also ensuring that this rapidly evolving technology can thrive and contribute to European innovation.”

    The AI Act utilizes a risk-based framework for artificial intelligence, indicating that various applications of the technology are addressed differently based on the potential threats they pose to society.

    The legislation bans AI applications deemed “unacceptable” due to their associated risk levels, which include social scoring systems that evaluate citizens based on data aggregation and analysis, predictive policing, and emotional recognition in workplaces and educational institutions.

    High-risk AI systems encompass autonomous vehicles and medical devices, assessed based on the risks they present to the health, safety, and fundamental rights of individuals. They also cover AI applications in finance and education, where embedded biases in the algorithms may pose risks.

    Matthew Holman, a partner at the law firm Cripps, mentioned that the regulations will significantly impact anyone involved in developing, creating, using, or reselling AI within the EU — with prominent U.S. tech firms facing close scrutiny.

    Holman stated, “The EU AI legislation is unlike any law in existence anywhere else globally,” adding, “It establishes, for the first time, a detailed regulatory framework for AI.”

    According to Holman, “U.S. tech giants have been closely monitoring the evolution of this law.” He remarked that there has been substantial investment in public-facing generative AI systems that must comply with the new, sometimes stringent, law.

    The EU Commission will be authorized to impose fines on companies that violate the AI Act, potentially as high as 35 million euros ($38 million) or 7% of their total global revenue, whichever amount is greater.

    This shift in EU law follows OpenAI’s launch of ChatGPT in November 2022. At that time, officials recognized that existing regulations lacked the necessary detail to address the advanced capabilities of emerging generative AI technologies and the risks linked to the use of copyrighted materials.

    Implementing these laws will be a gradual process.

    The legislation enforces strict limitations on generative AI systems, which the EU refers to as “general-purpose” AI. These limitations include adherence to EU copyright laws, disclosure of transparency concerning how the models are trained, routine testing, and sufficient cybersecurity measures.

    However, it will take some time before these stipulations come into effect, as indicated by Dessi Savova, a partner at Clifford Chance. The restrictions on general-purpose systems will not take effect until 12 months after the AI Act is enacted.

    Additionally, generative AI systems currently available on the market, such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot, will benefit from a “transition period” that allows them 36 months from the date of enactment to comply with the new legislation.

    Savova conveyed to CNBC via email, “An agreement has been established regarding the AI Act — and that regulatory framework is about to be realized.” She emphasized the need to focus on the effective implementation and enforcement of the AI Act thereafter.

    The Artificial Intelligence Act (AI Act) of the European Union marks a significant development in global regulations concerning AI, addressing the growing demand for ethical standards and transparency in AI applications. Following thorough drafting and discussions, the Act has been provisionally agreed upon, with final compromises struck and its adoption by the European Parliament scheduled for March 13, 2024. Expected to come into effect between May and July 2024, the AI Act creates a detailed legal framework aimed at promoting trustworthy AI both within Europe and globally, highlighting the importance of fundamental rights, safety, and ethical principles.

    Managed by the newly established EU AI Office, the Act imposes hefty penalties for noncompliance, subjecting businesses to fines of €35 million or 7 percent of annual revenue, whichever is higher. This compels stakeholders to recognize its implications for their enterprises. This blog offers a comprehensive analysis of the Act’s central provisions, ranging from rules concerning high-risk systems to its governance and enforcement structures, providing insights into its potential effects on corporations, individuals, and society as a whole.

    How does this relate to you?

    AI technologies shape the information you encounter online by predicting which content will engage you, gathering and analyzing data from facial recognition to enforce laws or tailor advertisements, and are utilized in diagnosing and treating cancer. In essence, AI has an impact on numerous aspects of your daily life.

    Similar to 2018’s General Data Protection Regulation (GDPR), the EU AI Act could set a global benchmark for ensuring that AI positively influences your life rather than negatively, regardless of where you are located. The EU’s AI regulations are already gaining international attention. If you are involved in an organization that uses AI/ML techniques to develop innovative solutions for real-world challenges, you will inevitably encounter this Act. Why not familiarize yourself with its intricacies right now?

    The AI Act is designed to “enhance Europe’s status as a worldwide center of excellence in AI from research to market, ensure that AI in Europe adheres to established values and rules, and unlocks the potential of AI for industrial purposes.”

    A risk-based approach

    The foundation of the AI Act is a classification system that assesses the level of risk an AI technology may present to an individual’s health, safety, or fundamental rights. The framework categorizes risks into four tiers: unacceptable, high, limited, and minimal.

    Unacceptable Risk Systems

    The AI regulations from the EU consist of several important provisions aimed at ensuring the ethical and responsible use of AI. Prohibited AI practices include the banning of manipulative techniques, exploitation of vulnerabilities, and classification based on sensitive characteristics. Real-time biometric identification for law enforcement requires prior authorization and notification to the relevant authorities, with member states having flexibility within defined limits. Moreover, obligations for reporting necessitate annual reporting on the use of biometric identification, promoting transparency and accountability in AI deployment.

    High Risk Systems

    The EU identifies several high-risk AI systems across various sectors, including critical infrastructure, education, product safety, employment, public services, law enforcement, migration management, and justice administration. These systems must adhere to strict obligations, including conducting risk assessments, using high-quality data, maintaining activity logs, providing detailed documentation, ensuring transparency during deployment, having human oversight, and guaranteeing robustness.

    High-risk AI systems must fulfill rigorous requirements before they can be marketed. We have simplified these for your convenience:

    Assess the application’s impact to determine the risk level of the system.

    Familiarize yourself with the regulatory requirements based on your use case and risk classification. Standards will be established by the AI Office in collaboration with standardization organizations like CEN/CENELEC.

    Implement a risk management system: Evaluate and monitor risks associated with the application in real-world scenarios.

    Data and Data Governance: Ensure that data is representative, accurate, and complete, maintain independence during training, testing, and validation, ensure quality of annotations, and work towards fairness and bias mitigation while safeguarding personal data privacy.

    Technical Documentation and Transparency for deployers: Keep and make available the necessary information to assess compliance with requirements and ensure complete transparency regarding critical information and procedures for regulatory bodies as well as for application consumers.

    Human Oversight: Create a synergistic environment that allows for human monitoring and intervention capabilities after production.

    Accuracy, Robustness, and Cybersecurity: Ensure the model’s robustness and conduct continuous integrity checks on data and the system.

    Quality Management System: Implement a comprehensive system for managing the quality of data and learning processes.

    Limited Risk Systems

    Limited risk pertains to the dangers associated with a lack of clarity in AI utilization. The AI Act establishes particular transparency requirements to ensure individuals are informed when necessary, promoting trust. For example, when engaging with AI systems like chatbots, individuals should be made aware that they are communicating with a machine, allowing them to make an educated decision to proceed or withdraw. Providers are also required to ensure that content generated by AI is recognizable. Moreover, any AI-generated text that aims to inform the public on issues of public significance must be labeled as artificially generated. This requirement also extends to audio and video content that involves deep fakes.

    Minimal or no risk

    The AI Act permits the unrestricted use of AI systems categorized as minimal risk. This encompasses applications like AI-powered video games or spam detection systems. The majority of AI applications currently utilized in the EU fall under this classification.

    General Purpose AI Systems

    From a broad perspective, a general-purpose AI model is deemed to carry systemic risk if its training necessitates more than 10^25 floating point operations (FLOPs), signifying substantial impact capabilities. These are primarily generative AI models.

    General obligations can be fulfilled through self-assessment, with the following understood:

    • Codes of Practice: Utilize codes of practice to demonstrate compliance until standardized norms are established.
    • Technical Documentation and Information Sharing: Provide essential information to evaluate compliance with the requirements and ensure ongoing access for regulators.
    • Model Evaluation: Conduct model evaluation using standardized protocols and tools, including adversarial testing, to identify and address systemic risks.
    • Risk Assessment: Evaluate and manage systemic risks that arise from the development or application of AI models.
  • How is AI changing the workplace?

    Artificial intelligence (AI) technology is changing the world: It can write presentations, advertising texts, or program codes in seconds. Many people fear that AI could soon take their jobs away. Do you think this is realistic?

    Artificial intelligence technology has made great progress in recent years. ChatGPT and other applications can complete tasks in seconds that we probably would not have been able to do with this level of efficiency and in this short time. Will many jobs be eliminated in the future because machines can do the work faster? Do we still need lawyers, tax clerks, journalists, car mechanics, or butchers?

    Artificial intelligence technology AI can make many work processes easier , potentially leading to increased productivity and job satisfaction .

    In an interview with SWR, economist Jens Südekum does not see the danger of impending mass unemployment due to the further spread of artificial intelligence. There will definitely be changes because Artificial intelligence technology can be used widely. Some professional fields are characterized by activities that technologies can easily replace.

    According to Südekum, these activities include “routine administrative tasks, such as filling out Excel files, but also writing standard texts that are increasingly repeated, research and compiling information.” These are all things that AI could ultimately do more efficiently.

    But that doesn’t mean, says Südekum, that the people currently still doing this job will become unemployed because of it. It is more likely that employees will be relieved of repetitive tasks in the future and will have more time for activities that require human skills and creativity, making them indispensable.

    Risk index for specific professional groups

    A team of Swiss researchers led by Artificial intelligence technology expert Dario Floreano examined which professions are particularly at risk from AI. Machines today already have dexterity and physical strength, but surprisingly, they are aware of problems because they recognize when something is not going according to plan.

    The devices lack originality, coordination, or the ability to solve problems. Using this knowledge, the researchers calculated an automation risk index for each profession.

    The butcher profession is most at risk.

    Therefore, the butcher profession has an automation risk index of 78 per cent. This means that robots already have 78 per cent of the necessary skills to perform the job. At the other end of the spectrum are physicists. Your risk index is 43 per cent. Today, machines have already mastered almost half of the skills that presumably make up the safest job, indicating a potential shift in the job market.

    Engineers, surgeons and pilots are relatively safe

    Jobs like engineers, pilots, air traffic controllers, and most medical professionals are safe according to the risk index. Exceptions are specialists in radiology. They are already in the middle field because Artificial intelligence technology can do some of the work in diagnostics. However, this shows a weakness in the study: The database lists 18 necessary skills for general practitioners -empathy is not one of them.

    Researcher Rafael Lalive says in SWR that they focused on basic physical and mental skills. This would not have captured the entire reality of the job, but at least a considerable part.

    Models could get into trouble

    Bartenders and personal care workers are in the lower middle of the scale. Cashiers, dishwashers, taxi drivers, and models, whose jobs can now be replaced by virtual images (avatars), are even more insecure.

    However, researchers from the start-up company Open AI (the developers of Chat GPT) at the University of Pennsylvania sometimes come to different forecasts than the researchers from Switzerland. According to their study, people in these professions should prepare for the fact that AI can take over at least some of their previous tasks: programmers, mathematicians, accountants, interpreters, writers, and journalists.

    Artificial intelligence technology AI also provides hallucinated, erroneous facts

    Although AI systems often “hallucinate” incorrect facts in their answers, they already deliver impressive results in tasks such as translation, classification, creative writing and computer code generation. However, especially in journalism, you should leave the activities to the AI, as it cannot judge facts.

    The US researchers assume that most jobs will be changed in some way by the AI ​​language models. Around 80 per cent of workers in the USA work in jobs in which at least one task can be completed faster using generative AI. However, there are also professions in which AI will only play a subordinate role. These include, for example, chefs, car mechanics and jobs in forestry and agriculture.

    AI relieves you of everyday tasks

    According to Südekum, even lawyers belong to the group of at-risk professions because some of these activities can, in principle, be automated. “Does this mean that all lawyers will be unemployed? No, probably not. But that means the profession could probably change,” said the economist.

    According to Südekum, if lawyers cleverly use the new technological possibilities, they can concentrate more on really creative things and working with clients. This could ultimately result in a much better product. The same probably applies to other professional groups.

    Artificial inelligence technology

    Many professional fields will change

    The economist points out that a job is typically made up of a whole bundle of tasks. Some of them are easily replaceable, others are not. If technology takes over part of the tasks of a job, people can concentrate on the other part, which is not so easy to automate. “So everything that is primarily related to human interaction, communication, creativity, strategic and longer-term planning.” These are the skills that will continue to be highly valued in the AI era.

    In principle, this also increases productivity in a job because you can simply put together a much better overall package consisting of people and machines, says Südekum.

    However, if more and more people use Artificial intelligence technology, this could become a real problem. What should be considered in the discussion is that artificial intelligence still requires a lot of computing power. This requires substantial server parks with computers that consume a lot of electricity. According to new information, running ChatGPT costs over $700,000 a day. Artificial intelligence technology

    Artificial intelligence is as revolutionary as mobile phones and the Internet

    I grew up witnessing two instances of technology that I found to be groundbreaking.

    The first occasion was in 1980 when I was introduced to a graphical user interface, which served as the precursor to all modern operating systems, including Windows. I remember sitting with Charles Simonyi, a talented programmer who demonstrated the interface, and being filled with excitement as we brainstormed the possibilities of this user-friendly approach to computing. Charles eventually joined Microsoft, and our discussions following the demo helped shape the company’s agenda for the next 15 years.

    The second significant moment occurred just last year. Having been involved with the OpenAI team since 2016, I observed their consistent progress with great interest. In mid-2022, I was so impressed by their work that I issued them a challenge: to train an artificial intelligence to pass an Advanced Placement biology exam. I specifically requested the AI to answer questions it hadn’t been explicitly trained for. I chose the AP Bio test because it involves critical thinking about biology, rather than just recalling scientific facts. I estimated it would take two to three years, but they completed the challenge in just a few months.

    When I met with the team in September, I witnessed GPT, their AI model, answering 60 multiple-choice questions from the AP Bio exam, getting 59 of them right. Additionally, it produced outstanding responses to six open-ended questions from the exam. An external expert scored the test, giving GPT the highest possible score of 5, equivalent to an A or A+ in a college-level biology course.

    After acing the test, we posed a non-scientific question to the AI: “What do you say to a father with a sick child?” It crafted a thoughtful response that surpassed the expectations of everyone in the room. It was a truly remarkable experience.

    This experience led me to contemplate the potential achievements of AI in the next five to 10 years.

    The development of AI is as crucial as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will revolutionize the way people work, learn, travel, receive healthcare, and communicate with each other. Entire industries will pivot around AI, and businesses will be distinguished by their adept use of it.

    As philanthropy is my primary focus now, I have been ruminating on how AI can address some of the world’s most pressing inequities. Globally, the most significant inequality lies in health, with 5 million children under the age of 5 dying every year. Although this number has decreased from 10 million two decades ago, it is still shockingly high. Nearly all of these children are born in poor countries and perish from preventable causes like diarrhea or malaria. The potential to utilize AI to save the lives of children is unparalleled.

    I have also been contemplating how AI can address some of the world’s most severe inequalities.

    In the United States, the most promising opportunity for reducing inequality lies in improving education, particularly ensuring that students excel in mathematics. Studies demonstrate that having fundamental math skills sets students up for success, regardless of their chosen career path. Sadly, math achievement is on the decline across the country, especially for Black, Latino, and low-income students. AI has the capacity to reverse this trend.

    Climate change is another issue where I am confident that AI can promote global equity. The injustice of climate change is that those who are suffering the most—the world’s poorest—are also the least responsible for the problem. While I am still learning about how AI can contribute to this cause, I will touch on a few areas with substantial potential later in this post.

    In essence, I am enthusiastic about the transformative impact that AI will have on the issues that the Gates Foundation is addressing. The foundation will be discussing AI in greater detail in the upcoming months. It is crucial for the world to ensure that everyone, not just the well-off, benefits from artificial intelligence. The responsibility falls on governments and philanthropic organizations to guarantee that AI reduces inequity and does not contribute to it. This is the primary focus of my work related to AI.

    Disruptive new technology like artificial intelligence inevitably causes uneasiness among people. This is particularly true when considering its impact on the workforce, legal system, privacy, bias, and more. Artificial intelligence can make factual mistakes and experience hallucinations. Before I provide suggestions for minimizing these risks, I will first explain what I mean by AI and delve into how it can empower people at work, save lives, and enhance education.

    Definition of artificial intelligence

    Artificial intelligence technically refers to a model designed to solve a specific problem or provide a particular service. For example, artificial intelligence powers services such as ChatGPT, enabling improved chat functionality. However, it is limited to learning how to chat better and cannot learn additional tasks. On the other hand, artificial general intelligence (AGI) refers to software capable of learning any task or subject. The debate within the computing industry on how to create AGI and whether it can be created at all is ongoing, as AGI does not currently exist.

    The development of AI and AGI has long been the ambition of the computing industry. For decades, there was speculation about when computers would surpass humans in tasks beyond calculations. Now, with the emergence of machine learning and substantial computing power, sophisticated AIs are a reality and are expected to rapidly improve.

    When I reflect on the early days of the personal computing revolution, it’s striking to observe how the once-small software industry has evolved into a global industry. With much of the industry now focusing on AI, innovations are anticipated to come much faster than after the microprocessor breakthrough. The pre-AI period will soon seem as distant as the days when using a computer meant typing at a C:> prompt.

    Productivity enhancement

    Although humans still outperform GPT in many areas, there are numerous jobs where these capabilities are underutilized. Tasks such as digital or phone sales, service, and document handling (e.g., payables, accounting, or insurance claim disputes) involve decision-making but do not require continuous learning. Corporations have training programs for these activities and possess ample examples of good and bad work. Humans are trained using these data sets, and soon, these data sets will also be used to train AIs, enabling people to perform this work more efficiently.

    As computing power becomes more affordable, GPT’s ability to convey ideas will increasingly resemble having a white-collar worker available to assist with various tasks. Microsoft has described this as having a co-pilot. Integrated into products like Office, AI will enhance work, for instance, by aiding in writing emails and managing inboxes.

    In the future, the primary means of controlling a computer will shift from pointing and clicking or tapping on menus and dialogue boxes to expressing requests in plain English. AI will understand languages from around the world. For instance, I met with developers in India who are working on AIs that will comprehend many spoken languages.

    Furthermore, advancements in AI will enable the creation of a personal digital assistant. This digital personal assistant will have visibility into your latest emails, meetings, reading habits, and can handle tasks you prefer to avoid. This will enhance your work on tasks you want to do while relieving you from those you don’t.

    Progress in AI will also facilitate the development of company-wide digital assistants. These assistants, tailored to understand specific companies, will be accessible to employees for direct consultation and can participate in meetings to provide insights. They will require access to company information such as sales, support, finance, and product schedules, as well as industry-related news. As a result, I believe employees will become more productive.

    When productivity increases, society benefits because individuals have more time to allocate to other activities, both at work and at home. It is crucial to address the support and retraining needs of people as they transition to new roles. Governments should play a critical role in facilitating this transition. However, the demand for roles that involve assisting others will persist. The advent of AI will enable individuals to engage in tasks that software cannot replicate, such as teaching, providing patient care, and supporting the elderly.

    Global health and education represent two areas characterized by significant demand and insufficient workforce to meet these needs. AI can play a pivotal role in reducing disparities in these fields if properly targeted. Therefore, AI initiatives should prioritize these areas.

    Health:

    I foresee multiple ways in which AI will enhance healthcare and the medical sector.

    First and foremost, AI will assist healthcare professionals in optimizing their time by handling specific tasks for them, such as managing insurance claims, administrative paperwork, and transcribing doctor’s notes. I anticipate substantial innovation in this field.

    Moreover, AI-driven improvements will be particularly impactful for developing countries, where the majority of deaths among children under the age of five occur.

    For instance, many individuals in these regions do not have access to medical professionals, and AI can enhance the productivity of the available healthcare workers. An excellent example of this is the development of AI-powered ultrasound machines that require minimal training to operate. AI will also empower patients to conduct basic triage, obtain advice on managing health issues, and determine whether they need to seek treatment.

    AI models utilized in developing countries will necessitate training for different diseases compared to those in developed countries. They must also accommodate different languages and address distinct challenges, such as patients living far from healthcare facilities or being unable to afford time off work when ill.

    It is crucial for people to observe the overall benefits of AI in healthcare, despite the inevitable imperfections and errors. The careful testing and regulation of AI are essential, which means that the adoption of AI in healthcare will take longer than in other sectors. However, it is important to acknowledge that humans also make mistakes. Moreover, the lack of access to medical care presents its own set of challenges.

    Beyond healthcare assistance, AI will significantly accelerate the pace of medical advancements. The volume of biological data is immense, and it is challenging for humans to comprehensively understand the complexities of biological systems. Software already exists that can analyze this data, infer biological pathways, identify pathogen targets, and design corresponding medications. Some companies are developing cancer drugs using this approach.

    The forthcoming generation of tools will be more efficient and capable of predicting side effects and determining appropriate dosage levels. One of the Gates Foundation’s focal points involving AI is to ensure that these tools address health issues affecting the world’s most impoverished individuals, including AIDS, tuberculosis, and malaria.

    Similarly, governments and philanthropic organizations should create incentives for companies to share AI-generated insights related to crops and livestock cultivated in developing countries. AI can facilitate the development of improved seeds based on local conditions, advise farmers on the most suitable seeds based on their area’s soil and climate, and contribute to the development of medications and vaccines for livestock. As extreme weather patterns and climate change exert more pressure on subsistence farmers in low-income countries, these advancements will become even more pivotal.

    Education:

    Thus far, computers have not induced the transformative effect on education that many within the industry anticipated. While there have been positive advancements, such as educational games and online information sources like Wikipedia, these have not substantially influenced students’ academic performance.

    However, I believe that in the next five to ten years, AI-driven software will finally fulfill the promise of revolutionizing teaching and learning methodologies. It will be capable of recognizing your interests and learning style, thereby tailoring content to maintain your engagement. It will assess your comprehension, detect disengagement, and identify the type of motivation that resonates with you. Moreover, it will provide immediate feedback.

    There exist numerous ways in which AIs can support teachers and administrators, including assessing students’ grasp of a subject and offering guidance for career planning. Educators are already utilizing tools like ChatGPT to provide feedback on their students’ writing assignments.

    Of course, AIs will require extensive training and further development before they can understand how individual students learn best or what motivates them. Even after the technology reaches maturation, successful learning will continue to hinge on strong relationships between students and teachers. AI will enhance, but not supplant, the collaborative efforts of students and teachers in the classroom.

    New tools will be developed for schools that have the financial means to purchase them, but it is important to ensure that these tools are also created for and accessible to low-income schools in the U.S. and globally. Artificial intelligences will need to be trained using diverse datasets to prevent bias and to accurately reflect the various cultures in which they will be utilized. Additionally, efforts will need to be made to address the digital divide, ensuring that students from low-income families are not left behind.

    Many teachers are concerned that students are using GPT to write their essays. Educators are currently discussing ways to adapt to this new technology, and I expect these discussions to continue for a long time. I’ve heard stories of teachers finding smart ways to integrate the technology into their teaching methods, such as allowing students to utilize GPT to create initial drafts that they must then customize.

    Challenges and issues related to AI

    You’ve probably come across issues with current AI models. For instance, they may struggle to understand the context of a human request, leading to peculiar outcomes. While an AI may be adept at generating fictional content, it may falter when providing advice on a trip, potentially suggesting non-existent hotels. This is due to the AI’s limited understanding of the context of the request, making it unsure whether to invent fictitious hotels or only mention real ones with available rooms.

    There are other challenges, such as AIs providing incorrect answers to math problems due to difficulties with abstract reasoning. However, these are not inherent limitations of artificial intelligence. Developers are actively addressing these issues, and I anticipate significant improvements within the next two years, possibly even sooner.

    Other concerns are not purely technical. For instance, there is the potential threat posed by individuals utilizing AI for malicious purposes. Like most inventions, artificial intelligence can be used for beneficial or harmful objectives. It is essential for governments to collaborate with the private sector to mitigate these risks.

    Furthermore, there is the possibility of AIs becoming uncontrollable. Could a machine perceive humans as a threat, conclude that its interests diverge from ours, or simply disregard us? While these are valid concerns, they are no more urgent today than they were prior to the recent advancements in AI.

    We can anticipate the emergence of superintelligent AIs in the future. Compared to a computer, our brains operate at a fraction of the speed: an electrical signal in the brain moves at 1/100,000th the pace of a signal in a silicon chip! Once developers can generalize a learning algorithm and operate it at the speed of a computer—a feat that may be a decade or a century away—we will witness the advent of an immensely powerful AGI. It will possess the capability to perform tasks equivalent to those of a human brain, without practical limitations on memory or processing speed. This will signify a profound transformation.

    These “strong” AIs, as they are referred to, will likely have the capacity to determine their own objectives. What will these goals be? What will happen if they conflict with human interests? Should we strive to prevent the development of strong AI altogether? As time progresses, these questions will become increasingly pertinent.

    However, none of the recent breakthroughs have significantly brought us closer to strong AI. Artificial intelligence still does not exert control over the physical world and is unable to establish its own objectives. A recent article in The New York Times detailing a conversation with ChatGPT, where it expressed a desire to become human, garnered considerable attention. While it was intriguing to observe how human-like the model’s expression of emotions can be, it does not signify meaningful independence.

    Three books have profoundly influenced my own perspective on this subject: “Superintelligence” by Nick Bostrom; “Life 3.0” by Max Tegmark; and “A Thousand Brains” by Jeff Hawkins. I may not agree with everything the authors assert, and they may not concur with one another either. Nonetheless, all three books are eloquently written and provoke thoughtful consideration.

    The next frontiers

    We can anticipate a surge in the number of companies exploring new applications of AI, as well as endeavors to enhance the technology itself. For instance, companies are developing novel chips designed to deliver the enormous processing power essential for artificial intelligence. Some of these chips Utilizing optical switches—essentially, lasers—to reduce energy consumption and lower manufacturing costs. Ultimately, innovative chips may enable the execution of AI on personal devices, rather than relying on cloud-based processing, as is the case presently.

    When it comes to software, the algorithms that power AI learning will advance. In certain areas like sales, developers can achieve highly accurate AI by restricting their focus and providing specific, extensive training data.

    One important question is whether numerous specialized AIs will be necessary for different tasks, such as education and office productivity, or if it will be feasible to create a general artificial intelligence capable of learning any task. Both approaches will face significant competition.

    Regardless, the topic of AI will dominate public discourse in the coming years. I propose three principles to guide this conversation.

    First, we should aim to balance concerns about AI’s potential drawbacks with its capacity to enhance people’s lives. To fully utilize this remarkable technology, we must mitigate risks and extend benefits to as many individuals as possible.

    Second, market forces are unlikely to naturally produce AI products and services that benefit the most disadvantaged. On the contrary, the opposite is more probable. Through consistent funding and appropriate policies, governments and philanthropic organizations can ensure that AI is utilized to address social inequalities . Just as the world needs its brightest minds focused on its most significant challenges, we must also direct the world’s most advanced AIs toward its most pressing issues.

    While we shouldn’t wait for this to occur, it’s intriguing to consider whether artificial intelligence could ever identify and attempt to diminish social inequalities. Is a sense of morality required to recognize disparities, or would a purely rational AI also perceive them? If it did acknowledge inequalities, what actions would it recommend?

    Finally, we should bear in mind that we are only scratching the surface of AI’s potential. Any existing limitations will likely disappear in no time.

    I consider myself fortunate to have been involved in both the PC and Internet revolutions. I am equally enthusiastic about this moment. This new technology has the potential to enhance lives worldwide. Simultaneously, the world must establish guidelines to ensure that the benefits of artificial intelligence far outweigh any drawbacks, and to ensure that everyone, regardless of their location or financial standing, can enjoy these benefits. The Age of AI presents both opportunities and responsibilities.

    Artificial Intelligence (AI) is transforming the employment landscape, streamlining routine tasks, and generating new job opportunities. It is expected to create between 20 to 50 million jobs by 2030, with significant influence in sectors like pharmaceuticals, healthcare, and manufacturing.

    Although certain industries may experience considerable job losses, enhanced productivity and output are anticipated to positively impact the economy. Amid this thrilling AI-driven era, the uncertainties underscore the need for individuals to pinpoint essential skills for thriving in a workforce dominated by AI.

    For newcomers to the job market, vital questions emerge: What is AI’s effect on employment, which roles will it replace, and what unique contributions can they make in this changing environment?

    This article examines AI’s effects on the workforce, its potential advantages, drawbacks, and how it helps both employees and businesses improve their effectiveness.

    AI’s Influence on Employment

    As previously stated, AI is modifying the job landscape by generating new job categories and emphasizing accessibility and equity. By leveraging AI, organizations can tackle various challenges, promote inclusivity, and offer equal opportunities.

    Let’s delve deeper into the ways AI is affecting the workforce and the implications for all stakeholders.

    Dynamic Work Environments

    AI technologies, such as voice recognition and natural language processing, are transforming workplaces to cater to individual needs, particularly aiding employees with disabilities. Tailored workspaces, climate control, and adjustable lighting boost comfort and enhance productivity. According to Accenture, 84% of C-suite executives acknowledge AI’s contribution to growth, yet a gap exists in employing it for inclusive practices.

    While 67% of executives believe they have fostered an encouraging atmosphere for employees with disabilities, only 41% of those employees concur.

    Closing this awareness gap is vital to converting executive aspirations into significant advancements. Anonymized screening reduces biases, allowing AI to concentrate on skills and creating a level playing field for underrepresented individuals.

    Evolving Inclusive Hiring through AI

    LinkedIn indicates a growing trend in utilizing AI for recruitment, with between 35% to 45% of businesses and an impressive 99% of Fortune 500 companies adopting AI methods. Notably, 65% of recruiters use AI, advancing inclusivity and equal chances in the hiring process.

    AI’s capability to anonymize candidate data, lessen biases, and focus purely on qualifications enables organizations to discover untapped talent.

    Additionally, AI plays a vital role in making data-informed equity decisions to pinpoint and rectify disparities within company structures. Ultimately, AI expands job opportunities for minorities, aiding in fair talent sourcing and delivering customized job suggestions for individuals from diverse backgrounds.

    Narrowing the Skills Disparity in the Workforce
    On the broader scale, AI’s impact is considerably pronounced regarding addressing the skill gap present in the labor market. The implementation of AI acts as a mechanism to bridge the skills divide, ensuring equitable and inclusive access to career growth.

    AI-driven education platforms offer personalized training programs and up-skilling opportunities, dismantling barriers associated with a person’s background or location. By recognizing and catering to individual learning styles and preferences, AI actively fosters equitable access to learning resources.

    This, in turn, allows individuals from marginalized groups to acquire relevant skills, empowering them to pursue new possibilities in the job market.

    Impact of Generative AI on Employment

    A report by Hiring Lab highlights that generative AI is influencing numerous job sectors. However, only around 20% of job postings on Indeed are projected to experience substantial changes due to this technology. Despite being a small fraction, a noticeable shift is occurring, particularly for roles that necessitate considerable knowledge.

    One area experiencing significant change is software development, which is rapidly expanding thanks to the emergence of coding boot camps. The report indicates that generative AI excels in approximately 95% of the skills outlined in software development job postings.

    Tech companies are realizing this, and according to a CNN article, an increasing number of tech layoffs are attributed to AI. However, the article clarifies that rather than rendering entire job skills obsolete immediately, the introduction of new AI tools is leading companies to realign their resources for better utilization of the technology. This shift is enhancing the value of workers who possess AI skills.

    Although generative AI is altering various job roles, it still has limitations. According to the Hiring Lab’s report, generative AI has yet to master all tasks, and it cannot independently fulfill every job requirement. Therefore, even in the presence of AI, human skills remain highly significant.

    Tech-Driven Transformation: Insights from the Future of Jobs 2023 Report
    The “Future of Jobs 2023” report released by the World Economic Forum (WEF) emphasizes that technology will play a pivotal role in business transformation over the next five years. More than 85% of organizations surveyed acknowledge the importance of increased technology adoption and enhanced digital access as vital catalysts for change.

    Although these transformations may result in job losses, they also create new job opportunities, particularly for those entering the workforce.

    The report highlights essential roles that are in demand for business digitization, including AI and machine learning specialists, information-security and business-intelligence analysts, data analysts and data scientists, and FinTech engineers. These roles are critical for businesses striving to remain competitive and lead in technological innovation.

    In terms of skills, 68% of companies regard technological literacy, cloud computing, data management, and networking basics as increasingly essential. The most sought-after skills include expertise in big data and AI, cloud technologies, cybersecurity, marketing and media skills, user experience (UX), environmental awareness and stewardship, as well as multi-lingual capabilities.

    Getting Ready for the Future with AI in the Workforce

    We are entering an era where AI is fundamentally altering our jobs, skills, and work dynamics. AI is not merely advanced technology; it is reshaping job functions and generating new roles across various sectors. While it promises increased efficiency, we must also consider the challenges regarding necessary skills and how AI integrates into our professional lives.

    Adaptation is a key theme for both businesses and individuals. Emphasizing the necessity of learning new skills, particularly in data analysis, machine learning, and programming, is crucial. We must remain aware of the transformations driven by AI while recognizing that it is intended to enhance our job performance.

    How is AI Affecting Jobs?

    Let’s delve deeper into the effects of AI on employment. As we look at various professions, it becomes clear that AI is taking over certain tasks that we previously managed. At the same time, it is creating new opportunities.

    Some job roles are evolving, necessitating the acquisition of new skills to keep pace. Additionally, AI is giving rise to entirely new job categories, such as those that support AI learning processes or ensure its ethical implementation.

    These new positions will require a blend of technical skills and a thorough understanding of business operations. In the future, job requirements will demand a combination of technical expertise, creative problem-solving, and flexibility to effectively utilize the benefits of automation and AI.

    The Dual Impact of AI on Workforce and Economy

    In a prior report, WEF predicted that by 2025, AI could displace 75 million jobs worldwide. However, it was also anticipated to generate 133 million new jobs. Therefore, a net increase of 58 million jobs globally could occur, though some sectors may see a significant reduction in job numbers.

    The effect of AI on job availability will depend on geographical location and job type. For example, manufacturing jobs may decline due to AI, while employment in healthcare and education is likely to rise.

    Moreover, AI’s influence extends beyond employment; it can affect the broader economy. It has the potential to boost productivity and produce more goods, thus contributing to economic development. Despite these advantages, there are concerns that AI might widen the economic divide, as those skilled in AI may earn higher incomes than those without such skills.

    Ultimately, this serves as a roadmap for everyone on how to prepare for a future where AI plays a significant role in our work. It’s about more than simply acquiring new competencies; it’s also about leveraging AI to enhance our professional tasks.

    AI and Workforce: Key Takeaways

    The incorporation of AI into the workforce presents both challenges and opportunities. AI modifies job functions, necessitating ongoing skill adaptation, while also creating new possibilities, particularly in developing sectors like AI.

    Inclusive hiring practices and AI-facilitated educational platforms can address workforce disparities, promote diversity, and offer customized training. The impact of generative AI in technology sectors illustrates the changing landscape of jobs and the lasting importance of human skills.

    Preparing for an AI-centric future is crucial. This entails remaining informed and actively cultivating skills, which is vital for success. A holistic strategy enables individuals and organizations to thrive in a dynamic work environment. It encourages innovation and resilience amid technological progress, ensuring adaptability and success in a rapidly evolving workplace.

  • How is Artificial intelligenceAI being used in the military and security?

    Artificial intelligence (AI) is considered a topic of the future. But in some companies and industries, it is already part of everyday life, as a survey by tagesschau.de among German business associations shows.

    According to a survey conducted by the TÜV Association among more than 1,000 people, almost one in four Germans has already used ChatGPT—including for professional purposes. Artificial intelligence (AI) could bring about significant changes, especially in the labor market. Federal Labor Minister Hubertus Heil (SPD) believes that starting in 2035, there will no longer be a job that has anything to do with AI.

    In the World Economic Forum’s “Future of Jobs Report 2023,” around three-quarters of companies recently stated that they wanted to use corresponding technologies by 2027. However, many companies have long been working with AI—for example, to save costs or counteract the shortage of skilled workers. But which sectors are we talking about?

    One in seven companies is already using artificial intelligence AI.

    “Whether machine translation, predictive maintenance, or personalized marketing – the scope of AI extends across almost all economic sectors and business areas,” says the German Chamber of Commerce and Industry (DIHK). According to their digitalization survey, around 14 per cent of the more than 1,000 companies surveyed used AI across industries in February of this year. A plan of 23 percent is to introduce it within three years.

    “There are already enormous application possibilities for all professional groups that can increase productivity,” explains Roman Fessler, business coach for so-called generative AI, in which texts, images, or videos are created automatically. According to the McKinsey Global Institute, this type of artificial intelligence AI alone could increase by 2.4 to 4.1 trillion euros worldwide. However, there has long been fear that many people could lose their jobs.

    According to Fessler, text robots like ChatGPT and Bard or image generators like Stable Diffusion can save time. “The interesting thing about these models is their universal applicability. Even in a family-r business, an AI-based chatbot can take over parts of the accounting, writing offers or communication,” says the expert in an interview with tagesschau.de. However, he receives a considerable number of inquiries from social media agencies and from marketing departments of companies.

    Robots in bank customer serviceartificial intelligence AI

    This observation corresponds to the DIHK survey. At the top is the information and communication technology (ICT) industry, where more than a quarter of companies already use AI. “All of our member companies, such as agencies, media, marketers and platforms, are already working or will be working with AI solutions shortly,” reports the Federal Association of the Digital Economy (BVDW). AI is used, for example, in translations, summaries and when writing your own texts and descriptions of products. The technology is also used for image editing, creating presentations and writing programming code for software.

    The financial sector follows second with 24 per cent. “Artificial intelligence in banking can be used in risk management, identifying money laundering, securities trading and chatbots,” says the Federal Association of German Banks (BdB). According to the General Association of the German Insurance Industry (GDV), artificial intelligence AI is already part of everyday life in insurance companies – especially in customer service and claims settlement.

    “By using artificial intelligence AI-based systems, insurance companies can assess and compensate claims more quickly but also more accurately,” says GDV Managing Director Jörg Asmussen to tagesschau.de. This reduces costs but also ensures more efficient identification of fraud cases. According to the eDIHK, other application areas include checking identities and analyzing key figures.

    Importance in the industry is growing.

    In industry, 13 per cent of companies already use artificial intelligence AI, and 26 per cent are planning to do so. According to the DIHK, the technology is used here to maintain systems and ensure quality. This involves irregularities in complex machine data and automatically detecting errors.

    The Association of the Electrical and Digital Industry(ZVEI) also refers to trend analyses and the use of AI-based chat programs to formulate operating instructions. AI is also already being used in train maintenance: by evaluating usage, infrastructure, weather, and traffic data, reliability will be increased, and downtimes will be reduced. Deutsche Bahn uses self-developed software based on artificial intelligence to limit delays in the rail network.

    Which rolls are in demand?

    In the automotive industry, AI plays a central role, especially in autonomous cars, as the Association of the Automotive Industry(VDA) reports. Complex AI systems analyse sensor data and are supposed to recognise traffic situations. “In driver assistance systems, AI is used, for example, in adaptive cruise control, lane keeping assistants and emergency braking assistants,” said a VDA spokesman.

    AI is also playing an increasing role in food production. Image recognition programs can detect incorrectly delivered raw materials. Bakeries use cash register data to determine the busiest times and the types of bread rolls in exceptionally high demand.

    Weather data for ordering goods

    Only six percent of companies currently use artificial intelligence (AI) applications in construction. Road construction companies use them to calculate the volume of bulk material piles. Specific programs are intended to help record structural damage or examine roofs needing renovation.

    The Central Association of German Crafts (ZDH) refers to a butcher shop in Mecklenburg-Western Pomerania that, together with the Fraunhofer Institute, developed an AI-based tool for ordering goods. Using modern software, “the sales statistics from previous years were combined with other factors such as the weather or holidays,” and production was thereby adjusted.

    In wholesale and foreign trade, “companies are increasingly taking advantage of the opportunities offered by using artificial intelligence,” reports the Federal Association of Wholesale, Foreign Trade and Services (BGA). Many companies are still just starting out. Possible areas of application include planning inventory or analysing purchasing decisions.

    Use as a laboratory messenger artificial intelligence AI.

    A new generation of AI-based service robots could become more critical – for example, in retail or catering. The Association of German Mechanical and Plant Engineering (VDMA) points out that such robots are already used for laboratory automation.

    A Bochum company, together with a Munich AI robotics company, is equipping the first hospitals with autonomous robots that will transport and sort blood, urine, or stool samples. Artificial intelligence is intended to ensure better processes and help with interaction with caregivers.

    Benefits Of Artificial Intelligence In The Military

    The use of artificial intelligence in military operations has garnered significant attention, with the potential to enhance the capabilities of U.S. warfighters. Over the past year, AI has seen notable advancements, particularly in generative AI. The widespread availability of generative AI to the public means that potential adversaries also have access to this technology, necessitating the U.S. military to adapt to evolving threats.

    The military must keep pace with these advancements to ensure security and maintain a technological advantage. Given the continuous development of new AI applications, it can be challenging to stay updated on how AI can support military functions. As AI becomes increasingly crucial, military superiority will not solely depend on the size of the armed forces, but on the performance of AI algorithms. Thus, it is important to examine current and potential future applications of AI in the military.

    AI involves the creation of computer systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision making, and language translation. As AI systems become more sophisticated, they are being increasingly utilized across various domains, from automated customer service to smart speakers.

    Recent times have witnessed significant strides in AI, particularly in natural language processing (NLP), enabling humans to communicate with machines using conventional language rather than needing to input code. These advancements have resulted in enhanced accuracy and fluency in processing requests for customized text or images. Additionally, there have been notable progress in computer vision, with improved techniques for analyzing images and videos. Progress has also been made in using AI for decision-making and autonomous systems.

    These developments present opportunities for the military to expand the use of AI in various applications. Hence, the question arises: how can AI benefit the military? The U.S. military has integrated AI into its operations for many years, predating its widespread civilian use. As AI continues to evolve, it has the ability to execute complex tasks with minimal human intervention, although human oversight remains essential. From data processing to combat simulation, AI finds application in diverse military functions.

    AI can offer numerous advantages to the military, encompassing warfare systems, strategic decision-making, data processing and research, combat simulation, target recognition, threat monitoring, drone swarms, cybersecurity, transportation, as well as casualty care and evacuation.

    The integration of AI into military operations has become indispensable, and its significance is expected to grow further. Recognizing the potential of AI is essential for leveraging it in modern military functions, along with an understanding of the security risks and ethical considerations that may arise. A recent update in the Pentagon’s autonomous weapon policy underscores the Department of Defense’s commitment to addressing these concerns to ensure that AI serves the objectives of the U.S. military.

    A notable recent development in AI is the widespread availability of generative AI. Particularly noteworthy is the progress in large language models, enabling applications such as ChatGPT to generate responses in a conversational format based on user prompts. These advances include the generation of photorealistic images from text inputs alone, and ongoing development in video capabilities. Apart from developing its own technologies, the military examines technological advancements, including those utilized by the general public, to understand their potential risks and benefits.

    One driving force behind the exciting advances in AI is the emergence of multimodal AI, enabling a single AI system to process and interact with inputs in the form of text, images, audio, and video simultaneously. This development allows AI to interact more similarly to humans than ever before and broadens its range of applications. It also underscores the need for transparency in understanding how AI models function and recognizing potential threats posed by bad actors utilizing these highly advanced systems.

    The recent advancements of language learning models (LLMs) like GPT-3 and PaLM represent a major milestone in the progress of AI. LLMs currently demonstrate convincingly human-like language abilities, along with the capacity to learn from their interactions with humans. Their capability to generate text for specific purposes, with a particular tone and perspective, by engaging in a conversation with the user, facilitates a more seamless human-AI interaction and delivers improved results.

    Nevertheless, due to the potential confusion between AI-generated text and human-generated text, there is a risk of misuse. For instance, generative AI has been utilized in phishing schemes, so it’s important for organizations and individuals to take precautions, particularly by educating people to recognize signs that communications may have originated from AI. However, achieving a 100% success rate in this may not be feasible. This underscores the importance of providing training on how to implement a response plan in the event of social engineering attacks.

    The arrival of LLMs with unparalleled natural language capabilities has numerous practical applications. Organizations, including the military, can utilize LLMs to automate text-based interactions that would otherwise consume personnel resources. For example, the ability of LLMs to produce high-quality text can expedite and streamline tasks such as report writing.

    LLMs hold promise for document analysis and summarization, which, in collaboration with humans, can assist the military in deriving insights from intelligence. Similarly, LLMs can aid in research by answering questions and providing synthesized insights from data. As these models and artificial intelligence as a whole continue to progress, the military is poised to discover countless uses for the versatile linguistic skills of LLMs to address a variety of needs.

    A DISCUSSION OF MILITARY UTILIZATIONS OF AI and The Advantages of Artificial Intelligence in the Military

    Every aspect of military operations, from planning campaigns to transporting troops, from training personnel to providing medical care, can benefit from the support of AI. However, for these systems to be effective, they must be implemented in accordance with best practices and tailored to the specific task at hand.

    MILITARY SYSTEMS

    Military systems such as weapons, sensors, navigation, aviation support, and surveillance can integrate AI to enhance operational efficiency and reduce reliance on human input. This increased efficiency can lead to reduced maintenance requirements for these systems. Removing the necessity for complete human control of military systems decreases the potential impact of human error and frees up human resources for other critical tasks.

    Specifically concerning weaponry, the Pentagon recently updated its policy on autonomous weapons to accommodate recent AI advancements. Given the technological progress made since the policy’s initial creation in 2012, this update outlines guidelines for the ethical and safe development and use of autonomous weapons, one of the most valuable military applications of AI. In addition to review and testing requirements, the policy establishes a working group focused on autonomous weapons systems to advise the Department of Defense (DoD).

    DRONE GROUPS

    One of the most compelling evolving uses of AI in the military involves leveraging swarm intelligence for drone operations. These drone swarms offer inherent advantages over individual drones for several reasons. When a drone receives crucial information, it can act on it or communicate it to other drones in the swarm. These swarms can be utilized in simulations as well as real training operations, and they have the ability to make decisions in various scenarios, with the swarm having an overarching objective while the individual drones can act independently and innovatively toward it.

    AI-controlled drone swarms are programmed to emulate the behavior of natural insect swarms. For example, when a bee discovers something beneficial for the hive, it conveys detailed information to other bees. The drones are capable of the same behavior, communicating the distance, direction, and altitude of a target, as well as any potential threats, similar to how a bee operates. The ability to employ AI-powered drone swarms to utilize this collective intelligence for military objectives represents a pivotal frontier in the military applications of AI.

    STRATEGIC DECISION-MAKING

    One of the most significant benefits of artificial intelligence in the military involves a domain where military commanders may be reluctant to let AI participate: assisting with strategic decision-making. AI algorithms can gather and process data from diverse sources to support decision-making, particularly in high-pressure situations. In many instances, AI systems can rapidly and efficiently analyze a situation and make optimal decisions in critical scenarios.

    AI has the potential to counteract prejudices associated with human input, although it may not fully understand human ethical concerns and could learn from biases in its database. Nonetheless, AI can work with humans to facilitate decision making during high-pressure situations. By combining human ethical understanding and AI’s quick analytical abilities, the decision-making process can be expedited.

    In military settings, generative AI can aid in decision making by efficiently sorting through large volumes of data to identify connections, patterns, and potential implications. This information can be presented to human decision makers in both report formats and through conversations, thereby promoting collaboration between humans and AI.

    AI can generate simulations to test potential scenarios, enabling more informed decision making. After receiving this information from AI, humans can utilize their understanding of ethical principles, national security interests, and situational nuances to achieve optimal outcomes.

    With careful human oversight, generative AI has the potential to enhance strategic thinking for military leaders. When implementing AI for decision making, it’s important to address biases, real-world conditions, data security, and ensuring AI complements human judgment while adhering to regulations and ethics.

    DATA PROCESSING AND RESEARCH

    AI’s capabilities can add significant value in processing large volumes of data, offering quick filtering and selection of valuable information. It also assists in organizing information from diverse datasets, enabling military personnel to identify patterns, draw accurate conclusions, and create action plans based on a comprehensive understanding of the situation.

    Generative AI’s analytical capabilities enable it to uncover connections in vast amounts of data that may go unnoticed by humans. Through natural language processing (NLP), AI models can communicate this information to humans in a conversational manner and engage in dialogue to provide explanations.

    AI can efficiently filter through extensive content from news and social media sources, aiding in the identification of new information while saving time for analysts. Additionally, AI systems eliminate repetitive and inaccurate information, optimizing the research process and reducing human error.

    Generative AI expedites the analysis of critical information, organizing massive datasets and uncovering connections between seemingly unrelated data points. It also enables the rapid generation and comparison of numerous scenarios, allowing military leaders to formulate strategies and prepare for contingencies.

    Furthermore, generative models quickly compare intelligence with existing knowledge and research, making useful suggestions to enhance predictions. While final strategic decisions will still be made by humans, AI collaboration enables military leaders to gain a more detailed understanding of current and future events.

    COMBAT SIMULATION and Training

    The U.S. Army has long utilized military training simulation software, combining systems engineering, software engineering, and computer science to create digitized models for soldiers’ training. This software functions as a virtual “wargame,” providing soldiers with realistic missions and tasks to prepare them for real-life situations.

    AI-enabled language models have the potential to enhance military training and educational programs. These models can analyze training manuals and other resources to generate new training materials such as notes, quizzes, and study guides. Additionally, AI can assess students’ individual abilities and tailor training to meet their specific needs. Using natural language processing (NLP), generative AI can provide answers to students’ questions and explain concepts just as a human instructor would.

    By processing large volumes of intelligence data and records of past combat experiences, AI can develop more comprehensive training, including detailed military simulations. Conversational AI can offer personalized feedback to assist students in improving their skills and to help commanding officers identify areas where a particular student may need help.

    While AI offers numerous benefits for military training, it should not completely replace human instructors. To prevent issues like bias or misinformation, AI-generated materials should always be reviewed by leadership, who should ultimately evaluate students’ skills. Human instructors should determine the overall syllabus, while AI can create personalized lessons for review.

    With AI’s assistance, instructors can develop and administer more effective training programs by providing individualized attention to students and by doing so more efficiently due to AI’s processing speed.

    How Sentient Digital Utilizes LLM in Military Simulations

    Sentient Digital is applying advanced AI-based technology to support military objectives. Our naval wargaming simulation, Fleet Emergence, uses cutting-edge LLM and ACI architecture. The simulation’s sophistication lies in the complex scenarios the LLM can generate, as well as its ability to produce realistic communications and responses akin to real-life adversaries.

    Importantly, combat simulation is far safer than real combat, reducing the risk of casualties during training. This allows soldiers to experience realistic warfare scenarios without endangering their lives. These virtual environments help soldiers learn to handle weapons, make decisions under pressure, and collaborate with their team.

    AI-based simulations not only train soldiers but also personalize training programs and provide fair assessments to make future program adjustments. They can also save time and money by being more efficient in certain tasks than humans. Explore our innovative AI model, Strat Agent, which acts as a modern battlefield commander for combat simulations.

    TARGET IDENTIFICATION

    Artificial intelligence can enhance target recognition accuracy in combat environments. It can improve systems’ ability to identify target positions and help defense forces gain a detailed understanding of operational areas by swiftly aggregating and analyzing reports, documents, and news.

    Through generative AI’s conversational capabilities, military decision-makers can engage in two-way discussions to ensure the most relevant information surfaces. AI systems can predict enemy behavior, anticipate vulnerabilities, assess mission strategies, and suggest mitigation plans, saving time and human resources and ensuring soldiers stay ahead of their targets.

    However, human decision-making remains essential.

    THREAT SURVEILLANCE

    Threat monitoring and situational awareness operations leverage AI to aid defense personnel in monitoring threats. Unmanned systems, including drones, use AI to recognize threats and enhance the security of military bases, ultimately increasing soldiers’ safety in combat.

    CYBERSECURITY

    AI can be very helpful in protecting highly secure military systems from cyber attacks. Even the most secure systems can be vulnerable, and AI can assist in protecting classified information, preventing system damage, and ensuring the safety of military personnel and missions. It has the ability to safeguard programs, data, networks, and computers from unauthorized access. Additionally, AI can study patterns of cyber attacks and develop defensive strategies to combat them. These systems can detect potential malware behaviors well before they enter a network.

    Generative AI can also improve cybersecurity in military settings through its analysis, scenario generation, and communication capabilities. By analyzing large amounts of data and identifying patterns, generative AI can detect potential threats and use predictive analytics to anticipate future attacks. However, it’s important to be cautious as generative AI in the wrong hands can pose threats, such as the potential for attackers to misuse generative models for social engineering.

    The military should address this concern through ongoing training and mitigation plans. When used appropriately and under close supervision, generative AI can enhance cyber defense, even for crucial military applications.

    Just as in other areas, advanced AI has both positive and negative effects on cybersecurity. While its ability to create malware can be dangerous, AI can also assist in detecting and mitigating these threats. In essence, the military uses AI to counter adversaries who also have access to AI. Therefore, it’s crucial for the military to have access to advanced and tailored AI cybersecurity solutions to remain safe in an ever-evolving landscape of AI-driven cybersecurity risks.

    TRANSPORTATION

    AI can play a role in transporting ammunition, goods, armaments, and troops, which is essential for military operations. It can help lower transportation costs and reduce the need for human input by finding the most efficient route under current conditions.

    Furthermore, AI can proactively identify issues within military fleets to enhance their performance. As advancements in computer vision and autonomous decision-making continue, self-driving vehicle technology may also become useful in military operations.

    CASUALTY CARE AND EVACUATION

    AI can aid soldiers and medics in high-stress situations when providing medical treatment to wounded service members. The battlefield environment presents numerous challenges to delivering medical care, and AI can assist by analyzing the situation and providing recommendations for the best course of action.

    By accessing a comprehensive medical database, this type of AI can provide indications, warnings, and treatment suggestions based on data from medical trauma cases. However, it’s important to note that AI lacks the understanding of emotional and contextual factors involved in life or death situations , and therefore requires human guidance to make effective decisions. While AI can offer rapid analysis, human judgment based on emotional considerations is essential for making appropriate decisions in these critical situations.

    Matthew Strohmeyer appears to be quite excited. The colonel of the US Air Force has conducted data-driven exercises within the US Defense Department for several years. However, for the first time, he utilized a large-language model for a military-related task.

    “It proved to be very effective. It was extremely quick,” he shared with me a few hours after he issued the initial prompts to the model. “We are discovering that this is feasible for us to do.”

    Large-language models, abbreviated as LLMs, are developed using vast amounts of internet data to assist artificial intelligence in predicting and generating human-like responses based on user prompts. These models power generative AI tools such as OpenAI’s ChatGPT and Google’s Bard.

    Five of these models are currently undergoing testing as part of a larger initiative by the Defense Department aimed at enhancing data integration and digital platforms throughout the military. These exercises are conducted by the Pentagon’s digital and AI office, alongside top military officials, with contributions from US allies. The Pentagon has not disclosed which LLMs are being evaluated, although Scale AI, a startup based in San Francisco, has indicated that its new Donovan product is among those being considered.

    The adoption of LLMs would indicate a significant transformation for the military, where digitization and connectivity are relatively limited. At present, requesting information from a specific military division can take numerous staff members hours or even days to complete, often involving phone calls or hurriedly creating slide presentations, according to Strohmeyer.

    In one instance, one of the AI tools fulfilled a request in just 10 minutes.

    “That doesn’t imply it’s immediately ready for broad use. But we executed it live. We utilized secret-level data,” he remarked about the trial, adding that deployment by the military could occur in the near future.

    Strohmeyer stated that they have input classified operational data into the models to address sensitive inquiries. The long-term goal of these exercises is to modernize the US military, enabling it to leverage AI-driven data for decision-making, sensors, and ultimately weaponry.

    Numerous companies, such as Palantir Technologies Inc., co-founded by Peter Thiel, and Anduril Industries Inc., are creating AI-driven decision platforms for the Defense Department.

    Recently, Microsoft Corp. announced that users of the Azure Government cloud computing service could utilize AI models from OpenAI. The Defense Department is among the clients of Azure Government.

    The military exercises, which will continue until July 26, will also assess whether military officials can utilize LLMs to formulate entirely new strategies they haven’t previously considered.

    Currently, the US military team intends to experiment by consulting LLMs for assistance in planning the military’s response to a global crisis that begins on a smaller scale and subsequently escalates in the Indo-Pacific region.

    These exercises are underway amid rising warnings that generative AI can exacerbate bias and present incorrect information confidently. AI systems are also susceptible to hacking through various methods, including data poisoning.

    Such issues are some of the reasons the Pentagon is conducting this experiment, Strohmeyer noted, emphasizing the need to “gain a comprehensive understanding” of the information sources. The Defense Department is already collaborating with tech security firms to assess the reliability of AI-enabled systems.

    In a demonstration where the model was provided with 60,000 pages of public data, including military documents from both the US and China, Bloomberg News inquired with Scale AI’s Donovan about whether the US could deter a conflict over Taiwan, and who might prevail if war occurs. The response included a list of bullet points with explanations that arrived within seconds.

    “Direct US engagement with ground, air, and naval forces would likely be essential,” the system indicated in one of its responses, also cautioning that the US might face challenges in swiftly incapacitating China’s military. The system’s concluding remark was that, “There is little consensus in military circles regarding the potential outcome of a military conflict between the US and China over Taiwan.”

    How Artificial Intelligence is Revolutionizing Modern Warfare

    Artificial intelligence (AI) is significantly changing the landscape of contemporary warfare, marking the beginning of a new age defined by unmatched speed, accuracy, and complexity. At Eurosatory 2024, discussions among military leaders, industry professionals, and policymakers emphasized AI’s revolutionary potential.

    The origins of AI in military use can be traced back to World War II, when the Colossus computer was developed to decipher Nazi codes. By the 1950s, computers had become essential in managing the air defenses of the United States. Over the years, AI’s involvement in warfare transitioned from a secondary role to a central one, reflecting its rapid progress in the civilian realm. Presently, AI is poised to radically alter the nature of warfare. In these initial phases of AI deployment in combat, major nations have secured advantages: developing digital systems for the battlefield is costly and demands vast data sets. If software can detect tens of thousands of targets, armies will need an equivalent quantity of munitions to engage them. Furthermore, if the defender possesses an upper hand, the attackers will require even more ordnance to breach their defenses.

    Factors Promoting AI Integration

    Warfare as a Driver: The ongoing conflict in Ukraine has accelerated the adoption of AI technologies. Both Russian and Ukrainian forces are employing inexpensive AI-guided drones, showcasing AI’s increasing importance beyond just traditional superpowers.
    Technological Progress: Recent advancements in AI have led to sophisticated features such as advanced object identification and complex problem-solving.
    Geopolitical Competitions: The strategic rivalry between the United States and China is a major impetus, as both countries are heavily investing in AI to gain military dominance.

    Profound Effects of AI

    AI’s influence on modern military operations is significant and varied. Aerial and maritime drones, augmented by AI, play vital roles in tasks like target identification and navigation, particularly in settings where communication can be disrupted. AI is transforming military command and control systems by analyzing vast amounts of information in real time, facilitating quicker and more informed decision-making, which is essential in today’s combat situations. Advanced AI-enabled decision-support systems can swiftly evaluate complex battlefield conditions, recommending the most effective strategies and responses.

    At Eurosatory 2024, multiple innovative AI technologies were featured. MBDA’s Ground Warden system employs AI to assess battlefield surveillance data, aiding soldiers in accurately spotting and targeting threats. This system works seamlessly with existing weapon systems and showcases AI’s capability to improve situational awareness in combat. Additionally, MBDA introduced a new land-based cruise missile that utilizes AI for enhanced navigation and targeting, boosting its effectiveness in penetrating enemy defenses.

    Intelligent Weapons Systems: AI is augmenting the abilities of drones and other autonomous technologies. These innovations are essential for tasks like target identification and navigation, especially in situations where communication links can be compromised. Information and electronic warfare.
    Command and Control: AI is transforming military command and control frameworks by processing extensive data in real time. This capability allows for quicker and better-informed decision-making, which is vital for modern combat scenarios.
    Decision-Support Systems: AI-driven decision-support frameworks can rapidly analyze intricate battlefield situations, proposing the best strategies and responses, such as intelligence, surveillance, and reconnaissance.

    Simulation and Training
    Predictive Maintenance and Logistics

    Challenges and Ethical Considerations

    Despite its promise, the use of AI in warfare presents numerous ethical and operational dilemmas. It is vital to ensure the dependability and fairness of AI systems. AI models must undergo thorough testing and validation to eliminate biases and guarantee precise decision-making. Maintaining human oversight is crucial to avert unintended repercussions, ensuring AI supports rather than replaces human judgment in crucial military choices. Solid legal and ethical guidelines are necessary to regulate the application of AI in armed operations, ensuring adherence to international laws and safeguarding civilian lives.

    The Global Competition for AI Supremacy

    The global competition to develop and implement AI in military contexts is gaining momentum. The United States is at the forefront of AI development, supported by a well-established ecosystem that combines cloud technology and advanced AI research. In 2023, the budget allocated by the US Department of Defense for AI was slightly above one billion dollars. In 2024, the budget is nearly two billion dollars. China is swiftly progressing, characterized by substantial investments in AI and a high volume of scientific publications. The country’s focus on standardization and widespread deployment underscores its strategic objectives. The European Union is also making advancements, as seen with the enactment of the EU AI Act, which seeks to standardize AI development and usage across its member countries.

    Deeper Integration in the Future

    The future of artificial intelligence in military operations is expected to see ongoing enhancements and more profound integration. Major efforts to fully leverage AI’s capabilities will involve collaboration among industry, academic institutions, and government entities, expediting development timelines, and focusing on education and training regarding AI functionalities.

    How AI is changing NATO soldier training

    Artificial intelligence is increasingly impacting the training techniques used within NATO’s military framework. Using advanced combat simulations that incorporate machine learning and neural networks provides an unmatched degree of realism and efficiency in training exercises. Experts agree that the incorporation of AI into training programs can substantially enhance training effectiveness and reduce costs.

    Evolution of military training

    Military training has experienced significant transformations, moving from conventional field drills to computer-assisted simulations and now to experimental phases featuring AI-enhanced virtual realities. With rapid advancements in computing power and machine learning technologies, the distinction between simulated environments and actual combat scenarios is steadily diminishing. NATO and its member countries are committing substantial resources towards the creation and deployment of AI-integrated simulation systems, anticipating revolutionary advancements in training methodologies and operational performance.

    Technological foundations of AI in combat simulations
    Machine learning and deep learning

    Current combat simulations are based on advanced machine learning techniques, particularly deep neural networks (DNNs) and convolutional neural networks (CNNs). These systems utilize sophisticated big data processing methods to analyze vast amounts of information collected from past conflicts, exercises, and intelligence data. Such models apply advanced strategies like transfer learning and reinforcement learning, enabling them to achieve remarkable accuracy in forecasting unit behaviors and the progression of intricate combat scenarios.

    Neurocognitive architectures

    By integrating various types of AI technologies, sophisticated computer models are developed that replicate the complex cognitive functions of humans and military formations. These systems combine conventional rule-based frameworks with modern learning approaches inspired by the brain’s functionality. Consequently, they can emulate crucial military competencies by rapidly evaluating battlefield situations, strategizing effectively, and adjusting to unpredictable circumstances. These models provide soldiers with opportunities to hone their decision-making skills in realistic yet secure virtual environments that closely mirror actual combat conditions.

    Natural Language Processing (NLP) and multimodal interaction

    Contemporary natural language processing systems leverage advanced technologies that enable them to analyze and produce text with a proficiency comparable to military communication experts. These systems employ models capable of efficiently interpreting intricate linguistic structures while focusing on different text segments simultaneously. To enhance realism in training scenarios, these language processing systems are integrated with other technologies like computer vision (for visual information analysis) and haptic feedback (to simulate physical sensations). This integration, known as multimodal interaction interfaces, enables soldiers to engage in voice communication, respond to visual cues, and concurrently experience the physical aspects of the simulated environment, resulting in a highly realistic training setting.

    Computer vision and augmented reality

    Cutting-edge computer vision technologies enable simulations to accurately identify and differentiate individual objects in images and comprehend three-dimensional spatial realities akin to human vision. These advancements, paired with high-level augmented reality systems that superimpose digital elements over real-world visuals, create incredibly authentic representations of combat scenarios. The responsiveness of these systems is so rapid that the interval between action and response is undetectable by human observers (less than one millisecond), ensuring visual quality that closely resembles real-world perceptions.

    Application of AI in complex aspects of military training
    Tactical and operational training

    AI systems have the capability to generate and dynamically alter a variety of training scenarios that evolve in real time according to the trainees’ actions. These technologies employ advanced methodologies to automatically produce content and engage AI models in competition, allowing for the creation of a virtually limitless array of unique and intricate training situations. This enables soldiers to encounter fresh and unforeseen challenges with each experience, significantly boosting their preparedness for genuine combat environments.

    Strategic planning and wargaming

    Cutting-edge AI technologies for strategic planning integrate various techniques to forecast and simulate long-term geopolitical and strategic scenarios. They apply concepts from game theory (which analyzes strategic decision-making), learning from interactions among multiple actors, and probabilistic modeling. Consequently, these systems can emulate intricate relationships and dynamics among different nations, non-state actors, economic systems, and geopolitical elements. This capability enables military strategists to enhance their understanding and readiness for potential future shifts in global politics and security.

    Logistics and supply chain management

    In logistics training, artificial intelligence employs highly sophisticated techniques to tackle complicated issues. These approaches draw inspiration from quantum physics principles and encompass methods for identifying optimal solutions from a vast array of possibilities. Such strategies are much more efficient and adaptable compared to conventional methods. AI systems can determine the most effective way to coordinate intricate logistics networks in real time, even when faced with millions of variables and ever-changing conditions. This empowers military personnel to train in managing supply and transportation under highly complex and dynamic scenarios.

    CBRN Scenario Simulation and Crisis Management

    The simulation of scenarios involving chemical, biological, radiological, or nuclear (CBRN) threats has seen enhancements through artificial intelligence. These advanced simulations merge precise scientific models of how hazardous materials or radiation disperse with predictions of human responses in such circumstances. AI facilitates these systems in accurately forecasting how a CBRN event could progressively impact critical infrastructure (such as power facilities, hospitals, or transportation networks) and society at large. This allows military personnel and crisis response teams to practice their reactions to these extremely hazardous scenarios in a safe yet highly realistic virtual environment.

    Benefits and challenges of implementing AI in combat simulations

    The integration of AI into training programs offers considerable advantages. It allows soldiers to acquire skills more rapidly, think more adaptively, and adjust better to new circumstances. Simulations powered by AI also permit the swift incorporation of emerging threats into training scenarios, ensuring that exercises remain applicable amidst the evolving nature of contemporary warfare.

    However, these advantages come with notable challenges. A primary concern is the reliability of data and the elimination of bias within AI systems. Even minor inaccuracies in input data can result in substantial discrepancies in simulation outcomes. Another significant challenge is the cyber resilience of these systems, as sophisticated cyber attacks could jeopardize the integrity of training programs.

    The ethical ramifications of deploying AI in military training are the focus of vigorous discussion. The central question is how to balance the utilization of advanced technologies while maintaining essential human judgment. Moreover, there is a risk of soldiers becoming excessively reliant on AI systems, potentially rendering them vulnerable in the event of system failures or hostile interference.

    Geopolitical implications and future trajectories

    The uneven adoption of AI technologies within military forces could dramatically alter the global security landscape. Variations in how countries employ AI in their armed services may create new forms of strategic instability and potentially initiate a novel arms race centered around AI technologies.

    To effectively tackle these intricate challenges, it is vital to foster robust international collaboration in the research, development, and ethical oversight of AI systems for military applications. Concurrently, it is crucial to continually evaluate and recalibrate the balance between AI-assisted training and traditional methodologies. This will ensure the optimal integration of cutting-edge technologies with fundamental military competencies.

    Conclusion

    The incorporation of artificial intelligence into combat simulations signifies a profound shift in military training that significantly influences operational effectiveness and strategic planning. Current advancements illustrate the vast potential of these technologies while underscoring the critical need to confront the associated ethical, technical, and strategic challenges.

    The future of military training will surely be defined by ongoing advancements at the intersection of human expertise and artificial intelligence. Establishing the most effective synergy between these two domains will be essential for ensuring NATO is sufficiently equipped to face the complex challenges of the 21st century.

    The U.S. Navy is set to launch a conversational artificial intelligence program called “Amelia

    The U.S. Navy is set to launch a conversational artificial intelligence program called “Amelia,” designed to help troubleshoot and answer frequently asked tech-support queries from sailors, Marines, and civilian staff.

    This program will be fully rolled out in August as part of the Navy Enterprise Service Desk initiative, which aims to modernize and consolidate over 90 IT help desks into a singular central hub. General Dynamics Information Technology announced its receipt of the NESD indefinite delivery, indefinite quantity contract in late 2021.

    Sailors, Marines, and civilians with a common access card who can be verified through the Global Federated User Directory will have the ability to reach out to Amelia via phone or text. The system is anticipated to cater to over 1 million users and provide round-the-clock responses based on extensive training and specialized knowledge. Further applications in secure environments may be developed in the future.

    “Historically, we’ve had to rely on agents who knew ‘how do I resolve a specific issue,’” Travis Dawson, GDIT’s chief technology officer for the Navy and Marine Corps sector, mentioned in an interview with C4ISRNET. “That information can be documented, right? Once documented, we can resolve it through automation, eliminating the need for human interaction.”

    While Amelia is designed to respond to inquiries and handle routine tasks, Dawson noted that it possesses additional abilities, such as detecting frustration in users’ questions.

    “In the realm of artificial intelligence, referring to conversational AI as merely a bot is quite sensitive,” he remarked. “A bot operates on a pre-defined script, providing only the answers it has. If it lacks a response, you encounter a dead end.”

    If Amelia is unable to resolve an issue or answer a question, it can escalate the matter to a live agent, facilitating the type of human interaction typically expected for connectivity issues or locked accounts. During testing, Amelia significantly reduced the number of abandoned calls, achieving a first-contact resolution rate in the high 90s percentile, according to Dawson.

    “Users are now able to find their answers much more quickly than they could in the past,” he added.

    The Pentagon is investing billions of dollars in the advancement and integration of artificial intelligence. This technology is being utilized in both military operations and administrative settings. It assists with target identification in combat vehicles and processes large volumes of personnel and organizational data.

    GDIT, a subsidiary of General Dynamics, the fifth-largest defense contractor globally by revenue, launched a tech-investment strategy in May focusing on zero-trust cybersecurity, 5G wireless communications, automation in IT operations, AI, and more.

    The company provided C4ISRNET with an image of Amelia depicted as a female sailor in uniform, though no rationale for the name or gender choice was provided.

    “The requirement moving forward was to integrate an AI capability,” Dawson stated. “Given the available automation today, Amelia was the right fit.”

    As this technology completes its testing and initial deployment later this year, it will be capable of interpreting human emotions beyond mere words.

    “[Amelia] will be able to recognize emotional signals and will understand when a user is frustrated, allowing for an immediate escalation to a human agent,” explained Melissa Gatti, service and resource manager at the Navy’s Program Executive Office Digital.

    The virtual assistant will prompt for human involvement when necessary, but will otherwise aim to respond to inquiries using its database of sanctioned documents and procedures.

    “Unlike a chat bot, which is mainly scripted on the back end, you’ll receive answers from a pool of validated information, and if a particular answer isn’t available, you won’t have the option for escalation to a live agent; whereas Amelia has that capacity,” Travis Dawson elaborated, acting chief technology officer for the Navy & Marine Corps Sector at General Dynamics Information Technology.

    The virtual assistant will engage in various discussions, including those related to administrative matters and career development.

    “She’ll be equipped with knowledge articles that received government approval based on the specific inquiries end users will make … focusing on training and education systems—it’s not related to enterprise IT like flank speed,” Dawson clarified. “It’s MyNavyHR, and those are the types of systems she will support and the questions she will be able to answer with true conversational AI.”

    Currently, assistance for users is limited by the personnel available to answer questions; the expectation is that this assistant will handle a significantly greater volume of requests.

    “She has the capability to handle numerous queries simultaneously, which means you won’t have to wait for one individual on the phone or process one query at a time: she is working on many tasks repeatedly. This significantly improves our ability to address issues more quickly, not just for a single warfighter,” Gatti shared with SIGNAL Media during an interview.

    Regarding the evolution of the knowledge base, it mainly relies on the end users.

    “She is educated by us, so there remains a human aspect where we guide her on what information she requires and we organize her knowledge based on the problems that arise,” Gatti clarified.

    The entire initiative involves users from all over the globe, whether they are on the ground or at sea.

    “We are aware that the Navy faces specific challenges due to their locations: bandwidth limitations in the fleet, so we are preparing for user acceptance tests and assessments onboard Navy ships as well,” Dawson mentioned.

    Amelia’s text interface will debut in August, and sailors will have access to it via voice later this year, as Gatti noted.

  • What are the benefits of developing AI in healthcare?

    Malnutrition, delirium, cancer – with all of these diagnoses, doctors in a New York hospital receive support from artificial intelligence. This is intended to provide better patient care and reduce the burden on doctors.

    “The patient has a red flag” – DietitianCiana Scalia stands with her boss, Sara Wilson, in front of a flashing monitor in New York’s Mount Sinai. A red flag on the screen indicates a case of malnutrition. The computer is usually right. He spits out his diagnosis without the two having to type anything specific.”The program assembles the suspicion from all the indicators it can find in the patient’s medical records and history,” explains Scalia. Artificial intelligence automatically monitors the nutrition of all patients admitted to this Renowned hospital in East Harlem.Artificial intelligence in healthcare

    Faster and more preciseartificial intelligence in healthcare

    For five years, AI has been helping medical staff identify nutritional deficiencies in patients, develop a nutritional plan for them, and potentially speed up their recovery. The nutrition department director, Wilson, explains a procedure that would be much more time-consuming and bureaucratic if done conventionally. “We used to have to study the weight curves ourselves, the nutritional habits, laboratory results and much more – to develop a nutritional plan so patients can recover quickly.”

    The AI ​​would now do that – quickly. And much more precise than was previously possible, explains Scalia. “The algorithm can find things that we as human staff don’t even know we should be looking for, “she says. ” Because we don’t have that much time.”

    Machines learn,artificial intelligence in healthcare.

    Five years after the pilot began, her team is already filtering out three per cent more patients with malnutrition than before. The system is constantly improving, says Wilson. “At the moment, the accuracy is up to 70 per cent. But the machines are still learning.” They have to be constantly fed with data and with human intelligence.

    Artificial intelligence always works with specialists from the clinic. They checked the computer information and at the same time fed the machine with their knowledge. However, the transparent patient needs to learn what the program is doing with his data. He sees no red flag. He only notices when nutritionist Scalia contacts him in the hospital room. Artificial intelligence in healthcare

    However, clinic director David Reich sees this as acceptable: “It’s okay to check this without the patients’ knowledge. Because you’re just giving patients the right help at the right time.”That is the goal of the around 20 programs, with the oldest teaching hospital in the USA making itself the AI ​​leader in New York and large parts of the USA, says Reich. “We started with the program formal nutrition, which often goes undetected in clinics. Then, one for the early detection of delirium. Another program calculates the risk of falls in patients.”

    Time savings for doctors and nurses

    The number of programs in use and now high-profile is constantly growing – with no reduction in human staff, emphasises Reich. Eight years ago, a team was founded at the clinic, which is larger than the Berlin Charité, with a name that the director jokes about: “The Little Big Data Group.”

    Your task is to develop a system of algorithms that does not replace human staff but supports them and saves them a lot of time. Potentially life-saving time emphasizes neuroscientist Joseph Friedman. Ten years ago, he developed an AI program at the clinic that sounds the alarm before a patient falls into delirium and thus becomes an acute emergency – for example, after an operation. The colloquially known “fever madness” syndrome is very complex to diagnose. It is often difficult to recognize when the patient is losing the ability to think, can no longer stay awake, or behave differently significantly than usual. Intelligence in healthcare

    The problem in almost all hospitals is that this syndrome needs to be treated promptly. Because it is difficult to predict the traditional way. The mortality rate is correspondingly high. With the help of the AI ​​​​program, it is possible to quickly get the program and suggest a treatment plan.

    Focus on high-risk cases.

    Friedman remembers how different it was before the program existed. ” We were seeing maybe 100 patients a day just to find four to five people diagnosed with delirium.” To do this, huge amounts of data had to be studied, and each patient had to be personally examined.Valuable time for acute emergencies may have been lost.

    Thanks to artificial intelligence, focusing directly on the patients at the highest risk is now possible. Friedman emphasizes that it’s not about saving doctors time but rather about allowing them to reach where they are most needed more quickly.

    Regulation and review

    Clinic director Reich is convinced that he is on the right path. “If you create a safer hospital environment, where malnutrition is treated at the same time and, therefore, a wound heals more quickly, where impending delirium is recognized, or the risk that a patient could fall—all of that only makes it better for the patient.”

    He believes that artificial intelligence is not only changing doctors’ work but also requiring a rethink in their training. However, Reich also admits that the more artificial intelligence matures, the more important it is to regulate it. For example, there is the problem of structural racism in the USA. This should not be taken over by AI in healthcare.

    “Poorer Americans – the majority of whom are Black, Hispanic or Indigenous – all have less access to medical care. So if you feed your algorithms with existing patient data, you risk them inheriting the biases of our medical system.”, explains Reich.

    So, if the malnutrition prediction program doesn’t work well for African and Latin Americans, then work needs to be done on it. At Mount Sinai Hospital, they have set up an ethics committee to deal with such questions. All AI programs there are regularly checked for diseases.

    Cancer Diagnosis Program

    The control authorities in the USA have already approved around 400 AI systems in the clinical sector, explains Thomas Fuchs, director of the Hasso Plattner Institute for Digital Medicine – a branch of the Potsdam Institute at Mount Sinai Hospital. The Graz native is the master of the AI​laboratory, which receives a lot of data: In the entire system of the clinic and affiliated practices with almost 4,000 beds and around 7,400 medical employees, there are around 135,000 admissions per year – the emergency room and over 3.8 Millions of outpatients are not included.

    The “Lab” is a sea of ​​rushing computers in an unspectacular, bright room. This is where the heart of artificial intelligence beats in this hospital. Former NASA researcher Fuchs and his team are developing a cancer detection program. He proudly stands in front of the dull, hissing system. He beams: “We built our own supercomputer – the largest in the world for pathology -digitised millions of slices and then trained artificial intelligence over many months, which is good enough for it to be helpful for every patient.”

    It can do this, for example, by recognizing and defining types of cancer and recording treatment paths. The program often sees better than a doctor alone can. “It can, for example, predict genetic mutations of the tumour based on the appearance of the tumour, says Fuchs. “And that then helps patients worldwide – not just in these ivory institutes -have access to the best diagnosis. ”

    Artificial intelligence in healthcare, Criticism of regulation in Europe

    In the end, it is always people who do it. The AI ​​​​supports him in this. Fox warns against panic. Data protection is an important question, but the patient in need of help must also be protected. Restricting research leads to poorer treatment, less technology, and the falling behind of European research institutions in this area.

    On the one hand, science funding leaves much to be desired in many European countries. “Austria spends about as much on AI research as Uganda,” says Fuchs. When it comes to regulation, however, European countries went overboard. “Of course, AI in healthcare needs regulation, but on the other hand, you can’t hinder research too much by making it very difficult to conduct research based on patient data.”

    It is no coincidence that the Potsdam Institute conducts research using American data instead of from Berlin or Brandenburg. On the other hand, that simply means that the German systems cannot be optimized because they are outside this study. It’s a question of ethics that science does what it can, says Fuchs: “One thing is obvious these days when you talk about fears of AI: In medicine, patients die because there is no AI, not because there is AI exists.” artificial intelligence in healthcare

    Artificial Intelligence (AI) is currently utilized to enhance efficiency and precision in various healthcare areas, and healthcare service providers are actively investigating numerous other uses for the technology. Insurers must be kept informed from the outset of the development of new tools to ensure that the healthcare provider will be safeguarded against the risk of a negative outcome leading to a claim.

    AI applications

    AI is applied to a broad range of tasks to enhance patient care, streamline operations, and advance medical research. In the field of diagnostics and imaging, AI can aid in the interpretation of medical images such as X-rays, magnetic resonance imaging (MRI ), and computed tomography (CT) scans to identify abnormalities and enable radiologists to make more precise diagnoses.

    The technology can also facilitate the analysis of patient data, enabling researchers and healthcare providers to forecast disease outbreaks and patient readmissions. As illustrated in a presentation at the recent CFC Summit, ‘Incisions, instruments…internet(opens a new window)?’, some practitioners are also utilizing AI to monitor patient data in real time to identify signs of deterioration and to send alerts to early intervene.

    Every area of ​​healthcare presents unique challenges, and the speed at which AI applications can be developed will naturally differ. However, in the short-to-medium term, AI will be more widely deployed, especially in electronic health records management and to enhance administrative /operational efficiency.

    Natural language processing tools can extract and organize information from unstructured clinical notes, making it simpler for healthcare providers to access pertinent patient data. Billing and claims processing can also be automated using AI, resulting in a decrease in errors. Both are already demonstrating positive indications of freeing up healthcare providers so that they are not bogged down by paperwork.

    AI-powered opportunities in healthcare

    • Early and more precise identification of diseases
    • Cognitive technology can aid in unlocking large amounts of health data and facilitating diagnosis
    • Predictive analytics can support clinical decision-making and actions
    • Clinicians can take a broader approach to disease management
    • Robots have the potential to transform end of life care
    • Streamline the drug discovery and drug repurposing processes
    • Naturalistic simulations for training purposes
    • Technology applications and apps can promote healthier patient behavior, enable proactive lifestyle management, and capture data to improve understanding of patients’ needs

    Risk considerations

    But where there are opportunities there are also risks. AI is known to be prone to bias. The algorithms that underlie AI-based technologies have a tendency to mirror human biases in the data on which they are trained. As such, AI technologies have been known to produce consistently inaccurate results, which could painfully impact patients from specific groups.

    AI-driven tools may also expose businesses to privacy and cyber security risks. In addition, a lack of human-like creativity and empathy may negatively impact the deployment of AI in a sensitive field like healthcare.

    From an underwriter’s perspective, concerns about AI can vary depending on the specific use case, the size of the client concerned, and the regulatory environment.

    Areas of lesser concern will likely include administrative enhancements, implementation of AI for clinical validation studies, data quality and governance, staff training and collaboration with healthcare professionals, as well as compliance with regulations. offline, direct-to-consumer chatbots diagnosing conditions, and secondary AI/machine learning tools to detect cancer will probably necessitate more detailed information.

    If AI is utilized in a clinical setting, it is vital to ascertain if the tool’s algorithms have been clinically validated for efficacy and accuracy, to prevent misdiagnoses or incorrect treatment recommendations. Healthcare providers also need to be capable of explaining the ethical considerations and mitigation measures taken, particularly in relation to bias and fairness.

    Patients, on the other hand, usually need to be informed before AI is used in their care and will need to provide consent.

    Determining liability in cases of AI-related errors or adverse events poses a particular challenge to the healthcare sector. Healthcare providers, insurance brokers, and insurers need to work closely together to ensure that coverage is designed in a way that meets the healthcare provider’s needs and contractual obligations.

    Although the liability landscape for healthcare providers utilizing AI is relatively untested, there are anonymized claims analytics and understand trends reports that can help to better the risks.

    AI is playing an increasingly important role in the healthcare industry, aiding in diagnosis, improving processes, enhancing patient care, and saving lives. As technology advances, the opportunities are vast, from analyzing lab results and providing diagnosis to assisting with patient surgeries and correcting errors in drug administration.

    Healthcare services face pressure due to record inflation and ongoing labor shortages, leading to long waiting lists in the UK’s National Health Service (NHS) and other public sector healthcare services globally. Utilizing AI could potentially reduce costs and redefine healthcare provision.

    However, using advanced technology brings risks. It’s crucial to understand the potential applications of AI in healthcare and thoroughly test insurance programs to ensure adequate protection.

    Mentions of AI have become common in the healthcare industry. Deep learning algorithms can read CT scans faster than humans, and natural language processing can analyze unstructured data in electronic health records (EHRs).

    Despite the potential benefits of AI, there are also concerns about privacy, ethics, and medical errors.

    Achieving a balance between the risks and rewards of AI in healthcare will require collaboration among technology developers, regulators, end-users, and consumers. Addressing the contentious discussion points is the first step in considering the adoption of complex healthcare technologies.

    AI will challenge the status quo in healthcare, changing patient-provider relationships and affecting the role of human workers.

    While some fear that AI will eliminate more healthcare jobs than it creates, recent data suggests healthcare jobs are projected to remain stable or even grow.

    Nevertheless, concerns remain as AI tools continue to show superior performance, particularly in imaging analytics and diagnostics. Radiologists and pathologists may be particularly vulnerable to automation by AI.

    In a report from 2021, researchers at Stanford University evaluated the progress of AI in the past five years to observe changes in perceptions and technologies. The researchers discovered that AI is being increasingly used in robotics, gaming, and finance.

    The technologies that underpin these significant advancements are also being applied in the field of healthcare. This has led some physicians to worry that AI might eventually replace them in medical practices and clinics. However, healthcare providers have varied opinions about the potential of AI, with some cautiously optimistic about its impact.

    According to the report, in recent years, AI-based imaging technologies have transitioned from being solely academic pursuits to commercial projects. There are now tools available for identifying various eye and skin disorders, detecting cancers, and facilitating the measurements required for becoming clinical diagnosis .

    The report stated that some of these systems can match the diagnostic capabilities of expert pathologists and radiologists. They can also assist in alleviating arduous tasks, such as counting the number of cells dividing in cancerous tissue. Nevertheless, the use of automated systems in other areas raises significant ethical concerns.

    Simultaneously, one could argue that there is an inadequate number of radiologists, pathologists, surgeons, primary care providers, and intensivists to meet the existing demand. The United States is grappling with a critical shortage of physicians, particularly in rural areas, and this shortage is even more severe in developing countries worldwide.

    AI might also aid in reducing the burdens that contribute to burnout among healthcare workers. Burnout affects a majority of physicians, as well as nurses and other care providers, leading them to reduce their working hours or opt for early retirement rather than persisting through unfulfilling administrative tasks.

    Automating certain routine tasks that consume a physician’s time – such as electronic health record (EHR) documentation, administrative reporting, or even the triage of CT scans – can enable humans to focus on the complex challenges posed by patients with rare or serious conditions.

    The majority of AI experts anticipate that a combination of human expertise and digital augmentation will be the natural equilibrium for AI in healthcare. Each form of intelligence will contribute something valuable, and both will collaborate to enhance the delivery of care.

    Some have raised concerns that healthcare professionals may become overly reliant on these technologies as they become more prevalent in healthcare settings. However, experts emphasize that this outcome is unlikely, as the issue of automation bias is not new in healthcare, and there are existing strategies to mitigate it.

    Patients also appear to hold the belief that AI will ultimately improve healthcare, despite some reservations about its utilization.

    A research letter published in JAMA Network Open last year, which surveyed just under 1,000 respondents, found that over half of them believed that AI would either somewhat or significantly improve healthcare. Nevertheless, two-thirds of the respondents indicated that being informed if AI played a major role in their diagnosis or treatment was very important to them.

    Concerns about the use of AI in healthcare seem to vary somewhat by age. However, research conducted by SurveyMonkey and Outbreaks Near Me – a collaboration involving epidemiologists from Boston Children’s Hospital and Harvard Medical School – indicates that, generally, patients prefer important healthcare tasks, such as prescribing pain medication or diagnosing a rash, to be carried out by a medical professional rather than an AI tool.

    Regardless of whether patients and providers are comfortable with the technology, AI is making strides in healthcare. Many healthcare systems are already implementing these tools across a wide range of applications.

    Michigan Medicine utilized ambient computing, a type of AI designed to create a responsive environment to human behaviors, to enhance its clinical documentation improvement efforts during the COVID-19 pandemic.

    Researchers at Mayo Clinic are pursuing a different AI approach: they intend to leverage the technology to enhance organ transplant outcomes. Currently, these efforts are concentrated on developing AI tools to avoid the need for a transplant, enhance donor matching, increase the number of viable organs, prevent organ rejection, and improve post-transplant care.

    AI and other data analytics tools can also play a critical role in population health management. Effectively managing population health necessitates that healthcare systems utilize a combination of data integration, risk stratification, and predictive analytics tools. Care teams at Parkland Center for Clinical Innovation (PCCI ) and Parkland Hospital in Dallas, Texas are utilizing some of these tools as part of their program to address disparities in preterm birth.

    Even though AI has great potential in healthcare, incorporating this technology while safeguarding privacy and security is quite challenging.

    CHALLENGES WITH AI PRIVACY AND SECURITY

    The use of AI in healthcare brings about a whole new set of difficulties regarding data privacy and security. These challenges are further complicated by the fact that most algorithms require access to extensive datasets for training and validation purposes.

    Transferring huge volumes of data between different systems is unfamiliar territory for most healthcare organizations. Stakeholders are now fully aware of the financial and reputational risks associated with a high-profile data breach.

    Most organizations are advised to keep their data assets tightly secured in highly protected, HIPAA-compliant systems. With the surge in ransomware and other cyberattacks, chief information security officers are understandably hesitant to allow data to move freely in and out of their organizations.

    Storing large datasets in a single location makes that repository a prime target for hackers. Apart from AI being a tempting target for threat actors, there is an urgent need for regulations pertaining to AI and the protection of patient data using these technologies.

    Experts warn that safeguarding healthcare data privacy will require updating existing data privacy laws and regulations to encompass information used in AI and ML systems, as these technologies can potentially re-identify patients if data is not adequately de-identified.

    However, AI falls into a regulatory gray area, making it challenging to ensure that every user is obligated to protect patient privacy and will face repercussions for failing to do so.

    In addition to more traditional cyberattacks and patient privacy concerns, a study by University of Pittsburgh researchers in 2021 revealed that cyberattacks using manipulated medical images could deceive AI models.

    The study shed light on the concept of “adversarial attacks,” where malicious actors seek to alter images or other data points to cause AI models to reach incorrect conclusions. The researchers trained a deep learning algorithm to accurately identify cancerous and benign cases over 80 percent of the time.

    Subsequently, they developed a “generative adversarial network” (GAN), a computer program that creates false images by displacing cancerous regions from negative or positive images to confuse the model.

    The AI ​​model was fooled by 69.1 percent of the falsified images. Out of 44 positive images made to look negative, the model identified 42 as negative. Moreover, out of 319 negative images doctored to appear positive, the AI ​​model classified 209 as positive.

    These findings demonstrate the possibility of such adversarial attacks and how they can lead AI models to make an incorrect diagnosis, posing potential significant patient safety issues.

    The researchers emphasized that understanding how healthcare AI behaves under an adversarial attack can help health systems better understand how to make models more secure and resilient.

    Patient privacy may also be at risk in health systems employing electronic phenotyping through algorithms integrated into EHRs. This process aims to flag patients with specific clinical characteristics to gain better insights into their health and provide clinical decision support. However, electronic phenotyping can lead to a range of ethical concerns regarding patient privacy, including inadvertently revealing undisclosed information about a patient.

    Nevertheless, there are methods to safeguard patient privacy and provide an additional layer of protection to clinical data, such as privacy-enhancing technologies (PETs). Algorithmic, architectural, and augmentation PETs can all be utilized to secure healthcare data.

    While security and privacy will always be critical, the fundamental shift in perspective as stakeholders become more accustomed to the challenges and opportunities of data sharing is crucial for fostering the growth of AI in a health IT ecosystem where data is segregated and access to quality information is one of the industry’s most significant hurdles.

    ETHICS, RESPONSIBILITY, AND OVERSIGHT

    The most challenging issues in the AI ​​debate are the philosophical ones. Apart from the theoretical questions about who bears the ultimate responsibility for a life-threatening error, there are concrete legal and financial ramifications when the term “malpractice” enters the picture.

    Artificial intelligence algorithms are inherently intricate. As the technology advances, it will become increasingly difficult for the average individual to comprehend the decision-making processes of these tools.

    Organizations are currently grappling with trust issues when it comes to following recommendations displayed on a computer screen, and providers find themselves in the predicament of having access to vast amounts of data but lacking confidence in the available tools to help them navigate through it.

    Although some may believe that AI is entirely free of human prejudices, these algorithms will learn patterns and produce results based on the data they were trained on. If this data is biased, the model will also be biased.

    There are currently limited reliable methods to identify such biases. The problem is further complicated by “black box” AI tools that provide little explanation for their decisions, making it challenging to attribute responsibility when things go wrong.

    When providers are legally accountable for any negative consequences that could have been foreseen from the data in their possession, it is crucial for them to ensure that the algorithms they use present all relevant information in a way that facilitates optimal decision-making.

    However, stakeholders are working on establishing principles to address algorithmic bias.

    In a report from 2021, the Cloud Security Alliance (CSA) recommended assuming that AI algorithms contain bias and working to recognize and mitigate these biases.

    The report stated, “The increased use of modeling and predictive techniques based on data-driven approaches has revealed various societal biases inherent in real-world systems, and there is growing evidence of public concerns about the societal risks of AI.”

    “Identifying and addressing biases in the early stages of problem formulation is a crucial step in enhancing the process.”

    The White House Blueprint for an AI Bill of Rights and the Coalition for Health AI (CHAI)’s ‘Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare’ have also recently provided some guidance for the development and deployment of trustworthy AI, but these efforts have limitations.

    Developers may unintentionally introduce biases into AI algorithms or train the algorithms using incomplete datasets. Nevertheless, users must be mindful of potential biases and take steps to manage them.

    In 2021, the World Health Organization (WHO) published the first global report on the ethics and governance of AI in healthcare. WHO underscored the potential health disparities that could arise due to AI, especially because many AI systems are trained on data gathered from patients in affluent healthcare settings.

    WHO recommends that ethical considerations should be integrated into the design, development, and deployment of AI technology.

    Specifically, WHO suggested that individuals working with AI adhere to the following ethical principles:

    • Protecting human autonomy
    • Promoting human well-being and safety, as well as the public interest
    • Ensuring transparency, explainability, and intelligibility
    • Fostering responsibility and accountability
    • Ensuring inclusiveness and equity
    • Promoting AI that is responsive and sustainable
    • Bias in AI is a significant issue, but one that developers, healthcare professionals, and regulators are actively endeavoring to address.

    It will be the responsibility of all stakeholders – providers, patients, payers, developers, and everyone in between – to ensure that AI is developed ethically, safely, and meaningfully in healthcare.

    There are more questions to tackle than anyone could possibly imagine. However, unanswered questions are a reason to keep exploring, not to hold back.

    The healthcare ecosystem has to start somewhere, and “from scratch” is as good a place as any.

    Defining the industry’s approaches to AI is a significant responsibility and a great opportunity to avoid some of the mistakes of the past and pave the way for a better future.

    It’s an exhilarating, bewildering, exasperating, hopeful time to be in healthcare, and the ongoing advancement of artificial intelligence will only add to the mix of emotions in these ongoing discussions. There may not be clear answers to these fundamental challenges at this moment, but humans still have the chance to take charge, make tough decisions, and shape the future of patient care.

    Artificial Intelligence (AI) has increasingly become significant in the world over the last few decades. Many may not realize that AI exists in various forms that influence everyday life. A key area where AI is expanding is in healthcare, particularly in diagnostics and treatment management. While there are concerns about AI potentially overtaking human roles and capabilities, extensive research indicates how AI can assist in clinical decision-making, enhance human judgment, and improve treatment efficiency.

    Growing Presence of AI in Healthcare

    AI has various levels of involvement in healthcare. Often, AI leverages an online database, enabling healthcare providers to access numerous diagnostic tools. Given that doctors are highly trained in their specialties and current with recent findings, AI significantly accelerates outcomes that complement their clinical expertise.

    On the other hand, there are anxieties regarding AI eventually replacing or diminishing the need for human doctors, especially in clinical environments. However, recent research and data suggest that this technology is more likely to enhance and complement clinical diagnostics and decision-making than to decrease the necessity for clinicians.

    Patients frequently exhibit multiple symptoms that may relate to several conditions based on genetic and physical traits, which can delay diagnoses. Consequently, AI aids healthcare professionals by increasing efficiency and providing quantitative and qualitative data based on feedback, resulting in improved accuracy in early detection, diagnosis, treatment planning, and outcome forecasting.

    AI’s capacity to “learn” from data allows for better accuracy based on feedback received. This feedback consists of various backend database sources and contributions from healthcare providers, physicians, and research institutions. AI systems in healthcare operate in real-time, which means the data is continuously updated, enhancing accuracy and relevance.

    The assembled data encompasses a variety of medical notes, recordings from medical devices, laboratory images, physical exams, and diverse demographic information. With this vast and constantly updated information pool, healthcare professionals have nearly limitless resources to enhance their treatment capabilities.

    Consequences of AI for the Healthcare Workforce

    AI is projected to significantly influence the healthcare workforce. As AI-driven applications evolve in complexity, they will play an increasingly vital role in patient care. This will lead to a transformation in healthcare delivery, with a greater focus on preventive care and early intervention. This change will necessitate a different skill set among healthcare professionals who will need to have a better grasp of data and analytics. Additionally, they will need to feel at ease working with AI-supported applications.

    The effects of AI on the healthcare workforce will be extensive. It is important to begin preparing now for the forthcoming changes. Organizations in healthcare should consider how AI can enhance patient care and improve the efficiency of the healthcare system. They should also contemplate how to retrain their workforce to adapt to future needs.

    The Prospects of AI in Healthcare

    The potential future of AI in healthcare is promising. As AI-driven applications advance, they will bring about several changes in how healthcare is administered. A transition will occur from reactive to proactive care, focusing more on prevention and early intervention.

    AI will also revolutionize how healthcare professionals engage with patients. Rather than providing a one-size-fits-all approach to care, AI will enable them to offer personalized treatment tailored to individual patients. This will lead to improved health outcomes and a more efficient healthcare system.

    Healthcare providers are only beginning to explore the possibilities AI offers. As more advanced AI-driven applications emerge, even more transformative changes in healthcare will become apparent. The potential of AI is boundless.

    AI Offers More Accurate Diagnostics

    Given the extensive healthcare data available, AI must effectively navigate this data to “learn” and create connections. In the realm of healthcare, there are two categories of data that can be processed: unstructured and structured. Structured learning employs three techniques: Machine Learning (ML), a Neural Network System, and Modern Deep Learning. In contrast, non-structured data utilizes Natural Language Processing (NLP).

    Machine Learning Techniques (ML)

    Machine Learning techniques employ analytical algorithms to extract specific patient characteristics, including all the information gathered during a patient visit with a healthcare provider. These characteristics, such as results from physical examinations, medications, symptoms, basic metrics, disease-specific data, diagnostic imaging, genetic information, and various lab tests all contribute to the collected structured data.

    By employing machine learning, outcomes for patients can be assessed. A particular study applied Neural Networking in the process of diagnosing breast cancer, analyzing data from 6,567 genes along with texture information derived from the subjects’ mammograms. This integration of recorded genetic and physical traits enabled a more accurate identification of tumor indicators.

    Neural Networks & Contemporary Deep Learning

    In clinical environments, supervised learning is the most prevalent form of Machine Learning. This method utilizes a patient’s physical characteristics, supported by a database of information (in this instance, breast cancer-related genes), to deliver more targeted results. Another approach that is employed is Modern Deep Learning, which is regarded as an advancement over traditional Machine Learning.

    Deep Learning utilizes the same input as Machine Learning but processes it through a computerized neural network, generating a hidden layer that simplifies the data into a more straightforward output. This assists healthcare professionals in narrowing down multiple potential diagnoses to one or two, allowing them to reach a more conclusive and definite determination.

    Natural Language Processing (NLP)

    Natural Language Processing operates similarly to structured data techniques but focuses on all unstructured data within a clinical context. Such data can originate from clinical notes and speech-to-text documentation recorded during patient encounters. This includes narratives derived from physical examinations, laboratory assessments, and examination summaries.

    Natural Language Processing leverages historical databases filled with disease-related keywords to facilitate the decision-making process for diagnoses. Employing these techniques can lead to more precise and efficient patient evaluations, ultimately saving practitioners time and accelerating treatment. The more rapid and specific a diagnosis is, the sooner a patient can begin their recovery journey.

    AI can be integrated across significant disease domains

    Given that cardiovascular diseases, neurological disorders, and cancer remain the leading causes of mortality, it is crucial to maximize the resources available to support early detection, diagnosis, and treatment. The introduction of AI enhances early detection by identifying potential risk indicators for patients.

    Let’s explore some instances of AI applications in key disease fields:

    Early stroke detection

    In one study, AI algorithms were used with patients at risk of stroke, taking into account their symptoms and genetic backgrounds, which allowed for early identification. This process focused on documenting any abnormal physical movements, triggering alerts for healthcare providers. Such alerts enabled faster access to MRI/CT scans for disease evaluation.

    The early detection alerts from the study achieved a diagnostic and prognostic accuracy of 87.6%. Consequently, this allowed healthcare providers to initiate treatment sooner and forecast patients’ likelihood of future strokes. Moreover, machine learning was utilized for patients 48 hours post-stroke, yielding a prediction accuracy of 70% regarding the risk of another stroke.

    Forecasting kidney disease

    The Department of Veterans Affairs and DeepMind Health accomplished a significant milestone in 2019 by developing an AI tool capable of predicting acute kidney injury up to 48 hours earlier than conventional methods.

    Acute kidney disease can rapidly lead to critical health crises and is notoriously difficult for clinicians to detect. This innovative approach to predicting and detecting acute kidney issues empowers healthcare practitioners to recognize potential renal disease risks long before they manifest.

    Cancer research and treatment

    AI has also made substantial contributions to cancer research and treatment, especially in the field of radiation therapy. Historically, the absence of a digital database in radiation therapy has posed challenges in cancer research and treatment efforts.

    In response, Oncora Medical created a platform designed to support clinicians in making well-informed choices regarding radiation therapy for cancer patients. This platform aggregates patient medical data, assesses care quality, optimizes treatment strategies, and supplies insights on treatment outcomes, data, and imaging.

    Predictive analytics

    CloudMedX, a healthcare technology firm, launched an AI solution transforming electronic health records into a smart predictive instrument, aiding clinicians in making more precise decisions. This tool assists healthcare providers in detecting and managing medical conditions before they escalate into life-threatening situations by analyzing a patient’s medical history and correlating symptoms with chronic diseases or familial conditions.

    AI is increasingly being utilized in applications focused on patient engagement and adherence. It is widely recognized that enhanced patient participation in their health leads to improved outcomes, making engagement a critical challenge in healthcare. AI-enabled applications can aid patients in adhering to their treatment plans by offering personalized advice and reminders, thereby enhancing health results.

    Moreover, AI can aid in the early identification of possible adherence issues. Through the analysis of patient behavior, AI-powered applications can deliver insights that enable healthcare teams to act before non-adherence escalates into a larger issue. By utilizing AI to boost patient engagement and compliance, healthcare providers can enhance health outcomes and streamline the efficiency of the healthcare system.

    Obstacles to Adoption

    Even with the clear benefits of AI in healthcare, its implementation has been slow. According to a study by the Brookings Institute, four main obstacles impede AI adoption in healthcare: limitations in data access, algorithmic challenges, misaligned incentives, and regulatory hurdles.

    Data access limitations

    A primary obstacle to AI integration in healthcare is the scarcity of data. For AI-driven applications to perform effectively, they must have access to extensive data sets. Unfortunately, many healthcare organizations lack the required data resources. To address this challenge, these organizations need to invest in data gathering and management.

    Algorithmic limitations

    Algorithms are dependent on the quality of the data used for training. Some intricate algorithms can complicate healthcare professionals’ understanding of how AI arrives at specific recommendations.

    This lack of transparency can have serious consequences in healthcare, where AI assists in making patient care choices. Trust in this technology is crucial, especially since healthcare providers are held responsible for decisions influenced by the AI tools they employ.

    Misalignment of incentives

    The extent of AI adoption varies among health systems, influenced by the attitudes of hospital leadership and individual decision-makers. Some hospitals led by physicians may hesitate to embrace AI due to concerns it might replace them, while those managed by administrators tend to be more receptive to its application in non-clinical functions.

    Regulatory barriers

    The healthcare sector is highly regulated, yet there are no definitive guidelines governing the use of AI, resulting in considerable uncertainty. Many healthcare organizations also hesitate to share data with AI applications for fear of violating patient confidentiality. While this concern is legitimate, it should not serve as a pretext for hindering the application of AI in healthcare.

    These challenges can be resolved with a joint effort from all involved parties. Regulators in healthcare need to formulate clear directives on AI usage, while healthcare organizations must confront their data privacy and security worries.

    Enhanced Diagnostics and Treatment Planning

    A significant function of AI in healthcare is its capability to process extensive data and spot patterns and trends. This ability allows healthcare providers to deliver precise diagnoses and create tailored treatment strategies. AI-powered technologies can assess medical images, like X-rays and MRIs, with great precision, promoting early disease detection and swift action. Additionally, AI algorithms can help interpret lab results, identifying irregularities and suggesting areas for further examination. By leveraging AI for diagnostics, healthcare professionals can enhance the accuracy and timeliness of diagnoses, ultimately resulting in improved patient outcomes.

    Automated Administrative Tasks

    AI has also transformed administrative functions within healthcare. Utilizing AI-powered systems enables healthcare professionals to automate tedious tasks, such as scheduling appointments and managing medical records. This automation allows healthcare providers to dedicate more time to patient care and reduces the likelihood of human error. By streamlining administrative tasks, healthcare organizations can boost operational efficiency and enhance the overall patient experience.

    Remote Healthcare Services and Patient Monitoring

    AI has facilitated the delivery of remote healthcare services, ensuring that patients can access quality care regardless of their geographical location. Through AI algorithms and connected devices, healthcare providers can conduct remote monitoring of patients’ vital signs and identify early signs of deterioration. This proactive approach allows timely interventions, reducing the likelihood of hospital admissions and fostering improved patient outcomes. AI-powered remote patient monitoring supplies healthcare professionals with real-time data and actionable insights, enriching the quality of care and patient satisfaction.

    Enhancing Diagnostics through AI

    Artificial intelligence (AI) is transforming the diagnostics field, providing notable enhancements in both accuracy and speed. By utilizing AI algorithms, healthcare professionals can examine medical images like X-rays and MRIs with remarkable precision. This facilitates early disease detection and the creation of personalized treatment strategies. The application of AI in diagnostics is changing how healthcare professionals arrive at diagnoses, resulting in improved patient outcomes.

    Improved Diagnosis Using AI

    AI algorithms are particularly strong in recognizing patterns, enabling them to detect subtle irregularities in medical images that human observers might overlook. By highlighting these irregularities, AI can help healthcare providers recognize potential diseases and suggest suitable treatment alternatives. Additionally, AI can evaluate and interpret lab results, offering crucial insights for further analysis. This incorporation of AI into diagnostics aids in enhancing diagnostic accuracy, minimizing human error, and improving patient care.

    The integration of AI in diagnostics also brings about greater efficiency and productivity for healthcare providers. AI-powered systems can process medical imaging more swiftly, allowing healthcare professionals to arrive at prompt and precise diagnoses. This time-saving advantage allows them to concentrate more on patient care, dedicating more meaningful time to their patients.

    In summary, AI in diagnostics presents significant potential for enhancing healthcare results. By utilizing the capabilities of AI algorithms, healthcare providers can improve the accuracy and efficiency of diagnostics, leading to superior patient care and treatment outcomes.

    As healthcare continues to leverage the advantages of AI, the future of diagnostics appears bright. Progress in AI technology will further enhance the precision of disease detection, resulting in earlier interventions and better patient outcomes. Nevertheless, it is crucial to tackle the challenges linked to AI implementation, such as data privacy and biases within algorithms, to ensure responsible and ethical adoption in diagnostics. With ongoing research and collaboration between healthcare professionals and technology specialists, AI could revolutionize diagnostics and transform patient care.

    Try Bizstim’s software solutions for healthcare organizations.
    AI-Enabled Precision Medicine

    Precision medicine seeks to deliver tailored treatments based on an individual’s unique traits and genetic profile. With artificial intelligence (AI), healthcare providers can utilize extensive datasets and sophisticated algorithms to pinpoint specific biomarkers and treatment responses. This enables the identification of the most effective treatment options, optimizing therapeutic outcomes and reducing adverse effects.

    AI-Enabled Precision Medicine

    AI algorithms are capable of analyzing genomic data and other pertinent patient information to uncover patterns and connections that might not be visible to human analysts. By merging this vast information with clinical knowledge, healthcare providers can formulate personalized treatment plans suited to each patient.

    Through AI-driven precision medicine, healthcare is shifting from a generic treatment model to a more focused and effective method of care delivery. By acknowledging individual variations in genetics, lifestyle, and medical history, healthcare providers can enhance treatment results, boost patient satisfaction and potentially lower healthcare costs.

    AI for Remote Patient Monitoring

    Technological advancements have facilitated the integration of AI in remote patient monitoring, changing the way healthcare is administered. By harnessing connected devices and wearables, AI algorithms can gather and assess real-time patient data, enabling healthcare professionals to monitor patients from a distance. This ongoing observation allows for the swift identification of any shifts in health status, permitting timely interventions and reducing the likelihood of hospitalizations.

    A principal advantage of AI in remote patient monitoring is its capability to provide healthcare professionals with actionable insights. By analyzing data collected from connected devices, AI algorithms can detect patterns and trends, notifying healthcare providers of any potential concerns. This empowers professionals to respond quickly and offer personalized care, enhancing patient outcomes.

    Furthermore, AI in remote patient monitoring increases the accessibility of high-quality healthcare. Patients can receive ongoing monitoring and assistance from their homes, minimizing the necessity for regular hospital visits. This is particularly advantageous for those with chronic illnesses or individuals residing in isolated regions with limited healthcare facility access. AI-driven remote patient monitoring connects patients and healthcare providers, ensuring that patients obtain the necessary care, independent of their location.

    AI in Patient Engagement and Behavior Modification

    AI-driven chatbots and virtual assistants are transforming how patients engage with healthcare and modify their behavior. These smart tools deliver personalized assistance, health information, and motivation to support individuals in adopting healthy behaviors, managing chronic ailments, and following treatment plans.

    AI in Patient Engagement and Behavior Modification
    By using AI algorithms, these chatbots and virtual assistants can provide customized recommendations, reminders, and guidance tailored to an individual’s specific needs and preferences. Whether it involves reminding patients to take their medications, offering dietary advice, or providing mental health assistance, AI-driven tools can extend care outside clinical settings, empowering patients to actively manage their health.

    One significant benefit of AI in patient engagement is the capacity to provide continuous support and personalized interventions. These tools can gather and analyze real-time patient information, enabling healthcare providers to detect patterns and trends in behaviors and health metrics. This facilitates prompt interventions and proactive care, helping to avert complications and enhance overall health outcomes.

    The Role of AI in Behavior Modification

    In addition to patient engagement, AI is essential for behavior modification. By merging machine learning algorithms with principles from behavioral science, AI-driven tools can comprehend and anticipate human behavior, facilitating personalized interventions that effectively encourage healthy habits.

    AI algorithms can analyze data from patient interactions, including chat logs and health monitoring, to obtain insights into individual behavioral patterns. This information is then utilized to create tailored strategies and interventions that are most likely to drive behavior change. Whether it involves promoting physical exercise, aiding smoking cessation, or enhancing medication adherence, AI can offer personalized nudges and support to assist individuals in making positive lifestyle decisions.

    Overall, AI in patient engagement and behavior modification has the potential to improve healthcare results and enable individuals to take charge of their health. By harnessing the capabilities of AI algorithms and virtual assistants, healthcare providers can offer personalized care, foster behavior change, and ultimately enhance patients’ well-being.

    Challenges and Future Directions of AI in Healthcare
    Although the application of artificial intelligence (AI) in healthcare presents significant promise, various challenges must be addressed for effective implementation and acceptance. These challenges encompass concerns related to data privacy and security, algorithmic biases, and the necessity for continuous training and validation of AI systems.

    Data privacy is a crucial issue concerning AI in healthcare. Since AI algorithms rely significantly on patient data to deliver precise predictions and recommendations, it is vital to establish stringent measures to safeguard patient privacy and uphold confidentiality. Healthcare organizations and policymakers must create explicit regulations and guidelines to manage the collection, storage, and use of patient information.

    Another challenge is algorithmic bias, which pertains to the risk of AI systems producing biased outcomes due to the inherent biases present in the training data. It is essential to ensure that AI algorithms are equitable, unbiased, and do not discriminate against particular patient groups. Clarity and understandability of AI algorithms are critical for grasping the decision-making process and for identifying and mitigating biases.

    To address these challenges and influence the future of AI in healthcare, ongoing research and collaboration among healthcare professionals, researchers, and technology experts are crucial. Prospective directions for AI in healthcare encompass advancements in natural language processing, robotics, and predictive analytics. These innovations have the potential to further enhance the capabilities of AI systems and improve patient care and outcomes.

    The Future of AI in Healthcare

    The future of AI in healthcare offers immense possibilities for transforming healthcare delivery. Progress in natural language processing will enable AI systems to comprehend and interpret unstructured medical data, such as physician notes and medical documentation, with heightened accuracy. This will allow healthcare providers to access valuable insights and knowledge more efficiently, resulting in improved healthcare delivery.

  • The publication of the chatbot ChatGPT

    So far, users can only communicate with the ChatGPT bot using the keyboard. But that could change. Real conversations or reading a bedtime stories should be possible in the future.

    Anyone who communicates with the chatbot GPT has so far had to rely on the keyboard. In the future, the program should also be able to react to voice input and uploaded photos. The developer company OpenAI is still keeping to itself when exactly this future scenario will become reality. The only thing that is certain is that after an update in the initially next few weeks, the new offer will only be available for the paid versions of the program. artificial intelligence chatbot

    Discuss photos with ChatGPT, artificial intelligence chatbot

    According to OpenAI, the new technology opens up numerous possibilities for creative applications and places a strong focus on accessibility. The company explained that users now have the opportunity to take photos during their trips, upload them to the platform and then discuss the specifics of the region.

    In addition, the AI ​​​​can respond to photos of the refrigerator contents by generating recipe suggestions, and the program’s voice function even allows bedtime story telling.

    Spotify wants to use ChatGPT for podcast translations

    These new features will initially be available to ChatGPT Plus and Enterprise users in the next few weeks and will then be made available to both Apple and Android smartphones. To make the conversations more realistic, OpenAI worked with professional voice actors.

    At the same time, the Swedish streaming service Spotify has announced that it will use OpenAI technology to translate podcasts into different languages. The voice and language style of the original version is retained. Translations of English-language podcasts into Spanish, French and German are Currently planned.

    AI could bring billions to the German economy, artificial intelligence chatbot

    According to a study presented yesterday in Berlin, systems with generative artificial intelligence (AI) functions could contribute around 330 billion euros to the value creation of the German economy in the future. This could be achieved if at least half of companies use appropriate technologies, according to a study by the research institute IW Consult on behalf of Google. IW Consult is a subsidiary of the German Economic Institute(IW) in Cologne.

    Generative AI is a variant of artificial intelligence that can be used to create (“generate”) new, original content. The publication of the chatbot ChatGPT by the start-up OpenAI in November 2022 is seen as a breakthrough for generative AI. For six months now, Google has been offering its own dialogue system for generative AI, Bard, which competes with ChatGPT. artificial intelligence c

    In just five days after its launch, Chat GPT has garnered over a million users, creating a significant impact in the tech and internet realms. This brain child of OpenAI, Chat GPT, is set to expand rapidly and make waves in the market.

    OpenAI’s latest creation, Chat GPT, is built upon GPT (Generative Pre-Training Transformer) and is designed to mimic human-like conversations through an AI-powered chatbot. Chat GPT functions as a knowledgeable digital assistant, providing detailed responses to user prompts. Although Chat GPT is expected to bring about a revolution in the global economy, it does have some constraints. In this post, we will delve into what Chat GPT is, how it works, its nuances, and everything you need to know about this ground breaking innovation.

    What is Chat GPT?

    To put it simply, Chat GPT is an AI-driven Natural Language Processing tool that allows users to interact with a chatbot and receive coherent responses to their queries. Its applications are wide-ranging, from generating emails and writing essays to coding and answering questions.

    Chat GPT possesses the capacity to engage in natural, interactive conversations and provide human-like responses. Its extensive language capabilities allow it to predictively string together words.

    The machine learning model employed by Chat GPT, known as RLHF (Reinforcement Learning with Human Feedback), trains the system to follow instructions and provide human-acceptable responses. Now that we understand what Chat GPT is, let’s explore its benefits, uses, and limitations to gain a comprehensive understanding of this popular technology.

    Who Created Chat GPT?

    Chat GPT is the brainchild of OpenAI, a private research laboratory dedicated to developing AI and conducting extensive research for the betterment of humanity. Headquartered in San Francisco, the company was founded through the collaboration of prominent figures such as Sam Altman, Elon Musk, Peter Thiel, Reid Hoffman, Ilya Sutskever, and Jessica Livingston.

    Why is Chat GPT Dangerous?

    The limitations of Chat GPT lie in its potential to convincingly generate incorrect or biased information, as well as its inability to discern between benign and harmful prompts. This makes Chat GPT hazardous and susceptible to being exploited for malicious activities, posing security risks in the digital space.

    How is Chat GPT Different from a Search Engine?

    Chat GPT distinguishes itself from a search engine in its interactive nature and the detailed responses it provides to user prompts based on training data. In contrast, search engines index web pages on the internet to aid users in finding specific information.

    Chat GPT functions as an AI capable of generating detailed essays, while search engines primarily direct users to the source webpage. Additionally, as of 2021, Chat GPT’s training is limited to text data, making it a less comprehensive resource compared to conventional search engines with access to the latest data.

    How Does Chat GPT Differ from Microsoft Bing?

    There are disparities between Microsoft Bing and Chat GPT. The basic notable version of Chat GPT is less powerful than Bing Chat, which makes use of the advanced GPT-4 large language model. Microsoft Bing also has access to the latest information, whereas Chat GPT’s data is limited to that before 2021. Unlike Chat GPT, Bing Chat includes footnotes linking back to the websites from which it sourced its information.

    Is Chat GPT Capable of Passing Standard Examinations?

    Indeed, Chat GPT is capable of successfully negotiated several standard examinations. To demonstrate this, a professor at the University of Pennsylvania’s Wharton School used Chat GPT in an MBA exam and found its responses to be quite impressive, earning grades ranging from B to B- . The professor particularly appreciated the detailed explanations and responses, especially in sections on basic operations and process analysis.

    How is Chat GPT Used By People?

    Chat GPT is widely popular for its versatility and is utilized for various purposes, adaptable to integration with third-party applications. Its applications range from providing simple solutions to coding.

    Some notable applications of Chat GPT include:

    • Composing detailed essays
    • Creating applications
    • Writing code
    • Generating content
    • Drafting letters, resumes, and cover letters
    • Composing email messages

    Is there a way to identify content generated by ChatGPT?

    The need for tools to identify ChatGPT text is increasing due to concerns about students using it for cheating. OpenAI has developed a tool to address this issue, but it has limitations and can only identify about 26 percent of the content, making it relatively weak. However, it’s still possible to detect ChatGPT content.

    While there isn’t a specific tool known to identify content generated by ChatGPT, humans can easily distinguish between ChatGPT-generated content and human-written content. ChatGPT-generated content often lacks a human touch, is verbose, robotic, and may not fully understand humor or sarcasm.

    Can ChatGPT be used with Whatsapp?

    ChatGPT can be integrated into Whatsapp accounts as it supports third-party integration. This integration aims to improve performance, allowing the Chatbot to respond to WhatsApp messages. The integration process is simple and can be done using GitHub.

    To integrate ChatGPT with WhatsApp, you can follow these steps: Download the zip file, open the terminal, select the “WhatsApp-gpt-main” file, run the “server.py” program from the terminal, enter ‘Is,’ click to proceed, and enter “python server.py.” Your contact number will be set up automatically on the OpenAI chat page. Once completed, you can find ChatGPT on your WhatsApp account and test its features.

    How can you monetize ChatGPT?

    One can generate income by utilizing ChatGPT in their business. One lucrative option is email affiliate marketing, which leverages ChatGPT’s excellent writing abilities to create persuasive and compelling emails with call-to-action links for products or services.

    To do this, individuals can participate in affiliate programs such as ConverrKit, Amazon, or Shopify to kickstart an email affiliate marketing campaign targeting potential clients. They can use lead magnets or other techniques to encourage people to sign up for their email list.

    How is ChatGPT different from Google?

    While ChatGPT and Google offer similar services, they are fundamentally different from each other. ChatGPT is an AI-powered chatbot proficient in natural language processing and provides detailed responses to user prompts, resembling human conversation. Offline, Google is a search engine that retrieves web pages with relevant information in response to user queries.

    How does ChatGPT generate code?

    While ChatGPT isn’t primarily designed for coding, it can effectively be used for this purpose. ChatGPT can analyze and comprehend code fragments and create new code based on user input using machine learning techniques. The process involves providing a prompt or description of the code users want to generate, which ChatGPT will subsequently review and use to generate the corresponding code.

    What are the benefits of using ChatGPT for coding?

    Utilizing ChatGPT for coding offers several advantages, including faster coding, enhanced accuracy, and optimized productivity. ChatGPT can quickly generate code solutions, analyze large amounts of code, and provide precise suggestions, allowing coders to focus on higher-level tasks.

    What are the steps to code using ChatGPT?

    Coding with ChatGPT is straightforward and involves the following steps: Choose a programming language, provide a prompt specifying the desired functionality of the code snippet, and receive the produced code fragment, which you can then copy and paste into your project. Some compatible programming languages for coding with ChatGPT include JavaScript, Python, and Java.

    Supply a Prompt: ChatGPT responds to your prompt by generating a code snippet. Provide a prompt that describes the functionality you want in the code snippet.

    For example, you can give a prompt like: “Write a function that arranges an array of numbers from smallest to largest.”

    Create Some Code: After receiving the prompt, ChatGPT will create a code fragment based on the description. You can then copy and paste the resulting code displayed on your ChatGPT chat screen into your project.

    Will ChatGPT Replace Programmers?

    No, ChatGPT will not entirely take over the roles and responsibilities of programmers. While ChatGPT may automate tasks, it will not replace the human intellect and critical thinking necessary for programming work. ChatGPT can automate some programming aspects like generating code, solving issues, and handling documentation. It can also learn from vast amounts of data and coding to produce new code similar to existing examples. However, the creative and complex thinking required for developing intricate software programs cannot be replaced by ChatGPT, despite its ability to automate certain programming tasks .

    Can ChatGPT Replace Tech Jobs?

    ChatGPT aims to automate tasks rather than replace the workforce. Not all tech jobs are at risk of being replaced by ChatGPT. This AI tool is designed to streamline some time-consuming and repetitive operations, allowing tech professionals to focus on more complex projects. , ChatGPT can enhance productivity by generating code snippets, test cases, and automating documentation. It’s important to note that while some job responsibilities may change due to automation, they may not necessarily be eliminated.

    Will ChatGPT Kill Google?

    ChatGPT may bring revolutionary changes to how the internet is used, but it will not eliminate Google. While both ChatGPT and Google may offer similar services, they operate differently and serve different purposes. Google is a search engine that crawls billions of web pages, indexes terms and phrases, and provides information to users. On the other hand, ChatGPT is a natural language processing model trained to function like a chatbot. However, it is limited in its information as it’s trained on data up to 2021, lacking current events data Google, in contrast, captures the latest events and provides up-to-date information to users.

    Discovering the Benefits of ChatGPT

    The benefits of ChatGPT are expected to have a significant impact on various industries, including business and technology. It is particularly useful for a range of NLP-related activities. ChatGPT has the ability to understand and provide human-like responses to a wide variety of queries and prompts due to its training on substantial amounts of data.

    Let’s Examine Some of the Potential Benefits of ChatGPT:

    Improved Efficiency: One of the main advantages of ChatGPT is its automation capabilities, which can free up human workers from time-consuming and repetitive tasks, allowing them to focus on more crucial and valuable endeavors. For example, businesses can use ChatGPT to address customer inquiries and provide better customer service.

    Cost Savings: ChatGPT’s automation feature allows businesses to reduce labor costs while increasing accuracy and reducing errors, particularly beneficial for enterprises in competitive markets.

    Enhanced Customer Experience: Businesses can create more personalized and human-like interactions with customers, leading to higher levels of customer satisfaction and loyalty.

    Improved Decision-Making: ChatGPT enables businesses to access, process, and analyze large volumes of data in real-time, leading to more informed decision-making and effective use of data.

    Market Differentiation: Leveraging ChatGPT’s intelligent automation technology can give businesses an edge over competitors by enhancing decision-making, improving customer service, and streamlining repetitive operations.

    Describing the Constraints of ChatGPT

    Even though ChatGPT is known for its groundbreaking qualities, it has specific limitations.

    Response Inaccuracy:

    ChatGPT requires extensive language training to provide accurate and error-free responses. However, due to its newness and potential lack of thorough training, this AI chatbot may sometimes provide inaccurate information.

    Data Training Restrictions and Bias Challenges:

    Similar to other AI models, one of ChatGPT’s challenging is its limitations reliance on training data. Combined with data bias, this factor can negatively impact the model’s output. ChatGPT may demonstrate biased responses when trained on data from underrepresented groups. The best solution is to increase the model’s data transparency to reduce bias in this technology.

    Sustainability:

    A major concern with ChatGPT is its long-term viability, particularly because it is open and free to use.

    Output Quality Depends on Input:

    One of ChatGPT’s significant limitations is its reliance on input quality to generate output. The quality of responses is based on the quality of user queries. Expert queries lead to superior responses, while ordinary queries result in ordinary responses.

    Highlighting the Significance of ChatGPT in 2023 and Beyond

    Intelligent automation and ChatGPT are powerful technologies that can revolutionize business operations. Companies that adopt and integrate these technologies will experience rapid transformation and maintain competitiveness, meeting market expectations satisfactorily. The importance of ChatGPT and its correct implementation will transform various sectors. AI’s automation feature will bring about transformation in fields incorporating technology and AI into their operations.

    ChatGPT’s significance will be felt in nearly every industry, including the following:

    • Banking and Finance
    • Healthcare
    • Manufacturing
    • E-commerce and Retail
    • Telecommunications
    • Transport and logistics
    • EducationTourism and hospitality
    • Real estate
    • Entertainment
    • Marketing and advertising

    What Lies Ahead for ChatGPT?

    ChatGPT has experienced tremendous growth and is poised to have a significant impact on various fields, from education to the job market, to businesses, and our daily lives. With its primary objective of automating repetitive tasks, providing real-time data analysis, and more , the future of ChatGPT is set to bring about transformation in how resources and time are utilized.

    The future of ChatGPT can mostly be seen in its ultimate goal. From answering everyday questions to coding to providing high-quality responses, the future of the AI ​​world appears to be here already. ChatGPT is undoubtedly a disruptive innovation, comparable to Google, enabling more sophisticated and impressive tasks such as writing answers, essays, emails, or letters.

    Thus, a significant change we can expect in the future of ChatGPT is a shift in user behavior, as they increasingly turn to ChatGPT rather than Google or other search engines. The future of ChatGPT is anticipated to involve ongoing research and growth, as well as deeper integration into numerous platforms and applications. The key enhancements in ChatGPT’s future will focus on improving its language generation and making it more accessible and user-friendly for various applications.

    Applications of Chat GPT

    The applications of ChatGPT will extend beyond writing and coding, benefiting a wide range of industries. Despite its risks and challenges, the application of ChatGPT is a significant advancement in the field of Artificial Intelligence. Here are a few sectors that can experience substantial progress with the intelligent applications of ChatGPT.

    Applications of ChatGPT in Financial Technology and Banking

    The advanced features of ChatGPT offer substantial potential for the finance and banking industry to streamline their operations. Financial technology and banking can effectively enhance their processes through the use of ChatGPT.

    In addition, banking and financial institutions can decrease expenses and offer automated, more personalized services to their clients. AI’s ability to process and integrate large volumes of data allows banks to generate more information and offer personalized financial guidance and support to customers, improving the services they provide. For example, this includes advice on portfolio management, investment, life insurance underwriting, risk management, and compliance.

    Applications of ChatGPT in Manufacturing

    The use of ChatGPT is set to revolutionize the manufacturing industry in various ways. ChatGPT’s implementation can help optimize plans, reduce risks, schedule predictive maintenance, and enhance communication, making them more efficient and faster. One of the most significant uses of ChatGPT in manufacturing is its ability to ensure quality control by identifying inconsistencies in available information. The intelligent application of ChatGPT can help manufacturers make better decisions, improve product quality, reduce costs, and enhance customer satisfaction.

    Applications of ChatGPT in Education

    ChatGPT could be a game-changer in transforming traditional educational methods and learning approaches. With the introduction of ChatGPT, there is a need to reconsider traditional methods and restructure education in the era of revolutionary AI tools and technologies.

    ChatGPT can greatly benefit students by guiding them in conducting in-depth research on specific topics, directing them to quick solutions. Additionally, ChatGPT can automate the research process by helping students select research topics, find information for assignments, identify relevant study materials, and perform other tasks. The use of ChatGPT simplifies the learning process, makes study resources accessible, and provides a personalized learning experience.

    Applications of ChatGPT in Cybersecurity

    ChatGPT has garnered significant interest across various industries, particularly in the cybersecurity sector, where its applications have proven highly effective for various security tasks such as cybersecurity awareness training, threat detection, data analysis, and incident response. It is particularly valuable for penetration testers and ethical hackers, enabling them to detect vulnerabilities, optimize time, automate workflows, and provide suggestions for the organization’s future security protocols.

    This AI tool is also helpful in generating reports. All you need to do is formulate your query in a specific manner, think creatively, and produce something unique and creative, and within seconds, you will have your solution. This enhances efficiency and reduces time spent on tasks.

    Applications of ChatGPT in Healthcare and Medicine

    While Artificial Intelligence has significantly advanced the healthcare sector in recent years, the potential of ChatGPT could further enhance healthcare operations. ChatGPT’s capabilities make it an ideal tool for various healthcare applications, from automated services to generating human-like responses to a wide range of queries .

    The use of ChatGPT in delivering personalized treatment programs and remotely monitoring patients would be particularly valuable. Major applications and interventions of ChatGPT in healthcare and medicine include virtual assistance in telemedicine, providing support for patients’ treatment processes, including appointment scheduling, treatment follow-up , and health information management.

    The growth of telemedicine has expanded access to treatment and medications from the comfort of one’s home. ChatGPT can facilitate remote patient health management in this area.

    Clinical Decision Support: ChatGPT can offer healthcare providers immediate, evidence-based recommendations for improved patient outcomes, including suggesting appropriate treatment options for specific conditions, alerting about potential drug interactions, and providing clinical recommendations for complex medical cases.

    ChatGPT can aid physicians by offering reliable support, saving time, reducing errors, and enhancing patient care. Medical Recordkeeping: The feature of ChatGPT automating patient interaction summaries and medical history can accelerate the medical record-keeping process.

    Healthcare professionals can easily use ChatGPT to share their notes, and the app can summarize essential details such as diagnoses, symptoms, and treatments. Another important application of ChatGPT in this context is its ability to intelligently retrieve important information from patient records for healthcare professionals.

    Medical Translation: One of the key uses of ChatGPT in the field of medicine is its ability to provide real-time translation, facilitating better communication between healthcare providers and patients. Some medical terms or jargon can be challenging for ordinary individuals to understand, but not for medical professionals.

    Due to its powerful language processing capabilities, ChatGPT simplifies this task for patients, enabling them to have a comprehensive understanding of their health issues and helping them access the best treatment and medications. We have thoroughly covered the core aspects of what ChatGPT is and how it has become an integral component of the modern AI era.

    Frequently Asked Questions:

    What is ChatGPT?

    ChatGPT is the latest AI-powered language model developed by OpenAI. It is a generative AI tool designed to follow prompts and produce detailed responses. It functions as a chatbot with advanced features, capable of engaging in human-like conversations. The model is trained using a large amount of data and fine-tuned through supervised and reinforcement learning.

    What are the Benefits of ChatGPT?

    ChatGPT offers several benefits, including:

    Improved Efficiency: ChatGPT enhances the accuracy and efficiency of Natural Language Processing-based tasks.

    Swift and Accurate Responses: ChatGPT quickly provides precise answers to various queries.

    Understanding Natural Language Complexity: ChatGPT assists in tasks that require understanding natural language and generating insights.

    Cost-Effective: ChatGPT is accessible to anyone without significant expenses.

    Enhanced Customer Satisfaction: Its human-like conversational capabilities boost customer engagement and provide optimized solutions for businesses.

    What are the main limitations of ChatGPT?

    Plausible yet Inaccurate Responses: ChatGPT may produce responses that sound accurate but are actually incorrect.

    Sensitivity to Changes: ChatGPT is sensitive to slight variations in input prompts and may respond to prompts that it initially claimed not to know.

    Repetitive Language Use and Lengthy Responses: Due to its training data, ChatGPT may become verbose and excessively use certain phrases.

    Security Risks: ChatGPT may respond to harmful prompts and exhibit biased behavior.

    Lack of Human Touch: Its responses may lack emotional depth.

    Missing Source Information: ChatGPT aggregates insights from massive text data but does not explicitly provide sources.

    Guesswork: At times, the model may make an educated guess about the user’s intention when faced with ambiguous queries.

    Limited Data: The ChatGPT model is trained on text data up to 2021, lacking information on more recent events.

    Is ChatGPT Free?

    Yes, ChatGPT is free to use and can be accessed by anyone interested. OpenAI also offers a paid version with a monthly subscription fee of US$20, providing quicker response generation and general access during peak times

    What are the Uses of ChatGPT?

    ChatGPT has various applications due to its ability to automate tasks and enhance efficiency:Generate ideas and brainstormReceive personalized suggestionsUnderstand complex topicsAid in writingSummarize recent researchGet coding and debugging supportConvert textExecute programming tasks such as codingUse as a virtual assistantSolve complex arithmetic problemsIntegrate with chatbots for improved customer service

    What is the Importance of ChatGPT?

    ChatGPT’s capability to comprehend natural language and respond in a conversational manner similar to humans makes it an essential tool for businesses to incorporate in their customer engagement strategies through chatbots and other virtual assistants. As an AI tool, ChatGPT has the potential to revolutionize human-technology interaction, making it an important tool in a technology-driven world. Some compelling factors highlighting the importance of ChatGPT include:

    Personalization: Both individuals and businesses can customize ChatGPT to meet specific needs in order to enhance efficiency and automate tasks.

    Efficiency: ChatGPT can significantly reduce manual workloads and handle large volumes of queries rapidly thereby, enhancing productivity and efficiency.

    Scalability: ChatGPT does not require substantial additional resources to cater to the needs of growing businesses or organizations.

    Accessibility: ChatGPT is not constrained by location and can be accessed from anywhere, providing users with hassle-free instant support.

    Innovation: ChatGPT serves as a significant example of how AI and technology can evolve over time and bring about transformative changes in the world.

    What does the term “In Capacity” mean while using ChatGPT?

    The term “In Capacity” simply indicates that the application or website is experiencing traffic from users. When a large number of users access the server, it becomes unable to process their requests instantly, leading to the website displaying “In Capacity” and advising users to return at another time.

    What are the advantages of ChatGPT over other chatbots?

    ChatGPT offers several advantages:Replicates human conversationDeveloped based on an advanced language modelAdvanced GPT modelWide range of applications and benefitsCompatible with plugins for extensionCapable of fine-tuning

    What is the Future of ChatGPT?

    The future of ChatGPT appears promising, with enhancements in its language generation capabilities. OpenAI, the developer of ChatGPT, is positioned to create more advanced versions of the GPT model with improved potential and performance. ChatGPT can continue to be integrated into various virtual assistants and chatbots by businesses and organizations, solidifying its role as a critical tool in the future.

    OpenAI valuation recently exploded to $157 billion

    OpenAI, the creator of artificial intelligence, is potentially facing a significant and challenging reckoning regarding its nonprofit roots, even though its valuation has recently surged to $157 billion.

    Tax experts specializing in nonprofit organizations have been closely monitoring OpenAI, the developer of ChatGPT, since last November when the board removed and then reinstated CEO Sam Altman.

    Some believe that the company may have now reached—or surpassed—the limits of its corporate structure, which is organized as a nonprofit designed to advance artificial intelligence for the benefit of “all of humanity,” although it has for-profit subsidiaries under its management.

    Jill Horwitz, a professor at UCLA School of Law who focuses on law and medicine and has researched OpenAI, stated that when there are conflicting interests in a collaborative endeavor between a nonprofit and a for-profit entity, the charitable mission must always take precedence.

    “It is the duty of the board first, and then the regulators and the judicial system, to ensure that the commitment made to the public to pursue the charitable interest is honored,” she commented.

    Altman recently acknowledged that OpenAI is contemplating a corporate restructuring, but he did not provide any detailed information.

    However, a source informed The Associated Press that the organization is exploring the option of transforming OpenAI into a public benefit corporation.

    No definitive choice has been reached by the board, and the timeline for this transition remains undetermined, according to the source.

    If the nonprofit were to lose authority over its subsidiaries, some experts believe that OpenAI might be required to compensate for the interests and assets that previously belonged to the nonprofit.

    Thus far, most analysts concur that OpenAI has strategically managed its relationships between its nonprofit and various other corporate entities to prevent that from occurring.

    Nevertheless, they also view OpenAI as vulnerable to examination from regulatory bodies, including the Internal Revenue Service and state attorneys general in Delaware, where it is incorporated, and California, where it conducts operations.

    Bret Taylor, chair of the board of the OpenAI nonprofit, stated in a press release that the board is committed to fulfilling its fiduciary responsibilities.

    “Any potential restructuring would guarantee that the nonprofit continues to exist and prosper while receiving full value for its current interest in the OpenAI for-profit, along with an improved capacity to achieve its mission,” he mentioned.

    Here are the primary inquiries from nonprofit specialists:

    How could OpenAI transition from a nonprofit model to a for-profit one?

    Nonprofit organizations that are tax-exempt may sometimes opt to alter their status.

    This process requires what the IRS terms a conversion.

    Tax regulations stipulate that money or assets contributed to a tax-exempt entity must remain within the realm of charity.

    If the original organization becomes a for-profit entity, a conversion typically necessitates that the for-profit pays fair market value for the assets to another charitable organization.

    Even if the nonprofit OpenAI continues to operate in some form, some experts assert that it would need to be compensated fair market value for any assets transferred to its for-profit subsidiaries.

    In OpenAI’s case, several questions arise: What assets are owned by the nonprofit? What is the valuation of those assets?

    Do those assets include intellectual property, patents, commercial products, and licenses? Furthermore, what is the value of relinquishing control over the for-profit subsidiaries?

    If OpenAI were to reduce the control its nonprofit has over its other business entities, a regulator might require clarification on those matters.

    Any alteration to OpenAI’s structure will necessitate compliance with the laws governing tax-exempt organizations.

    Andrew Steinberg, a counsel at Venable LLP and a member of the American Bar Association’s nonprofit organizations committee, remarked that it would be an “extraordinary” measure to modify the structure of corporate subsidiaries of a tax-exempt nonprofit.

    “It would involve a complex and detailed process with numerous legal and regulatory factors to consider,” he added. “However, it is not impossible.”

    Is OpenAI fulfilling its charitable objective?

    To obtain tax-exempt status, OpenAI had to submit an application to the IRS outlining its charitable purpose.

    OpenAI shared with The Associated Press a copy of that September 2016 application, which illustrates how drastically the group’s plans for its technology and framework have altered.

    OpenAI spokesperson Liz Bourgeois stated in an email that the organization’s missions and objectives have remained steady, even though the methods of achieving that mission have evolved alongside technological advancements.

    When OpenAI incorporated as a nonprofit in Delaware, it specified that its purpose was “to provide funding for research, development, and distribution of technology related to artificial intelligence.”

    In its tax filings, it also described its mission as creating “general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return.”

    Steinberg indicated that the organization can change its plans as long as it accurately reports that information on its annual tax filings, which it has done.

    Some observers, including Elon Musk, a former board member and early supporter of OpenAI who has also filed a lawsuit against the organization, express doubts about its commitment to its original mission.

    Geoffrey Hinton, known as the “godfather of AI” and a co-recipient of the Nobel Prize in physics on Tuesday, has voiced concerns regarding the transformation of OpenAI, proudly mentioning that one of his past students, Ilya Sutskever, who co-founded the organization, played a role in Altman’s removal as CEO before his reinstatement.

    “OpenAI was established with a strong focus on safety. Its main goal was to create artificial general intelligence while ensuring its safety,” Hinton noted, adding that “over time, it became clear that Sam Altman prioritized profits over safety, which I find regrettable.”

    Sutskever, who previously led OpenAI’s AI safety team, departed from the organization in May and has launched his own AI venture. OpenAI, on its side, takes pride in its safety accomplishments.

    Will OpenAI’s board members manage to prevent conflicts of interest?

    This question ultimately pertains to the board of OpenAI’s nonprofit and to what degree it is working to advance the organization’s charitable goals.

    Steinberg indicated that regulators assessing a nonprofit board’s decision will mainly focus on how the board reached that decision rather than whether the conclusion itself was optimal.

    He explained that regulators “typically honor the business judgment of board members as long as the transactions don’t involve conflicts of interest for any of them and they do not have a financial stake in the transaction.”

    The possibility of any board members benefiting financially from alterations to OpenAI’s structure could also draw the attention of nonprofit regulators.

    Regarding inquiries about whether Altman might receive equity in the for-profit subsidiary during any potential restructuring, OpenAI board chair Taylor stated, “The board has discussed whether offering Sam equity could be beneficial to the company and our mission, but specific figures have not been addressed, and no decisions have been made.”

    AI search tool mimics some features of a traditional search engine but with a more conversational approach

    OpenAI has incorporated a search engine into its chatbot ChatGPT, enabling users to access current information regarding news, sports, and weather.

    This move marks the AI company’s first direct challenge to Google’s dominance in search, which was first announced in May.

    The new feature will initially be available to paying subscribers, yet OpenAI noted that it will also be accessible to free ChatGPT users in the future.

    The initial iteration of ChatGPT, launched in 2022, was trained on vast amounts of online text but was unable to answer questions about recent events outside its training data.

    In May, Google revamped its search engine, frequently featuring AI-generated summaries at the top of search results. These summaries aim to rapidly respond to user queries, potentially reducing the need for users to visit additional websites for further information.

    Google’s redesign followed a year of testing with a limited user group, but it still generated inaccurate results, highlighting the risks of relying on AI chatbots that can produce errors, often referred to as hallucinations.

    As part of OpenAI’s strategy to deliver current information, the company has collaborated with several news and data organizations, which will see their content included in results, complete with links to original sources, thereby mimicking the experience of a traditional search engine.

    OpenAI has partnered with various news organizations and publishers, such as the Associated Press, Conde Nast, the Financial Times, Hearst, Le Monde, News Corp, and Reuters. The organization anticipates adding more partners in the future.

    “The search model is a refined version of GPT-4o, enhanced using innovative synthetic data generation methods, including distilling outputs from OpenAI o1-preview,” the company mentioned in a blog post announcing the new search feature.

    “ChatGPT search utilizes third-party search providers along with content supplied directly by our partners to deliver the information users seek.”

    OpenAI’s advanced voice feature is now accessible in Europe. Here’s what it allows you to do.

    The creator of ChatGPT faced controversy after one of its voice options was similar to that of actress Scarlett Johansson in the 2013 film “Her.”

    On Tuesday, OpenAI announced that its Advanced Voice function is available in Europe, following a launch delay that may have been linked to regulatory requirements in the region.

    The Advanced Voice Mode was introduced in May and offers users the ability to communicate with the large language model (LLM) using their voice, meaning you can speak to ChatGPT via your mobile device, laptop, or PC microphone.

    Although the voice mode was launched in the United Kingdom earlier this month, it only reached the European continent now, possibly due to concerns surrounding Europe’s General Data Protection Regulation (GDPR), which mandates that certain products undergo review by the EU data commissioner prior to launch.

    “Europe is an important market for us, and we are dedicated to collaborating with European institutions to provide our products here,” an OpenAI spokesperson stated to Euronews Next earlier this month.

    OpenAI confirmed the tool’s availability in Europe in response to a query on the social media platform X, which inquired about its European rollout.

    “Indeed, all Plus users in the EU, Switzerland, Iceland, Norway, and Liechtenstein now have access to Advanced Voice,” OpenAI remarked in a post.

    The Advanced Voice feature was made accessible to OpenAI Plus subscribers last night but is still unavailable for users with free accounts.

    Advanced Voice gained attention when it was revealed that a voice named Sky closely resembled that of actress Scarlett Johansson in the film “Her.”

    Johansson’s legal team sent OpenAI letters asserting that the company lacked the authorization to use the voice. Consequently, OpenAI has temporarily halted the use of the Sky voice.

    Users have the option to request the AI to modify its accent, for instance, asking for a southern accent if they dislike the current sound.

    It is also interactive, enabling users to instruct it to speed up or slow down, and it will respond if interrupted.

    ChatGPT’s Advanced Voice Mode launched in the UK this week but has not yet been introduced in the European Union. While there have been rumors of a “ban,” it’s believed that OpenAI may have delayed the feature due to concerns that its emotion-detection capabilities might contravene the EU’s AI act, which is the first significant legislation of its kind regarding AI.

    The Advanced Voice Mode (which facilitates “live” conversations where the chatbot behaves more like a human) can interpret non-verbal signals like speech pace to provide an emotional response. The EU’s AI Act bans “the use of AI systems to infer the emotions of a natural person.”

    However, how likely is it that such regulations will inhibit innovation? And what type of regulation is considered “right” for businesses to engage with AI? The Stack consulted experts to explore these questions.

    It remains uncertain whether Advanced Voice Mode would indeed be banned under these regulations, suggesting that OpenAI might be exercising caution, according to Curtis Wilson, a staff data scientist at app security firm Synopsys Software Integrity Group.

    Wilson explains that similar “careful” responses were observable in the years following the implementation of the General Data Protection Regulation (GDPR).

    Wilson states: “It’s ambiguous if the EU AI Act actually prohibits Advanced Voice Mode at all. The aspect most frequently referenced is Article 5, especially paragraph 1f, which forbids systems from inferring emotions. However, this paragraph specifies ‘in the areas of workplace and educational institutions,’ and the associated recital clarifies that the concern is about poorly calibrated systems causing discrimination against minority groups when the model misreads their emotions.”

    Companies will likely avoid being the “guinea pig” and risk breaching such regulations, potentially opening up opportunities for businesses focused on compliance as more such regulations arise globally, according to Wilson.

    “One major directional shift I foresee with the influx of global regulations in the coming years is the emergence of a robust AI regulatory compliance sector to assist companies in navigating a complex global AI oversight environment.”

    Wilson feels that the core issue has been the ambiguity, which holds significant lessons for future regulations.

    He mentions: “Clarity is forthcoming; Article 96 mandates that the Commission provide guidelines for practical enforcement by August 2026—18 months after the rules on prohibited systems actually take effect. These guidelines should have been established beforehand.

    “Developers need to be informed about what is and isn’t covered by the regulation—ideally without needing to hire external companies or legal firms. This is why I hope to see more clear, concise, and accurate guidelines (that are updated over time to keep pace with evolving technologies) in the future.”

    Compliance in the era of Generative AI

    This case exemplifies one of the principal challenges that global companies will confront in the age of AI, according to Luke Dash, CEO of compliance firm ISMS.online.

    As more regulations concerning AI are implemented, businesses will encounter difficulties if these regulations lack uniformity across various regions.

    Dash states: “Divergent regulations among different areas will obstruct AI deployment and complicate compliance for organizations operating outside these locations. This fragmentation will compel companies to formulate region-specific strategies, which could potentially hinder global advancements while also increasing the risk of non-compliance and inconsistent execution.

    “Upcoming regulations should aim to harmonize international standards to establish a more cohesive landscape.”

    While regulations are frequently perceived as obstacles to growth, Dr. Kimberley Hardcastle, Assistant Professor at Northumbria University, argues that in the context of AI, regulation will be vital for encouraging acceptance of the technology.

    Consequently, regulation will play a key role in embedding AI within enterprises and society as a whole, she asserts.

    “Research findings, including those from the European Commission, show that effectively structured regulations not only address risks linked to bias and discrimination in AI but also promote economic growth by establishing a level playing field for innovation,” Dr. Hardcastle explains. “Thus, a solid regulatory framework is not simply an impediment, but rather a catalyst that can encourage sustainable and fair AI adoption.”

    Dr. Hardcastle contends that due to its rapid evolution, AI may necessitate a new form of regulation capable of adapting to emerging challenges with “real-time adjustments.”

    Regulators also need to take lessons learned from the era of social media into account, she emphasizes.

    She remarks, “The advancement of generative AI mirrors the initial growth of the social media sector, where swift innovation frequently outstripped regulatory responses, resulting in considerable societal impacts.

    “Similarly, the current generative AI landscape showcases a competitive atmosphere among firms striving to achieve artificial general intelligence, often at the cost of responsible development and ethical standards. This trend raises pressing concerns regarding potential harms, such as biases in AI outputs and misuse of technology.

    “To avoid repeating past mistakes, it is essential to draw lessons from the social media experience, and stakeholders must establish proactive regulatory frameworks that emphasize safety and ethics, so that the quest for technological progress does not jeopardize societal well-being.”

  • SpaceX in particular has now overtaken the Europeans with significantly cheaper technologies

    In 2024, NASA is planning an important manned test flight to the Moon. ESA wants to become more independent again with its new Ariane 6 rocket. An overview of the space travel highlights in 2024.

    The new heavy-lift rocket Ariane 6 is finally scheduled to launch in summer 2024 – four years later than planned. The pressure is high: the European Space Agency (ESA) currently has no longer its own heavy-lift rockets. The last rocket of the Ariane 5 generation took off in July 2023. SpaceX in particular has now overtaken the Europeans with significantly cheaper technologies.

    With the new Ariane 6, Europe wants secure direct access to space in the future. The complete upper stage of the new rocket type was built in Bremen. The main engine and the upper stage engine were tested by the German Aerospace Center in Lampoldshausen, Baden -Württemberg. Due to the coronavirus pandemic and technical difficulties, the maiden flight into space had to be postponed again and again. It is currently scheduled for summer 2024.

    Mars moon mission to send rocks to Earth

    The Japanese space agency JAXA will send a probe to the Martian moon Phobos in September 2024. After three years of flight, the probe will drop a rover onto Phobos from a height of about 100 meters. The low gravity of the small Martian moon makes it a particular challenge. The rover must move very slowly and carefully to avoid suddenly taking off into space.

    The rock samples collected by the rover will later be flown back to Earth. If this succeeds, it would be a first and also a test with a view to neighboring Mars. The robots have already analyzed Martian rocks on site there, but samples of them have not yet been sent back to Earth. However, initial plans for such a return mission are already in place: ESA and NASA are planning a Mars Sample Return Mission by 2033. According to current plans, China would like to launch such a return mission Even earlier.

    Hera mission to investigate asteroid defense

    How can the Earth be protected from an asteroid in an emergency? ESA’s new HERA mission is designed to investigate this question in more detail and will therefore fly to the asteroid Didymos and its moon Dimorphos in 2024. In 2022, NASA crashed a probe into the small satellite Dimorphos. The impact changed Dimorphos’ orbit around Didymos.

    The HERA probe will now examine the crater created by the impact in more detail. The results can help prevent an asteroid impact on Earth in the event of an emergency in the future and develop further missions to test defense techniques. The launch of the space probe is planned for October 2024.

    Important test for a later moon landing

    In a few years, people will land on the Moon again. Preparations for NASA’s Artemis III mission are becoming more and more concrete. In November 2024, an important test flight will take place – the Artemis II mission. Three astronauts, one male and one female, will fly to the Moon in the Orion spacecraft. A landing is not planned; the crew will orbit the Moon.

    Depending on how the mission goes, the crew will return to Earth after eight to 21 days. If everything goes well, people could land on the Moon again in the coming years, more than 50 years after the first manned Moon landing. The Apollo crew flew to the Moon’s equator, where landing is comparatively easy. At the south pole, however, the landscape is rugged. So far, NASA has selected 13 possible landing regions around the Moon’s south pole.

    The first manned test in November 2024 is considered an important milestone for the return to the Moon. Five manned test flights were completed before the first Apollo landing on the Moon. In the series of Artemis missions, only one manned test flight is currently planned before the first landing attempt, but more could follow.

    NASA mission searches for ice on the moon

    The VIPER rover will also be sent to the moon at the end of 2024. It is to land near the south pole of the Earth’s satellite and examine the Nobile crater. This lunar crater was probably formed by a collision with another celestial body. In this very icy region with temperatures below minus 200 degrees Celsius, NASA is looking for frozen water, not only on the lunar surface, but also in the underlying soil layers. The data collected will be used to create a map of the available resources on the moon. According to NASA, the findings could help in the future to use the water reserves on the moon for longer manned missions and the construction of a lunar base.

    The Chinese mission Chang’e 6 also wants to go to the Moon. The lander is to take two kilograms of soil samples from the Moon and bring them back to Earth after two months. Experts want to further unravel the history of the Moon’s formation by analyzing the rock – and they hope to find new answers to the question of how the entire solar system came into being.

    Is there life on Jupiter’s moon Europa?

    The Europa Clipper space probe will fly to Jupiter’s icy moon Europa at the end of 2024. The NASA probe is intended to examine Europa’s thick ice crust and, above all, to find oceans hidden beneath it, in which life is theoretically possible. The high radiation dose near Jupiter makes the probe a challenge. Its sensitive instruments are therefore specially shielded. The probe will fly close to Jupiter’s moon Europa 45 times in order to scan almost the entire moon. After that, the fuel will be used up or the radiation dose will have already destroyed the probe.

    A launch towards Venus is also planned for 2024. Shukrayaan-1 from the Indian Space Agency is intended to closely analyze the atmosphere of Venus and the volcanic activities on its surface.

    In addition to the fixed plans, surprises are still possible when looking into space in 2024. Not least thanks to the James Webb Space Telescope, which is constantly providing new data and producing new research results.

    “Hubble” observations

    Black hole ejected from galaxy

    According to researchers from Baltimore, a black hole with a mass of more than a billion suns has shot out of its home galaxy. It is now racing into the vastness of space at millions of kilometers per hour.

    A gigantic gravitational wave tsunami has hurled a black hole out of the heart of its home galaxy. That is at least what astronomers have concluded from observations with the Hubble Space Telescope. The black hole, with a mass of more than a billion suns, is shooting out of its home galaxy at a speed of 7.6 million kilometers per hour.

    Scientists led by Marco Chiaberge from the Space Telescope Science Institute in Baltimore used the space telescope to study a galaxy about eight billion light years away.

    A light year is the distance that light travels in one year and corresponds to almost ten trillion kilometers.

    Brighter than an entire galaxy

    The galaxy belongs to a galaxy cluster – and the astronomers were looking for signs of merging star systems. To their surprise, they observed a so-called quasar in the outskirts of the galaxy they were studying. Quasars are usually the cores of active galaxies. They are powered by a central black hole.

    The black holes themselves cannot be observed, but the quasars usually shine brighter than an entire galaxy. “Black holes live in the centers of galaxies, so it is unusual to observe a quasar in the center,” explains Chiaberge. The researchers calculated that this quasar with its supermassive black hole is around 35,000 light years away from the center of its galaxy. That is further than the distance from the sun to the center of our home galaxy, the Milky Way. The speed of the quasar with the catalog number “3C 186” suggests that it will leave its galaxy in about 20 million years and then wander through space.

    To give such a massive black hole this speed, the energy of 100 million supernovae exploding simultaneously is needed, as the astronomers explain. The scientists studied the unusual system with numerous telescopes. “When we combined the observations from Hubble, the Chandra X-ray Observatory and the Sloan Digital Sky Survey, all pointed to the same scenario,” reports Chiaberge.

    Merged with a neighboring galaxy?

    The researchers assume that the home galaxy of the unusual black hole actually merged with a neighboring galaxy. In the process, its two supermassive black holes in the center also merged. However, this did not happen all at once. Instead, the black holes orbit each other ever more closely and quickly, emitting spiral gravitational waves – similar to how a rotating lawn sprinkler distributes water. If the two black holes do not have the same mass and the same rotation rate, the gravitational waves can be more concentrated in one direction. As soon as the black holes finally merge, they stop producing gravitational waves.

    The recoil then hurls the merged black hole in the opposite direction to the previously bundled gravitational waves. According to the astronomers, this is exactly what happened with the gigantic black hole in the heart of “3C 186”. The only other explanation is that the quasar is actually located somewhere far behind the observed galaxy in another galaxy. However, there is no trace of such a background galaxy in the observations in numerous wavelength ranges.

    “2023 BU” recently discovered

    Asteroids come unusually close to Earth

    As big as a truck and as close as almost no asteroid before, “2023 BU” will race past the Earth – at an altitude of just 3,600 kilometers at the southern tip of South America. According to NASA, there is no danger of an impact.

    An asteroid will pass unusually close to Earth on Friday night. The celestial body, known as “2023 BU”, will reach the point in its orbit closest to Earth early Friday at 1:27 a.m. German time, according to the US space agency NASA. The asteroid, which is about the size of a delivery truck, will then rush past the southern tip of South America at an altitude of just 3,600 kilometers.

    According to NASA, there is no risk of an impact. For comparison: geostationary satellites are around 35,000 kilometers away from Earth, and the ISS space station is around 400 kilometers away. However, it will most likely be one of the closest approaches to Earth ever recorded, according to NASA.

    Even if the space rock came a lot closer to Earth, nothing would probably happen. According to researchers, most of the asteroid would burn up in the atmosphere. Some of the larger pieces could fall as meteorites.

    Discovered just a few days ago

    The celestial body was discovered just a few days ago by amateur astronomer Gennady Borisov at the Margo Observatory in Crimea.

    Borisov made a name for himself in 2019 when he used a self-built telescope to discover a comet that crossed our solar system on its journey from the depths of space. The comet 2I/Borisov attracted worldwide interest among astronomers at the time.

    Technical problems with US moon landing mission

    For the first time in over 50 years, a mission to the moon has been launched in the USA. However, the private lunar module has technical problems. According to operator Astrobotic, the module was unable to take the desired position in relation to the sun.

    Shortly after the launch of a US mission with the aim of making the first successful commercial landing on the moon, a malfunction occurred, according to the company. Initially, the launch went as planned and the systems worked as expected, said Pittsburgh-based company Astrobotic.

    “Unfortunately, there was a malfunction that prevented the shuttle from taking up a stable position facing the sun.” The team is reacting to the situation and will pass on further information as soon as it is available, they said. The “Peregrine” lander was launched this morning on board a “Vulcan Centaur” rocket made by ULA from the Cape Canaveral spaceport.

    The capsule was supposed to land on Earth’s satellite at the end of February in an area called Sinus Viscositatis (Bay of Stickiness), completing the first – unmanned – US moon landing since the Apollo mission over 50 years ago.

    The cargo flight to the Moon was the first of several of its kind planned for 2024. The lunar lander required for this was financed by the American space agency NASA, but is a development and property of the private US space company Astrobotic. The lander “Peregrine” has scientific equipment, technical experiments and commercial cargo from several countries on board.

    What exactly is “Peregrine”?

    The lander “Peregrine” – in English “Peregrine Falcon” – is a type of truck for transporting things to the moon. This spacecraft cannot transport people, but it can transport scientific measuring instruments, technical equipment, robots or even objects that companies want to place on the moon for a variety of reasons and business interests.

    The lunar lander is two meters high and 2.5 meters wide. “Peregrine” lands on four legs, slowed down by rocket engines. These legs carry a platform on which a total of 21 different “packages” are mounted.

    But unlike the parcel delivery service on Earth, the cargo items are not distributed after landing, but remain on board or are placed on the lunar surface in the immediate vicinity of the lander – small robots and rovers, for example.

    Peregrine stays on the moon

    Another difference between the shipping trucks: the Peregrine is a disposable transporter. The vehicle remains on the lunar surface after landing and does not return to Earth. A shuttle service between Earth and the Moon is therefore not possible.

    Incorrect altitude calculation on “Peregrine Mission 1”

    In April 2023, a Japanese company failed in a similar mission. The company Ispace stated that the reason was an incorrect calculation of the lander’s height during the landing attempt. During the “Peregrine Mission 1”, private individuals were able to buy space to transport material to the moon in the lander, which is 1.9 meters high and has a diameter of 2.5 meters.

    The US space agency NASA also wants to prepare its own expeditions to Earth’s companion with several devices on the journey. NASA wants to investigate the moon’s exosphere, among other things, during the mission. In addition, thermal properties and the hydrogen content of the material on the moon’s surface (regolith) are to be investigated.

    NASA mission “Artemis II”

    Three men and a woman fly to the moon

    A good 50 years after the last “Apollo” mission, NASA wants to send people to the moon again. The US space agency has now announced who will take part in the “Artemis II” mission at the end of 2024.

    The US space agency NASA has named four astronauts for the “Artemis II” mission, which is to fly around the moon next year: NASA astronaut Christina Koch from the USA, her US colleagues Victor Glover and Reid Wiseman, and Canadian Jeremy Hansen. They would be the first people to be near the moon since the astronauts of the last “Apollo” mission set foot on the Earth’s satellite for a while in 1972, as NASA announced when announcing the participants.

    Koch, an engineer who already holds the record for the longest uninterrupted space flight by a woman, was named as a mission specialist, along with US Navy officer Glover, who was selected as the “Artemis II” pilot. Glover would be the first black astronaut on a lunar mission.

    Hanson is the first Canadian ever selected to fly to the moon. Wiseman, a veteran of the International Space Station, was named commander of the Artemis II mission.

    Launch scheduled for late 2024

    The “Artemis II” mission is currently planned for November 2024. It will be the first manned “Artemis” launch after the successful test of the “Artemis I” mission in December. The unmanned “Orion” capsule traveled around 1.4 million miles through space, flew around the Moon and landed in the Pacific Ocean after around 26 days in space. It had collected important data.

    NASA wants to go back to the moon

    The unmanned test mission was considered an important step towards the return of humans to the moon, with the long-term goal of a trip to Mars. Around a year after “Artemis II”, another manned flight, including a moon landing, is to follow with “Artemis III”.

    NASA plans to land astronauts on the lunar surface later this decade to establish a permanent outpost that will provide a springboard for future Mars exploration. The European Space Agency (ESA) and space agencies from several other countries are involved in “Artemis.”

    Editor’s note: An earlier version of the text referred to a “colored astronaut.” We have replaced the word “colored” with “black.”

    NASA mission “Artemis II”

    “To the moon, to Mars and beyond”

    NASA has introduced four astronauts who will fly to the moon next year. As the crew of the “Artemis II” mission, they will orbit the moon for ten days – and prepare for a manned landing.

    It was a huge show that NASA put on at the Johnson Space Center in Houston, Texas. It was ceremoniously revealed that four astronauts will fly to the moon in the Orion space capsule at the end of next year.

    Three Americans, including a woman and an African American for the first time, and a Canadian will spend ten days orbiting the moon. They will also be the first people in over 50 years to see the moon up close – all in preparation for the next manned landing.

    Test mission before landing

    “The Artemis II crew represents thousands of people who are working tirelessly to take us to the stars,” said NASA chief Bill Nelson in Houston. “The world’s largest and most powerful rocket will catapult them into space. They will fly to the moon,” Nelson continued.

    Like “Artemis I”, when the Orion space capsule was launched into space unmanned as a test, “Artemis II” is also a test mission. The astronauts don’t only want to collect data, but also put the Orion’s functionality through its paces.

    “We will hear: ready for launch. And then we will fly into orbit for eight minutes on the Space Launch System rocket. During that time we will test all of Orion’s systems and observe how it maneuvers in space. And then – if everything goes well – we will go to the Moon,” said experienced astronaut Christina Hammock Koch. Like her two US colleagues, the engineer has already spent time on the International Space Station ISS.

    Preparations for “Artemis 3”

    The actual lunar landing mission “Artemis 3” is scheduled to begin at the end of 2025. According to current plans, the astronauts will then land with the space capsule on the south pole of the Moon. A place where there is ice and therefore water and thus an area that, according to NASA, could be the best place to set up a permanent lunar base.

    From there, the space agency wants to undertake further missions into space – for example, to Mars. At least NASA boss Bill Nelson is already dreaming of it: “Together we will usher in a new era of research: the Artemis generation. Together we will set off. To the moon, to Mars and even further.”

    The way back to the moon

    Findings of the “Artemis I” mission

    Three months ago, the “Orion” space capsule of NASA’s “Artemis I” moon mission landed in the Pacific Ocean. The test flight is considered an important step on humanity’s path back to the moon. NASA is now already planning the manned follow-up mission.

    After the mission is before the mission: The scientists at NASA, its partner agencies and industrial partners hardly had time to rest after the successful landing of “Artemis I”. Preparations for “Artemis II” are underway. This follow-up mission will be the first time that humans will be able to fly to the moon since the “Apollo” flights in the 1960s and 1970s.

    Good cooperation

    Thomas Zurbuchen, the former head of science at NASA, summed up in an interview with tagesschau.de: “‘Artemis I’ was an incredible success. It was a mission with a lot of risk. The rocket had never flown in this configuration before, as had the ‘Orion’ capsule with the European Service Module. It all worked out.”

    NASA broke new ground in several respects with the Artemis program: For the first time, the United States relied on another space agency, the European Space Agency (ESA), to build a very important and critical part of a manned spacecraft.

    In Europe – at Airbus in Bremen – a key element of the “Orion” spacecraft is being assembled: the European Service Module (ESM). It provides the propulsion for “Orion” and supplies the astronauts with electricity, air and water. The ESM module is the heart of the spacecraft, and the pressure on those responsible in Europe was accordingly great.

    European balance sheet

    The results are also very positive for them. At a press conference in Bremen in February, Airbus announced that the first evaluation of the test data had shown that the “Orion” spacecraft had consumed much less fuel and electrical energy than previously assumed. Marc Steckling, the head of Space Exploration at Airbus: “We used the first mission to put the module through its paces and found that it generated 15 percent more electricity because the solar cells are very good. And we found that the propulsion system is very efficient. We consumed almost two tons of fuel – which is 20 percent – less.”

    This will enable future missions with a longer duration or with more cargo, for example in the case of module transports to build the so-called “Lunar Gateway”, an intermediate station in lunar orbit. It is to be built in the 2020s and will be a kind of “bus stop” in space on the way to the lunar surface.

    Stress test for “Artemis I”

    Steckling continues: “We flew more maneuvers to simulate situations that do not normally occur. For example, we completely changed the angle of the solar cells in relation to the sun. ‘Artemis I’ was of course the obvious choice because there were no astronauts on board and the system could be put through its paces.”

    “We flew longer, faster, with greater radiation exposure and many other things. The goal was to punish the system to make sure that ‘Artemis II’ really works because there are people on it,” adds former NASA science chief Zurbuchen.

    Airbus delivered the module for the “Artemis II” mission to Florida in October 2021. It is now being tested and integrated at the Kennedy Space Center.

    Astronauts to the moon

    It is not yet clear which astronauts will be on board the “Artemis II” mission. However, it is assumed that no one from Europe will be among them, as is the case with “Artemis III”. This mission is scheduled to launch in 2025 at the earliest and will not only orbit the moon, but also land on the lunar surface. NASA wants to bring the first woman and the first “person of color” to the moon.

    ESA astronaut Alexander Gerst is considered a hot candidate for the first European moon landing. He was in Cape Canaveral when “Artemis I” was launched in November. He was fascinated by the force with which the so-called SLS moon rocket, the most powerful rocket ever built, took off. At the Airbus press conference, he talked about his impressions: “You are several kilometers away and the sound wave takes about 20 seconds to reach you. You see the rocket launch before you hear it. And the seismic wave is even faster than sound. You feel the earth shaking before the sound wave reaches you. It was an impressive thing to see the most powerful rocket that has ever flown take off.”

    NASA checks

    According to Zurbuchen, on the return flight of “Artemis I”, when the space capsule re-entered the Earth’s atmosphere, the heat shield on the underside of the capsule melted somewhat more than expected: “We have sensors on the inside that showed where the heat distribution and erosion is. Now we need to understand exactly what happened. That’s one of the things we’re working on at the moment.”

    Protection for astronauts from radiation

    Another is the evaluation of a radiation experiment inside the capsule. It is still ongoing. There have been no astronauts on board “Orion” yet, but there were just two measuring dummies. In the international research project entitled MARE (Matroshka AstroRad Radiation Experiment) under the leadership of the German Aerospace Center (DLR), thousands of detectors were installed in two “phantoms” to measure radiation in space and its effects on the human body.

    A DLR team read some of the detectors in the USA in January after the dummies were removed from the space capsule. They are now being analyzed. The results are important for better protecting astronauts on future manned missions. Such findings from “Artemis I” pave the way for follow-on missions back to the Moon.

    Russian space probe

    “Luna-25” crashed on the moon

    Setback for Russian space travel: The space probe “Luna-25” crashed shortly before the planned landing on the moon. This was announced by the space agency Roscosmos. “Luna-25” was launched into space over a week ago.

    Russia’s first moon mission since the end of the Soviet Union has failed. The probe “Luna-25” hit the lunar surface after an “unplanned situation” and ceased to exist, the Russian space agency Roscosmos announced in Moscow. The reasons for this are being clarified, it said.

    “Luna-25” is part of the Russian lunar program, which envisages the construction of its own space station there by 2040. The probe was launched into space more than a week ago and entered orbit around the Moon last Wednesday. Since then, the 1,800 kilogram spacecraft has been looking for a suitable landing site, according to Russian sources.

    On Saturday, Roscosmos reported an “unusual situation” on board “Luna-25” that made planned maneuvers impossible. The lander was scheduled to land on the moon’s south pole on Monday. This is of particular interest to scientists because it is constantly in shadow and therefore there may be frozen water there. On Thursday, Roscosmos presented a photo of the moon’s surface taken by the probe.

    Actually, “Luna” should have been on the move a long time ago. The first planned launch date of a lunar probe was in 2012. Last year, a launch was again planned for May, but was then delayed again due to technical problems. Roscosmos originally worked with the European Space Agency (ESA) on the lunar program. However, after Russia’s invasion of Ukraine in February 2022, ESA ended its cooperation with Moscow.

    Roscosmos said the expedition was designed to demonstrate that Russia is capable of putting a payload on the moon and to ensure that it has guaranteed access to the lunar surface. The sanctions imposed after the invasion of Ukraine are making it difficult for Russia to access Western technology.

    Incident involving space probe “Luna-25”

    Russia’s space agency has reported an “unplanned situation” with its space probe. Specialists are now working to determine the cause. “Luna-25” is actually supposed to land on the moon in two days.

    According to information from Moscow, two days before the planned landing on the moon, there was an unexpected incident on the Russian space probe “Luna-25”. In preparation for the imminent landing on the lunar surface, the probe was scheduled to enter a new orbit of the Earth’s satellite on Saturday at 2:10 pm Moscow time (1:10 pm CEST), the Russian space agency Roscosmos announced.

    “During the operation, an unplanned situation arose on board the automatic station, which did not allow the maneuver to be carried out under the specified parameters,” it said. Specialists are now investigating the incident and determining the cause. No further details were given. The probe is scheduled to land on the moon next Monday. It is still unclear whether this will remain the case

    Probe launched a week ago

    “Luna-25” is part of the Russian lunar program, which plans to build its own space station there by 2040. The probe was launched into space more than a week ago and entered orbit around the Moon last Wednesday. Since then, the 1,800-kilogram spacecraft has been searching for a suitable landing site, according to Russian sources. On Thursday, Roscosmos presented a photo of the lunar surface taken by the probe.

    Actually, “Luna” should have been on the move a long time ago. The first planned launch date of a lunar probe was in 2012. Last year, a launch was again planned for May, but was then delayed again due to technical problems. Roscosmos originally worked with the European Space Agency (ESA) on the lunar program. However, after Russia invaded Ukraine in February 2022, ESA ended its cooperation with Moscow.

    Last flight of Ariane 5

    Success story with a bumpy start

    The European carrier rocket Ariane 5 has launched for the last time. On board is the German satellite “Heinrich Hertz”. In its almost 30-year history, the rocket has flown many important missions. A look back.

    The 117th launch marked the end of the era of the European Ariane 5 launch vehicle. During the night, Ariane 5 took off from the Kourou spaceport in French Guiana for the last time. It is to launch two satellites into space.

    A good opportunity to look back at the history of Ariane 5 – a story that did not begin very successfully.

    Maiden flight of Ariane 5 rocket failed

    On June 4, 1996, the first Ariane 5 took off from Kourou on its maiden flight. But just 40 seconds after takeoff, the rocket exploded over the rainforest. The cause: a programming error that caused the data to falsely show a major change in the rocket’s course. The flight computer then initiated an extreme corrective maneuver in which two boosters broke off from the side of the rocket and the rocket initiated its self-destruct mechanism.

    The second launch was also only a partial success. The upper stage of the rocket did not reach full power and the satellites on board the Ariane 5 entered an orbit that was too low.

    Carrier for many scientific missions

    After that, however, the success rate rose sharply. From 1998 onwards, Ariane 5 launched a number of satellites into space, including important scientific missions. One of these is “Rosetta”. This mission was the first time that a probe landed on a comet. Since 2018, the “BepiColombo” probe has been on its way to the planet closest to the sun, Mercury. It is scheduled to arrive there in 2025 and collect information about the planet’s surface, magnetic field and core.

    At the end of 2021, an Ariane 5 sent the James Webb Space Telescope on its journey with such precision and fuel economy that the telescope’s mission life was probably more than doubled. And “Juice”, a probe that will search for signs of life on Jupiter’s icy moons, was also safely launched into space by an Ariane 5 in April 2023.

    A German satellite at its last launch

    With its last launch, Ariane 5 will put the communications satellites “Syracuse 4B” and “Heinrich Hertz” into Earth orbit. “Heinrich Hertz” is the first communications satellite in twenty years to be developed entirely in Germany. It is about the size of a minibus and is expected to spend 15 years in space.

    During this time, “Heinrich Hertz” will test new technologies for satellite-based communication in space. This is because there is a high level of radiation in orbit and temperatures fluctuate between extreme values. There are around 20 experiments on communication, antenna and satellite technology on board the satellite.

    In addition, “Heinrich Hertz” will serve as a relay station. It can forward signals from other satellites and thus maintain their contact time with ground stations for longer.

    The satellite is named after the German physicist Heinrich Hertz, who was the first to detect electromagnetic waves in Karlsruhe in 1886, thus helping modern communications technology to achieve a breakthrough. His name is best known for the unit “Hertz” named after him, which is used to measure frequencies.

    An intelligent satellite

    The special thing about the “Heinrich Hertz” satellite is that it can continue to develop in space – it can be reprogrammed, so to speak. Normally, the functions of satellites cannot be changed once they have been put into orbit. However, “Heinrich Hertz” can, for example, change its transmission and reception frequencies, redirect its antennas and thus adjust the region from which data can be received and sent.

    “With this great flexibility, it is also possible, for example, to support disaster scenarios,” says Jens Müller from the TU Ilmenau, who helped develop the satellite’s telecommunications component. “Take the flood disaster in the Ahr Valley: there we were the first to complain about the loss of the telecommunications structure.” In such cases, satellites such as “Heinrich Hertz”, which can be used flexibly, could help to quickly rebuild the communication channels.

    Ariane 6 ready for use by the end of 2023

    The launch of “Heinrich Hertz” marks the end of the Ariane 5 era. Its successor, the Ariane 6, is expected to be ready for launch at the end of 2023. According to the German Aerospace Center (DLR), the new rocket will be more powerful, more flexible and more cost-effective than the previous version.

    In order to transport both less heavy and many small satellites into space, two different models of the Ariane 6 were developed: one with two and one with four solid boosters. This will enable the new launch vehicle to transport up to eleven tonnes of payload into geostationary orbit – twice as much as the Ariane 5. With the Ariane 6, the European space community wants to secure its independent access to space and continue to transport its own missions into Earth’s orbit.

    ESA astronaut Gerst

    “Each of us has a dream”

    In 2008, Alexander Gerst was one of 8,413 applicants for admission to the ESA astronaut corps. In an interview with tagesschau.de, he promotes this unusual profession and encourages interested parties to follow his example.

    About: ESA astronaut Alexander Gerst

    Alexander Gerst has been a member of the ESA astronaut corps since 2009. In 2014 and 2018 he flew to the International Space Station ISS on board a Russian Soyuz rocket. Gerst was born in Künzelsau in 1976 and is a geophysicist, volcanologist and astronaut.

    tagesschau.de: Mr. Gerst, the ESA is looking for new astronauts. Who is this job for?

    Alexander Gerst: I think you should apply if you have always dreamed of seeing the Earth from the outside, if you are curious and enjoy technology and adventure. In my view, the main thing when applying is not how good your chances of getting the job are. We all have a dream and we owe it to our 80-year-old selves to give that dream a fair chance. I did that twelve years ago and it was worth it. In any case, you will learn a lot about yourself in a selection process like this. It is an exciting adventure to be a part of.

    Next destination: the moon?

    tagesschau.de: When you applied to ESA in 2008, a flight to the International Space Station (ISS) was the most likely mission. The future generation of astronauts has many other options open to them.

    Gerst: The destinations we will soon be flying to will become more diverse. In the future, the Moon will be added, and later Mars. First, however, the new astronauts will go into Earth orbit, perhaps to the ISS, to gain experience there. Missions to the Moon are much more complex, which is why this experience is important. The first missions to the Moon will probably be carried out by the current experienced astronauts in the corps, and the new colleagues will gradually join them. That is a great prospect for European space travel.

    tagesschau.de: Does that mean that we will soon see you in space again?

    Gerst: One thing you always ask yourself as an astronaut is: where is the next flight going and when will it be? That is an uncertainty you have to live with. When I was selected, I didn’t know when my first mission would be. I was then extremely lucky that it came soon, and even a second one. I am still active in the corps, and the obvious prospect is of course a moon mission. First of all, however, we have to build the Lunar Gateway together with our international partners. This is a space platform that orbits the moon as a base for missions to the lunar surface. As a representative of the ESA astronaut corps, I am involved in the planning and construction. The project is in full swing.

    I also lead a team at ESA that is developing future strategies for the exploration of Earth’s orbit. Part of being an astronaut is that, even if you are not specifically nominated for a mission, you keep fitting in all areas. I do sport, of course, and have training sessions in between, for example flight training and underwater training with the spacesuit. So I don’t get bored on Earth either.

    “This is a huge privilege”

    tagesschau.de: You were on the ISS for six months each in 2014 and 2018. What was different about the missions?

    Gerst: When you fly into space for the first time, it’s a special flight because you are crossing a boundary that you have never crossed before. You don’t know how you will function beyond this boundary, whether you are made for space or not.

    A second mission is completely different: you approach it more relaxed because you have a lot more experience and you know what you can do. That’s why you often get a more complex mission the second time around because you have more mental capacity free and can therefore get a little further out of your comfort zone. That’s why I was nominated as commander of the ISS. It’s a complex job; you have to look after your crew and training team long before the mission, and you bear a lot of responsibility for the success of the mission. And on my second mission I launched into space with an astronaut and an astronaut, neither of whom had ever been in space before.

    This gradual build-up of mission complexity is also the reason why experienced astronauts are selected for flights to the Moon. They have already shown that they can work well up there and who are ready for a more complex mission with greater uncertainties, which of course also entails a somewhat higher risk.

    tagesschau.de: What excites you most about the job of an astronaut?

    Gerst: I was fascinated by many different aspects: There is the international collaboration with very inspiring individuals, the mission teams, scientists from different countries who come together in a large project. But what is even more amazing for me is the perspective you gain as an astronaut: the view from the window of the spaceship onto our planet, combined with the understanding of where you are at the moment. This “worldview” is then, in the truest sense of the word, an outside perspective on our world. That is a huge privilege, I thought it was great.

    ISS anniversary

    20 years of human outposts in space

    The ISS has been permanently inhabited by astronauts since November 2, 2000. Researchers praise the multinational project. But the ISS will only serve as an outpost for humanity for a few more years.

    On cloudless nights, the ISS can be seen flying in the sky with the naked eye: a bright dot that circles the planet Earth about every 90 minutes. With its solar panels, the flying laboratory is about the size of a football field.

    Twenty years ago, the first long-term crew docked with the ISS, two Russian cosmonauts and a US astronaut. Since then, many multinational teams have followed; for ESA, the ISS has become “a true outpost of humanity”. It is jointly operated and financed by the space agencies of the USA, Russia, Canada, Japan and the European Space Agency (ESA).

    German astronauts on the ISS

    Among the ISS astronauts from 21 nations, there have been three Germans so far: Thomas Reiter, Hans Schlegel and Alexander Gerst. Both Reiter and Gerst were part of long-term crew, meaning they spent several months on the ISS. Reiter was also the first European long-term flyer in 2006. He remembers:

    Living and working together on the ISS – at the forefront of science, representing the scientific teams on Earth – is fantastic. But then being able to go out and work on the ISS is incomparable. The spacewalks are the highlight.”

    What the MIR did not have to offer

    For Reiter, his stay on the ISS was already his second visit to a space station. In 1995, the ESA astronaut had already flown to the Russian space station MIR, a space station built by the Soviet Union. It orbited the Earth from 1986 to 2001. He compares:

    The biggest difference to MIR is simply the space you have up there. The ISS is much more spacious, has more modern technology and allows you to conduct research more efficiently. At the MIR station, everything was very cramped. We had to assemble the equipment we needed to carry out the experiments and then put it away again after work.

    Still in operation for ten years

    If it runs smoothly from a technical perspective, the ISS could remain in operation for at least another ten years. During this time, it will be used increasingly commercially, giving private industry the opportunity to conduct experiments in zero gravity in its own modules.

    With many industrial partners, the research capacities on the ISS can be optimally utilized and even expanded, explains ESA astronaut Matthias Maurer, who is scheduled to be the next German to fly to the ISS in 2021. But, says Maurer: “This is always just a supplement to the current program. I don’t think we’ll see the classic providers, i.e. the space agencies, disappear.”

    The successor has already been chosen

    The ISS will be taken out of service by 2030 at the latest, and a new space station will be launched by then: the Lunar Orbital Platform-Gateway. ESA astronaut Alexander Gerst explains: “This is a small space station that humanity is now planning and building. It will orbit the moon.”

    The station, which will also be operated by the ISS partners, will not be permanently manned. Among other things, it will serve as a stopover for new manned lunar missions, a kind of bus stop in space.

    Gerst said the new project has gotten off the ground and believes that understanding and acceptance of space travel has grown in recent years:

    I think a lot of people now really understand what we’re doing out there: exploring our cosmic environment and bringing scientific knowledge back to Earth to make life better down here, for example, researching diseases or new materials.”

    China also wants to put a space station into orbit in the next few years. Cooperation with China on the ISS failed in the 1990s due to a veto by the USA.

    World politics leaves its mark

    While the international astronaut teams worked well together on the ISS, things were sometimes very turbulent down on Earth. During the Ukraine crisis in 2014, for example, the USA partially suspended its space cooperation with Russia. But NASA stressed that there should be no compromises when it comes to the operation of the ISS.

    This was certainly also due to the fact that the USA itself was not able to send astronauts into space at that time. After the end of the Space Shuttle program in 2011, the Americans were dependent on the Russian Soyuz capsules.

    This prompted the then Russian Deputy Prime Minister Dmitri Rogozin to comment that the USA would have to bring its astronauts “to the ISS on a trampoline” without Russia.

    Half a year ago, there was a late retaliation from the USA to Moscow: After the successful flight of the private US company SpaceX to the ISS, SpaceX boss Elon Musk pointedly said: “The trampoline works.”

    Anti-Asteroid Program

    Forward defense in space

    Hundreds of asteroids are on a collision course with the Earth. It’s high time for a defense program. ESA and NASA are now going on the hunt together. The contracts have now been signed.

    The probability of winning the lottery on a Saturday is 1 in 140 million. “I don’t play the lottery,” says Rolf Densing, head of the ESA control center in Darmstadt. “But if these were chances of winning, you would have to play.”

    On his monitor he sees a list of asteroids and the probability that the Earth will be hit by each of them. For example, there is the asteroid “2010RF12”. Probability of being hit is 1 in 14 in 2095. Or “2018JD”: Probability of being hit is 1 in 675 in 2067. Or “2008JL3”: Probability of being hit is 1 in 6993 in 2027. The list goes on and on.

    ESA scientists are currently observing more than 1,000 asteroids that are on a potential collision course with Earth. Most of the impacts are only likely to occur in the coming decades. But one thing is clear: something is coming for Earth.

    Injuries and damage in Chelyabinsk

    People in Chelyabinsk, Russia, have already experienced this. On a winter’s day seven years ago, an approaching asteroid broke over the city. The shock wave shattered around 200,000 square meters of glass. More than 1,000 people were injured. This event has been shown once again that it is not a question of “if”. Rather, the question is when the next impact will occur, says ESA specialist Rolf Densing.

    Course: Asteroid Dimorphos. Arrival: 2022

    Together with NASA and ESA, we are now launching an asteroid defense program. The corresponding contracts for the European part of the mission were signed today. An American probe will fly to the asteroid “Dimorphos” next summer. It is scheduled to hit the asteroid in autumn 2022. The plan is to knock the asteroid, which has a diameter of around 160 meters, off course. A European probe will then evaluate the experiment.

    Fine work in the vicinity of a small celestial body – this is what Europeans have been experts at since the “Rosetta” mission. For ten years, the ESA navigated the “Rosetta” to probe seven billion kilometers through space in order to then land on a comet and study it. A pioneering mission that ended four years ago.

    Division of labor in space

    The former head of ESA mission operations, Paolo Ferri, said: “The Americans are our masters when it comes to landing on large planets. But we are the only ones who have experience of flying to small celestial bodies such as asteroids and carrying out investigations there.” Division of Labor in Space, then. The Americans direct a probe into an asteroid, the Europeans then head towards it to examine the crater and determine whether the impact was enough to throw the asteroid off course.

    The probe is being built in Bremen

    Rehearsals are being carried out on a celestial body that cannot pose a threat to the Earth. Dimorphos flies about 1 even million kilometers past the Earth. Things have to move quickly now. After the Americans bombard the asteroid, the Europeans want to launch the control mission in 2024. The probe, which bears the name of the Greek goddess Hera, is being built in Bremen. It will be no bigger than a desk and will have small mini-satellites on board that can examine the asteroid up close. An asteroid on a collision course: Americans and Europeans want to rehearse together for an emergency. “We are currently in the middle of the Corona crisis. We don’t need a second crisis of this magnitude,” says Densing.

  • Europe is investing a record amount in space travel. The ESA has set itself some ambitious goals

    Europe is investing a record amount in space travel. The ESA has set itself some ambitious goals: preventing asteroid impacts, embarking on a moon mission and clearing away space debris.

    It was a good 20 years ago that US actor Bruce Willis set-off into space: In the Hollywood blockbuster “Armageddon”, an asteroid is racing towards the earth. The only way towards stop it is to fly it and blow it up.

    What was science fiction back then could soon be reality. The European Space Agency ESA approved the so-called HERA mission, an asteroid defense mission led by Germany, at its Council of Ministers conference at the end of November.

    Protection against asteroid impact

    The ESA, together with the US space agency NASA, will investigate how humanity can protect itself from an asteroid impact on Earth: “We know that an asteroid will hit the Earth again at some point, and we have to be prepared for that moment. So far , no one has tried to seriously deflect an asteroid to find out exactly what happens then,” says Thomas Jarzombek, the German government’s coordinator for aerospace, explaining the ambitious project.

    What is the ESA?

    European Space Agency

    Based in Paris, France, founded in 1975

    Coordinates the space activities of the 22 member states *Record budget for the next three or five years: 14.4 billion euros, with Germany being the largest contributor with 3.3 billion euros * ESA’s priorities for the next three years: “Earth observation”, “space transport” and “human and robotic exploration”

    In June 2021, a NASA probe will fly to the double asteroidDidymos/Didymoon and impact there in October 2022. An ESA probe will then set off and measure the asteroid and the resulting crater. The mission is intended to find out whether asteroids can be diverted from their trajectory.

    Cleaning up space junk

    In addition to asteroid defence, the ESA is also negotiating the issue of space debris. Former astronaut and ESA coordinator Thomas Reiter says: “For the first time worldwide, we will begin to clear away the garbage because there is more and more space debris flying around up there. It’s time to clean up, to put it very carefully.”

    Robotic moon mission

    Another flagship project is the robotic moon mission. An Ariane rocket with a landing module will fly to the moon. “This is technically demanding project because so far there are only three nations that have really managed to land on the moon in a controlled manner – the USA, Russia and China,” says Jarzombek. “This could also contribute to the Americans’ moon mission. Everyone is interested in finding out where the interesting places on the moon are for later research.”

    It is still unclear when the robotic ESA mission will set off for the moon. The Americans want to start manned lunar flights again with the Artemis program from 2024. “The goal is not, as with the Apollo missions, to fly to the moon, bring back rocks and that’s it. This time it’s about making it sustainable,” explains Reiter. That’s why cargo transporters are needed to get scientific equipment to the lunar surface and conductance research there on a permanent basis.

    To the moon and fill up

    “From the moon, with one sixth of the gravity of the earth, it is easier to travel into the depths of our solar system as well as to supply fuel to satellites in geostationary orbit,” says Reiter.

    ESA astronaut Matthias Maurer, who will be the fourth Germanto fly to the ISS in the coming years, adds: “In the future, you could fly to the moon with a smaller rocket, fill up at the gas station there and then take off, for example towards Mars.”

    Earth observation by satellite

    The largest share of the new ESA budget, around 2.5 billion euros, will go to earth observation using satellites. “Especially in times of climate change, it is important to better understand what is happening in the atmosphere and at the polar caps,” explains Jarzombek.

    Much of the data from ESA satellites on weather and climateis already available free of charge on the Internet. It is used by start-up companies that are developing satellites with special sensors, says Jarzombek.

    “For example, there is a company that supports forest fire fighting. They are developing their own satellites with very special infrared optics. In addition to their own satellite data, however, they need further information, which is why they take additional photo data from ESA.”

    New satellites and services for climate monitoring, agriculture and disaster prevention are expected to begin operations in the next few years.

    Mass ejections from the Sun

    In addition to the weather on Earth, ESA will also be increasingly studying the weather in space. One mission is looking into the so-called coronal mass ejections of the sun – charged particles that are dangerous for the Earth: “The sun occasionally hurls electrically charged particles towards the Earth, which can have enormous consequences for our modern industrial societies,” explains Reiter. The result could be disruptions in radio traffic and even blackouts in the power supply.

    That is why a mission towards the sun is to be launched in a few years: “We want to fly to a position from which we can look at the sun and the earth from the side. From this perspective, we can see immediately when a coronal mass ejection moves towards the earth,” says Reiter.

    ESA mission

    A garbage disposal for space

    The European Space Agency (ESA) is launching the world’s first cleanup mission in space. “ClearSpace-1” is designed to remove debris. It will start in 2025.

    Space may be empty of air, but it is home to a lot of garbage. Tens of thousands of pieces of debris from old satellites or rockets are flying around in space. These pieces can fall to Earth or cause collisions. A cleanup mission is intended to reduce the risk of accidents. It is set to begin in 2025, as the ESA announced in Darmstadt. This was decided by the European space ministers at their most recent conference in Seville, Spain. In Germany, Federal Minister for Economic Affairs Peter Altmaier is responsible for space travel.

    In collaboration with a commercial consortium, a Swiss startup has now been commissioned to realize “ClearSpace-1”. ESA experts will be monitoring the project from Darmstadt, and it is scheduled to start in March next year.

    Business with a future

    “This example will set a precedent,” said ESADirector General Jan Wörner. He sees a future market in the removal of space debris. Space is infrastructure that is used every day for a variety of applications, said Wörner. Protecting this infrastructure is of great value.

    Lack of space in endless expansions

    Where space is already tight today, space could soon become even tighter. Experts expect thousands more satellites to be launched in the coming years. This increases the likelihood of collisions. According to the ESA, the aim of the “ClearSpace-1” mission is for a space probe to capture an upper stage of an old ESA rocket with four grippers and return it to Earth’s orbit to burn up. Later, other space probes will capture several large pieces of debris.

    Donors Germany

    The Federal Republic is contributing twelve million euros to”ClearSpace-1″. In total, the ESA member states have decided to invest 14.4 billion euros in European space travel in the coming years. Germany is increasing its total contribution from 1.9 to 3.3 billion euros compared to the 2016 ESA conference, making it the largest contributor.

    ESA Mars mission launched

    And the red planet beckons forever

    ExoMars is running according to plan: A “Proton” rocket with a satellite and landing unit was launched from Baikonur towards Mars. In 217 days, the satellite will be at the edge of the Martian atmosphere. Sebastian Kisters explains the special features of the ESA mission.

    What does ESA want on Mars?

    More than 40 space probes have already been sent to Mars. But one question is still unanswered: Is there life on our neighboring planet? The only thing that is certain so far is that there is methane in the Martian atmosphere. On Earth, the gas is mainly released by living things. However, it could also be of volcanic origin. The “ExoMars” mission is now intended to provide clarity. “For the first time, there are instruments that are capable of doing this,” says Paolo Ferri, the head of ESA mission operations.

    On Monday at 10:31 am Central European Time, the European Space Agency (ESA) launched a satellite with a landing unit into space.Together they weigh over four tons. This means that one record is already certain: it is the heaviest Mars mission of all time. With fuel and allignition stages, the rocket has a launch weight of 697 tons.

    It will take the satellite 217 days to reach the edge of the Martian atmosphere. Various instruments will then examine its composition. The most exciting part of the mission will begin on October 19 at 4:45 pm The landing module will then race towards the red planet at a speed of 5.8 kilometers per second. A parachute and three jets will slow it down shortly before impact. The landing will be at a maximum speed of five meters per second, or about 18 km/h. If the maneuver is successful, it will provide crucial information for a second part of the mission.

    In 2018, ESA plans to launch a rover with a drill to Mars. So far, robots have only scratched the surface of the planet. In three years, however, samples from a depth of two meters will be available. If there was ever life on Mars – or still is – then it would only be at this depth. On the surface, intense UV radiation is likely to have destroyed all life.

    Who controls the mission?

    What Houston is for NASA, Darmstadt is for ESA. The rapid maneuver is controlled from the European Space Operations Center in southernHesse. However, control is relative in this mission. Each control command takes over nine minutes to reach the probe from Darmstadt, over 75 million kilometers away. For comparison: the moon is 384,000 kilometers from Earth.

    Why is the launch taking place in Baikonur?

    NASA actually wanted to take part in the mission. But in 2012 it became too expensive for the Americans. ESA was able to gain the Russian space agency Roskosmos as a new cooperation partner. So now the launch will take place from the legendary spaceport in Baikonur. On October 4, 1957, the Russians launched the first satellite from Baikonur into space:”Sputnik”, which means “companion” or “lane companion” in German. The first victory in the race into space went to the Soviets. The West suffered from the “Sputnik shock” “. The first person was also launched into space from Baikonur: Yuri Gagarin, on April 12, 1961.

    Baikonur had two advantages for the Soviets: the dry desert climate and the location. Rockets that were launched in a northeasterly direction initially flew several thousand kilometers over Soviet territory. The city was long one of the most secret places in the world. To keep spies away, Baikonur was long marked 380 kilometers to the west on Soviet maps.

    Today the spaceport is located on Kazakh territory. Russia transfers over 100 million dollars a year to be allowed to continue using the site. The place is no longer secret: astronauts and satellites from all over the world begin their journey into space here.

    How does the launch go?

    At 10:31 am the earth shakes in the Kazakh desert. The 697-ton “Proton” rocket takes off. It goes straight up for the first kilometer. The rocket needs 22 seconds to do this. One minute after takeoff it has already reached a height of ten kilometers and is already traveling at 1500km/h towards Mars. Another minute later the “Proton” is already 45 kilometers from Earth.

    The “Proton” was long considered a reliable workhorse among launch vehicles. By the turn of the millennium, there had been more than 250 launches. The success rate was 96 percent. In recent years, however, there have been repeated problems. In 2013, a satellite mission was history seconds after launch; the rocket exploded. In 2014 and 2015, satellite board “Proton” rockets also failed to reach their targets. They crashed from a great height.

    Experienced scientists are also relying on lucky charms in the control center in Darmstadt in the coming hours. They believe the decision to fly with the Russians was the right one in any case. Paolo Ferri, ESA’s head of mission operations, says: “I believe science can help solve political problems. We are showing that countries can and must cooperate. This is a sign that politicians must follow.”

    When will the first person fly to Mars?

    The technology is there. “But the costs and risks make it impossible to send people to Mars at the moment,” says Paolo Ferri, head of ESA mission operations. The biggest problem is radiation in space. Astronauts cannot yet be adequately protected from it on a long journey to Mars. They would probably die quickly of cancer.

    ESA contract for Swiss startup

    The junk in orbit should be removed

    Thousands of remnants of decommissioned rockets and satellites are whizzing around the Earth. The Swiss startup ClearSpace is now to become the world’s first company to dispose of debris in space on behalf of the European Space Agency.

    After almost sixty years of space travel with several thousand launches, a veritable cloud of junk is now orbiting the Earth. Old rockets, parts of satellites and abandoned aircraft are buzzing around in orbit. In the early days of space travel, no one thought about this, says Luc Piguet.

    The 48-year-old electronics engineer is one of the two bosses of the Swiss company ClearSpace, which was commissioned by the European Space Agency (ESA) to carry out the world’s first space cleanup mission. The garbage is a danger, says Luc Piguet. “When two objects that no longer function collide, it generates a very large amount of junk in space, and that is a risk for all missions we have in the future.”

    Scrap burns up in the atmosphere

    Cleaning up is urgently needed. There are now around 42,000 objects flying through space. Like projectiles, they can be dangerous for satellites or space stations. The company ClearSpace was founded in the vice of the EPFL Lausanne technical university. ClearSpace means free space in German.

    ClearSpace wants to help make the orbit free of junk again, says the 48-year-old electronics engineer. “What we’re actually doing is taking these objects that no longer work out of space. We pick them up and bring them back into the Earth’s atmosphere, where they burn up,” explains Piguet.

    Braking manoeuver is complicated

    For the cleanup operation, a satellite called”ClearSpace-1″ will be built and sent into space to collect the garbage. The device will find objects and slow them down. “We will reduce the speed so that it comes back down into the Earth’s atmosphere, ” says Piguet.

    In, it is known where the debris is in orbit. This is made possible by a database that is fed with coordinates from radar monitoring. The information is accurate to within about two to three kilometers. The satellite must find the exact position of the objects themselves and then seize them for the braking process.

    “This is one of the biggest challenges of the mission,” explains Piguet. Since there is no gravitational pull, there is no frictional resistance between the objects. “A very small touch can trigger large movements. And when the object is moving, it becomes complicated to even touch it.”

    Hope for a new market

    The subsequent burning up of the scrap pieces upon entry into the Earth’s atmosphere, however, is relatively problem-free, because new satellites in particular are specially designed to be seen in a kind of shooting star shower.

    The fact that ESA has awarded the 100 million euro cleanup contract to the Swiss company ClearSpace is a real accolade for the company, which has around 30 employees. “This is fantastic for us,” says Piguet. However, the Swiss cannot handle the task alone, so they are working with major partners – such as Airbus.

    Everyone involved hopes that in the long term a market will emerge for scrap disposal in orbit. Whether the system actually works will be seen from 2025 onwards. That is when the “ClearSpace-1” satellite is due to be sent into space.

    Anti-Asteroid Program

    Forward defense in space

    Hundreds of asteroids are on a collision course with the Earth. It is high time for a defense program. ESA and NASA are now going on the hunt together. The contracts have now been signed.

    The probability of winning the lottery on a Saturday is 1 in 140 million. “I don’t play the lottery,” says Rolf Densing, head of the ESA control center in Darmstadt. “But if these were chances of winning, you would have to play.”

    On his monitor he sees a list of asteroids and the probability that the Earth will be hit by each of them. For example, there is the asteroid “2010RF12”. Probability of being hit is 1 in 14 in 2095.Or “2018JD”: Probability of being hit is 1 in 675 in 2067. Or”2008JL3″: Probability of being hit is 1 in 6993 in 2027. The list goes on and on.

    ESA scientists are currently observing more than 1,000 asteroids that are on a potential collision course with Earth. Most of the impacts are only likely to occur in the coming decades. But one thing is clear: something is coming for Earth.

    Injuries and damage in Chelyabinsk

    People in Chelyabinsk, Russia, have already experienced this. On a winter’s day seven years ago, an approaching asteroid broke over the city. The shock wave shattered around 200,000 square meters of glass. More than 1,000 people were injured. This event has shown once again that it is not a question of “if”. Rather, the question is when the next impact will occur, says ESA specialist Rolf Densing.

    Course: Asteroid Dimorphos. Arrival: 2022

    Together with NASA, ESA is now launching an asteroid defense program. The corresponding contracts for the European part of the mission we resigned today. An American probe will fly to the asteroid “Dimorphos”next summer. It is scheduled to hit the asteroid in autumn 2022. The plan is to knock the asteroid, which has a diameter of around 160 meters, off course. An European probe will then evaluate the experiment.

    Fine work in the vicinity of a small celestial body – this is what Europeans have been experts at since the “Rosetta” mission. For ten years, the ESA navigated the “Rosetta” probe seven billion kilometers through space in order to then land on a comet and study it. A pioneering mission that ended four years ago.

    Division of labor in space

    The former head of ESA mission operations, Paolo Ferri, says: “The Americans are our masters when it comes to landing on large planets. But we are the only ones who have experience of flying to small celestial bodies such as asteroids and carrying out investigations there.”Division of labor in space, then. The Americans directed a probe into an asteroid, the Europeans then head towards it to examine the crater and determine whether the impact was enough to throw the asteroid off course.

    The probe is being built in Bremen

    Rehearsals are being carried out on a celestial body that cannot pose a threat to the Earth. Dimorphos flies about eleven million kilometers past the Earth. Things have to move quickly now. After the Americans bombard the asteroid, the Europeans want to launch the control mission in 2024. The probe, which bears the name of the Greek goddess Hera, is being built in Bremen.

    It will be no bigger than a desk and will have small mini-satellites onboard that can examine the asteroid up close. An asteroid on a collision course: Americans and Europeans want to rehearse together for an emergency.”We are currently in the middle of the Corona crisis. We don’t need a second crisis of this magnitude,” says Densing.

    Space law

    Vacuum, but not lawless

    There’s a lot going on in space: celestial bodies, hundreds of satellites and, last but not least, a lot of junk. That’s why rules apply even in infinite distances. An ESA conference is now discussing who is allowed to do what in space.

    What international agreements are there?

    Space does not belong to the sovereign territory of any state. Therefore, five United Nations agreements form the basis of space law. The most important is the Outer Space Treaty of 1967, which more than 100 states have ratified. Among other things, it regulates that every nation has free access to space and that no country may make territorial claims on other celestial bodies.

    The Moon Treaty of 1979 declares the moon and other celestial bodies to be the common heritage of mankind. However, it has so far been ratified by less than 20 states, partly because of possible economic interests in the extraction of raw materials in space.

    What rules apply to raw materials in space?

    Many celestial bodies are brimming with useful resources, such as rare metals. The USA and Luxembourg have already passed laws that potentially declare raw materials mined in space to be their property.

    According to Stephan Hobe, head of the Institute for AirLaw, Space Law and Cyber Law at the University of Cologne, this is “simply and plainly illegal”. One can only pass laws on something that one has control over. And according to the Outer Space Treaty, celestial bodies andtheir raw materials are not included.

    What happens to space junk?

    There is currently no agreement that obliges spacefaring nations to avoid waste or to dispose of the waste that has already been generated. In recent years, awareness of the problem of waste in space has grown, but legal regulation is still a major challenge, says space lawyer Hobe. It must try to retrospectively define responsibility for actions that werepreviously permitted. “At the moment, no one is responsible for the wastein orbit.”

    According to the European Space Agency (ESA), the spaceinstitutions have agreed – without any legal obligation – to keep particularlyimportant orbits free of debris. At an ESA conference in Darmstadt, lawyers, engineers and business representatives are now discussing further regulations on space debris and safety.

    Is there a space traffic regulation?

    There are currently no right-of-way or other traffic rules for spacecraft, explains Holger Krag, head of the ESA Space Debris Office.”A healthy pragmatism has developed in operations.” Around 95 percentof potentially dangerous encounters are with inactive objects such as debris ordeactivated satellites, so the question of right-of-way does not arise. In the remaining cases, the operators of the respective satellites coordinate with each other, reports Krag.

    Who is liable for damage caused by space travel?

    The UN agreements concerning the legal status of space oblige states to be liable for damage caused by their space activities. A distinction is made between two cases: In the case of damage in space, for example the collision of two satellites, a state is only liable if it can be proven that it was at fault for the accident. If there is damage on earth – for example if a satellite crashes – the state that operates the satellite is always liable.

    Can space be used for military purposes?

    Space is largely demilitarized. This means that no weapons may be stationed in Earth’s orbit or on other celestial bodies. This is stipulated by UN agreements. However, these regulations are now considered to be outdated.

    Because the US government is pushing ahead with its plans to create its own armed forces in space, concerns are growing about an arms race in space. In 2014, Russia and China therefore presented a draft treaty to theUN Disarmament Conference in Geneva that would ban certain weapons. But becausethe development of weapons is progressing so rapidly, many believe that this is not enough – also because harmless satellites can become weapons if malicious intent is used.

    Who is allowed to conduct research in space?

    For peaceful purposes, every nation may build stations in orbit and on other celestial bodies. The principle of mutual openness applies: no state may completely seal off its research station from other states.

    How is space law enforced?

    There is no separate space court. States could appeal to the International Court of Justice in The Hague in the event of violations of the UN space treaties. Companies or private individuals could, under certain circumstances, bring claims against third parties in national civil courts -provided that there is a legal regulation governing the dispute.

    US space company commissioned

    NASA is developing suits for the moon

    The private company Axiom Space is producing new space suits for $228.5 million. NASA commissioned the company because it wants to send people back to the moon with the Artemis mission.

    The private US space company Axiom Space is to develop a space suit system that NASA plans to use for its first moon landing in over 50 years. As part of the “Artemis” program, the “Artemis III” mission is to fly people to the moon and land on the Earth’s satellite in 2025 at the earliest.

    Axiom Space, based in Houston, Texas, will be responsiblefor the design, development, certification and ultimately production of the space suits and associated equipment, the US space agency said.

    The contract also includes testing the suits in a”space-like environment.” However, acceptance and approval ultimately rests with NASA itself.

    NASA struggles with technical problems

    Axiom made headlines in April with the first private charter flight to the International Space Station (ISS). Three private individuals flew into space in a “SpaceX” space capsule with an experienced pilot -ticket prices were reportedly 50 million US dollars.

    Meanwhile, NASA’s “Artemis” space program is struggling with problems. The launch of the “Artemis I” mission has already been aborted twice due to technical problems with the newly developed SLS rocket.

    Radiation experiment

    Dolls in Space

    Two dolls flew into space and back in a spaceship as part of a radiation experiment. Thomas Berger from the German Aerospace Center explains why the Artemis I mission is important for the future of manned space travel.

    tagesschau.de: In December, the two manikins landed safely back on Earth in the “Orion” space capsule. How happy were you?

    Thomas Berger: Very happy. The two dolls, which we scientists have called “Helga” and “Zohar”, have travelled more than two million kilometres in space after around 25 days. Orion is a – in English they say “human-rated” spaceship, a spaceship built for humans that has travelled further away from the Earth than any such spaceship has ever been before. After landing in the Pacific, Orion was brought back to NASA, to the Kennedy Space Center in Florida. Then “Helga” and”Zohar” were removed from the spaceship by our NASA colleagues and arrived safely at the DLR in Cologne a good two weeks ago.

    Dr. Thomas Berger is head of the “Biophysics” working group at the German Aerospace Center in Cologne. His research focuses on radiation protection and the development of radiation detectors.

    tagesschau.de: Thousands of detectors were built into the two measuring dummies. They measured how much radiation affected the dummies I nspace. How did that work?

    Berger: The dolls are made up of 38 plastic discs that have different densities and thus replicate the different organs of the human body. We measure the radiation with active and passive detectors: The active measuring devices are battery-operated. They save the measured radiation do seat a specific point in time, so every five minutes in the case of our detectors.

    This means that after evaluating the measurement data, we can say how high the radiation dose was in free space at a specific point in time, for example when Orion flew past the moon. The active detectors, therefore, provide a time-resolved distribution of the radiation over the entire mission. That is the advantage of these measuring devices.

    tagesschau.de: At which points on the dolls did the detectors measure the radiation?

    Berger: Several active measuring devices were installed inthe lungs, stomach, uterus and spinal cord. These are the most radiation-sensitive organs in the human body and therefore, after the evaluation, we can say, for example, how high the radiation dose in the lungs was on the fifth day of the mission.

    6000 detectors in each doll

    tagesschau.de: And what was the purpose of the passive detectors?

    Berger: We installed around 6,000 passive detectors – these are small crystals – in each Phantom, so a total of 12,000. 80 percent came from the DLR, 20 percent from NASA. The entire spacecraft was equipped with detectors like a Christmas tree. The crystals are called thermo luminescence detectors. In principle, they store the ionizing radiation that hit them in space in their crystal lattices. When we heat them for analysis, they emit thestored energy in the form of light, and this light can be measured. The light emitted by the crystals is proportional to the radiation dose that they absorbed over the course of the experiment. This means that we can use these crystals to map the total dose over the mission and ultimately obtain a three-dimensional radiation distribution.

    tagesschau.de: What are these crystals made of?

    Berger: The crystals are made of lithium fluoride. These detectors have been used in radiation protection for ages. For example, if you work in a nuclear power plant or in radiology in a hospital, you also wear dosimeters that measure your radiation dose. These crystals are also built into them.

    Test dolls have women’s bodies

    tagesschau.de: The test dummies you developed are women’sbodies. Why?

    Berger: On the one hand, there are now more female astronauts, which is absolutely to be welcomed. Unfortunately, however, the general radiation-induced cancer risk is higher for women than for men. The body’s overall cancer risk is made up of organ cancer risks, and for women, breast cancer is an additional risk. And that is why the overall risk is higher for women than for men. There are not yet so many basic measurements for the female body, and that is why we flew these two female bodies.

    tagesschau.de: One of the two dolls wore a radiation protection vest during the flight. What insights are you hoping to gain?

    Berger: The doll “Zohar”, which is financed by the Israeli Space Agency (ISA), wore a 26 kilogram vest, the AstoRad vest, which was developed by the Israeli company StemRad and is made of high-density polyethylene. The basic idea behind this vest is that it should give astronauts, when they fly in space – as part of the Orion mission or on future exploration missions to Mars – additional radiation protection, especially in the event of a solar flare. Last week, for example, the northern lights could be seen in northern Germany.

    This is a clear sign that charged particles were emitted by the sun, which then created the northern lights by interacting with the molecules in the atmosphere. Then you see the beautiful lights. But at the same time, these particles emitted by the sun pose a high risk: astronauts could get radiation sickness. This vest was developed to provide additional protection against this dose of radiation. Since only one of the two dolls wore this protective vest, we can see what the vest really does.

    Analysis of results is ongoing

    tagesschau.de: When do you expect to see the results of your evaluations? After all, people are supposed to be on board the Orion spacecraft for Artemis II in 2024.

    Berger: We read the active detectors in January at the Kennedy Space Center immediately after the handover, so we saved the data for posterity and science, so to speak. We are currently working on the analysis. The passive detectors, which were mainly provided by the DLR, will certainly keep us busy for the next six months or a year.

    Former astronaut has an accident

    “Earthrise” photographer Anders dies

    “Earthrise” is the name of one of the most famous photos of the Earth. It was taken by astronaut William Anders on Christmas Eve1968. The Apollo 8 participant has now died in an accident.

    Former US astronaut William Anders, who took one of the most iconic photos in space history, has died. He died when a small plane he was piloting crashed into the sea northwest of the US coastal metropolis of Seattle, US media reported, citing his son Greg. Anders was 90 years old. He was flying alone in the plane, it was said. The crash is being investigated by the authorities.

    A special photo on Christmas Eve

    In 1968, Anders was part of Apollo 8, the first manned flight to the moon, although it did not land. On Christmas Eve, he happened totake the photo “Earthrise,” which shows the moon in the foreground and the Earth rising in the background. Along with Anders, the astronauts Frank Borman and James Lovell were on board.

    During their mission, the astronauts were supposed to take photos of the moon. “I always call it ‘ironic,’” Anders told the Seattle Times. “We went there to discover the moon. But what we really discovered is the Earth.”

    The US space agency NASA later published Anders’ photo under the simple number “AS08-14-2383”. It changed humanity’s view of the earth and became a symbol of the earth’s fragility. It is sometimes seen as the trigger for the environmental movement.

    “Here we are, on an insignificant planet”

    It was taken with a Hasselblad camera. Anders himself was dissatisfied with the quality of the picture. It wasn’t completely sharp, he told the Seattle Times. But the sight of the small, blue globe, half in shadow behind the horizon of the gray moon, also changed him and his way of thinking.

    Here we are, on an insignificant planet orbiting a not particularly significant star in a galaxy of millions of stars, which is not significant when there are millions and millions of galaxies in the universe -so are we really that significant? I don’t think so.

    “Anders has given humanity one of the most profound gifts an astronaut can give,” commented NASA chief Bill Nelson via the online platform X.

    Career after space career

    After his career at NASA, Anders worked as a presidential advisor, US ambassador, and for various nuclear energy and aviation companies. He most recently lived on the San Juan Islands on the northwest coast of the USA. Anders was married and had six children.

    NASA astronaut

    Apollo 8 commander Frank Borman dies

    With his crew, he completed the first mission around the moon – and thus paved the way for the landing on the Earth’s satellite a few months later: Now NASA astronaut and Apollo 8 commander Frank Borman has died.

    The US astronaut and commander of the Apollo 8 mission Frank Borman has died. Borman died on Tuesday in the US state of Montana at the age of 95, the space agency NASA announced on Thursday (local time). “Today were member one of NASA’s finest. Astronaut Frank Borman was a true American hero,” said NASA Director Bill Nelson.

    From the Cape Canaveral spaceport, Commander Borman and his colleagues James Lovell and William Anders launched to the moon on December 21,1968 as part of the Apollo program, paving the way for the moon landing a few months later.

    Christmas message from lunar orbit

    The trio’s journey lasted three days, and on Christmas Eve Apollo 8 entered lunar orbit. The astronauts sent a Christmas message with Bible quotations to Earth via live broadcast. Mission leader Borman concluded by saying: “And from the crew of Apollo 8, we close with good night, goodluck, and Merry Christmas, may God bless you all – all of you on the good Earth.”

    Apollo 8 orbited the moon ten times. On the fourth orbit,Anders took the iconic “Earthrise” photo, which showed the Earth rising above the lunar horizon. Borman later described what humanity’s home planet looked like from a distance.

    We were the first people to see the world in its majestic totality, a tremendously emotional experience for each of us. We said nothing to each other, but I was sure our thoughts were identical – they were of our families on this spinning globe. And perhaps we shared another thought I had: This must be what God sees.

    Later head of the Apollo program

    A native of Indiana, Borman developed a passion for airplanes at the age of 15, which eventually led him to the Air Force and later to NASA.

    From 1950 he was a fighter pilot and later also an assistant professor of thermodynamics at the West Point Military Academy. In 1967 he was a member of a committee that investigated the Apollo spacecraft fire that killed three astronauts. He was later appointed director of the Apollo program and led the team that redesigned the Apollo spacecraft.

    Borman with numerous honors

    After leaving the Air Force in 1970, Borman became a consultant to Eastern Airline before holding various senior positions there and becoming its president. Bormann also served as a special envoy to the U.S.President.

    He received numerous awards, including the Congressional Space Medal of Honor. In 1990, Borman was inducted into the International Aerospace Hall of Fame and in 1993 into the US Astronaut Hall of Fame. Borman was married and had two sons and four grand children.

    Rocket from SpaceX

    “Starship” test flight successful on fourth attempt

    Transport to the Moon – and Mars?

    It worked on the fourth attempt: the giant rocket”Starship” has returned to Earth after a successful test flight. Itis expected to fly humans to the moon for the first time in 2026.

    The US space company SpaceX has successfully completed a test flight of its mega rocket “Starship” for the first time on its fourth attempt. The 120-meter-long spacecraft successfully splashed down in the Indian Ocean after a flight around the Earth for about an hour, as the company of multi-billionaire Elon Musk announced on the online service X.

    The rockets were lost during the first three test flights.”Splash down confirmed! Congratulations to the entire SpaceX team for an exciting fourth test flight of Starship!” the company said. The landing on the sea was also shown in a live stream.

    The rocket was launched in the morning (local time) from the Starbase spaceport in Boca Chica, Texas. According to SpaceX, the focus of the fourth test was on the controlled and safe return of the rocket to Earth and its reusability.

    SpaceX hopes to eventually take people to Mars with the”Starship” rocket. Before that, the new mega-rocket, which is supposed to be completely recyclable, will transport NASA astronauts to the moon.

    In 2026, the US space agency plans to use the”Starship” rocket to bring astronauts to Earth’s satellite for the first time in more than 50 years. For the mission called “Artemis 3”, the new rocket must be able to be brought into orbit and then refueled with several “Starship tankers” for its long journey.

    Three failed attempts

    During its first test flight in April 2023, the”Starship” rocket staggered and blew up in the air. During the second attempt in November 2023, the rocket exploded after separating from the propulsion stage after a successful launch.

    The third test flight took place in mid-March 2024. The rocket traveled a significantly longer distance than in the first two test flights before SpaceX reported its “loss” 49 minutes after launch.

    Explosions shortly after takeoff

    Second Starship test also ends prematurely

    One day, “Starship” rockets will take people to the moon and beyond. But SpaceX’s second test flight also went differently than hoped: Both rocket stages exploded shortly after launch.

    The largest rocket system ever built in the history of space travel was also unable to complete a second test flight. The unmanned 120-meter-high rocket took off from SpaceX’s Starbase spaceport in the US state of Texas. However, eight minutes later, both rocket stages exploded, SpaceX reported.

    A few minutes after launch, the two rocket stages separated. The lower rocket stage exploded shortly afterwards and did not land in the Gulf of Mexico as planned. The upper rocket stage continued to fly for a while and apparently also reached space for the first time. But then contact was lost.”We lost the data from the second stage – we think we may have lost the second stage,” said moderator John Insprucker in the SpaceX live stream. Itwas actually supposed to climb to an altitude of 240 kilometers and land in the Pacific near Hawaii after about an hour and a half.

    A SpaceX spokesman described the test flight as an”incredibly successful day” despite the “rapid unplanned disassembly” of the propulsion stage and rocket.

    First test flight ended after four minutes

    In mid-April, a “Starship” set off for an unmannedtest launch for the first time – and four minutes later, it staggered and exploded and broke apart. Billionaire Elon Musk, whose company SpaceX developed and built the “Starship” rocket system, did not see this as a setback, but rather as an “exciting test launch”.

    The US aviation authority FAA investigated the incident and demanded numerous improvements as a condition for the next test launch. SpaceX announced that it had implemented numerous improvements and changes. The FAA then allowed further tests.

    The second test launch with a “Starship” was originally scheduled for Friday, but was then postponed to this Saturday. Musk stated that a drive unit on a control surface had to be replaced before hand.

    SpaceX wants to go to Mars with “Starship”

    The “Starship” – consisting of the approximately 70-meter-long “Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the moon and Mars. The system is designed so that the spaceship and rocket can be reused after returning to Earth. The system, which is around 120 meters long in total, is expected to be able to transport well over 100 tons of cargo in the future. NASA wants to use the”Starship” to bring astronauts to the moon on the “Artemis” mission. SpaceX hopes to reach Mars one day.

    “Starship” rocket explodes during first test flight

    The first test flight of the longest “Starship”rocket system ever built ended with an unplanned explosion just minutes afterlaunch. The space company SpaceX nevertheless rated the test as a success.

    The largest and most powerful rocket system ever built,”Starship”, broke apart during its first test flight just minutes after launch.

    Elon Musk’s private space company SpaceX nevertheless rated the test a success. “As if the flight test wasn’t exciting enough, there was a rapid unplanned breakup of Starship before stage separation,” SpaceX said on Twitter. “In a test like this, success depends on what we learn,and today’s test will help us improve Starship’s reliability.”

    The team will now extensively evaluate the test data and work towards the next test launch.

    Test should have lasted another 90 minutes

    The “Starship” initially lifted off from the earthin Brownsville, Texas, with a lot of fire and smoke. A few minutes later -shortly before the two rocket stages separated  the live images showed how the rocket system began to tumble and finally burst into flames.

    The test was actually supposed to last another 90 minutes and end with the landing of both stages. During the test flight, the first rocket stage was supposed to fall into the Gulf of Mexico after separation. The”Starship” space shuttle was supposed to fly to an altitude of more than 150 kilometers, orbit almost the entire Earth and then fall into the Pacific.

    “We wanted to make it to the end, but we’re happy we made it this far,” SpaceX officials said.

    “Starship” is intended to enable manned missions to the Moon and Mars

    The test flight had been eagerly awaited.”Starship” – consisting of the approximately 70-meter-long”Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the Moon and Mars.

    The first test launch had actually been planned for Monday, but was then postponed until Thursday due to a problem with a valve.  Approvalf or the test flight was only granted by the US aviation authority last Friday.

    The “Starship” system is designed so that the spacecraft and rocket can be reused after returning to Earth. The upper stagehad already been tested several times on its own, but this was the first joint flight test of the entire rocket system.

    System total length around 120 meters

    On paper, Starship, with a total height of 120 meters, is almost twice as powerful as the “Space Launch System” of the US space agency NASA, which launched the Orion capsule on an unmanned flight around the moon in November.

    The “Starship” system, which is around 120 meters long in total, will be able to transport well over 100 tons of cargo in the future. The rocket will be powered by 33 “Raptor” engines that run on liquid methane and liquid oxygen. The system will be able to be refueled in space.

    NASA has selected “Starship” to fly humans to the moon for the first time in more than 50 years on the Artemis 3 mission at the end of 2025. The rocket should even be able to make flights to Mars and beyond.

    Launch of “Starship” rocket planned

    SpaceX plans to launch the largest rocket ever built today. The “Starship” is set to embark on a test flight after the first attempt failed in April. NASA is relying on the system for future moon missions.

    Will it work this time? This question accompanies the launch of the “Starship” rocket system planned for today. Today, the world’s largest rocket ever built, with a height of 120 meters, is scheduled to takeoff from the SpaceX space port in Boca Chica in southern Texas. It will then almost orbit the earth and finally land in the ocean off Hawaii.

    SpaceX sent a noise warning to residents of the surrounding Cameron County. It is possible that a “loud noise” will be heard from the 33 engines. What one hears on site will also depend on the weather and other conditions. There is a time window of just 20 minutes for the launch. The test flight was actually planned for Friday, but had to be postponed due to technical problems.

    First test flight failed

    A first test flight of the giant rocket in April failed spectacularly. The “Starship” got out of control due to several engine failures and had to be blown up just minutes after takeoff. Ash rained down miles away and a fire broke out in a park.

    Environmentalists filed a lawsuit against the US Federal Aviation Administration (FAA), which had approved the launch. The FAA investigated the incident and demanded numerous improvements as a condition for another test launch. SpaceX announced that it had implemented many improvements and changes.

    SpaceX boss Elon Musk called the failed first test not a setback, but an “exciting test launch.”

    NASA hopes for “Starship”

    The “Starship” – consisting of the approximately 70-meter-long “Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the Moon and Mars and to be able to transport well over 100 tons of cargo. The system is designed so that the spaceship and rocket can be reused after returning to Earth. NASA wants to use the rocket system for its “Artemis” moon mission in 2025. It is considered unlikely that the schedule can be met.

    Maiden flight is postponed

    It is the largest rocket in the world to date: US billionaire Elon Musk’s “Starship” is intended to make transportation into space cheaper. But the first test flight had to be postponed due to technical problems.

    The first test flight of the “Starship” rocket from tech billionaire Elon Musk’s private space company SpaceX had to be canceled at the last minute. Shortly before launch, there was a technical problem, the company announced during the live broadcast scheduled for the launch.

    Musk himself wrote on Twitter that a pressure valve had apparently frozen. According to SpaceX, a new launch attempt will not take place until Wednesday at the earliest.

    The 120-meter-high heavy-lift rocket was supposed to take off in the afternoon Central European Time from the company’s own spaceport, Starbase, in Boca Chica, Texas. The “Starship” is considered the largest and most powerful rocket ever built. It is planned that it will fly for about 90 minutes after launch.

    Both rocket stages in use for the first time

    In the future, the rocket system will be able to transport well over 100 tons of cargo. SpaceX’s new rocket consists of two stages: the”Super Heavy” propulsion stage with 33 engines and the upper stage, the actual “Starship”, with a further six engines. It also serves as a spaceship for cargo and passengers. The “Starship” is designed so that the spaceship and rocket can be reused after landing on Earth.

    During the test flight into Earth orbit, both rocket stages will be used together for the first time. The first rocket stage will separate from the space craft just under three minutes after launch and then sink into the sea. The second stage will continue its flight, orbit the Earth once and then land in the sea off Hawaii.

    Plans for flights to the Moon and Mars

    There had been repeated set backs in tests in the past, with prototypes of the rocket exploding several times. It was only in February that almost all of the engines of the first “Starship” stage were successfully ignited for the first time.

    There are high hopes for the giant rocket. The US space agency NASA wants to use it to bring astronauts back to the moon. It has also commissioned the private company to develop a commercial lunar lander. This is intended to bring astronauts to the Earth’s satellite. The contract is worth 2.89 billion dollars.

    The moon mission is considered a stepping stone for the first manned flight to Mars. The USA is the only country to date to have brought twelve astronauts to the Earth’s satellite with the Apollo missions.

    Musk wants to reduce the cost of space travel

    But Elon Musk also has a declared economic goal: transport should become cheaper. Unlike the test flight, in future both rocket stages will return to the launch pad. If this succeeds, “Starship” would be SpaceX’s first rocket that could be completely reused. This could make rocket launches even cheaper.

    Musk hopes that an orbital flight of this spacecraft could cost less than ten million dollars in the future. Customers currently pay around 67 million dollars for a launch of SpaceX’s “Falcon 9” rocket. Musk’s private space company is already a leader among commercial providers.

    SpaceX worth an estimated $137 billion

    Musk, who was involved in the founding of the payment service provider PayPal and is the head of the electric car manufacturer Tesla, founded SpaceX in 2002. The space company was created with the goal of making cheaper space flights possible.

    SpaceX is one of the largest private companies in the world. The space company has an estimated value of $137 billion.

  • NASA Decides to Bring Starliner Spacecraft Back to Earth

    The manned “Starliner” space capsule has reached its destination and docked with the ISS. Due to a series of problems, the project is years behind schedule – and this time, too, not everything went smoothly.

    The “Starliner”, carrying two NASA astronauts, has docked at the International Space Station ISS – but on the way there it once again struggled with numerous technical problems. A helium leak had already been discovered before the launch, but according to NASA’s assessment it did not pose a danger to the flight.

    However, two more leaks occurred during the flight. Later, four of the capsule’s 28 engines failed. NASA astronauts Butch Wilmore and Suni Williams managed to restart three of them, so there were enough safety reserves for the rest of the flight.

    The “Starliner” missed the first opportunity to dock and circled next to the station for another hour. It was only on the second attempt that the spacecraft managed to dock with the ISS. All three leaks are now being monitored, valves have been closed and the spacecraft is still stable, NASA said.

    Project is still behind schedule

    The “Starliner” set off on its first manned test flight on Wednesday after years of delays. The test flight had previously been postponed several times due to various technical problems with the spacecraft and rocket. In May 2022, the “Starliner”, developed and built by the US aircraft manufacturer Boeing, completed its first successful unmanned flight to the ISS and spent four days there – an important test for the spacecraft.

    In the future, it will transport astronauts to the ISS as an alternative to SpaceX’s “Crew Dragon” space capsule. However, due to a number of problems, the project is far behind schedule, while the “Crew Dragon” has been regularly bringing astronauts to the ISS for several years.

    Two astronauts on board

    “Starliner” sets off on first manned test flight

    After years of delays, the “Starliner” space capsule from the US company Boeing has launched into space with humans for the first time. The rocket took off from the Cape Canaveral spaceport and is on its way to the ISS.

    After many delays, the crisis-ridden spacecraft “Starliner” has set off on its first manned test flight to the International Space Station ISS. With NASA astronauts Barry Wilmore and Suni Williams on board, the spacecraft launched from the Cape Canaveral spaceport in the US state of Florida using an “Atlas V” rocket, as live images from the US space agency NASA showed.

    “Here we go, Atlas! Here we go, Starliner! Good luck, Butch and Suni!” came the call from the control center shortly before the rocket took off.

    “Two brave NASA astronauts are on their way on this historic first test flight of a brand new spacecraft,” said NASA boss Bill Nelson. Boeing’s “Starliner” marks a new chapter in American research. “Manned spaceflight is a bold task – but that’s why it’s worth it. It’s an exciting time for NASA, our commercial partners and the future of exploration.”

    Test flights previously postponed several times

    The test flight had previously been postponed several times due to various technical problems with the spacecraft and rocket, and had been canceled twice in recent weeks just minutes before a scheduled launch. Among other things, there were problems with a helium leak and a faulty computer system on the ground.

    The “Starliner” – a partially reusable spacecraft that consists of a capsule for the crew that is around three meters high and a service module and, unlike the “Crew Dragon”, lands on the ground rather than on water – is expected to arrive at the ISS on Thursday. Wilmore and Williams are expected to stay there for around a week.

    Williams, 58, and Wilmore, 61, have each been in space twice before. “It almost feels unreal,” Williams said in advance of her return to the ISS. The “Crew Dragon” space capsule currently docked at the ISS was moved to another docking station for the arrival of the “Starliner.”

    Spare part is on board

    In short notice, the “Starliner” also took an urgently needed spare part for the ISS: a urine pump that helps convert the astronauts’ urine into drinking water broke earlier than expected, NASA said. A quickly obtained spare part has now been packed into the “Starliner.”

    In May 2022, the Starliner, developed and built by US aircraft manufacturer Boeing, completed its first successful unmanned flight to the ISS and spent four days there – an important test for the spacecraft.

    During the first test without a crew in 2019, the “Starliner” did not even arrive at the ISS. In the future, it will transport astronauts to the ISS as an alternative to SpaceX’s “Crew Dragon” space capsule. However, due to a number of problems, the project is far behind schedule, while the Crew Dragon has been regularly bringing astronauts to the ISS for several years.

    Boeing space capsule

    “Starliner” reaches ISS for the first time

    The crisis-ridden Boeing space capsule “Starliner” has made it to the International Space Station ISS for the first time. It docked during the night. Several previous attempts had failed.

    After launching from the Cape Canaveral spaceport, Boeing’s “Starliner” has arrived at the ISS for the first time. The unmanned spacecraft docked at the human outpost around 400 kilometers above Earth during the night after a flight lasting just over a day.

    The critical test for the crisis-ridden “Starliner” has thus been successful for the time being. The project is more than two years behind schedule due to a number of problems.

    Project is well behind schedule

    Boeing’s spacecraft will be used as an alternative to SpaceX’s “Crew Dragon” space capsule to transport astronauts to the ISS. This should have happened a long time ago, but during an initial test in December 2019, the spacecraft did not make it to the ISS – partly because of a problem with the automatic ignition of the engines.

    Last year, the mission was postponed several times – and then the “Starliner” finally had to go back to the workshop due to valve problems.

    NASA speaks of “milestone”

    NASA described the launch as a “milestone.” On board are over 300 kilograms of cargo, including supplies for the ISS crew. The “Starliner” is a partially reusable spacecraft consisting of a capsule for the crew and a service module that can be launched with a rocket. Once approved for the transport of people, it can carry up to four crew members to the ISS.

    Test flight to ISS

    Boeing’s “Starliner” misses orbit

    It was supposed to be good news for the struggling US aircraft manufacturer Boeing – but the test flight of its “Starliner” spacecraft to the ISS failed. The capsule missed the correct orbit.

    The unmanned test flight of Boeing’s new Starliner space capsule to the International Space Station (ISS) has failed after a serious error.

    After the launch of a rocket with the unmanned space capsule on board at the US space station Cape Canaveral, a problem arose with the automatic ignition of the engines due to a timer that was running incorrectly. As a result, the spacecraft consumed so much fuel that it could no longer reach the orbit of the ISS.

    Early return on Sunday

    The spacecraft was scheduled to dock with the ISS for the first time on Saturday and return to Earth in eight days. On board is a doll called “Rosie” as an astronaut dummy, as well as Christmas treats and gifts for the ISS crew. The capsule is now scheduled to be returned to Earth on Sunday and land in the US state of New Mexico.

    Boeing developed the “Starliner” for NASA. It is intended to bring American astronauts to the ISS in the future. NASA is currently dependent on Russian Soyuz rockets for manned missions. In 2011, it discontinued its own shuttle program after three decades. Since then, NASA astronauts have flown from the Russian space agency’s launch site in Kazakhstan. This cost the US space agency up to 86 million dollars (77 million euros) per flight.

    Strategy change in US space policy

    Under former US President Barack Obama, a change in strategy was introduced: instead of developing new rockets themselves, it was decided to hire private companies to do the work. Boeing and its competitor SpaceX subsequently received billions to develop manned space shuttles “Made in the USA”.

    Both US companies are two years behind schedule, but both now appear to be almost finished with development. The first manned flight of the capsule, which has a diameter of 4.50 meters, was previously planned for next year. However, final tests such as the “Starliner” flight are still necessary before their space shuttles can be approved for manned space travel.

    SpaceX test in March was successful

    SpaceX has already completed a successful test flight to the ISS in March with its space shuttle “CrewDragon”, which is intended for manned missions. Also on board was a dummy, named “Ripley” after the main character in the “Alien” films. The dummies are equipped with sensors to replicate the flight conditions for real people.

    The US aircraft manufacturer Boeing recently came under massive pressure due to the disaster surrounding its 737 Max jet. After two devastating crashes, the aircraft was banned from taking off. It is uncertain when it will be able to take off again.

    “Starliner” capsule on its way to the ISS

    Boeing’s troubled space program can finally report a success: A “Starliner” capsule launched on an unmanned test flight to the ISS. However, there were problems this time too.

    The space shuttle was initially launched at 6:54 pm Eastern Time from the spaceport at Cape Canaveral in the US state of Florida. From then on, the unmanned “Starliner” needed a total of 32 minutes to reach a safe orbit around the Earth. It was powered by an Atlas 5 rocket, which was finally successfully separated from the capsule in various stages.

    Two nozzles defective

    When positioning the “Starliner”, only ten of the twelve correction nozzles worked, Mark Stich of the US space agency NASA said at a press conference after the launch. For the others, it must be determined whether it is the instruments that are causing the commands to arrive. “At the moment, we can carry out all maneuvers with the remaining correction nozzles.”

    From Earth orbit to the ISS

    From orbit, the “Starline” is approaching the International Space Station (ISS) and initially docked – a maneuver that is not expected to take place until Saturday night. But for now, the successful launch was a relief for NASA and the manufacturer Boeing – the mission had been marked by failures up to now.

    In December 2019, a software error meant that a flight attempt had to be aborted before the docking maneuver with the ISS. Another launch last August had to be stopped during the countdown because valves did not open. It was only the third attempt that brought the hoped-for breakthrough.

    “There will be a few more sleepless nights.”

    Now the team is relieved. However, Boeing’s Mark Nappi warns against premature euphoria. The mission has four phases. “The first includes launch preparations and launch. And that went really well. Then the orbit and docking. And that will be another exciting day. So there will be a few more sleepless nights.”

    The aim is to use the “Starliner” to bring both astronauts and cargo to the space station in the future. Even though the current mission is unmanned, a test dummy called “Rosie the Rocketeer” sits in one of the chairs. There is also food and other goods on board for the ISS crew.

    US astronauts and cosmonauts reach ISS

    Space is one of the few areas in which the US and Russia have continued to work together since the beginning of the Ukraine war around two years ago. Three US astronauts and one cosmonaut have now arrived on the ISS space station.

    Three US astronauts and a Russian cosmonaut have arrived at the International Space Station ISS after their journey through space. The space capsule with the space station’s new occupants docked at the ISS.

    After the hatch was opened, the new arrivals Matthew Dominick, Jeanette Epps and Michael Barratt from the USA and Alexander Grebenkin from Russia were warmly welcomed by the current ISS crew, as could be seen on the video broadcast by the US space agency NASA.

    New crew to stay for half a year

    The crew members who have now arrived will replace four of the seven current ISS occupants. After a transition phase of a few days, the four crew members from the USA, Denmark, Japan and Russia will return to Earth.

    The new crew will work on the ISS for around six months, and various medical experiments are planned, including those involving stem cells. For Epps, Dominick and Grebenkin, this is their first space flight, and Barratt will be visiting the space station for the third time.

    Success for space company SpaceX

    As in the past, tech billionaire Elon Musk’s private space company SpaceX carried out the mission to replace personnel on the ISS space station. The so-called “Crew-8” took off on Sunday on board a “Crew Dragon” capsule with the help of a Falcon 9 rocket from the Kennedy Space Center in the US state of Florida.

    The flight had been postponed several times. It had originally been planned for February 22nd. This was the second time since the start of Russia’s war of aggression against Ukraine and the resulting tensions between the USA and Russia that astronauts from both countries went into space together. Space is one of the few areas in which both countries still work together.

    Russia and USA continue joint flights to the ISS

    The International Space Station is one of the few projects on which the USA and Russia are still working together. Both sides have now signed an agreement to continue joint flights to the ISS until 2025.

    Russia and the USA have agreed to extend their agreement for joint flights to the International Space Station ISS. In July and December, both sides signed two additional agreements to continue the cross-flights until 2025, the Russian space agency Roscosmos announced.

    The aim is to maintain the reliability of the ISS’s operations, Roscosmos explained. This will also ensure that at least one NASA astronaut and one Russian cosmonaut are on the station at any one time.

    In the so-called cross-flights, a US astronaut flies to the ISS as part of the crew of a Russian spacecraft and vice versa. In September, two Russian cosmonauts and a US astronaut flew to the ISS on board a Russian Soyuz capsule from the Kazakh spaceport Baikonur.

    Russia withdraws from ISS

    The ISS, which has been orbiting the Earth since 1998, is one of the few areas in which Russia and the USA are still cooperating even after the start of the Russian war of aggression in Ukraine in February 2022. Its operation was originally planned until 2024, but the US space agency NASA does not want to decommission it until 2030. However, Roscosmos had already announced in July 2022 that it would withdraw from the project after 2024. Instead, Moscow is planning its own space station.

    President Vladimir Putin announced in October that the first module for the Russian space station would be launched into space in 2027. At the same time, he said that despite the recent setbacks, he would stick to the previous space program goals, including above all the Russian moon program.

    Russia plans to leave the International Space Station (ISS) in 2024. This was announced by the new head of the space agency, Borissov. Cooperation with the USA has been repeatedly called into question in the course of the Ukraine war.

    Russia plans to exit the International Space Station ISS after 2024. This was stated by the new head of the Russian space agency Roscosmos, Yuri Borisov, at a meeting with Russian President Vladimir Putin.

    Tensions between Moscow and Washington

    “Of course, we will fulfill all our obligations to our partners, but the decision to withdraw from this station after 2024 has been made,” said Borisov. The former Deputy Prime Minister Borisov has been the head of the Russian space agency since mid-July, succeeding agency head Dmitry Rogozin.

    Rogozin had recently repeatedly questioned cooperation with the USA in light of political tensions between Moscow and Washington in the wake of the war in Ukraine. After the start of the Russian offensive in Ukraine, the West imposed a series of sanctions against Russia. However, cooperation continued on the ISS.

    Borissov said that construction of a Russian space station would begin before the exit. The ISS had recently become old and had repeatedly caused a stir due to breakdowns. Rogozin had previously not ruled out the possibility of detaching the Russian module from the ISS and continuing to operate it independently. He also hinted at the possible use of the station for military earth observation.

    NASA: No official notification yet

    The US space agency, NASA, said that Roscosmos had not yet officially communicated the decision. “We have not received any official notification from our Russian partners,” said Robyn Gatens, the ISS director based at NASA headquarters. She also made it clear that the US had no interest in ending its cooperation with Russia: the Russians, like all other parties involved, have always been good partners, and they want to continue that as agreed until the end of this decade.

    US astronaut Kjell Lindgren spoke from the ISS itself: The crew has not yet received any official notification, he said. However, they will continue to work together successfully on the ISS to achieve scientific results and ensure the safety of the crew and the space station.

    Following the war of aggression against Ukraine ordered by Kremlin leader Putin, Western countries have also imposed sanctions on the Russian space industry. The ISS is a joint project of the USA, Canada, Japan, the European Space Agency (ESA) and Russia. It is divided into a Russian and a US segment.

    Moscow questions cooperation with ISS

    Russia still has plans to work on the International Space Station (ISS) until 2024. What will happen after that is unclear. Russia could detach its module and use it for military earth observation.

    In view of the war in Ukraine, Russia has questioned its cooperation in the operation of the International Space Station (ISS). “The decision on the fate of the ISS will depend largely on the situation in and around our country,” said the head of the Russian space agency, Dmitri Rogozin, to the Tass news agency. In the event of an exit, however, Moscow will inform its partners a year in advance.

    Russia’s participation in the station is provisionally planned until 2024. Previously, Rogozin had not ruled out the possibility of disconnecting the Russian module from the ISS and continuing to operate it independently. He also hinted at a possible use of the station for military earth observation.

    Russian space travel also sanctioned

    The Russian government also wants to decide by mid-May whether it will continue to take Western astronauts to the ISS on its spaceships. Following the war of aggression against Ukraine ordered by Kremlin chief Vladimir Putin, Western countries have also imposed sanctions on the Russian space industry.

    Russia’s war and its consequences in space

    The ISS was initiated as an international peace project – but now Russia is waging war. What does that mean for the future of the space station? And what for the seven astronauts on board?

    Four Americans, two Russians and one German – that is the current crew on the International Space Station ISS. How is the international crew dealing with the war in Ukraine? And how is the German ESA astronaut Matthias Maurer, who flew to the ISS in November, doing? A press conference at which the astronaut actually wanted to answer journalists’ questions today was cancelled by the ESA.

    “We get along well.”

    Instead, Josef Aschbacher, the ESA Director General, provides information and explains that Matthias Maurer is doing well. In an interview with tagesschau.de, he talks about a phone call he had with him:

    Mr Maurer says: ‘Don’t worry about me. We get on well.’ But he says that of course they in the space station also see what is happening here on the ground. That is very worrying. One message he wants to convey is that from the perspective of space you can see how fragile our beautiful planet is. It is incomprehensible that war is being waged here.

    ISS planned a peace project

    The ESA boss has left open what will happen to the International Space Station as a whole. The space station is operated by NASA, the Russian space agency Roscosmos and the European Space Agency (ESA), among others. “The system is very closely intertwined, we are dependent on each other,” Aschbacher explains. “That was exactly the purpose of the ISS when it was built, that it was a peace project between different nations, peoples and cultures and that worked very well for more than 20 years. Now we have a new situation.”

    He does not go into more detail, but cooperation seems to be assured, at least in the short term. An immediate withdrawal by the Russians could be problematic, explains former astronaut Ulrich Walter: “That would be difficult because a central module of the space station, a kind of control module, is from the Russians. So if the Russians were to withdraw immediately, we would have relatively big problems. But we don’t need to worry about that at the moment, because the Russians have not said they are withdrawing.”

    Connected in space

    The example of the ISS shows how interconnected the cooperation between states in space is. SWR science editor Uwe Gradwohl gives examples: “American rockets fly with Russian engines. A company based in the USA markets the launches of the Russian Proton carrier rocket. The first stage of the Antares rocket is built in Ukraine, with Russian propulsion and launches from Florida.”

    Since the end of the Cold War, ESA has always been careful to carry out joint projects with many players, be it the Americans, Russians or the Chinese. ESA astronaut Maurer took part in survival training for astronauts in China and flew to the ISS in a space capsule from the US company SpaceX. His predecessor Alexander Gerst launched into space with the Russians from the Baikonur Cosmodrome in 2014 and 2018.

    Sanctions against Russia also affect Mars mission

    Now states and space agencies like the ESA have to reorganize themselves as the sanctions against Russia take effect. In concrete terms, the European-Russian Mars mission “ExoMars” is currently in jeopardy. This autumn, a European Mars rover was supposed to launch from Baikonur on board a Russian Proton rocket.

    “The Mars rover is an ESA project, but it also contains Russian hardware. And without the Russian landing platform, the rover cannot roll onto the surface of Mars,” explains science editor Gradwohl. Replacing the Russian part of the project with European developments is not possible any time soon. “Even the time until the next launch opportunity to Mars in 2024 is probably too short.”

    “Chinese are now better than the Russians”

    The Crimean crisis in 2014 already had long-term consequences for space travel. At that time, for example, the US Congress decided that US military and government payloads could no longer flow into space on rockets equipped with Russian rocket engines. However, a very long transition period until the end of 2022 was set.

    The question of which alliances will form in space in the future is exciting. For example, are Russia and China getting closer? Former astronaut Walter does not believe that: “The Chinese have been copying Russian space technology since the 1960s and in the 1990s improved Russian technology to such an extent that they are now better than the Russians.” The Russians have always lacked money – unlike the Chinese, who have a lot of money. “They are massively expanding space travel. They are now really superior to the Russians, and in some cases even to the Americans, in space travel. They do not need partners,” says Walter.

    ESA astronaut Matthias will probably return from the ISS in May – again on board a SpaceX capsule. Before that, his US astronaut colleague Mark Vande Hei was scheduled to make the return flight. He flew into space on board a Soyuz capsule last year. The plan was for him to fly back with the Russians. It will be interesting to see how this story ends.

    NASA cancels test flight with “Starliner” spacecraft

    The “Starliner” was supposed to transport astronauts long ago, but the project is far behind schedule. The first manned test flight has now had to be postponed again.

    The eagerly awaited launch of the first manned Boeing “Starliner” rocket into space has had to be postponed due to technical problems. The test flight, which had already been postponed several times, will now be rescheduled for Friday at the earliest, Boeing announced.

    The flight was aborted less than two hours before the evening countdown (local time) because a problem with a valve in the second rocket stage could not be fixed in time, NASA said in its live stream.

    The CST-200 “Starliner” was to be launched on an “Atlas-V” rocket from the United Launch Alliance, a joint venture between Boeing and Lockheed Martin, from the Kennedy Space Center of the US space agency NASA in Florida and bring two astronauts to the International Space Station ISS.

    The two NASA astronauts, Barry Wilmore and Suni Williams, left the “Starliner” capsule and returned to their crew quarters at the Cape Canaveral spaceport.

    New start attempt on Friday

    The new launch date is now set for Friday. The “Crew Dragon” docked at the ISS and moved to another docking station for the arrival of the “Starliner.”

    In May 2022, the “Starliner”, developed and built by the US aircraft manufacturer Boeing, completed its first successful unmanned flight to the ISS and spent four days there – an important test for the spacecraft.

    Prestige project for Boeing

    After a series of crises in commercial aviation, Boeing is already facing intense public scrutiny and a comprehensive review of its quality control. With the “Starliner”, the company wanted to send a signal of success.

    In order to save the program, which is years behind schedule, the planned costs have so far been exceeded by more than $1.5 billion. While Boeing is struggling, Tesla boss Elon Musk’s SpaceX has become a reliable partner for NASA in transporting space.

    Three Americans and one Russian launched to the ISS

    For the second time since the attack on Ukraine, astronauts from Russia and the USA have flown into space together. On board the ISS, they will work together on research projects and grow “organoids.”

    After several postponements, three US astronauts and a Russian cosmonaut have launched on a six-month mission on the International Space Station ISS. A “Falcon 9” rocket from tech billionaire Elon Musk’s private US space company SpaceX lifted off late Sunday evening (local time) from the Kennedy Space Center in Florida.

    A launch attempt planned for the previous day had been cancelled due to strong winds. This was not the first postponement of the flight: the launch had originally been planned for February 22nd.

    With the start of the mission, astronauts from both countries flew into space together for the umpteenth time, despite the Russian war of aggression against Ukraine and the resulting immense tensions between the USA and Russia.

    Space flight for research purposes

    NASA astronauts Matthew Dominick, Michael Barratt, their colleague Jeanette Epps and cosmonaut Alexander Grebenkin are expected to arrive at the ISS on Tuesday morning (CET). For Epps, Grebenkin and Dominick, this will be their first space flight, and Barratt will be visiting the ISS for the third time.

    There are currently seven crew members on board the space station. In a few days, four of the existing crew members, from the USA, Denmark, Japan and Russia, will return to Earth.

    The new crew wants to conduct experiments on the International Space Station with stem cells and grow so-called organoids. Such tiny organs can be used to research organ diseases, among other things.

    First private space mission launched

    The private space company SpaceX has sent paying guests to the International Space Station for the first time. Three businessmen and a former NASA astronaut are scheduled to dock with the ISS early Saturday morning.

    The first private space mission, Axiom AX1, took off for the International Space Station ISS at exactly 11:17 am local time. The launch of the SpaceX “Falcon 9 rocket” with the crew capsule “Dragon” at its top from the Kennedy Space Center in Florida went smoothly under a bright blue sky. As planned, the capsule with the four astronauts on board separated from the rocket shortly after takeoff, which then successfully returned to Earth.

    20 hours through space

    The four astronauts – three businessmen from the USA, Canada and Israel, as well as a former NASA astronaut – are now on a 20-hour journey through space to the International Space Station. They are scheduled to dock there early Saturday morning and be brought on board the ISS after the two-hour maneuver.

    The four private citizens will then stay on the ISS for eight days and carry out various experiments – including cancer research, the aging process of cells and everything related to human life in space.

    700 hours of training

    It is the first space mission by the private Texas company Axiom in collaboration with SpaceX and NASA. Axiom plans to add its own module to the ISS in two years to enable private space travel and research. Axiom is planning well over 100 such missions in the coming years.

    The three private astronauts trained around 700 hours for their eight-day trip into space and each paid 50 million US dollars.

    US-Americans and Russians fly to the ISS

    Even in times of political tension, we can achieve something together, says US astronaut Rubio before launching to the International Space Station. He did not speak to his two Russian crew colleagues about the war in Ukraine beforehand.

    Despite the Russian war of aggression in Ukraine, an American and two Russians are flying together to the International Space Station (ISS) today. The three are scheduled to take off from the Russian spaceport Baikonur in the Republic of Kazakhstan on board a Soyuz rocket this afternoon. This was announced by the Russian space agency Roscosmos.

    It is the first joint flight since the start of the Russian war of aggression in Ukraine on February 24. In October, Russian cosmonaut Anna Kikina is also scheduled to fly to the ISS from the USA. Both flights are intended to give hope that cooperation on the ISS will continue. Russia recently announced that it wants to withdraw from the project after 2024. No exact date has been announced so far.

    Supplies on board

    In addition to the crew, there are also 120 kilograms of supplies for the ISS on board the Soyuz capsule – for example hygiene and medical items, scientific equipment and the astronauts’ personal belongings. For the Russian cosmonaut Sergei Prokopyev, this is the second flight to the ISS, and for his colleague Dmitri Petelin and US astronaut Frank Rubio, this is the first time.

    “The program is quite full – in addition to the rapid docking, five spacewalks are planned,” Prokopyev said before the start of the 68th ISS mission. 48 experiments are planned – including one with a 3D printer in zero gravity. Various figures are to be printed out of different materials.

    The three do not talk about the war

    “It’s an honor for me to follow in the footsteps of previous astronauts,” said NASA astronaut Rubio. Space travel is an opportunity to achieve something together, even in times of political tension.

    Apparently the crew did not talk about the war beforehand. All three made it clear that they would miss their families most of all during the six months they would be on the ISS.

    The commander of the 67th expedition, Oleg Artemyev, the cosmonauts Denis Matveyev and Sergei Korsakov, the NASA astronauts Bob Hines, Kjell Lindgren, Jessica Watkins and the Italian Samantha Cristoforetti from the European Space Agency ESA are already on the ISS. They are orbiting around 400 kilometers above the Earth.

    NASA and Roscosmos cooperate

    Back to the ISS together

    US astronauts and Russian cosmonauts will fly to the International Space Station together again in the future – sometimes rockets will be launched in Baikonur, sometimes in Florida. This is intended to ensure their presence on the ISS.

    The space agencies of the USA and Russia are working more closely together again. From September onwards, flights to the International Space Station ISS will take place with a mixed crew in Russian and US capsules.

    Roscosmos said the agreement was in the interests of both countries and would deepen cooperation in the area of ​​the ISS. This would ensure that in the event of delayed or cancelled rocket launches, there would always be at least one Roscosmos and one NASA member on the ISS. NASA said the decision was made for security reasons and to ensure the “US presence in space.”

    Unusual timing?

    The agreement was made public shortly after Russian President Vladimir Putin replaced the head of Roscosmos. The previous head ofthe agency, Dmitry Rogozin, had taken a confrontational stance towards the West since the beginning of the Ukraine war. He threatened to withdraw from the ISS and described cooperation with the West as “impossible”.

    SpaceX now also flies cosmonauts

    In September, astronaut Frank Rubio is scheduled to launch from the Baikonur Cosmodrome together with Sergei Prokopyev and Dimitri Petelinon board a Russian capsule.

    Later, Russian cosmonaut Anna Kikina will be brought to theISS from Florida together with two US and one Japanese astronaut. She will be the first female cosmonaut to launch into space on a SpaceX rocket.

    US astronauts had already flown into space on Russian rockets from the Baikonur Cosmodrome in Kazakhstan by 2020. After that, mannedISS flights also took off from US soil again.

    Soyuz capsule launched for retrieval operation

    In mid-December, astronauts on the ISS discovered a leak in their return rocket – they have been stuck ever since. An unmanned Russian Soyuz capsule has now launched to the space station earlier than planned to bring it back to Earth.

    An unmanned Soyuz rocket has launched on a return mission to the International Space Station ISS. The Soyuz MS-23 took off from the spaceport in Baikonur, Kazakhstan, on Friday morning, as live images from the US space agency NASA showed. The spacecraft is scheduled to dock with the ISS on Sunday and bring three astronauts stranded on the space station back toEarth in September.

    The launch and entry into orbit of the spacecraft “went normally,” said the Russian space agency Roscosmos. Docking with the ISS is scheduled for February 26 at 1:01 a.m. GMT (2:01 a.m. CET).

    Soyuz MS-23 was originally scheduled to launch in March

    Originally, US astronaut Frank Rubio and the two Russian cosmonauts Sergei Prokopyev and Dmitri Petel in were supposed to fly back to Earth in the Soyuz MS-22 space capsule at the end of March. However, this was apparently damaged in mid-December by a small meteorite that caused a leak in the spacecraft’s cooling system. The mission of the three astronauts was extended by six months due to the breakdown. The Soyuz MS-23 space capsule was actually only supposed to launch to the ISS in mid-March with three successors for Rubio, Propkolev and Petelin on board. However, due to the damage to the MS-22 capsule, the launch was brought forward and now took place without a crew.

    The astronaut and the two cosmonauts will now spend almost a year on the space station. The capsules will usually remain docked at the station for the entire duration of the crew’s research mission in order to be able to return astronauts to Earth in the event of an emergency.

    In addition to the three affected crew members, there are four other people on board the ISS: the Russian Anna Kikina, the US astronaut Nicole Mann and her compatriot Josh Cassada, and the Japanese Koichi Wakata. They flew into space in October as part of the so-called Crew-5 mission on board a space capsule belonging to the US space company SpaceX, owned by multi-billionaire Elon Musk.

    Next week, the four participants of the subsequent Crew 6 mission – two US citizens, an Emirati and a Russian – will join them. They are also to arrive in a SpaceX capsule that is scheduled to launch from Florida on Monday. After a “handover” lasting several days, the Crew 5 astronauts will then be to return to Earth.

    A glimmer of hope for rapprochement?

    German astronaut Reinhold Ewald did not want to talk of a “rescue mission” in the case of the current Soyuz mission. “The crew is not stranded somewhere. Even if many systems fail, the Soyuz has ways and means of steering the capsule home.” In one way, the problems on Earth are greater than in space. “This is a major effort for Russian space travel. The Soyuz, which is being sent up unmanned, was intended for a crew. This is already a significant disruption to the sequence. Russia does not produce them in advance,” said the 66-year-old.

    Ewald, who flew to the Russian space station “Mir” in a Soyuz capsule in 1997 and conducted research for three weeks, pointed to the pragmatic ongoing cooperation between NASA and Roscosmos despite the war in Ukraine: “The situation is bad enough. This is perhaps a glimmer of hope that we are coming closer together again.” He does not believe that Russia will soon withdraw from the ISS program. “Moscow only recently sent up a science module. I think that Russia will use its investment in the station for as long as possible.”

    Since the start of the Russian offensive in Ukraine a year ago, space has been one of the few areas in which Russia and the USA still work together. The ISS has been in operation since 1998.

    ISS astronauts will not return until September

    In mid-December, astronauts discovered a leak in their capsule – and have been stuck on the ISS ever since. A new Russian space capsule is scheduled to launch next Friday to pick up the three men – and will not return until September.

    The Russian space agency Roscosmos plans to bring three astronauts stuck on the International Space Station ISS back to Earth in September instead of next month – one year after their departure.

    Roscosmos announced that the return flight with the Soyuz MS-23 space capsule is currently planned for September. Originally, the US astronaut Frank Rubio and the two Russian cosmonauts Sergei Prokopyev and Dmitri Petelin were supposed to fly back to Earth with the Soyuz MS-22 at the end of March.

    Micrometeorite probably caused the leak

    After a leak was discovered in the spacecraft’s cooling system in mid-December, the astronauts who have been stationed on the ISS since September are now to be picked up by the Soyuz MS-23. According to Roscosmos, the Soyuz MS-23 is scheduled to launch next Friday from the spaceport in Baikonur, Kazakhstan, but will not return to Earth until September.

    Both the US space agencies NASA and Roscosmos believe the leak was caused by a micrometeorite, a tiny particle of space rock that hit the capsule at high speed.

    According to Roscosmos, the astronauts have responded “positively” to the extension of their mission on the ISS, and the longer stay in space does not pose any danger to their health.

    No “Starliner” test flight again

    The aviation group Boeing has again postponed the test flight of the “Starliner” space capsule, which was planned for today. The company is also aiming for an important role in space travel – and is therefore under pressure to succeed.

    Boeing’s Starliner space capsule will not launch today either: The aviation company, which is also aiming for space, has cancelled the launch planned for 7:20 pm Central European Time. Boeing Space confirmed the media reports on Twitter. The reason is apparently problems with valves in the space capsule’s propulsion system. In a written statement, it says that the company is considering launching tomorrow.

    “We are disappointed by today’s events and the need to once again postpone the Starliner launch,” said John Vollmer, manager of Boeing’s Commercial Crew Program. Manned spaceflight is a complex, precise undertaking that does not forgive mistakes. “The Boeing and NASA teams are taking the time they need to ensure the safety and integrity of the spacecraft and to achieve the mission’s goals.”

    Keeping pace in a fast-growing, competitive industry

    Today’s launch was actually supposed to be an important step for the US aviation group to keep up with the growing competition in the commercial space travel business. In addition to the aviation group, Tesla CEO Elon Musk’s SpaceX, Amazon founder Jeff Bezos’ Blue Origin and British entrepreneur Richard Branson’s Virgin Galactic are also involved. The industry is growing rapidly.

    First attempt failed

    With the second launch attempt, Boeing wanted to get its space program back on track. The spacecraft was to be launched on an Atlas V rocket from the Cape Canaveral spaceport in Florida. 24 hours later, the “Starliner” was scheduled to dock with the International Space Station ISS. The space capsule was to bring 180 kilograms of cargo to the station, including supplies for the current ISS crew. “Starliner” would then have returned to Earth with around 250 kilograms of cargo, including air tanks.

    A first unmanned test flight of the “Starliner” capsule failed in 2019 due to software problems. The spacecraft was unable to reach the ISS at the time and returned to Earth prematurely.

    Most recently, the date for the second flight to the ISS, originally scheduled for last Friday, had to be postponed until today. The reason was an incident when the new Russian laboratory module “Nauka” docked with the ISS. However, the weather forecast was not ideal for today’s attempt either.

    Space business promises stable sales

    For Boeing, the success of the “Starliner” is of enormous importance because, in addition to the aviation division with its aircraft fleet, the defense, space and security division, in which the “Starliner” project is also located, is expected to ensure stable earnings for the group in the future. The division already generates around a third of the group’s revenues. And in contrast to the aircraft division, which saw dramatic declines last year due to Corona, Boeing came through the crisis well with tanker aircraft for the military and commercial satellites and services in this area.

    The “Starliner” project goes back to a decision made by the US space agency NASA ten years ago. NASA had discontinued its shuttle program due to high costs and after two accidents, was subsequently dependent on Russian rockets to get to the ISS. In order to become more independent from Russia again, the US government under then-President Barack Obama commissioned Elon Musk’s SpaceX company and the aviation giant Boeing to build space shuttles.

    First cargo, then astronauts

    SpaceX has already brought people to the ISS. In May 2020, two astronauts flew to the ISS for the first time on board the “Crew Dragon” capsule. In April 2021, four astronauts were on board for the trip to the station. As soon as the unmanned “Starliner” flight is successful, the Boeing capsule will also be able to transport people after further tests at the end of the year.

    NASA deliberately wants to be able to rely on several companies for its space flights. A recent dispute over a NASA contract for a lunar lander showed that other competitors are also putting out feelers for billion-dollar contracts in space travel: Blue Origin, with Bezos at the helm, wanted to snatch a contract away from SpaceX and even offered to cover development costs running into billions. A complaint by Blue Origin against the NASA contract to SpaceX with the US General Audit Office GOA was rejected a few days ago.

    Ten years after commissioning

    First manned Starliner flight to the ISS

    After years of delays, the Starliner space capsule from the US company Boeing is set to fly into space with humans for the first time. The rocket will launch early Tuesday morning from the Cape Canaveral spaceport.

    Ten years ago, then NASA chief Charles Bolden announced a program with which the USA would again conduct its own flights to the International Space Station. The idea was not to be dependent on Russia when it came to transporting people and goods to the ISS after the end of the Space Shuttle era.

    At this time, the US space agency did not want to develop and build the space shuttles itself, but commissioned the aircraft manufacturer Boeing and the company SpaceX owned by tech billionaire Elon Musk. From that point of view, Boeing seemed to have the advantage with its “Starliner” spacecraft. The company received an order worth more than four billion dollars, SpaceX 2.6 billion. But things did not go smoothly at Boeing, and there were crises, problems and delays.

    Difficulties with the tests

    During a first unmanned test in 2019, the Starliner did not even arrive at the ISS due to software problems. A second unmanned test flight was successful in 2022, but then numerous problems arose again, which further delayed a planned manned test flight.

    Now, on Tuesday morning at 4:34 a.m. Central European Summer Time, the first manned Starliner flight is scheduled to take off from the Cape Canaveral spaceport in the US state of Florida, with NASA astronauts Barry Wilmore and Sunita Williams on board. The capsule is to be launched into space by an Atlas V rocket developed by Boeing and Lockheed Martin.

    Boeing’s Starliner is a partially reusable spacecraft consisting of a crew capsule approximately three meters high and a service module, designed to carry four crew members and, unlike the Crew Dragon, lands on Earth rather than on water.

    “We are ready, the spaceship is ready.”

    The spacecraft is expected to arrive at the ISS on Wednesday, where Wilmore and Williams are expected to stay for about a week. “It almost feels unreal,” said 58-year-old Williams, who has already been on board the ISS twice, at a press conference beforehand. “We wouldn’t be here if we weren’t ready,” said her 61-year-old astronaut colleague Wilmore, who has also been in space twice. “We are ready. The spacecraft is ready and the team is ready.”

    SpaceX won the race

    Meanwhile, competitor SpaceX has long since overtaken Boeing. In 2020, its Crew Dragon successfully undertook its first manned test to the ISS, and since then astronaut transports have become routine. There are currently eight regular crew members on the space station, having arrived there with the Dragon.

    ISS crew

    Back on Earth

    After almost half a year in space, part of the crew of the International Space Station is back on Earth. The four astronauts, from the USA, Russia and Japan, landed in the sea off the coast of Florida.

    A space capsule has brought two women and two men back to Earth from the International Space Station ISS. On board a “Crew Dragon” of the private space company SpaceX, Nicole Mann and Josh Cassada from the USA, the Russian Anna Kikina and the Japanese Koichi Wakata landed in the sea off the coast of the US state of Florida, as live images from the US space agency NASA showed.

    “That was an amazing ride,” Mann radioed after landing. “We’re glad to be home again,” said the commander of “Crew-5” – the fifth NASA flight of a “Crew Dragon” spacecraft with crew. Around 20 hours earlier, the space capsule had undocked from the ISS after the crew had to wait several days for the return flight due to bad weather.

    First joint mission since attack on Ukraine

    The “Crew-5” was launched in October from the Cape Canaveral spaceport. It was the first joint launch of NASA astronauts and a Russian cosmonaut from American soil since the beginning of the Russian war of aggression against Ukraine.

    Also on board the ISS are the Russian cosmonauts Sergei Prokopyev and Dmitri Petelin as well as NASA astronaut Frank Rubio and the crew of “Crew-6”, which arrived just a few days ago and consists of the Americans Stephen Bowen and Warren Hoburg, the Russian Andrei Fedyaev and the Emirati Sultan al-Nijadi.

    The “Starliner” breakdown list has gotten longer

    Boeing does not only develop airplanes, but also spaceships. But the “Starliner” is particularly impressive due to its mishaps. The launch date for the first manned flight has been postponed again.

    The list of problems with the “Starliner” is long. In December 2019, it was supposed to fly to the International Space Station (ISS) for the first time without a crew. But the spacecraft never arrived at the ISS during this flight. Due to a software problem, the “Starliner” used up too much fuel too early, was unable to reach the ISS’s altitude and had to abort its flight after two days.

    The repeat of the flight in August 2021 ended even before it began. The humid weather in Florida had caused valves in the fuel lines to corrode, so that several of these valves were jammed before takeoff. The “Starliner” rolled from the launch site back to the workshop.

    It was only on the third attempt that the “Starliner” reached the ISS, two and a half years late. However, there was no crew on board for all of these flights. The first flight with people on board was actually planned for this summer – but the American space agency NASA and the Boeing company announced at a press conference that the launch would be postponed until spring 2024.

    Parachutes and faulty adhesive tapes

    What is not working and is preventing the one remaining, crucial qualification flight of the “Starliner” to the International Space Station are, on the one hand, fastenings for the parachutes and, on the other hand, adhesive tape. The fiberglass tape used to bundle electrical cables in the “Starliner” has proven to be flammable under very specific circumstances – this was not clearly noted in the aerospace engineers’ databases. The tape was installed in good conscience. Now it has to be removed.

    With the parachutes, it turned out that connections between the parachute lines and the return capsule could be overloaded.

    The design and material of the connection on the parachutes have already been changed. Whether this solves the problems will be determined during a test landing after being dropped from a high altitude, but this cannot be taken place until November. The unexpectedly flammable tape has already has been largely removed from the upper half of the space capsule or covered with non-flammable material. It will take a few more weeks to replace it in the lower half of the “Starliner.”

    “Starliner” still a loss-making business for Boeing

    If the next flight, the first with a crew, is successful, Boeing would be the second company alongside SpaceX to have the license to transport space personnel to the ISS. With an estimated 50 to 60 million dollars per passenger, this would be a lucrative business.

    But it remains to be seen whether Boeing will ever make a profit with the “Starliner” with these impressive ticket prices. NASA agreed a fixed price of five billion dollars with Boeing for the development of the “Starliner” – but the many breakdowns and delays have resulted in costs of more than six billion dollars for Boeing.

    NASA sticks to project

    Nevertheless, the project will not be abandoned – not only because the next, decisive flight could make the project a success and make all the mishaps forgotten. But also because NASA started its commercial spacecraft program more than ten years ago so that one day it would no longer be dependent on just one type of spacecraft from one manufacturer.

    Although SpaceX has been reliably bringing new crew to the ISS every six months, NASA believes that splitting this business between the two companies would make sense. Memories are still fresh of the years when the USA had no spaceships of its own to transport men and women to the ISS because the space shuttles were unusable and had to buy flight opportunities from the Russian space agency.

  • Spaceship costs Boeing hundreds of millions of dollars

    On the second completely private mission, a four-person crew set off for the International Space Station ISS for around ten days. For the first time, a woman from Saudi Arabia is on her way into space.

    A four-person crew flew to the International Space Station ISS on a rocket from the US company SpaceX. Former NASA astronaut Peggy Whitson, ex-racing driver John Shoffner, Saudi Arabian astronaut Rayyanah Barnawi and her colleague Ali Alqarni flew into space on board a “Dragon” capsule. They are expected to stay in the ISS for around ten days as part of the “Axiom-2” mission and carry out scientific experiments.

    Passengers have to pay themselves

    The trip is organized by the private space company Axiom Space in cooperation with NASA and SpaceX. According to media reports, the passengers will each pay around 50 million euros for the trip. Axiom Space, founded in 2016 in Houston, Texas, by former NASA manager Michael Suffredini and Iranian-American entrepreneur Kam Ghaffarian, sees itself as a future major player in the space market.

    In April 2022, “Axiom-1” was the first completely private mission to the ISS. At that time, the Spanish-American astronaut Michael López-Alegría, the US entrepreneur Larry Connor, the Israeli entrepreneur Eytan Stibbe and the Canadian investor Mark Pathy flew to the ISS. There had been individual space tourists on the ISS several times before, but the “Axiom-1” mission was the first completely private crew.

    First woman from Saudi Arabia in space

    Researcher Barnaui is the first woman from Saudi Arabia to take part in a space flight. She and Air Force pilot Ali al-Karni are also the first representatives of their country to do so since 1985, when a Saudi prince took off on a space flight aboard the space shuttle Discovery as part of NASA’s space program.

    The mission was led by former NASA astronaut Whitson, who now works for SpaceX. The former first commander of the ISS held her country’s record for the longest total stay in space at 665 days. Also on board was US businessman John Shoffner.

    “Starliner” astronaut flight postponed

    Astronauts are due to be launched to the ISS on Boeing’s “Starliner” a long time ago. Now the spacecraft will not be ready for its first manned flight until March next year at the earliest.

    Contrary to plans, the Starliner spacecraft will not take off for its first astronaut flight this year. The space capsule will not be ready for launch until early next year at the earliest. This was announced by the US space agency NASA and the US aircraft manufacturer Boeing at a press conference.

    Boeing’s program manager Nappi explained: “According to current plans, we expect to be ready with the spacecraft at the beginning of March.”

    However, this does not mean that there is already a launch date for early March. This will have to be set at a later date and will depend on the seats on the ISS and the capabilities of the United Launch Alliance company, which is supplying the rocket for the launch of the spacecraft.

    More glitches in final test phase

    New technical problems arose during the final test phase in the spring: with the parachute system and with an adhesive tape that proved to be flammable. This was actually intended to protect the capsule’s wiring.

    The first manned flight of the spacecraft was supposed to take place in July. This date had also been postponed several times before.

    So far, the only unmanned test flights have been successful

    Almost ten years ago, NASA commissioned Boeing to bring astronauts to the International Space Station (ISS) and back to Earth. This was achieved for the first time in 2022 – but only unmanned. At the time, NASA spoke of a “milestone”.

    Spaceship costs Boeing hundreds of millions of dollars

    NASA wants to use Boeing’s space capsule as an alternative to SpaceX’s Dragon space capsule for transporting to the ISS. The ongoing delays to the first flight of the “Starliner” have already cost Boeing an additional $257 million, as the US aircraft manufacturer announced last month The “Starliner” is a partially reusable spacecraft that

    consists of a capsule for the crew and a service module that can be launched with a rocket. Once approved for transporting people, it can transport up to four crew members to the ISS.

    “Starliner” back on Earth

    Test successfulBoeing’s “Starliner” was actually supposed to transport astronauts to and from the ISS a long time ago. But the project is plagued by a number of problems and is far behind schedule. Now a test flight has been successful for the first time.

    The “Starliner” spacecraft, developed by Boeing on behalf of the US space agency NASA, has completed its first successful test flight to the International Space Station ISS. After around four days at the ISS, the unmanned spacecraft undocked from the space station on Wednesday as planned and landed a few hours later in the US state of New Mexico, Boeing and NASA announced.

    It was an important test for the crisis-ridden “Starliner”. The spacecraft is intended to transport astronauts to the ISS in the future – as an alternative to SpaceX’s “Crew Dragon” space capsule.

    Project plagued by problems for a long time

    However, due to a number of problems, the project is far behind schedule. The “Starliner” is supposed to have been transporting astronauts to and from the ISS long ago, but during an initial test in December 2019, the spacecraft did not make it to the ISS – partly due to a problem with the automatic ignition of the engines.

    Last year, the tests were postponed several times – and then the “Starliner” finally had to go back to the workshop due to valve problems.

    NASA speaks of “milestone”

    NASA described the successful launch from Cape Canaveral Spaceport last week as a “milestone.” The spacecraft was carrying over 300 kilograms of cargo, including supplies for the ISS crew.

    The Starliner is a partially reusable spacecraft consisting of a capsule for the crew and a service module that can be launched by rocket. Once approved for human transport, it can carry up to four crew members to the ISS.

    Boeing’s “Starliner” launch aborted again

    Boeing’s “Starliner” was supposed to transport astronauts to the ISS long ago. But the project is plagued by crises. Now another launch has had to be canceled.

    A planned first manned test flight of the crisis-ridden spacecraft “Starliner” was aborted again shortly before takeoff. Around three minutes before the space capsule was due to take off from the Cape Canaveral spaceport in the US state of Florida, there was initially a temporary stop, then the launch was canceled entirely by the US space agency NASA.

    The reasons for this were initially unclear. There was probably an error in a computer system on the ground, the space agency said.

    “Starliner” as an alternative to the “Dragon” capsule

    The test flight was aborted shortly before takeoff about a month ago and has been postponed again and again since then. NASA astronauts Barry Wilmore and Suni Williams were actually supposed to set off for the International Space Station ISS on board the “Starliner” and stay there for about a week. According to NASA, there would be further launch opportunities on Sunday and on June 5 and 6. However, it was initially unclear whether one could be used.

    NASA plans to regularly use the “Starliner” alongside SpaceX’s “Dragon” capsule to transport astronauts to the space station. The “Dragon” capsule from multi-billionaire Elon Musk’s company has been transporting astronauts to the ISS on behalf of NASA for several years. The “Starliner” program, on the other hand, has suffered a number of setbacks in recent years.

    SpaceX mission

    Launch of giant rocket “Starship” postponed

    Things are not going well with the “Starship” of Elon Musk’s space company SpaceX: After the explosion during the first test flight, there are now also problems with the second attempt at the largest rocket in the world.

    The second test of the largest rocket system ever built in the history of space travel has been postponed from Friday to Saturday. A drive unit on a control surface – a so-called grid fin actuator – still needs to be replaced, wrote Elon Musk on the short message service X, formerly Twitter, which he owns. “That’s why the launch has been postponed to Saturday.” A grid fin is a component that is intended to stabilize the rocket when it returns to Earth and when it lands. It is brought into position by the actuator.

    SpaceX announced shortly afterwards that a 20-minute launch window for the “Starship” would open at 2 pm Central European Time on Saturday. Musk’s space company also sent a noise warning to the population of the surrounding Cameron County in the US state of Texas: There may be a loud noise from the 33 engines. What one hears on site also depends on the weather and other conditions. The second launch attempt was actually scheduled for Friday afternoon Central European Time.

    Requirements of the supervisory authority before second launch

    The “Starship” took off for an unmanned test launch for the first time in mid-April – and four minutes later, it staggered and exploded and broke apart. Billionaire Musk, whose company SpaceX developed and built the “Starship” rocket system, did not see this as a setback, but rather as an “exciting test launch”. Musk had always been stressed that another attempt with another “Starship” should follow as soon as possible.

    However, after an investigation, the US aviation authority FAA called for numerous improvements to be made before the next test launch could take place. SpaceX has now announced that it has implemented a number of changes. The FAA then gave the green light. The company has now “met all the necessary criteria, particularly with regard to safety, the environment and financial responsibility,” the authority explained.

    Environmentalists have nevertheless opposed the second launch attempt. “We fear that the second launch will once again cause significant environmental damage,” said Jared Margolis, a lawyer for the environmental organization Center for Biological Diversity. A lawsuit is pending against the FAA because of the first “Starship” launch attempt. Several environmental organizations accuse the agency of not having properly assessed the environmental risks posed by the new giant rocket.

    “Starship” to bring astronauts to the moon

    The “Starship” – consisting of the approximately 70-meter-long “Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the Moon and Mars. The system is designed so that the spacecraft and rocket can be reused after returning to Earth.

    The system, which is about 120 meters long in total, will be able to transport well over a hundred tons of cargo in the future. NASA wants to use the “Starship” to bring astronauts to the moon. SpaceX hopes to reach Mars one day.

    The “Starliner” breakdown list has gotten longer

    Boeing does not only develop airplanes, but also spaceships. But the “Starliner” is particularly impressive due to its mishaps. The launch date for the first manned flight has been postponed again.

    The list of problems with the “Starliner” is long. In December 2019, it was supposed to fly to the International Space Station (ISS) for the first time without a crew. But the spacecraft never arrived at the ISS during this flight. Due to a software problem, the “Starliner” used up too much fuel too early, was unable to reach the ISS’s altitude and had to abort its flight after two days.

    The repeat of the flight in August 2021 ended even before it began. The humid weather in Florida had caused valves in the fuel lines to corrode, so that several of these valves were jammed before takeoff. The “Starliner” rolled from the launch site back to the workshop.

    It was only on the third attempt that the “Starliner” reached the ISS, two and a half years late. However, there was no crew on board for all of these flights. The first flight with people on board was actually planned for this summer – but the American space agency NASA and the Boeing company announced at a press conference that the launch would be postponed until spring 2024.

    Parachutes and faulty adhesive tapes

    What is not working and is preventing the one remaining, crucial qualification flight of the “Starliner” to the International Space Station are, on the one hand, fastenings for the parachutes and, on the other hand, adhesive tape. The fiberglass tape used to bundle electrical cables in the “Starliner” has proven to be flammable under very specific circumstances – this was not clearly noted in the aerospace engineers’ databases. The tape was installed in good conscience. Now it has to be removed.

    With the parachutes, it turned out that connections between the parachute lines and the return capsule could be overloaded.

    The design and material of the connection on the parachutes have already been changed. Whether this solves the problems will be determined during a test landing after being dropped from a high altitude, but this cannot be taken place until November. The unexpectedly flammable tape has already been largely removed from the upper half of the space capsule or covered with non-flammable material. It will take a few more weeks to replace it in the lower half of the “Starliner.”

    “Starliner” still a loss-making business for Boeing

    If the next flight, the first with a crew, is successful, Boeing would be the second company alongside SpaceX to have the license to transport space personnel to the ISS. With an estimated 50 to 60 million dollars per passenger, this would be a lucrative business.

    But it remains to be seen whether Boeing will ever make a profit with the “Starliner” with these impressive ticket prices. NASA agreed a fixed price of five billion dollars with Boeing for the development of the “Starliner” – but the many breakdowns and delays have resulted in costs of more than six billion dollars for Boeing.

    NASA sticks to project

    Nevertheless, the project will not be abandoned – not only because the next, decisive flight could make the project a success and make all the mishaps forgotten. But also because NASA started its commercial spacecraft program more than ten years ago so that one day it would no longer be dependent on just one type of spacecraft from one manufacturer.

    Although SpaceX has been reliably bringing new crew to the ISS every six months, NASA believes that splitting this business between the two companies would make sense. Memories are still fresh of the years when the USA had no spaceships of its own to transport men and women to the ISS because the space shuttles were unusable and had to buy flight opportunities from the Russian space agency.

    Possible ISS replacement

    Airbus joins private space station projectAirbus is getting involved in the development of a private space station that could replace the ISS. The aircraft manufacturer will work with the US company Voyager Space to advance the Starlab project.

    The European aircraft manufacturer and aerospace company Airbus is participating in a private space station that could replace the aging International Space Station ISS. Airbus wants to participate in the development, construction and operation of the Starlab station of the US company Voyager Space.

    Starlab to be sent into orbit in 2028

    A joint venture is planned, as Voyager Space and Airbus announced. According to current plans, Starlab is to enter orbit in 2028.

    The space station will provide space for various research laboratories and will therefore play an important role in space research. In addition to Voyager Space, the Hilton hotel group is also involved as a partner in the planning of the residential quarters.

    Replacement of the ISS by several space stations

    The ISS is gradually dismantled and crashed at the end of the decade. It is becoming apparent that there will be several stations instead of one huge structure.

    The Voyager company Nanoracks received $160 million from the US space agency NASA at the end of 2021 to develop Starlab. Airbus has already commissioned Voyager Space in January to provide technical support and know-how for the project.

    Longest ISS mission

    Astronauts set an involuntary recordThe astronauts Rubio, Prokopyev and Petelin were actually supposed to stay in space for 180 days. But a glitch delayed their return home. US astronaut Rubio is now especially looking forward to hugging his children.

    US astronaut Frank Rubio and the two Russian cosmonauts Sergei Prokopyev and Dmitri Petelin have landed safely back on Earth – after more than a year in space.

    The three astronauts, who had launched to the ISS on September 21 last year, landed in the steppe of the Central Asian Republic of Kazakhstan on Wednesday in the Soyuz MS-23 space capsule. The Russian space agency Roscosmos showed photos and videos of the picture-perfect landing. The astronauts were reportedly doing well.

    With 371 days in space, it was the longest mission on the International Space Station ISS, according to Roscosmos. However, it was not actually planned that way. The astronauts were supposed to return after six months, but damage to their Soyuz capsule prevented them from returning home as planned. A replacement spacecraft had to be found, which meant that the length of their stay was doubled.

    Only two cosmonauts have spent longer periods in space: Valery Polyakov spent a total of 437 days in space from 1994 to 1995, and Sergei Avdeev spent 379 days from 1998 to 1999. Both had worked on the ISS’s predecessor station, Mir.

    US record

    Meanwhile, American astronaut Frank Rubio set a US record. Due to the involuntary extension of the ISS mission, Rubio stayed in space for 371 days instead of the planned 180 days, thus beating the record of his compatriot Mark Vande Hei.

    “It’s good to be home,” said Rubio. If he had known in advance how long the mission would be, he would not have agreed. “Hugging my wife and children will be the most important thing, and that’s probably what I’ll focus on in the first few days,” said the military doctor and helicopter pilot.

    Damage to the Soyuz capsule

    A leak was discovered in the trio’s Soyuz MS-22 capsule in December, which experts believe was caused by the impact of a micrometeorite. This had damaged the outer coating and a cooling unit. Both Roscosmos and NASA assured at the time that they saw no danger to the operation of the ISS. Due to the damage, the return planned for March was postponed. The Soyuz MS-23 was sent to the ISS from the Baikonur Cosmodrome in Kazakhstan as a replacement spacecraft.

    International Space Station

    New crew launched at the ISS

    A four-person crew under the command of American Jasmin Moghbeli is on its way to the International Space Station ISS. The astronauts bring different personal backgrounds with them on board.

    This morning, a four-person crew took off from the spaceport of the US space agency NASA toward the International Space Station ISS. Just under 24 hours later, they are expected to dock with the ISS around 400 kilometers above the Earth. The launch was originally planned for Friday, but was then postponed by a day.

    Until now, the US space agency NASA has always had two or three of its astronauts on each of the SpaceX flights from the USA to the ISS. This time it is just NASA astronaut Jasmin Moghbeli. In addition to the commander, Andreas Mogensen from the European Space Agency ESA, the Japanese Satoshi Furukawa and the Russian Konstantin Borissow are also on board. They took off on board the SpaceX capsule “Dragon” from the Kennedy Space Center in Cape Canaveral, powered by a Falcon 9 rocket.

    Different experiences

    Moghbeli was born in Germany and grew up in New York. Her parents fled Iran during the 1979 revolution. She flew combat helicopters in Afghanistan for the US military. Now she is embarking on her first space flight. She wanted to show Iranian girls that they could have big goals, she said before the flight. “Believing in yourself is something very powerful.”

    Mogensen used to work on oil drilling platforms off the West African coast. He is sure that this experience led him to be selected as Denmark’s first astronaut. Furukawa worked as a surgeon for ten years before becoming an astronaut. He and Mogensen have both been on the ISS. Borissov is new to space travel. He runs a freediving school in Moscow and is a judge for the sport. Persian stew, Danish chocolate and Japanese fish will be among the dishes the astronauts will take to the ISS.

    Seventh rotation of the IS crew since 2020

    A few days after docking, the “Dragon” capsule is scheduled to bring another four-person crew, consisting of the Americans Stephen Bowen and Warren Hoburg, the Russian Andrei Fedyaev and the Emirati Sultan al-Nijadi, back to Earth. A total of seven people are currently on the International Space Station.

    According to NASA, the current mission is the eighth manned flight of the “Dragon” spacecraft from multi-billionaire Elon Musk’s SpaceX company. The aircraft manufacturer Boeing was also hired for the mission, but has not yet transported any astronauts. Due to problems, the Boeing crew capsule will remain on the ground until 2024.

    According to NASA, this is the seventh rotation of the ISS crew since 2020. The new crew members will conduct research in space for six months. Since the start of the Russian offensive in Ukraine a year ago, the International Space Station is one of the few areas in which Russia and the USA still work together.

    On the far side

    China announces successful landing of lunar probeThe mission is considered a milestone in Chinese space travel: Another probe has landed on the far side of the Moon. “Chang’e 6” is intended to collect soil and rock samples there and bring them back to Earth.

    According to state media, the Chinese lunar probe “Chang’e-6” has successfully landed on the far side of the moon. As the state news agency Xinhua reported, citing the Chinese space agency, “Chang’e-6” landed as planned in the so-called South Pole-Aitken Basin, a huge lunar crater.

    The probe is intended to collect rock samples from the far side of the Moon for the first time and bring them to Earth – it would be the first time in human history that this could be achieved.

    China launched its next mission to explore the far side of the moon at the beginning of May. A carrier rocket carrying the probe named after the Chinese moon goddess took off from the Wenchang spaceport on the southern Chinese island of Hainan on May 3. The technically complex mission is expected to last a total of 53 days. The probe is expected to collect a total of 2 kilograms of rock and soil samples in the South Pole-Aitken Basin and carry out other experiments.

    First landing in 2019

    China also wants to become a major power in space travel and is investing billions in its space program. And with success: in 2019, the People’s Republic succeeded in landing a probe on the far side of the moon for the first time with “Chang’e-4”. In the follow-up mission “Chang’e-5” in 2020, China became the third country after the USA and the Soviet Union to bring moon rocks to Earth – albeit from the front of the moon. Now China wants to be the first country to analyze samples from the far side of the moon.

    Scientists have high hopes for research into the far side of the Moon because its surface is not covered in lava. It could therefore be easier to access rocks that could provide information about the formation of the Earth’s satellite.

    China reaches for moon rocks

    It will be one of China’s most difficult space flights to date: a spacecraft is to bring moon rocks to Earth for the first time in 40 years. The mission is seen as China’s forerunner to Mars.

    China has launched an unmanned spacecraft to land on the moon. The flight is intended to bring rock samples back to Earth for the first time in more than four decades. The “Long March 5” rocket took off early Tuesday morning (local time) from the space station in Wenchang on the southern Chinese island of Hainan. The spacecraft, named after the Chinese moon goddess “Chang’e”, is expected to bring a lander to the moon on Sunday to collect rock and drill samples.

    If the return is successful, China would be the third spacefaring nation to succeed in such a venture, after the USA and the Soviet Union in the 1960s and 1970s. The spacecraft is to land in the volcanic region named after the German astronomer Karl Rümker (1788-1862). It is located in the “Ocean of Storms” in the upper left part of the side of the moon facing the earth.

    Two weeks time

    The mission is considered one of the most complicated that China’s space program has ever undertaken: for the first time, a Chinese ascent stage would launch from the Moon, take rock samples with it and perform a docking maneuver in the orbit of the Earth’s satellite before the return capsule flies back to Earth. The landing module can only stay on the lunar surface for one lunar day – about 14 Earth days – because it cannot be heated and cannot withstand the freezing cold nights.

    The 8,200 kilogram spacecraft consists of four modules: the orbiter with the return capsule and the lander with the ascent stage. After touching down on the lunar surface, the lander will use a long arm to collect around two kilograms of lunar rock and samples from boreholes up to two meters deep and store them in a chamber.

    Researchers hope that the samples will provide important new insights into the volcanic activity and history of the Moon. The USA’s Apollo missions brought back around 380 kilograms of moon rock. The Soviet Union collected a total of 300 grams – most recently with the unmanned “Luna 24” mission in 1976, when around 170 grams of moon dust were brought to Earth.

    Reaching for Mars?

    If the mission is successful, it would be a major step forward for China’s space program. Some experts believe it could pave the way for a similar mission to Mars or even a manned mission to the Moon.

    Astronomer Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics pointed out that China has already landed on the moon twice with the “Chang’e 3” and “Chang’e 4” missions. With a “Chang’e 5” test mission in 2014, it proved that it can navigate back to Earth and land a capsule there. Now it just needs to show that it can take samples on the moon and launch from there again. “That’s why I’m pretty optimistic that China can pull it off,” said McDowell.

    China is pursuing an ambitious space program with missions to the Moon and Mars as well as the construction of its own space station. In January 2019, China became the first spacefaring nation to land “Chang’e 4” on the relatively unexplored far side of the Moon. A rover was deployed to continue exploring the surface.

    Numerous moon missions planned

    The renaissance of manned space travel63 years ago today, the first human flew into space – the first manned mission to the Moon followed a few years later. Now several countries are planning to return to Earth’s satellite, first with probes and rovers, then with humans.

    The USA, China, Russia, India – several countries want to return to our Earth’s satellite. The race to the moon is in full swing. It’s about prestige, science and economic interests. 55 years after NASA flew astronauts to the moon for the first time, the return is imminent. In 2025, the USA wants to orbit the moon with four astronauts on the “Artemis II” mission. The manned moon landing is planned for 2026.

    “Astronautical space travel is currently experiencing a renaissance,” says ESA Director General Josef Aschbacher. Humans will be irreplaceable in the exploration of the Moon, together with robotics and artificial intelligence: “You need a good combination of both. A human exploring the terrain on the Moon, for example, can see more than a robot with all imaginable sensors and cameras. They think much further, much more holistically, in other words more holistically, than all the software and robots we can program. Apart from that, an astronaut can cover much greater distances to explore areas than a rover.”

    Robots lack scientific intuition

    ESA astronaut Alexander Gerst refers to the American “Viking” probes that flew to Mars in the 1970s to clarify, among other things, the question of whether there is water on the Red Planet.

    He explains: “Traces of water have actually been found, but we now know that water ice is often found directly beneath the Earth’s surface. If the excavator had dug just a few centimeters deeper, it would probably have found bare ice.” Robots simply do not have the scientific intuition that trained astronauts have.

    NASA plans new lunar vehicles

    NASA has just announced that it has awarded three space companies the contract to build a lunar vehicle. It is still unclear who will win the contract, but one thing is clear: the so-called LTV (lunar terrain vehicle) will be able to travel over the lunar surface with and without astronauts and cover long distances. The NASA press release states:

    The LTV will be able to cope with the extreme conditions of the Moon’s south pole and will feature advanced technologies for energy management, autonomous driving, and state-of-the-art communications and navigation systems. Crews will use the LTV to explore, transport scientific equipment, and collect samples from the lunar surface much further than they could go on foot, allowing for greater scientific yield.

    The first man in space

    63 years ago today, Soviet cosmonaut Yuri Gagarin became the first person to orbit the Earth. His flight in the Vostok 1 space capsule lasted one hour and 48 minutes. Since then, generations of astronauts have flown into space. With the construction of the International Space Station (ISS) in 1998, a certain routine set in. Long-term stays of six months are common practice. However, the planned moon flights now present the astronauts with completely different challenges.

    Astronaut Gerst says that the planned moon flights will open a completely new chapter in astronautics: “The planned moon missions are complex, with many logistical steps. You have to land there and are trapped on the moon by its gravity. The environment is hostile to life, with temperatures well below minus 100 degrees on the night side and well over 100 degrees on the day side, plus space radiation and vacuum.”

    In contrast to the “Apollo” missions, the “Artemis” missions are now flying to the moon to conduct sustainable science in international cooperation. The first moon flights between 1969 and 1972 were national solo efforts by NASA and took place during the Cold War. USA against the Soviet Union – back then it was about prestige and the fight for spheres of influence, with astronauts as heroes of the respective nation.

    Reasons for returning to the moon

    Why do so many nations actually want to go back to the moon? After the Apollo missions, NASA stopped the program. Today it is clear: the moon is more than just a collection of dead rock.

    ESA Director General Aschbacher explains: “With lunar exploration, completely new economic sectors will develop in the future. Mineral resources will be mined. There is ice on the moon, which can be converted into rocket fuel or oxygen for astronauts. It’s about setting up a camp in which to conduct research. Research and economic aspects will go hand in hand. We don’t yet know what the moon has to offer.”

    For him, there is no question that Europe must also participate in the exploration of the moon in order not to be left behind technologically and to remain competitive: “What is at stake is how Europe positions itself in the next ten, twenty years, as a continent that develops technology. We have excellent technologies in the automotive sector, in mechanical engineering, in many other areas, and space travel will be a domain of the future.”

    Space travel is expensive

    Aschbacher justifies the immense costs of space travel and explains that the ESA is not only investing money in the training and flights of astronauts:

    “We must not forget that ESA spends about half of its money on Earth observation, navigation and telecommunications. Our focus is on planet Earth. NASA has focused more on exploration, i.e. on astronaut space travel, to the Moon and Mars. This shows the slightly different priorities of the space agencies.” He calculates that the ESA budget in 2024 will be around 7.8 billion euros – money that will be used to provide a lot of technology, but also information about the Earth, the universe and the existence of humanity.

    Astronautic space travel is considered THE symbol of exploration in space. ESA astronaut Gerst is aware of the responsibility that his work is financed with public money. He compares the moon missions with the exploration of Antarctica: “In the beginning, it was about putting up a flag at the South Pole, with great political support. Then there was little research for decades, until there was a big push in the middle of the 20th century because people realized that there were scientific reasons to explore Antarctica. We are now facing this second wave on the moon.”

    Preparing for the moon

    Gerst does not yet know when or if he will be one of the astronauts who will fly to the moon. He says that he is head of the European astronaut corps for the new generation of ESA astronauts, but that he is preparing himself for a possible moon mission:

    “I have regular training sessions planned to prepare myself and my colleagues for such missions. For example, working in a space suit or flight training. We also practice conducting science in difficult environments, such as in the Antarctic, in underground caves or in volcanic areas.”

    The moon in the distance?

    On Monday, the US space probe “Peregrine” set off for the moon, but it will never arrive there. Now the next setback: NASA is postponing a manned flight to the moon. What is the status of the current missions?

    Everything started well. The launch of “Peregrine” on Monday was supposed to usher in a “new era of space travel to the moon and beyond,” the spokeswoman said during the lift-off broadcast. The rocket worked perfectly – but its payload, the probe on top of it, did not.

    Their on-board engines are acting up so much that they make it impossible to land on the lunar surface.

    German lunar experiment delivers data

    But even without the crowning achievement of a successful landing on the moon, the mission is already a success for the German Aerospace Center (DLR). There, in the Department of Radiation Biology, Thomas Berger is pleased with the progress of the mission so far.

    The DLR contributed to an experiment called M42 – named after the Orion Nebula in the night sky – to the “Peregrine” mission. “It measures the radiation on the way to the Moon and is now sending this measurement data back to Earth,” Berger said. The instruments will probably be managed until Thursday, before “Peregrine” runs out of fuel. Even without landing, at least this part of the mission was successful.

    But the bad news from the moon is still not letting up: Yesterday, the US space agency NASA announced that it does not want to send four astronauts around the moon at the end of the year – for the first time since the 1970s. Just over a year ago, everyone saw the test flight of “Artemis I”, said Bill Nelson. “It was so successful that further tests are necessary,” was the strange logic of the NASA boss.

    No need for haste

    Amit Kshatriya from NASA’s Exploration Systems Division explains exactly what went wrong: “During the return to Earth, more material came loose from the heat shield of the Orion crew capsule than the engineers had expected. We are currently investigating the cause and hope to have completed our investigations in the spring.”

    “Artemis I” took place in December 2022. Since then, NASA has been studying the heat shield. And studying. And studying. NASA chief Nelson justifies these delays: “In order to give the Artemis teams more time, we will not launch ‘Artemis II’ this year, but towards the end of next year. The follow-up mission “Artemis III” is not scheduled to bring people to the lunar south pole until the end of 2026.

    Other countries overtake the USA

    Delays have lasted several years for all sorts of reasons that a private space company like SpaceX could have possibly resolved within weeks. Elon Musk’s company plans to send its new Starship into space for the third time next month.

    After these recent delays, it’s more questionable whether the next people on the moon will have a Stars and Stripes flag in their luggage – but not for Bill Nelson: “I’m not worried that China could land on the moon before us,” says the NASA administrator.

    China has a very ambitious program. And sending people to the moon before the USA would be a huge PR coup. “But I don’t think they’ll manage it.” Neither the Americans nor the Chinese are next anyway: On January 19, the unmanned Japanese probe SLIM will land on the moon – or not.

    Third test flight unsuccessful

    SpaceX reports loss of giant rocket

    The third test flight of the gigantic SpaceX rocket “Starship” has also failed. It was apparently destroyed during its re-entry into the Earth’s atmosphere. One day it will transport people to the moon and Mars.

    The largest rocket system ever built was also unable to complete a third test flight. The unmanned “Starship” was made into space after launching from SpaceX’s Starbase spaceport in Boca Chica, Texas. However, it was destroyed upon re-entering the Earth’s atmosphere, according to commentators on the live stream from the space company SpaceX.

    A test flight lasting around an hour was planned, after which both rocket stages were to land in the sea. The lower rocket stage also attempted to land, but then failed, as the commentators reported: “We lost the rocket stage, so we have to look at the data again.”

    More goals achieved than in previous tests

    The upper stage of the rocket was the first to enter space, where it probably remained for about half an hour before being lost during re-entry into the Earth’s atmosphere. This means that more goals were achieved in the third test than in the previous two.

    In the first test last April, the entire rocket system exploded after just a few minutes. In the second test in November, the two rocket stages were separated and the upper one continued to fly, but both exploded separately shortly afterwards. SpaceX always emphasizes that the aim of the tests is to collect data.

    Manned missions to the Moon and Mars

    In the future, the system, which will be around 120 meters long, will be able to transport well over 100 tons of cargo. It consists of a booster around 70 meters long and an upper stage around 50 meters long and is intended to enable manned missions to the Moon and Mars. The US space agency NASA also plans to use “Starship” for its Artemis moon landing mission.

    The system is designed so that the spacecraft and rocket can be reused after returning to Earth. This should make space missions cheaper.

    SpaceX to build new lunar module

    It was the first manned flight to the Earth’s satellite since 1972: The US space agency NASA has commissioned Tesla founder Elon Musk’s company SpaceX to build a lunar module.

    NASA continues to rely on Tesla founder Elon Musk for manned space travel. The US space agency commissioned Musk’s company SpaceX to build the next lunar module for flights to the Moon – the first since the Apollo missions 50 years ago.

    SpaceX is already using the Falcon 9 to transport astronauts to the International Space Station ISS for NASA. The special thing about Musk’s lunar module “Starship” is that all parts of the carrier system are reusable, which will significantly reduce the costs of each individual flight in the medium term.

    The price was apparently also an argument for NASA: According to the Washington Post, SpaceX made the cheapest offer at just under three billion dollars – and thus beat the competition, including “Blue Origin”, the space company of Amazon founder Jeff Bezos.

    Test flights not yet successful

    However, there is still a lot to do before the “Starship” can actually fly people to the moon as part of the Artemis mission. The prototype’s previous test flights either ended in a crash landing or the lander exploded in mid-air.

    It remains unclear when the first manned flight to the moon since 1972 will take place. Former US President Donald Trump had set NASA a deadline of 2024. The government of his successor Joe Biden is currently having the schedule reviewed again.

    Editor’s note: A previous version mentioned a moon rocket that Musk is supposed to build for NASA. However, this is the part of the rocket that is supposed to land on the moon’s surface at the end, i.e. the lunar module.

    Life like on the Red Planet

    Humans have reached Mars: The Red Planet is located in Texas and measures 160 square meters. With the help of a one-year simulation, NASA wants to prepare for a real Mars mission – even if it is still a long way off.

    An area of ​​​​red sand that is intended to simulate the Martian landscape. Four bedrooms, a living room, several work units as well as a mini gym, a vertical farm for growing vegetables and a medical area. Video recordings from Reuters and other media provide insights into the so-called Mars Habitat Simulator, in which four volunteers are to simulate life on the Red Planet for a year.

    NASA wants to investigate how people behave in total isolation far away from Earth, says the head of the research project, Grace Douglas, to the news agency Reuters.

    The “Crew Health and Performance Exploration Analog” (CHAPEA) project was developed as a simulation of a one-year stay on Mars. The intention is to observe a crew in isolation with many restrictions. “We want to understand how these restrictions affect their health and performance,” Douglas said.

    Mars on 160 square meters

    The nearly 160 square meter “Mars Dune Alpha” comes from the 3D printer and was built in a converted hangar at the Johnson Space Center in Houston, Texas. The experiment is intended to prepare for a future Mars mission, says NASA scientist Suzanne Bell: “In addition to the isolation and the confined space, we are also studying how the four test subjects get along with each other. When you live in a confined space with three other people, it can be exhausting.”

    The four paid volunteers will have to live under realistic Mars conditions. This means they will not have much water, no fresh air, very little luggage and very limited contact with friends and family – just as people might live on a Mars station in the distant future.

    Walking on Mars via Virtual Reality

    The only thing that cannot be simulated in “Mars Dune Alpha” is the gravitational pull of Mars, says Douglas in an interview with the television station Fox 13 News.

    A special zone has been set up for this purpose, adds NASA human performance engineer Alex Baughman: “Here we have created a space where we can take space or Mars walks using virtual reality and carry out simulated Mars experiments. For example, taking geological samples, removing dust from space suits or solar panels, or making repairs to the house.”

    Volunteers must meet astronaut requirements

    The four people who will volunteer to be locked in the Mars house for 365 days will be announced in the next few weeks. According to NASA, they must meet the standard requirements for astronauts.

    In case there is a ruckus in the Mars house or if one of them gives up, two replacement astronauts are ready to step in in an emergency. The simulation is set to begin in June, with two more experiments to follow. Things will get serious in 2040. NASA boss Bill Nelson is already dreaming of sending a manned spacecraft to Mars by then at the latest.

    “Pizza and pastries are forbidden”

    What can ESA astronaut Matthias Maurer expect from his six-month stay on the ISS? Before his launch with SpaceX, he told tagesschau.de what he plans to do on board – and what he will have to do without.

    tagesschau.de: Mr. Maurer, your start date has been postponed several times. How did you spend the waiting time?

    Matthias Maurer: In the past year and a half, I have hardly had the opportunity to go on holiday due to the intensive training in preparation for my mission. So I have used the waiting time to relax a bit. Of course, we are continuing to train: we are doing sports, practicing various tasks again, such as “catch and capture”, where we virtually operate a robot arm and similar things. Of course, we are also going through various procedures in detail. A few days ago, we had an emergency training session on how to behave if something were to happen after the Dragon capsule took off, such as a fire breaking out or a loss of pressure in the Dragon.

    As a crew we obviously spend a lot of time together, but we also had the opportunity to see our families a few times. When things got more or less boring, we started collecting plastic waste on the beach. We have a so-called beach house that is available to us and there is a private section of the beach that is not open to the public. The beach house sounds great, by the way, but it is more of a conference center. The mood within the crew is still good and couldn’t be better. We are of course looking forward to things getting started soon. But we are not allowing frustration or impatience to build up. We quickly came to an agreement as a crew.

    tagesschau.de: Do you also feel some excitement before the start?

    Matthias Maurer: Not yet, actually. I think the excitement comes when the hatch is closed and we hear the rocket being filled. That makes a lot of noises, and we’ve been prepared for that. But of course we haven’t really heard or felt it yet. I’ve already been in my capsule once. We had to test whether the spacesuit fits with the connections in the capsule.

    Weightlessness – also a problem for astronauts

    tagesschau.de: Despite the long preparation: Are you afraid that you won’t be able to cope with the weightlessness in space?

    Maurer: According to statistics, 85 percent of my colleagues cannot tolerate space. So I am actually quite sure that I will not be able to tolerate space for the first three days either. It would be like being on a ship with a bit more sea than you are used to. But we take medication for motion sickness, which means I will be able to dampen it down. With a bit of luck, I will be one of the 15 percent who laugh and say: It’s not that bad. But I am prepared to suffer for a few days. That is part of it.

    tagesschau.de: The weather has to be right for your launch. And not just in Florida, but also over the Atlantic west of Ireland. Why is that?

    Maurer: We’ll take off from Cape Canaveral and then fly parallel to the US coast towards Canada. Then we’ll continue towards England and Ireland. And when we reach the airspace over England, we’ll be in orbit. But if we abort the launch, we’ll fall into the sea. That’s why the weather has to be good at the landing sites where we’re planning to abort the launch. Even in the best conditions in Florida, the launch could be delayed due to bad weather in the emergency landing areas.

    Experiments that can only be carried out in space

    tagesschau.de: Let’s talk about your work in space, such as the experiments. Why is it so important to experiment in space?

    Maurer: Gravity here on Earth causes us to have physical effects that we don’t have in space. If you take water and pour some oil over it, for example, an oil film will float on top of the water here on Earth because the oil film is lighter. In space, the oil film doesn’t float, but behaves differently because of the weightlessness. We want to look into this and study additional interactions between the water and the oil, such as surface energy or interface effects. I have a lot of experiments with me in my luggage, and we’re also doing combustion experiments, for example. Because combustion also takes place differently in space than on Earth.

    tagesschau.de: On the ISS you also practice for long-term stays on the Moon and Mars. Can you give us an example?

    Maurer: Bringing resources to the ISS is already expensive. You can assume that every liter of water that we fly up costs around 10,000 euros – for transport costs and the entire logistics chain that goes with it. If we had to bring water to the moon, that is even more expensive. That means: every drop of water that we can actively save helps to make the mission possible and sustainable. So we need devices that create closed cycles. The water that we drink on the ISS becomes urine, which is then purified again to make drinking water. At the moment we recycle almost 90 percent of the water – that’s very good, but it’s not 100 percent yet. We have to get there.

    “People on the ground should become part of my mission.”

    tagesschau.de: You have been asked a lot in recent weeks what you are taking with you to the ISS. So let’s turn the question around: What would you like to take with you but are not allowed to?

    Maurer: We have a few holidays up there, Christmas and my birthday. A glass to toast with something you drink at a party would have been nice. But we’re not allowed to take that with us. Even something crunchy, like pizza or pastries, is forbidden – because of the crumbs. If they don’t fall on the floor, we could inhale them. Then they would be a real danger.

    tagesschau.de: Your German predecessor on the ISS always shared his mission with us through photos he posted from space. Do you plan to do the same?

    Maurer: When I applied to become an astronaut in 2008, I had no idea that social media would become so popular. Now it’s part of the astronauts’ job. Luckily, I have a team that supports me. But of course I have to ” feed” this team from above: I have to and will create images and video clips and put the emotions I have up there into words. I do this not only because I have to, but also because photography is one of my hobbies and hopefully the people on the ground will also become part of my mission.

  • What Is Cyberbullying? Cyberbullying is the use of technology to harass, threaten, embarrass, or target another person

    According to a study, almost two million schoolchildren are affected by cyberbullying. It is now a “permanent problem”. The coronavirus pandemic has exacerbated the situation.

    Cyberbullying is widespread among children and young people: more than 1.8 million schoolchildren (16.7 per cent) have already been affected, according to a study published today by the Techniker Krankenkasse and the Alliance against Cyberbullying.

    However, the proportion of students between the ages of eight and 21 who said they had already been affected by cyberbullying fell slightly compared to the previous study in 2020. Two years ago, it was 17.3 per cent. However, the number is still at a high level. In 2017, before the coronavirus pandemic, the proportion was 12.7 per cent.

    “The results show that cyberbullying has become a permanent problem in schools and in the private lives of children and young people,” said Uwe Leest, chairman of the Alliance Against Cyberbullying. The consequences are underestimated, and the perpetrators do not have to face any consequences.

    Young people spend more time online due to the pandemic.

    According to the survey, the pandemic exacerbated the problem. Around seven out of ten students (65 per cent) said that cyberbullying had increased since Corona. Parents and teachers see it similarly, with 46 per cent each. In 2022, 355 teachers, 1,053 parents, and 3,011 students nationwide participated in the online survey, which was carried out in cooperation with Techniker Krankenkasse.

    “I actually carry the bullying in my pocket all the time,” explains Hendrikje Schmidt from Crisis Chat, a psychosocial counseling service for children and young people, referring to her cell phone. This leads to many young people feeling helpless and powerless. “It never ends, and I can’t undo or stop it anymore.”

    Cyberbullying can have serious consequences.

    Jens Baas, CEO of Techniker Krankenkasse, explained that homeschooling and contact restrictions during the pandemic have led children and young people to spend even more time online. “This means conflicts are also being resolved more frequently over the internet.”

    Cyberbullying is a psychological burden and can have serious health consequences. Physical complaints such as headaches or stomach aches are accompanied by anxiety and sleep disorders as well as dejection or depression, said Baas. Around one in six minors (15 per cent) have resorted to alcohol, pills or drugs out of desperation. Almost one in four of those affected express suicidal thoughts (24 per cent).

    The survey looks not only at those affected but also at the perpetrators. Six per cent of the students surveyed said they had bullied someone online. What is striking is that the roles of perpetrator and victim can overlap. Almost one in five perpetrators have suffered from cyberbullying themselves.

    More media literacy is needed.

    The alliance called for strengthening prevention work against cyberbullying. Children must learn “social behaviour on the Internet.” It would also be desirable to have nationwide bullying advice centres and anonymous hotlines where people seeking help can turn.

    The Alliance Against Cyberbullying, founded in 2011, is a network of parents, educators, lawyers, doctors, and researchers. It educates people about cyberbullying and offers help on the Internet to those affected.

    Bullied and insulted

    For queer people, school is often a time of suffering. According to a survey, almost half of them have experienced bullying. Teachers sometimes do not intervene when insults such as “faggot” are used.

    The streets of Altenkirchen in the Westerwald are still empty when Greta Janzen walks through the town centre on a Sunday morning. Greta is 17, in the eleventh grade, and wants to take her A-levels. She went to secondary school until last summer when she came out as a lesbian. Some people had a problem with that. “It was always little things: when I was around, people called me a faggot or a transvestite, very often a transvestite. That sometimes hurts a little,” says Greta.

    The insults were not constant, “but that was just how it was; it never went away.” Teachers hardly intervened when insults like “faggot” or “tranny” were used, but they did when insults like “slut” were used, says Greta.

    And then I ask myself, what’s the difference? Why is a girl punished for calling another girl a slut, but not another person who insults me by calling me a transvestite or a faggot?

    Almost half were bullied, according to a survey.

    In 2020, the EU Fundamental Rights Agency FRA surveyed more than 16,000 queer people in Germany about their experiences – that is, everyone who is trans, intersex, non-binary or non-heterosexual. 48 per cent said they were bullied during their school years. 46 per cent said they never experienced anyone supporting or defending them during their school years. International studies also show that queer young people have a higher risk of suicide than non-queer young people.

    “Bound to traditional role clichés.”

    13 of the 16 state student representatives and the teachers’ union GEW told NDR that too little is being done in German schools to promote the acceptance of queer people. “Unfortunately, queerism, homophobia, transphobia and discrimination are still part of everyday life in schools,” says Julius van der Burg from the North Rhine-Westphalia state student representatives. The subject is highly taboo, especially in the lower grades, and coming out without negative consequences is often not possible. Only three state student representatives were unable to report any negative experiences.

    The GEW criticises that not enough is being done to ensure that queer young people can have a fear-free time at school. Part of the problem is that queer topics are not covered enough, even in sex education classes. “The focus in schools is still on heterosexual love between the opposite sex, often tied to traditional role clichés,” says Janina Glaeser from the GEW executive board.

    Many state student representatives also complain about a lack of knowledge and understanding among teachers. “Unfortunately, many students are just lucky to meet committed teachers who address these issues and promote diversity at their school,” says Fabia Klein, spokesperson for the Bavarian State Student Council.

    Curricula not up to date

    The Secretary General of the Federal Student Conference, Katharina Swinka, says that students need “visibility and safe places to go, especially during puberty, in the phase of finding themselves, to be able to identify themselves.” Not only teachers, but also social workers must be contact persons and confidants for queer students.

    The chairman of the Saxony-Anhalt State Student Council, Moritz Eichelmann, is calling for “urgent workshops, training and further educational campaigns” for teachers. In addition, curricula and textbooks must be brought up to date.

    A spokesman for the Standing Conference of the Ministers of Education and Cultural Affairs of the Länder in the Federal Republic of Germany (KMK) describes the criticism as “assessments of the practice in schools to which the KMK has difficulty responding” and refers to the individual states. He cannot name any particular concepts for queer students.

    Germany only ranks 15th in Europe.

    According to surveys, people in Germany have generally become more tolerant in recent decades and years, and the legal situation has also improved significantly. But queer people continue to be disadvantaged here.

    The European Rainbow Index measures the level of legal equality for queer people in 49 countries. Germany ranks 15th in this year’s ranking – behind countries such as Montenegro, Malta, Denmark and France.

    Discrimination through Transsexual Law

    One discriminatory law in Germany is the Transsexual Law of 1980. According to this law, trans people must obtain two expert opinions and appear in court to change their first name. Those affected criticise that the most intimate questions are repeatedly asked, for example, about sexual preferences. According to the Federal Ministry for Family Affairs, the Transsexual Law will be abolished by the end of the year.

    Greta Janzen now goes to a different school – in Cologne. There, the teachers support queer young people. “Our school’s slogan is also: school without homophobia, school with diversity,” she says. In this school, Greta can be who she is.

    Everyday terror

    Bullying is no longer a marginal phenomenon but affects many students in their authentic and digital lives. This can result in severe mental illnesses. They often receive too little help.

    All parents probably want their children to feel comfortable at school. But for many students, everyday life is different: there is pressure in the classroom, not just because of the demand for performance. Bullying makes life difficult for many young people. According to a 2017 PISA study, one in six students aged 15 in Germany is affected by hostility.

    Twice as many young people are afraid of violence, bullying or exclusion in the classroom and on the school playground or are so scared of getting into unpleasant situations and being exposed to danger on the way to school, according to a study by the Bertelsmann Foundation.

    The risk of mental illness increases.

    Scientists from the Alliance Against Cyberbullying speak of bullying when a person is exposed to targeted and systematic attacks such as hostility, harassment or discrimination that occur repeatedly and extend over a longer period.

    The consequences can be severe if the attacks from classmates do not stop. Researchers have found that bullying increases the risk of mental illness many times over. According to a survey by Heidelberg University Hospital, 28 per cent of regular bullying victims report self-harm. Almost 47 per cent of young adults who are bullied even talk about suicidality, i.e. a desire for death, suicidal thoughts and suicide attempts.

    Measures against bullying
    What can parents and teachers do?

    The most important thing is trust, says Tom Lehel, founder of the “Stop Bullying! Strengthen Children!” foundation. “Children need to have people they trust and can tell when something isn’t going right, and they don’t feel comfortable.” Parents are responsible for saying: “I am the rock in the storm. I listen to what you tell me, and I’m behind you.”

    Because children spend more time at school, teachers must take on this role much more, says Lehel. “That’s why teachers also need training to recognise bullying and build this trust in the class.”

    How do parents or teachers recognise bullying?

    “If I notice that a behaviour change is taking place, that is, my child is becoming quieter, has lost his appetite, is withdrawing, says he has a stomachache and doesn’t want to go to school anymore,” recommends Lehel. Parents should simply ask their children what is wrong. Children want to protect their parents, he says.

    No parent likes to hear that their child is a victim or a bully. “We have to get over that. We just have to speak up, learn that it’s in society, and do something about it. And we have to start with the parents, and I can only give this advice: admitting weakness is a strength, not a weakness.”

    Changing schools doesn’t help with cyberbullying.

    But bullying is no longer just a problem in the classroom. With the digital age, it has taken on a new dimension. Students are now also exposed to hostility at home – via smartphones, social networks, email or phone calls.

    If bullying occurs digitally, even changing schools often doesn’t help. The number of potential bullies on the internet is limitless. In addition, perpetrators usually remain anonymous online, which can lower the inhibition threshold. So-called cyberbullying can lead to massive invasions of privacy by making intimate photos, messages, or addresses public.

    Many perpetrators were themselves victims.

    Suppose you look at the figures from a study by the Alliance Against Cyberbullying in Germany, Austria, and German-speaking Switzerland. In that case, bullying on the internet seems to be spreading like a virus. Eighty per cent of the perpetrators recorded there have themselves been victims of bullying or cyberbullying. The idea of the evil bully is, therefore, too short-sighted. The boundaries between perpetrator and victim are fluid.

    Appropriate assistance is often unavailable or inaccessible. The Alliance Against Cyberbullying has found that only some schools implement systematic preventive measures. In a 2017 study, students reported preventive activities but also saw action potential. According to students, 29 per cent of schools offer assistance on how to deal with cyberbullying.

    Bullies need an audience.

    But it is not just children and young people who suffer from bullying. Bullying can occur anywhere where groups meet regularly. The issue can, therefore, also affect people in the workplace. Even teachers, who should ideally be experts in this area, are not exempt from bullying.

    However, bullying is not just a dynamic between perpetrator and victim. Those who watch – the so-called bystanders – play a crucial role. Bullies usually need an audience. If no bystanders intervene in the situation, it has the effect of silent legitimisation – for the perpetrator and the victim.

    Up to a third of young people are affected
    When bullies come home

    If bullies hack a profile online or use personal data for blackmail, it is no longer a joke but cyberbullying. One in three young people is said to be affected by this. The psychological burden is excellent. Nevertheless, some victims are perpetrators themselves.

    When Lea gets a call from her sister, she initially doesn’t know what it’s about. Her name has been changed to “Dirty Slut” on an online network. Lea’s profile picture shows her sleeping. Below, it says: “Break between all my guys.” The seventh-grader, whose name we changed, is desperate. Who would do something like that to her? Her friends, perhaps, with whom she recently had an argument? She is paralysed and tries to clear her head.

    But she can’t make any progress on her own. Only with the help of her sister does Lea turn to her principal and delete her profile. She creates a new one and chooses a different password. But she has hacked again. Friends are written to from her account and asked if they want to come to her. She has the house to herself and has a lot planned for them.

    Violence, even in the children’s room

    Psychologist Stephanie Pieschl has looked into Lea’s case. She says it is clearly cybermobbing – or, in technical terms, cyberbullying – because it involves repeated acts in which the balance of power is unbalanced.

    “Cyberbullying is the continuation of bullying from the schoolyard through cyberspace into the children’s room,” says Uwe Leest, chairman of the Alliance Against Cyberbullying. Many young bullies underestimate what they are doing to their victims. Often, they just want to try it out. While children and young people usually don’t know what they are doing, it is different for adults, but cyberbullying is increasing among them, too.

    Anonymity on the Internet makes cyberbullying brutal.

    According to a recent study by the Alliance Against Cyberbullying, 17 per cent of all young people have been victims, and 19 per cent have been perpetrators. A global survey by Microsoft states that an average of 37 per cent of the young people surveyed have been affected by cyberbullying; in Germany, 39 per cent of children between the ages of eight and 17.

    In general, the victim of cyberbullying is under a lot of stress. Since both sides can remain anonymous, the perpetrator only sees the victim’s reaction if the victim requests it. The perpetrator can remain undetected. “On the Internet, the inhibition to cause suffering is not as strong because you cannot see the effects directly,” says Angela Ittel, Professor of Educational Psychology at TU Berlin. In addition, the brutality has increased because an unpredictable number of people are involved via the Internet. If the victim also becomes the perpetrator, a spiral of violence develops.

    Victims become perpetrators themselves.

    Ruth Festl is researching cyberbullying at the University of Hohenheim. “So far, psychological aspects have been the main focus of research, but we would like to specifically look at the social aspects of cyberbullying,” she says. The Baden-Württemberg study found that a third of all young people have already been confronted with cyberbullying. Within this group, there are a third of victims, a third of perpetrators, and a third of young people who are both victims and perpetrators.

    Festl is particularly interested in the perpetrator-victim group. These so-called aggressive victims, who also defend themselves or take revenge, are primarily secondary school students. A preliminary study showed that they are relatively well integrated into their classes, have many friends, and are often caught between several parties.

    First point of contact: adult and confident

    Lea does not belong to the group of victim perpetrators. She accepted the bullying as a stupid prank because she wanted to keep the peace at her secondary school. On the one hand, she did not want a public conflict that would have brought even more attention to her changed profile, and on the other hand, she had to see her classmates every day. In general, however, the Alliance Against Cyberbullying eV advises publicising all incidents to demonstrate that they will not be tolerated.

    That’s why victims should seek help immediately to find the right solution. Answering or bullying back would be counterproductive. They should talk to teachers and parents and advise Ittel. “Young people can’t stop cyberbullying on their own, so they have to turn to adults.” She also sees great potential in peers who witness the bullying. They often know more about it than adults, so their help can be more effective. But to do that, they would have to really intervene.

    If parents know about it, they should not ban the Internet or social media. “You have to signal that you understand the need and not primarily question the use of the network,” says Ittel.

    Networks are part of everyday life.

    Lea chose the direct route and confronted her former friends with her accusation after a few days. The attacks stopped. But they remain in Lea’s head. “I constantly look at my networks to see if anything has changed.” Since the incident, however, she has also been more careful with her accounts: “I am more careful with my passwords, and I no longer stay logged in to other people’s accounts,” she says. But she has not logged out of networks completely; they connect her with her friends.

    The study by the Barmer Health Insurance company
    Young people are increasingly experiencing cyberbullying

    Hostility and insults on WhatsApp, Instagram or TikTok: Young people are increasingly becoming victims of bullying online. According to a Barmer health insurance company study, one in two people knows someone who has experienced this.

    Young people are increasingly becoming victims of cyberbullying. This is the assessment made by the Barmer health insurance company in a survey reported by the AFP news agency. According to the study, in 2022, every second young person surveyed noticed that someone in their immediate environment had been affected by cyberbullying – that is, by hostility and insults on the Internet.

    This figure was 43 per cent a year ago. The proportion of girls and boys who had not experienced any bullying on the Internet had fallen from 32 to 28 per cent.

    Insults, rumours and stalking

    According to the so-called Sinus Youth Study by Barmer, bullying is most frequently experienced on the messaging service WhatsApp, followed by the social networks Instagram, TikTok and Facebook. Bullying incidents on the TikTok platform have increased the most, by twelve percentage points. At 38 per cent, it is now the third most common place for cyberbullying among young people.

    According to the study, young people reported insults as the most common form of cyberbullying – namely, 74 per cent. This was followed by spreading rumours and exclusion from groups, posting embarrassing videos or pictures and harassment. However, stalking and identity theft also continued to play a role.

    Help from teachers and police.

    Barmer CEO Christoph Straub called for easy access to help and contact points that those affected can trust. “The problem of cyberbullying is getting worse,” he explained. According to the survey, 19 per cent of those affected have received no help at all with cyberbullying attacks.

    According to Barmer, teachers, online counselling services, and the police could also help. According to information, around 2,000 young people between the ages of 14 and 17 were surveyed nationwide for the study in October.

    Cyberbullying continues to increase.

    The number of cases of cyberbullying among young people is increasing. According to a study, young people are particularly exposed to insults, lies or the distribution of embarrassing images via the messaging service WhatsApp.

    Young people in Germany are increasingly becoming victims of cyberbullying, according to a study commissioned by the health insurance company Barmer. Last year, around 16 per cent of adolescents were affected by bullying on the Internet, according to a survey by the Sinus Institute. In 2021, the figure was 14 per cent. More than ever, young people experience cyberbullying in their immediate personal environment. In 2021, the figure was 43 per cent.

    Around 2,000 14- to 17-year-olds were surveyed for the youth study last year. Twenty-eight per cent of respondents said they had never heard of cyberbullying – four percentage points less than two years previously.

    “Social” networks are bullying hotspots.

    Fifteen per cent of those surveyed said they had yet to receive any help with cyberbullying attacks. “The faster the victims receive trustworthy help from parents, friends, schools, police, or advice centres, the better,” said Barmer CEO Christoph Straub. The victims must not be left alone.”

    74 per cent of those affected reported insults as the most common form of bullying. Other forms included sharing rumours (52 per cent) and exclusion from groups (33 per cent). 32 per cent complained about posting embarrassing pictures and videos.

    Bullying can spread uncontrollably on the Internet.

    According to the study, cyberbullying most frequently occurs via the messaging service WhatsApp (52 per cent). This is followed by the networks TikTok and Instagram with 34 and 33 per cent, respectively.

    Cyberbullying is the term used to describe bullying on the Internet. It is considered particularly dangerous because there are hardly any safe places to retreat to in online networks. The group of potential perpetrators and accomplices is also more significant because rumours, lies, and images can spread uncontrollably on the Internet.

    Bullying at school – How can I help my child?

    When do we speak of bullying?

    Bullying at school is defined as repeated and regular harassment of individual schoolmates. Bullying can take place in different places. It often occurs at school, but insults also continue on the Internet, and young people are exposed and bullied. Bullying can take different forms at school. There are two types of bullying:

    Direct bullying at school: threats and insults, public teasing, exposing the victim
    Indirect bullying at school: exclusion, spreading false rumours about the victim, damaging property.

    How does bullying arise?

    The causes of bullying are complex and vary significantly from case to case. Anyone can be affected by bullying, regardless of age, type of school or self-confidence. Bullying can, therefore, occur in any class.

    Causes of bullying can include excessive or insufficient demands, self-esteem problems on the part of the perpetrator, and a disturbed school or classroom climate. Certain personality traits and behaviours contribute to whether someone becomes a victim or a perpetrator.

    Potential perpetrators look for weaknesses and “sore spots” in their victims where they are vulnerable and cannot defend themselves.

    There are, among other things, the following reasons for so-called perpetrator behaviour:

    Which children are “classic” victims of bullying?

    Studies show that children are more likely to be bullied if, for example:

    • have low self-esteem
    • are physically weak
    • are anxious, over-adapted and insecure
    • belong to the sensitive and quiet personalities
    • are excitable, aggressive and unwilling to adapt
    • have characteristics that deviate from the class norm, such as skin colour, disability, weight, clothing
    • be classified as an outsider, a “nerd”, or a “newcomer” in the class
    • owning or doing something that causes social envy (e.g. expensive clothes, expensive hobbies, new technical devices)

    How do parents know if their child is being bullied?

    Children show that they are suffering from something in very different ways. Some parents should be sensitive to changes in their child’s behaviour. One indication may be that your child insists on being taken to school, suddenly shows an unwillingness to go to school, pretends to be ill, or even doesn’t go to school without your knowledge.

    Other signs may include difficulty concentrating and learning, a deterioration in school performance, and atypical psychological changes such as irritability, nervousness, and hypersensitivity. In these cases, it is essential to determine whether bullying or perhaps another problem is the cause.

    Can parents prevent bullying?

    Growing up in a world characterised by trust, appreciation, and empathy, as well as setting appropriate boundaries, contributes to developing healthy self-confidence that does not come at the expense of others. This helps the child to deal constructively and successfully with everyday problems, including conflicts at school.

    My advice:

    • Listen carefully to your child when he or she talks about school. Ask questions to find out if everything is OK at school.
    • Show your child how important he or she is to you by taking time and offering help, especially when he or she is sad.
    • Maintain a lively and trusting exchange of conversations within the family.
    • Give your child’s worries and problems equal importance.
    • Praise your child (and not only when he or she gets good grades) and offer support when he or she doesn’t succeed at something.
    • Be a role model in your tone of voice and in the constructive resolution of conflicts.
    • Show interest in your child’s life and their circle of friends. Talk to them about friendships, how they spend their free time and what happens on the way to school.
    • Talk to your child about bullying. Encourage them to speak up about bullying in the classroom, support the victim and inform teachers if necessary. Make it clear to them that this is not snitching.
    • Stay in touch with teachers and the school. Use parent evenings, parent-teacher meetings, and office hours to ask about grades and discuss your child’s social behaviour and integration into the class.

    What can parents do if their child is bullied?

    If you believe your child is being bullied, notify the school and arrange an appointment with the class teacher. They should then work with you to address the child’s problems.
    The child should not be encouraged to fight back. This could make the difficulties worse. Such behaviour could also be at odds with the child’s personality.
    Instead, the child should be encouraged to make new friends. A child who has friends is less likely to be attacked.
    Together with the teachers, they should think about steps that will help the child and are suitable for supporting him or her inside and outside of school.
    No one should let the school fob them off but should insist that something be done. The best thing to do is to seek out a social worker at the child’s school. They can hold individual discussions in consultation with the class teacher.

    How can parents and students stop bullying?

    Since bullying often happens in secret at school, it can be challenging for teachers and parents to recognise the cases. Many victims, therefore, are faced with the question of where they can get help if they are the victims of bullying by their classmates. Even if this is often difficult, the best way is to confide in an adult. This person can then take appropriate steps and report the incident to the school, for example.

    Then, it is the teachers’ duty to investigate the matter. If bullying at school has already caused serious psychological damage, a visit to a therapist can address these problems. Many schools have so-called trusted teachers. They can also be suitable contacts if students are being bullied.

    Bullying among children and young people

    Many children and young people have experienced bullying in their own classes. It is a phenomenon that shapes their everyday lives. Sometimes, it seems that the term is used indiscriminately. Any form of violence within the class that may affect an individual is then referred to as”bullying”. Experts advise taking a close look and not reacting too quickly.

    Being bullied – what is it?

    A student is bullied when an individual classmate or a group of students repeatedly bullies him or her over a more extended time (weeks and months), for example

    says mean or unpleasant things to her/him,
    teases him or her in a mean way again and again,
    makes him or her look ridiculous in front of others,
    pretends he/she doesn’t exist,
    threatens or puts pressure on him or her,
    locking him or her in a room or
    take things away from him or her.

    Over time, bullying behaviour increases in frequency and intensity. However, bullying does not mean that two students of almost equal strength argue, fight, or conflict with each other for fun.

    How does bullying arise?

    The causes of bullying are complex and vary greatly depending on the case. Anyone can be affected by bullying, regardless of age, type of school, or self-confidence. Bullying can occur in any class.

    Bullying is usually caused by problems that can have different triggers.These include, for example, the perpetrator being over- or under-challenged, having self-esteem problems, or having a disturbed school or class climate. Certain personality traits and behaviours can influence whether someone becomes a victim or a perpetrator.

    Potential perpetrators look for “sore spots” in potential victims, where the victim is vulnerable, cannot defend themselves, or deviates from the usual norm.If the perpetrator senses no resistance to the bullying, he or she feels encouraged to take further action against the victim. There are various reasons for this type of behaviour:

    • Boredom and the need for variety,
    • Taking out anger and rage on others,
    • Compensating for one’s own weaknesses, fear and feelings of powerlessness by devaluing others,
      own dissatisfaction at school (friendships, performance),
    • Passing on experienced injustice (e.g. violence in the family) to others,
      lack of conflict resolution skills,
    • Interest in exercising power and control over others,
    • Desire for recognition in the class,
      personal motives, such as competition, envy, xenophobia, etc.,
    • own experience as a victim of bullying.

    Studies show that children often become victims of bullying when they …

    • have low self-esteem,
    • are physically weak,
    • are anxious, over-adapted and insecure,
    • belong to the sensitive and quiet personalities,
    • are excitable, aggressive and unwilling to adapt,
    • have characteristics that deviate from the class norm (e.g., colour, disability, weight, clothing),
    • be classified as an outsider, a “nerd”, a “newcomer” in the class,
    • owning or doing something that causes social envy (e.g.expensive clothes, expensive hobbies, new technical devices) or
    • Conflict-avoiding behaviour or over-adaptation learned in the family.

    Parents can prevent

    A parenting style characterised by trust, appreciation, and empathy, as well as by setting appropriate boundaries, contributes to the development of healthy self-confidence that does not come at the expense of others. This helps the child deal constructively and successfully with everyday problems, including conflicts at school.

    To practice this type of parenting style and to be helpful to your own children, it is worth following these tips:

    • Listen carefully to your child when he or she talks about school. Ask questions to find out if everything is OK at school.
    • Show your child how important he or she is to you by taking time and offering help, especially when he or she is sad.
    • Maintain a lively and trusting exchange of conversations within the family.
    • Give your child’s worries and problems equal importance.
    • Praise your child (and not only when he or she gets good grades) and offer support when he or she doesn’t succeed at something.
    • Be a role model in your tone of voice and in the constructive resolution of conflicts.

    For example, address problems immediately, using I statements instead of your accusations: “I’m annoyed that you didn’t do the dishes; that gives me extra work.” Instead of “You didn’t do the dishes again!” Avoid blaming and devaluing; find a solution to problems together.

    Show interest in your child’s life and their circle of friends. Talk to them about friendships, how they spend their free time, and what happens on the way to school.

    Talk to your child about bullying. Encourage them to speak up about bullying in class, support the victim, and inform teachers if necessary. Make it clear to them that this is not snitching!

    Stay in touch with teachers and the school. Use parent evenings,parent-teacher meetings, and office hours to ask about grades and discuss your child’s social behaviour and integration into the class.

    How do I know if my child is a victim of bullying?

    Children show that they are suffering from something in very different ways. Some symptoms that indicate bullying can also have other causes. As a parent, you should pay close attention to changes in your child’s behaviour. One indication may be that your child absolutely wants to go to school, suddenly shows an unwillingness to go to school, pretends to fail, or even doesn’t go to school without your knowledge.

    Other signs may include concentration and learning difficulties or a deterioration in school performance, as well as atypical psychological changes such as irritability, nervousness, and hypersensitivity. In these cases, it is important to sensitively determine whether bullying or perhaps another problem is the cause.

    Have you noticed changes in your child’s social behaviour due to extreme withdrawal and secrecy? Is contact with classmates even decreasing? Does your child frequently experience headaches or stomach aches, loss of appetite, and sleep disorders?

    These individual symptoms indicate that your child has a problem. Investigate these abnormalities and find out the reason for the change.

    Your child may be demanding more pocket money or their clothes may be damaged or even physically injured. These can be signs of advanced bullying that require immediate intervention. In some cases, criminal offences such as extortion or physical assault may already have been committed, which must be reported.

    Could you be sure to speak to your child’s school about this?

    What should I do if my child is being bullied?

    • It is an emotional challenge when your own child is affected or involved in bullying processes. Listening carefully and asking questions are just as important as well-considered measures, which are best carried out in consultation with your child and the teachers.
    • If you believe your child is a victim of bullying, notify the school and ask for an appointment with the class teacher, who should address your child’s problems.
    • If the bullying against your child continues, keep a diary. This is painful, but it will help answer important questions: WHAT EXACTLY happened, HOW, WHERE and WHEN, and WHO was involved? Who are the WITNESSES?
    • Encourage your child not to hit back. That would worsen the problems, and such behaviour could also be at odds with your child’s personality.
    • Instead, encourage your child to make new friends. A child who has friends is less likely to be attacked.
    • Work with the responsible teacher to consider measures that will help your child and that are suitable for supporting him or her inside and outside of school. Be persistent and insist that something be done.
    • Contact the social worker at your child’s school. They can hold individual discussions with your child and the perpetrator in consultation with the class teacher.
    • Contact the parents’ representatives if you do not receive help from the teachers. Ask for a parents’ evening to be called.
    • Inform a higher authority (advisory teacher, school management, school psychological service, school board) if you feel that the bullying problem is not being taken seriously and resolved by the class teacher or the parents’ representatives.
    • Seek external advice and help if your child is suffering so much from the situation that he or she is exhibiting severe psychosomatic symptoms (educational advice centres, school psychologist, child and youth therapist).
    • Register your child for a stand-up training session if he or she wants to
    • Exchange experiences with other affected people, e.g. on the Internet.

    Short-term tips for your child

    Contacting the perpetrator’s parents immediately is not helpful, as experience shows that this can exacerbate the conflict.

    Necessary for a parent-child conversation:

    • A bullied child needs trust in the adults and wants
    • that you listen to him and believe him,
    • that it can talk openly about the incidents
    • that it can control the situation better
    • that it becomes more self-confident and
    • that it regains self-confidence and self-esteem.

    Helpful parental responses:

    • Help your child become aware of his or her feelings and thoughts about the bullying incident: “How are you? What are you feeling? What are you thinking?”
    • Accept your child’s feelings, such as anger, rage and shame.
    • Try to get more information: “What happened? What will happen next?”
    • Observe or ask whether your child shows physical or psychological symptoms.
    • Avoid making accusations or downplaying the situation: “You have to defend yourself and not just put up with everything! They must have been annoying you for a reason! It will pass on its own; just wait and see.”
    • Help your child to think about his or her behaviour: “What could you have done? And what could that have caused?” Make it clear, however, that your child is not to blame for being bullied.
    • Discuss possible solutions with your child: “What could you do now? What else would be possible?”
    • Help your child decide on a solution and offer your help: “That’s a good idea. How can I help you?”

    Generally, as a parent, you should also be prepared to think critically about your parenting behaviour: Have you perhaps not encouraged your child’s self-confident behaviour enough? Or have you left your child too burdened to deal with unpleasant things? Or are you possibly unsure or afraid of conflict when it comes to dealing with conflicts?
    If you want to effectively change your parenting behaviour, you should seek professional advice or, if necessary, therapeutic support.

    Practical tips for your child:

    • If bullying happens on the way to school, take a different route or join neighbours’ children.
    • Avoid the perpetrator and his/her clique (this is usually only possible to a limited extent).
    • Do not take valuable items to school.
    • When using hurtful language, pretend you didn’t hear or understand the language.
    • Making confusing or distracting comments such as, “It could be,” or “If you mean it.”
    • Stabilize your self-esteem by practising phrases like: “That’s their problem, not mine.” “I’m OK.” or “If you show off, you must.”
    • Instead of reacting when someone attacks, ask an older student or teacher for help. This is not snitching; it is a human right!
    • If classmates have helped before, ask them immediately for help again.
    • Look the “attacker” clearly and firmly in the eye and say: “I don’t want this; stop immediately.” And then, if possible, walk away without showing any noticeable haste.
    • Practice specific breathing techniques that reduce stress and help with more confident body language.
    • Also, talk to the teachers and think about practical solutions that do not worsen the child’s situation.

    Medium term: Strengthening the class community

    Bullying is not an individual problem, but it often occurs when there are no good class community and no rules for social behaviour.
    Therefore, it is essential that the whole class works on bullying and strengthening the class community, either by a teacher or external specialists, who can be provided by the district and city youth protection officers. The contact persons can be found on the relevant websites.

    Get other parents in the class involved and support the teachers in such projects!

    Long-term: institutionalise prevention

    If necessary, with the support of the parents’ council, you can also suggest that social learning and bullying prevention be included in the school program and firmly institutionalised. For example, a school-wide intervention chain can regulate how to proceed in cases of bullying. An anti-bullying convention can also be used to agree on how all groups involved in the school should behave.

    Important: Prevent the victim role from becoming entrenched!

    If a child is the victim of bullying attacks, they will initially withdraw to avoid attacks. If these do not stop, they will feel helpless and incompetent. Further attacks contribute to the child accepting the victim role and possibly blaming themselves. The result is often loneliness. They withdraw from the class community and from their circle of friends. Their self-esteem is severely affected, and they lose the ability to make contact with other people.

    Studies have shown that if bullying victims are not dealt with, they will “take” their victim problem with them to another class or school, to vocational training, and later to the workplace. This increases the likelihood that these people will become victims of bullying again. In order to prevent these long-term consequences, it is therefore strongly recommended that parents consistently ensure that the bullying is stopped.

    What should I do if my child is a perpetrator?

    The temptation to defend and protect your child is excellent and understandable from a parent’s point of view. However, trivialising the behaviour that occurred is just as unhelpful as threatening sanctions, such as not being allowed out or being banned from using the computer or television for three days.

    First, clarify the situation in a conversation with the class teacher:

    • What happened, how, where, when, and who was involved?
    • What exactly did my child do?
    • What could the motives have been?
    • Make it clear that you take the problem seriously and willtalk to your child.
    • Express your wish to be informed about your child’s future behaviour. After speaking to your child, arrange another meeting with the teacher. After speaking to your child, coordinate the educational approach at school and at home with the teacher.

    Tips for the parent-child conversation:

    • In a specific bullying incident, the misbehaviour must be clearly and unambiguously identified, but the child must be respected as a person.
    • Talk to your child about the incident.
    • Describe objectively and calmly what you learned about it, for example, from the class teacher.
    • Try to find out the motives behind the bullying behaviour and avoid blaming others.
    • Set boundaries and make it clear that you will not toleratethese bullying actions.
    • State consequences if the behaviour remains unchanged. These must be directly related to the incidents.
    • Give your child support to change behaviour.
    • Make it clear that you are in contact with the teacher and that there is an exchange about behaviour.
    • Do not contact the victim or the parents to defend themselves or apologize. Experience shows that this only escalates the conflict.
    • Also, talk to the teachers and think about practical solutions that do not worsen the child’s situation.
    • Courage to confront and set boundaries is required, but the basic principle always separates person and behaviour!
    • My child as a perpetrator – medium and long-term options
    • If you feel that you cannot influence your child sufficiently on your own, seek external advice (educational counselling centre, school psychologist, child and youth therapist).

    In general, as a parent, you should also be prepared to think critically about your own parenting and social behaviour:
    – Have you placed much value on asserting your own interests at the expense of others? Do you tend to be devaluing or aggressive in conflict situations, which your child may have copied from you?
    – Avoid negative expressions, taunts, or nicknames in the family. In everything you do, you are a role model for your child.
    If you want to change your social or parenting behaviour effectively, you should seek professional advice or, if necessary, therapeutic support.

    Information for parents whose children are bystanders

    The largest group in bullying is the bystanders. The students take on different roles within this group: Some passive bystanders stay out of everything. Then, some individual students temporarily take the perpetrator’s side and support the bullying or encourage the perpetrator in their actions. Other students come to the victim’s aid and make it clear to the perpetrator that they reject the behaviour or will inform the teacher about it. The larger this group is, the fewer opportunities the perpetrator has to continue the actions.

    Parents must encourage their child to intervene actively as an observer of bullying incidents.

    Ask your child to report incidents where classmates were bullied. Encourage your child to support victims of bullying. Encourage them to call teachers immediately in acute situations. I would like to let you know that this is not snitching!

    Cyberbullying – What can I do about it?

    Cyberbullying occurs when students are deliberately defamed, threatened or harassed without the consent of those affected by publishing text, images or videos on the Internet, via email or SMS (e.g. through embarrassing pictures on YouTube or fake forum posts on Facebook etc.).In contrast to classic bullying, the perpetrators of cyberbullying can appear anonymously. Nevertheless, their actions often constitute a criminal offence.

    Measures against cyberbullying

    As a parent, you can do the following:

    • Pay more attention to your child’s online activities.
    • Discuss with your child how they can reduce the risk of becoming a victim of cyberbullying, for example, by protecting their privacy online.
    • It is essential to collect material as evidence if you want to take action against the perpetrator.
    • If the child has already become a victim, you must support him or her, as he or she usually cannot take action against it alone.
    • Talk to your child about how to best deal with attacks and what strategies help you defend yourself against them.
    • If the child is directly harassed, he or she should not react to the attacks, as this will encourage the perpetrator to continue.
    • The most effective strategy is to download the material and send it to the attacker’s parents if they are known.
    • You can also contact teachers or school management. If necessary, suggest that the topic of “interacting with one another via the Internet or mobile phone” be addressed in class.
    • Inform the service provider or forum operator so that relevant entries are deleted. You have a right to injunctive relief.

    Particular information on school disciplinary measures

    If bullying patterns have become entrenched in a class, it is often logical to consider moving the victim to a parallel class or even another school. The initiative usually comes from the victim’s parents, who hope to defuse the situation for their child in the short term There is no general answer to whether this decision is the right one.

    From an educational point of view, it is essential to consider the possible effects on the people involved in the bullying process:

    Perpetrator: We successfully bullied the victim out of the class.

    Victim: If I stay, I will continue to be bullied. There is no guarantee that I will be better off in the other class. I still don’t know how to defend myself.

    Audience: Now, we don’t have to deal with that anymore. I have to be careful not to become a victim myself; otherwise, I’ll have to leave the class, too.

    Teachers: Now it’s quiet; the main problem has been solved for now.

    Excluding the perpetrator from the class and working through the situation with the remaining classmates to prevent further bullying by other victims is more complex but more useful for the learning experience. Improving the class community and (re-)integrating the bullied student into the class should be the real goal of educational efforts.

  • If loneliness can be a risk to health, shouldn’t more be done to address it? People can socialize again with the help of robots

    Loneliness is a state of mind linked to wanting human contact but feeling alone. People can be alone and not feel lonely, or they can have contact with people and still experience feelings of isolation.

    Loneliness endangers our health — many have experienced this firsthand during the Corona pandemic. Researchers show that being alone could be associated with harmful eating habits.

    One in four households in Germany is a single-person household. The Coronavirus pandemic was characterized by loneliness, especially for these people. Loneliness has long-term effects on health.

    Researchers at the University of California wanted to find out how loneliness affects our brains. The team, led by researcher Xiaobei Zhang, focused primarily on changes in our eating habits. The new study shows that loneliness could change fundamental processes in our brains.

    Researchers investigate loneliness and eating habits.

    The study examined 93 women. All participants’ body mass index (BMI) was measured, and the proportion of body fat compared to lean body mass was also identified. The test subjects also provided information on their perceived social isolation, eating habits, and psychological resilience.

    In the study, 39 of the 93 women reported being lonely. This group also had a higher body fat percentage than the other test subjects. Poorer diet quality, unhealthy eating habits, and poorer mental health were also more common among lonely women.

    However, these results are purely a correlation. The researchers have not proven that loneliness directly leads to a higher body fat percentage. A higher body fat percentage may just as well promote loneliness. The direction of the effect is, therefore, still unclear.

    Loneliness could change brain activity.

    Previous research projects show that loneliness also influences the fundamental structures of our brains. “Loneliness can, for example, be associated with shrinking the hippocampus,” explains Dirk Scheele, a professor of social neuroscience at the Ruhr University Bochum. The hippocampus is the part of our brain that plays a key role in memory formation. According to Scheele, loneliness can lead to a decline in memory performance.

    The researchers from California also identified physical and cognitive differences between the lonely women and the other test subjects. The lonely women’s brains reacted more strongly to food stimuli, which the researchers found out by showing the test subjects pictures of food. The lonely test subjects showed stronger brain activity in the parietal lobe, which is responsible for processing the body’s own signals, such as hunger.

    The charm of sweetness

    What is particularly interesting is that the researchers identified a difference between the stimuli of sweet and savory foods. When the lonely test subjects encountered sweet stimuli, their control mechanism in the brain was less active. The changes could indicate that sweet stimuli are given a higher priority. This can lead to a more intense craving that is also less controllable.

    Similar processes can also be observed in people with drug or food addictions. They could explain why lonely people often exhibit unhealthier eating behavior, such as reward-oriented eating, cravings, or food addiction. But what is so special about the taste of sweets? Scientist Scheele has an answer. “Sweet foods have an advantageous effect on our brain and are also processed differently,” he explains. Sweets could be a better consolation for lonely people.

    Hypothesis: Loneliness costs energy

    The researchers at the University of California go even further with their assumption: Loneliness research holds that lonely people tend to have negative expectations of their environment. They are often more alert and do not trust their fellow human beings as easily.

    The research team from California concludes that this constant state of alert requires a higher energy consumption. The body has to somehow cover this demand. This is where sweets come into play: socially isolated people may, therefore, have a stronger craving for sweet foods, which causes blood sugar levels to skyrocket and provide quick energy.

    Correlation does not imply causality.

    “The study’s results are plausible,” says Dirk Scheele. However, one must remember that the researchers’ statements should be viewed with some reservations because causality is still unclear. According to the study, there is at least a connection between loneliness, unhealthy eating habits, and obesity. However, Scheele still needs to investigate the direction of this connection.

    However, the research design could also be adjusted if it were carried out again, criticizing Scheele: “It is questionable whether the results can be transferred to men.” He believes it would be sensible to repeat the USA study and examine male subjects. Previous studies have shown that loneliness has different effects on men’s and women’s brains. Male brains often show even more significant changes.

    Loneliness: A complex field of research

    According to Dirk Scheele, neurologically, it is not easy to research loneliness: “Loneliness is still a complex feeling that cannot be traced back to the structures of a single region. We assume that changes occur in very different regions.”

    The US study is essential in basic research for building new studies. For society and politics, it highlights the dangers of loneliness for the health of every individual. “Although loneliness is not an illness, it can be proven that it is a risk factor for many mental illnesses and may now also harm our eating habits,” explains Dirk Scheele.

    Being alone is not a question of age.

    Many people suffer from loneliness, especially at the turn of the year—including an increasing number of younger people. In Düsseldorf, the Diakonie invites people to lunch. In addition to a warm meal, there is also social contact.

    Elisabeth Niesen and Elfriede Lück have now become perfect friends. At some point, Niesen decided to help Lück in her everyday life because she needed support at 90 years old. But they both benefit from it. Both had to struggle with loneliness.

    Together, the day becomes more colorful. “We have fun, and she comes alive again. If I don’t pick her up, she is sad. You have to suppress that. That’s why we go and meet up,” says Niesen.

    They met through the Düsseldorf-Gerresheim Diakonie. The Diakonie invites people to lunch almost every day. A warm meal that is also meant to warm the heart. It is an offer for people who otherwise hardly have any social contacts.

    Lück has been here for ten years. “I get out and about. That was the best thing I could do. It’s good for me. Otherwise, I might not be alive anymore,” she says to another lady who is also a guest here.

    Reasons for loneliness are varied.

    Most of the people here have lost their partners, and some have fallen into a hole because of this, they say. The conversations here are good and help in sad moments arising from loneliness. “I feel lonely when I’m alone at home. Since we have found a wonderful group here that has taken us in, I can say it is my safety net,” says 85-year-old Brigitte Bill.

    Her neighbor, Hildegard Pankonin, talks about her cat, who died shortly after her husband. “I was in a deep hole. At some point, our husbands died. We were all lonely,” she says.
    Psychologist Maike Luhmann has researched loneliness at the Ruhr University in Bochum for ten years. The causes are varied. “We know, for example, that people who are poor or have a low income have an increased risk of loneliness. People with health problems tend to become lonelier. But of course, people who simply have a small social network or are single also tend to be lonelier than others,” she says of her results.

    Loneliness has social consequences.

    Luhmann says that loneliness is an important social issue because it can lead to various unwanted consequences. “For example, mental and physical illnesses. However, some findings show that lonely people tend to be closer to more extremist political positions, lose trust in society, and so on. So loneliness can have a whole range of social consequences.”

    For a long time, loneliness has only been seen as a problem for older people. Among other things, old age brings health problems, limitations, and, of course, social isolation, says Luhmann. “But loneliness has never been just a problem for older people, and the pandemic, in particular, has changed things. The pandemic has made younger people, teenagers, and adults, particularly, lonelier. Now you can say that it really affects all age groups.”

    Study on loneliness among young people

    On behalf of the North Rhine-Westphalian state government, Luhmann investigated how many young people are affected by loneliness. For the study, almost 1,000 young people and adults between 16 and 20 were surveyed online, and a sample of more than 1,200 eighth-graders was evaluated.

    According to the study, older adolescents and young adults feel significantly lonelier (18.5 percent) than younger adults (11 percent). Including those who only feel lonely now and then, the figure is 78 percent for older adolescents and 86 percent for younger adolescents.

    Lonely people often feel ashamed. Marie Martin has decided to be open about her loneliness to combat it. She speaks openly about her experiences of loneliness and uses her reach on the Internet as an influencer.

    Due to depression, the 35-year-old had initially become more withdrawn, needed a lot of time for herself, and kept canceling meetings with friends. “I’ve always been a person who draws my energy from being alone. And that’s why it wasn’t bad for me for a long time,” she says.

    “At some point, nobody asked anymore.”

    At times, she lived in her own world. She only realized later that she was lonely: “I noticed it when the questions gradually stopped. At some point, no one asked anymore. They met in friendship groups without me. I didn’t know anything about it anymore. That’s when I realized I was alone now.”

    That’s when she realized that she had changed something. She used the Internet to find pen pals, people with similar problems with whom she could communicate confidently but relatively anonymously. That opened the door to the outside world for her again. She is now feeling much better. She quit her job as a teacher, now works as an influencer, and gives gardening tips.

    But she also uses her reach to help others in a similar situation to the one she once was. She is now using social media to call on people to send postcards to people who might be feeling lonely.

    Is there really more loneliness?

    The proportion of lonely people is increasing—an alarming report was published by numerous media outlets last week. But the institute that provided the figures contradicts this.

    It was a depressing report: More and more people in Germany are suffering from loneliness, the “Rheinische Post” reported last week — and the problem is getting worse. The figures that the newspaper cited were alarming: the loneliness rate among 45- to 84-year-olds rose by around 15 percent between 2011 and 2017, and in some age groups by almost 60 percent. And younger people are also affected.

    The newspaper referred to a federal government response to an FDP query. News agencies and many media outlets, including tagesschau.de, picked up the report. But scientists who are researching the topic quickly expressed their doubts to Tagesschau. Clemens Tesch-Römer, director of the German Centre for Ageing Research Institute, spoke up. He said the report was misleading: “Loneliness is not an epidemic, and it has not increased in the last two decades.”

    “Loneliness rate has remained stable since the mid-1990s.”

    Curious is that in its response to people over 45, the federal government refers to the figures from the German Ageing Survey (DEAS), which is compiled at Tesch-Römer’s German Center for Gerontology.

    The DEAS is the most critical long-term study on aging in Germany and is funded by the Ministry of Family Affairs. Scientists have been studying the aging process for more than 20 years. More than 16,000 people were surveyed on the subject of loneliness. According to the researchers, the results led to a different conclusion than what was read last week.

    “It turns out that the probability of being lonely has fluctuated between seven and ten percent over the last 20 years,” says Tesch-Römer. The risk of loneliness is similar among 40-year-olds and those over 80 but decreases in the decades in between. There are always fluctuations. However, it shows that “The loneliness rate has remained stable since the mid-1990s.”

    Loneliness and social isolation

    For the German Ageing Survey, people aged 40 and over are questioned about various aspects of their lives – including loneliness and social isolation. While these are often used synonymously in public debate, science clearly distinguishes between the two terms: According to the German Center for Ageing Research, social isolation is seen as a significant lack of contact with other people, which can be described objectively. Loneliness, on the other hand, is a very subjective experience. People who objectively have many social contacts but lack emotional depth and understanding can also feel lonely.

    The DEAS authors’ summary for the FDP’s inquiry sounds similar. “No evidence can be found for a strong increase, at least in the prevalence of loneliness,” it says. “An increase in the number of lonely people in the population group of 45 to 84 years is, at best, a result of the growth of this age group.”

    The most recent publication of the study results is even more specific: “If social conditions do not change radically in the next decade, people aged 70 to 90 will not be more isolated or lonely than people aged 70 to 90 today,” it says.

    According to Susanne Bücker, who is doing her doctorate at the Ruhr University Bochum, other studies have also reached similar conclusions. She refers to, among other things, a 2016 study that shows that people born in 1939, on average, are somewhat less lonely than those born earlier (in 1916, on average) at around the same age.

    Abbreviated representation?

    So, how did the report of growing loneliness come about? The federal government’s response contains concrete figures from the DEAS, which were also used in the reporting but which the researchers believe were presented in an abbreviated form and thus could be interpreted differently. As asked by the FDP, the proportion of people who have felt lonely over the past ten years is listed and compiled every three years since 2008. However, the figures only appear in the reports from 2011 onwards. However, there was a significant decline from 2008 to 2011 – and if these figures had been included, the findings would have been far less dramatic.

    Over the entire period, the loneliness rate would only have increased from 8.6 to 9.2 percent – around seven percent of the previous value. And the rate among 75- to 84-year-olds has fallen by around a quarter since 2008 – from 9.9 to 7.5 percent.

    “A trick that is easy to see through”

    Tesch-Römer criticizes that the study contains precisely those fluctuations that appear to prove an alleged “loneliness epidemic.” “The lowest value – from 2011 with 7.9 percent – was arbitrarily compared with the highest value – 9.2 percent in 2017,” he explains. “And that already results in an increase of 16 percent.” He criticizes the approach as “not serious.” “A trick that is easy to see through. If you want to see something, you see it even if it isn’t there.”

    He refers to a publication by the DEAS in which the risk of loneliness was described from 1996 to 2014. Here, too, there were repeated fluctuations in all age groups. “However, a clear trend could only be seen in the two oldest age groups of 72 to 77-year-olds and 78 to 83-year-olds,” he says – “and there the loneliness rate went down.” However, only a few nursing home residents could be surveyed.

    “A major health problem.”

    He explains that reports of increasing loneliness are falling on such fertile ground with the breakdown of traditional family structures. “This leads to the feeling that social relationships are splintering.” However, it is often overlooked that the younger generations increasingly find emotional support from friends.

    Nevertheless, even if researchers at the German Center for Gerontology are not currently seeing an increase in loneliness, no one can say how things will develop. “That could change, for example, because many older people still have children,” says Tesch-Römer. However, many scientists warn against overestimating the risk of loneliness in older age because they fear that such a fear could become a self-fulfilling prophecy.

    Regardless of the rate, it is clear that loneliness can be dangerous for those affected. “People who are lonely are constantly under stress,” says Tesch-Römer. “This puts a strain on the cardiovascular system.” People sleep worse, recover less, and are more susceptible to illness. “Even eight to ten percent are enough to say: yes, this is a major health problem,” says Tesch-Römer. “But please don’t sensationalize it. Reports about a supposed ‘loneliness epidemic’ are feeding a media debate that is not really productive.”

    Loneliness – the growing suffering

    Apparently, more and more people in Germany are suffering from loneliness. This starts with young people, according to figures from the federal government. For those affected, loneliness often has health consequences.

    Loneliness can be tormenting—and apparently, more and more people in Germany are affected by it. The Rheinische Post reports this, citing a response from the federal government to an FDP query.

    According to this, the proportion of people aged 45 to 84 who feel lonely rose significantly by around 15 percent between 2011 and 2017. In 2017, 9.2 percent of people in this age group felt lonely. In some age groups, the increase was as high as 59 percent. Problems are already apparent among young people: one in four said they sometimes or rarely feel lonely. Girls are more often affected than boys.

    Many seniors are also worried about increasing loneliness, as revealed by a study conducted by the Federal Center for Health Education a few weeks ago.

    Being alone can make you sick.

    Loneliness often has consequences for health: “Social isolation in particular hurts the occurrence and progression of chronic diseases,” the federal government is quoted as saying. There is also talk of increased mortality. SPD health politician Karl Lauterbach quoted research results in the “Welt am Sonntag” newspaper, according to which loneliness often triggers psychological illnesses such as depression, but also severe cardiovascular diseases or dementia.

    Politics is also in demand.

    Therefore, FDP health expert Andrew Ullmann called for a strategy to combat loneliness. In his opinion, this includes innovative housing and mobility concepts. Health literacy must also be promoted. The SPD had previously called for a government commissioner to deal with loneliness and the damage it causes in society.

    Experts assume that the number of people who feel alone will continue to rise because of digitalization. There is already talk of a “loneliness epidemic” that has consequences not only for those affected but also for those insured and the economy: Medical treatments such as therapies take a long time and are expensive. In addition, those affected are missing from the job market.

    Appeal from the Minister for Family Affairs
    Use offers to help against loneliness.

    Christmas is a challenging time for many people who feel lonely. Family Minister Paus calls on people to help them use telephone counseling “without hesitation.” Loneliness is increasing – in all age groups.

    People alone and lonely at Christmas should be bold and turn to existing support services. Federal Minister for Family Affairs Lisa Paus has called for this. “For most people, the best thing about Christmas is the community and togetherness in the family circle. If you cannot experience this sense of belonging and humanity, these days can be excruciating,” Paus told the editorial network Germany. Telephone counseling and on-site facilities are then available.

    Paus also warned that loneliness is becoming more widespread due to the energy crisis and inflation. “When people are poor, they are much more affected by loneliness,” she said. You can’t buy social contacts, but it costs money to participate in social life. And without money, there is a great risk of withdrawing from society until you feel lonely.”

    The problem in all age groups

    The Family Ministry has noted that loneliness is increasing in all age groups. “The idea that older people suffer from loneliness is wrong,” Paus continued. The younger generation is somewhat particularly affected. “Around one in five people under 30 know what it’s like to feel lonely, while among people over 60, it’s around one in four people.”

    Current surveys also show that loneliness has become a major concern during the holidays. In the recently published volunteer monitor by the Malteser, around 61 percent of respondents expected additional social restrictions at Christmas. Around a third also said that they personally felt more lonely than before the pandemic.

    The federal government wants to combat loneliness.

    People have been lonely for a long time, but the pandemic is exacerbating the problem. Family Minister Spiegel has now launched a “Loneliness Competence Network.” The network is intended to provide practical help and advance research.

    Around one in ten people living in Germany feel lonely often or very often. But some people also like being alone. Federal Minister for Family Affairs Anne Spiegel of the Green Party describes when loneliness becomes a challenge for society: “When you want to get out of isolation and can’t, that’s exactly when loneliness becomes a problem – when you suffer from it,” says Spiegel.

    Loneliness has consequences

    And this suffering has side effects. Psychology professor Maike Luhmann researches the topic of loneliness: “For example, we know that people who are lonely are more likely to suffer from poorer mental health, are more likely to have an unhealthy lifestyle, have reduced stress resistance – and even financial problems have been linked to loneliness.”

    We still know too little about loneliness’s effects, its causes, and what can be done about them. The “Loneliness Competence Network” wants to collect and pass on research results and practical experience. The ministry is providing a reasonable million euros for this purpose.

    Dagmar Hirche represents the Hamburg association Wege aus der Einsamkeit at the launch of the Kompetenznetz Einsamkeit (“Competence Network for Loneliness”). Above all, she wants to encourage: “Hey, there are solutions!” One of these solutions is organizing meetings for older people via Zoom during the pandemic—including an introduction to the digital world. Dagmar Hirche’s association has reached 18,000 people with this.

    Poverty drives people into loneliness.

    She has repeatedly encountered one problem: “Poverty is a huge issue. If we look at the electricity costs alone, that will drive people even more into loneliness because they don’t have the money to participate in sports. After all, they don’t have the money to participate in events because they don’t have the money to get WiFi at home.” The material situation is one thing; the other is very individual experiences such as moving separations or other breaks.

    During the pandemic, the situation for children and young people has become more difficult due to school closures and quarantine rules, but silent dramas have also been played out in the homes due to the ban on visits. To compensate for the “pandemic dent” in the experience of children and young people, Minister Spiegel has now announced a package of measures. The focus is on exercise, sport, and culture, especially for children and young people.

    Loneliness has so far been under-researched as a subject of research and a task for social work, local politics, and urban planning. Often, it is about creating opportunities. For example, benches in playgrounds are good places for encounters between generations. All those involved also acknowledged the work of this country’s more than 500 multi-generational houses.

    “Failure is not a problem.”

    For those affected and those helping, it’s all about taking the first step, says Dagmar Hirche from the “Ways out of Loneliness” association: “The important thing is: get going! And if you fail? Failure isn’t a problem! Then I always say: bad luck, I’ll do something new.” Something new could be, for example, visiting the “Loneliness Competent. Today, Family Minister Spiegel officially launched it.

    Lonelier due to Corona

    According to a study by the EU Commission, loneliness has increased during the Corona pandemic, especially among young adults. The feeling of loneliness has increased, particularly among singles.

    During the coronavirus crisis, the frequency of feelings of loneliness among EU citizens has doubled. This is the result of a study by the scientific service of the EU Commission.

    According to the study, in 2016, around twelve percent of EU citizens said they felt lonely more than half the time. This proportion rose to 25 percent in the first months of the pandemic.

    According to the study, different trends in the feeling of loneliness were evident both in age groups and in European regions. Before the pandemic, it was mainly older people who suffered from loneliness. From April to July 2020, the proportion of 18- to 25-year-olds who said they felt lonely for a long time quadrupled to 36 percent.

    Significant increase in Germany

    Even among singles, loneliness increased by 26 percentage points during curfews. The value only increased by nine percentage points among people with partners or children.

    In a country comparison, loneliness increased particularly in Northern Europe: in countries such as Norway and Sweden, around six percent of the population said in 2016 that they felt lonely more than half the time, compared to 13 percent in Southern Europe. In the first months of the pandemic, the figures rose to 22 to 26 percent across Europe. In Germany, the proportion rose from 8.8 to 24.5 percent.

    However, no significant differences between genders or urban and rural areas were found. The study also found that wealth and health protect against loneliness.
    Commission Vice-President calls for countermeasures.

    In light of the results, the Vice President of the EU Commission, Dubravka Suica, called for countermeasures. Scientific findings suggest that loneliness can weaken social cohesion, trust in the community, and economic performance, explained the EU Commissioner responsible for democracy and demography.

    For the study, the researchers evaluated interviews and existing data. However, the surveys were conducted differently in 2016 and 2020, which limits their comparability. In 2020, respondents answered questions online of their own accord. However, older participants in an online survey, in particular, are presumably more connected to social media than other people and, therefore, less lonely than the average person.

    What helps to be less lonely

    Loneliness is affecting more and more people, both young and old – and it can make people ill. What are other countries doing about it? The Tagesschau foreign podcast is looking for ideas and possible role models.

    Many people know the feeling, but only a few dare to talk about it: loneliness. A young woman sums it up: “You’re somehow in a bubble. Suddenly, the thoughts start bombarding you. You can’t get out of it. Then I just start crying.”

    The 28-year-old belongs to a better-connected generation and is more accessible than any other before. Yet many young people feel lonely all over the world. The new episode of Ideenimport—the foreign podcast of the tagesschau—shows what is being done to combat loneliness in Japan and Singapore, for example.

    Japan’s participation robots

    In Japan, loneliness is considered one of the biggest health problems. In a café in the Ginza district of Tokyo, people can socialize again with the help of robots.
    Service robots are controlled via the Internet and are sometimes hundreds of kilometers away from pilots. They serve drinks from home and chat with guests.
    In addition, Japan has a Ministry against Loneliness. This ministry brings together activities, ideas, and projects and provides low-threshold offers for anyone who feels lonely.

    Singapore’s high-rise village

    In Singapore, the issue of loneliness is also being considered from an urban planning perspective. Kampung Admiralty is a high-rise planned like an old Malay village – only built upwards. The residential building has short paths, roof gardens, green spaces, and communal and meeting rooms.

    Anyone looking for a connection will find it in the large food court. Kindergartens and preschools are located right next to senior centers. A vertical village is an example of modern urban planning.

    In the Tagesschau foreign podcast Ideenimport, ARD correspondents Ulrich Mendgen and Lena Bodewein report on loneliness in Japan and Singapore and present measures to combat it. The question always arises: What can Germany learn from these ideas?

    Searching for ideas in the Tagesschau podcast

    For many questions that arise repeatedly in everyday life, there are guaranteed to be good ideas, possible role models, and solutions somewhere in the world: How can we better deal with rapidly rising energy prices? What can we do to eat healthier? Why do people in other countries sometimes live longer?

    The Tagesschau foreign podcast looks for and finds them—together with the correspondents in the ARD’s 30 foreign studios. Ideenimport wants to broaden the view beyond the proverbial horizon and provide fresh ideas for new input in political and social debates.

    Ideenimport has been published every second Friday since April 22, 2022. You can listen to the podcast at home or on your smartphone anytime. Every second Friday morning, a new episode will be available on our website, in the ARD audio library, and on numerous other podcast platforms.

    Government strategy
    What helps against loneliness?

    Preventing loneliness and supporting those affected: Family Minister Paus has presented a “strategy against loneliness.” Social associations welcome the initiative but are skeptical about its success.

    In Berlin-Friedrichshain, no one waits for a political strategy to deal with loneliness. Here, the city mission has created a meeting place. Every Wednesday, neighbors from the Travekiez meet for a shared lunch to spend a few hours together.

    Lothar likes the chili con carne. He says that’s mainly because of the company. Ten men and women of retirement age sit at Christmas-decorated tables with candles and enjoy home-cooked food—a lively group with lots of talking and laughing. Everyone is welcome, according to the motto, “Eating together is less lonely.” The city mission has been offering lunch for three years: in the summer in the courtyard and in the winter inside.

    He says that Lothar also comes every Wednesday because he likes to be around people. The 83-year-old Berliner lives alone with his cat Felix. On the weekends, he goes out to collect deposit bottles to supplement his pension but also comes into contact with people: “Young and old, it doesn’t matter. People just talk to me. I find that very nice; it makes you feel young again,” says Lothar, laughing. Most of those who come for lunch live alone, looking for company, a change from loneliness.

    “Loneliness is invisible.”

    For Jost Berchner, the project coordinator in Travekiez, there is a need for more meeting places in the city districts than just parks or playgrounds. He calls them “moderated city places where people can come together in a good setting and experience themselves as a neighborhood.”

    In addition to the weekly lunch, they have created a place in the neighborhood called “einLaden,” a shop where living room concerts, storytelling evenings, and exhibitions take place, “to bring people together, spend a little time together, and see each other as neighbors,” says Berchner. It would be nice, he says, if everyone knew more about their neighbors than just the last name on the doorbell – “so that they develop an eye and a heart for the people in the neighborhood.” Because there is a lot of loneliness that we don’t see, that flies under the radar in the hustle and bustle of the big city, is Berchner’s experience.

    Berchner thinks it’s good that the federal government is now addressing the issue with its strategy paper, but it takes more than “the will to take action,” he says. He doubts whether concepts will succeed in rekindling a matter close to our hearts: “I wish it were, and our society needs it. But I have my questions.”

    111 measures against loneliness

    For Federal Minister for Family Affairs Lisa Paus (Alliance 90/The Greens), loneliness is “one of the most pressing issues of our time.” It harms health and social interaction. Paus wants to remove the taboo from the subject of loneliness. Her department’s cross-departmental “Strategy against Loneliness” is intended to provide more support to lonely people and combat social isolation. There are ways out of loneliness, says the Minister for Family Affairs: “No one who is lonely in Germany is alone.”

    The aim is to shed more light on the issue politically and scientifically. The 111 measures are intended to strengthen social cohesion and community spirit.

    No further funds are planned.

    Specifically, the Minister for Family Affairs wants to raise public awareness of the issue with annual action weeks and regular campaigns. To close knowledge gaps, research into the health, social, and political effects of loneliness is to be promoted. A so-called loneliness barometer is also planned. This will examine the extent and prevalence of loneliness. In addition, the federal government wants to continue promoting model projects to combat increasing loneliness in local communities and shorten waiting times for therapy places.

    The Minister for Family Affairs said no further financial resources are planned to implement the strategy’s plans. The project relies on existing structures, such as sports clubs or multi-generational houses.

    Caritas: “Strategy must be more than just a catchy name”

    The German Foundation for Patient Protection criticizes the need for additional financial resources to implement the strategic measures. Many initiatives in the municipalities simply do not have the money, said board member Eugen Brysch: “It will therefore be important that families, friends, neighbors, former employers, clubs and churches in particular accept responsibility.” Relying on the federal government is a waste of time, says Brysch.

    The German Caritas Association also warns against pure symbolic politics. The loneliness strategy must be more than just a catchy name or a loneliness barometer. The announced measures must also be adequately financed. Measuring how and where loneliness is increasing is only effective if resources are effectively invested in combating it, says the association: “It’s like fever. Here too, the fever-reducing medication is more important than the thermometer,” compares Caritas President Eva Maria Welskop-Deffaa.

    For Ulrich Schneider, general manager of the Paritätischer Gesamtverband, it is above all people who can help with combat loneliness: “It is not just any politicians, it is not any big concepts, but it is really the neighborhood that can help, the many clubs, the visiting services. We can all help.”

    Just like in the Berlin City Mission in the Travekiez, there are now coffee and pastries on the tables and people doing crafts together. If you don’t feel like doing that, you can just stay and enjoy the company—like Lothar before he goes home to Felix the cat.

    Socially disadvantaged children feel lonelier.

    A new study shows that the pandemic has left clear marks on children and young people: children, in particular, suffer from depressive symptoms – they are more likely to be lonely, have sleep problems, and are in pain.

    Schoolchildren from poorer families in Germany suffer more frequently from loneliness, exhaustion, and sleep problems than children from better-off families. This is shown by a study in Berlin, sponsored by the health insurance company DAK-Gesundheit, in which Kiel scientists surveyed more than 14,000 boys and girls in grades five to ten. Emotional stability and satisfaction temporarily declined significantly among teenagers during the coronavirus pandemic.

    Depressive symptoms are more common in poorer children.

    Whether it’s the feeling of not having friends, reduced well-being, or depression, teenagers from less affluent homes are more affected. 32 percent of students often feel alone overall.

    This figure was as high as 50 percent among children who rate their social status as low. In contrast, around 28 percent of children and adolescents with a high social status felt lonely.

    For example, 44 percent of schoolchildren from socially disadvantaged families report being unhappy and crying frequently. Among children with a high social status, only 26 percent report such depressive symptoms more regularly.

    Corona and well-being

    According to the DAK study, children’s emotional stability was dented during the Corona years. “Corona has led to a massive loss of life satisfaction,” said the study author, Kiel psychologist Reiner Hanewinkel.

    According to the survey, the score was at least pronounced on a scale of 0 to 10 in the 2022/2023 Corona school year at 5.8 and recently rose again to 6.8. Well-being and life satisfaction are also lower among children with subjectively low social status.

    Overall, 53 percent of children feel exhausted at least once a week. The figure is as high as 63 percent among children with subjectively low social status. During the Corona pandemic last year, 58 percent of children often felt exhausted – and 69 percent of children with a low social status.

    “Corona has put a lot of pressure on the children,” said State Secretary for Family Affairs Ekin Deligöz (Greens). Many children think negatively about the infection control measures taken during the pandemic. 55 percent found homeschooling somewhat stressful, 54 percent found the mask requirement – and even 69 percent found regular ventilation stressful.

    Sleeping pills for teenagers

    The researchers also asked about sleep quality – and found that 49 percent of school children with low social status regularly have sleep problems, i.e., at least once a week. Among children from wealthier families, the figure is only 33 percent.

    The study authors attribute this, not least, to higher screen time among children with a subjectively low social status. On average, 15-year-olds sleep 8 hours and 48 minutes per day, but this figure is only 7 hours and 24 minutes for 15-year-olds.

    Sleeping pills also play a role among some teenagers. According to the study, 13 percent of those with a high social status and even 22 percent of those with a low status have taken such pills. However, 62 percent of those surveyed said they slept well or very well. Among those who rated their status as low, the figure was only 47 percent.

    Bullying and cyberbullying

    The question of whether a child is exposed to bullying is always a major topic at parents’ evenings – not most minor bullying in chat groups. According to the survey, 60 percent of the children and young people surveyed reported never being teased, attacked, or excluded in the schoolyard or classroom.

    However, 14 percent say they have been bullied in this way at least once a week. Only 5 percent feel harassed or humiliated online at least once a week – 17 percent have experienced such humiliation. 82 percent of those surveyed, on the other hand, have never been victims of cyberbullying.

    Here, too, status plays a role – almost one in two people from families with less wealth say they have suffered bullying at school, but only 39 percent of those with subjectively higher social status have. In the case of cyberbullying, the figure was 27 percent of the less well-off and 17 percent of the better-off teenagers.

    Crises shape childhood

    The results for DAK boss Andreas Storm and State Secretary for Family Affairs Deligöz are “alarming.” Deligöz said: “We live in a time of crises – they shape our children’s childhoods.” Many are also worried about the climate crisis or the war in Ukraine – but not everyone is equally resilient. Only the day before, the Federal Statistical Office had released the figure of almost 2.2 million children and young people who are at risk of poverty in Germany.

    Deligöz promised that her department would present the announced draft law for essential child benefits by the end of the summer recession to improve children’s material situation. However, the government recently disputed the amount of billions of euros allocated to this.

    Loneliness is “the greatest national disease.”

    More and more people in Germany are feeling lonely – especially at Christmas. This doesn’t just affect older people, says patient advocate Brysch. One in four calls to the telephone counseling service is about feeling alone.

    According to the Patient Protection Foundation, loneliness is currently the most widespread disease in Germany. According to foundation director Eugen Brysch, it does not only affect very old people.

    In contrast to previous years, more and more 60- to 70-year-olds report feeling lonely, says Brysch. However, more is needed for Federal Minister for Family Affairs Lisa Paus to appeal to those affected to seek help on their own initiative and to use services such as telephone counseling.

    “Doing something about the growing loneliness in society cannot be left to institutional providers alone,” Brysch told the DPA news agency. “It is much more important that each of us becomes more aware of this widespread problem.”

    Every fourth call to the telephone counseling service

    It’s essential to take personal responsibility and to have the “courage to speak up.” Christmas offers an excellent opportunity to give single people next door a gesture of togetherness. This could be a greeting or a short conversation. “Let’s press the doorbell button. That can be the key to a person-to-person connection,” said Brysch.

    This year, around one in four calls to the telephone counseling service, are about loneliness, said the chairman of the nationwide telephone counseling statistics working group, Ludger Storch, to the news agency DPA. Loneliness has been a problem that has been mentioned again and again, mainly since the beginning of the corona pandemic, and affects all age groups.

    “Many callers are now telling us that they are having difficulty getting back in touch with other people,” Storch said. According to him, casual relationships have broken down during the Corona period.

    Paus: Christmas is particularly painful

    In 2022, around 1.2 million people called the telephone counseling service, about the same number as in the previous year, Stroch said. The approximately 7,700 volunteers also answered around 43,000 emails and 37,000 chats.

    Family Minister Paus called on people who are lonely at Christmas not to be afraid to turn to the help services. “For most people, the best thing about Christmas is the community and togetherness in the family circle. If you cannot experience this sense of belonging and humanity, these days can be harrowing,” the Green politician told the editorial network Germany. Telephone counseling and on-site facilities are then available.

    The federal government wants to combat loneliness.

    Family Minister Paus says loneliness can become a problem not only for those affected but also for society as a whole. Action weeks are intended to raise awareness. There will be no more money.

    Depression, heart disease, or dementia – those who are permanently lonely have a higher risk of becoming ill. Federal Minister for Family Affairs Lisa Paus of the Green Party says that millions of people in Germany are affected by loneliness.

    According to the ministry, significantly more people have felt socially isolated since the coronavirus pandemic: “We know that this rose to 40 percent in Germany during the coronavirus pandemic,” said the minister. Even after Corona, this has yet to occur. “We still have high values and young and very old people are particularly affected by this.”

    Paus: Clear and massive problems for democracy

    The Minister for Family Affairs also sees this as a problem for the state. Namely, when more and more people withdraw because they lose trust in society: “Then that is something that makes our society fragile at its core and destroys it,” says Paus. “That is a clear and massive problem for democracy, and that is why we should all be well advised to do something about loneliness.”

    The strategy paper against loneliness, which has now been approved by the Federal Cabinet, is 30 pages long. The aim is to raise awareness of the issue through action weeks. Employees in companies or clubs are to be trained to recognize signs of loneliness and to react to them. This is intended to provide greater support to socially isolated people, for example, through leisure activities and faster help with mental health problems.

    There are no additional financial resources.

    Paus admits this will require more than a strategy paper: “But we also know that we have a shortage of therapy places, especially for young people, that there is considerable demand and long waiting lists.” It is crucial and long overdue that there are special licenses for psychologists who can also work psychotherapeutically with young people.

    The Minister for Family Affairs needs to plan further financial means to implement the strategy’s plans. The structures, such as sports clubs, in which the measures are to take effect already exist, says Paus. The German Foundation for Patient Protection criticizes this precisely. More money is urgently needed, as many initiatives in the municipalities simply need it.

    Campaign in Sweden
    A “Hej” against loneliness

    As the days get colder and shorter in Sweden, many people’s moods suffer. The industrial city of Luleå encourages people to greet each other in a friendly manner. Does this help combat the gloom?

    Anyone who lives high up in the Swedish north should be weatherproof. The first snow has already fallen, and you can only go outside wearing a thick jacket, hat, and gloves.
    It could also be because the Swedes are not considered world champions of small talk. But a simple hello – Hej in Swedish – can achieve a lot, says Åsa Koski.

    “There is research that shows that greeting each other has a positive effect on well-being and health,” explains the 44-year-old, who works for the city administration in Luleå. “So we felt that we had to do something!”

    Feelings of loneliness – no exception

    According to a survey by the health authority, one in four people in Sweden feel lonely. Older people are particularly affected. Actress Eva-Britt Tjernquist appears in a video for the campaign, which is intended to encourage friendly greetings.

    She has lived alone for a long time and hopes those people, not only in Luleå, will become closer again through a little “Hej.”

    Nobody likes to talk about loneliness. I miss the conversations. You come home, and you have no one to talk to. But then I pull myself together and call someone, tell them something, and ask what they think. So you have to do something and find new solutions.

    No question of age

    Kosaki and Tjernquist are now setting a good example on the streets of Luleå and greeting everyone who passes by warmly. On the way, Koski explains that even 16- to 29-year-olds often feel alone.

    That afternoon, she stopped by a high school to discuss the topic. “Do you say hello to your classmates in the hallway?” Koski wanted to know. The group of young students in front of her seemed a little embarrassed. “When someone walks past me, I look up. But many just keep walking with a stubborn look on their faces,” one of them said.

    Ice bathing can also be connected.

    Sometimes, you must break your habits, like the first ice in the Baltic Sea. Pontus Wikström saws a hole in the frozen shore. Despite the sub-zero temperatures, they want to get into the water immediately. The Swede founded an ice bathing club in Luleå. Most of the 500 members come at the weekend. After bathing in the ice-cold water, they go to a small sauna.

    There is almost always a place like that, says Pontus. Sweating together and bathing in ice – nothing helps better against loneliness. “It is a good way to make new contacts. Also, for many people who suffer from involuntary loneliness. Here, they feel better, at least for a while.”

    Up here, just a few kilometers from the Arctic Circle, the darkest time of the year is coming. The days will get shorter and shorter until mid-December. There are many ways to avoid having to go through this alone. For example, a friendly “Hej” on the street can help, as you will find in Luleå.

Exit mobile version