Category: Research Technology

  • The Pros and Cons of Wearable Technology


    Wearable technology, which is currently led by smartwatches, represents a multi-billion dollar market focused primarily on health monitoring.

    Numerous high-end products assert that they can accurately measure exercise patterns, body temperature, heart rate, menstrual cycles, and sleep quality, among other metrics.

    Health Secretary Wes Streeting has discussed a plan to provide wearables to millions of NHS patients across England, allowing them to monitor symptoms related to cancer treatments from their homes.

    However, many healthcare professionals and technology specialists are wary of relying on health data gathered from wearables.

I am presently testing a smart ring from Ultrahuman, and it seemed to detect my illness before I was even aware of it.

    One weekend, it notified me that my temperature was slightly elevated and that my sleep had been disturbed. It cautioned me that this could indicate I was falling ill.

    I brushed off the warning, attributing it to perimenopause symptoms, but two days later, I found myself stuck in bed with gastric flu.

    While I didn’t need medical help, I wonder if the information from my wearable could have aided healthcare providers in my treatment. Many wearable companies actively promote this idea.

    For instance, the Oura smart ring provides a feature that allows patients to download their data as a report to share with their healthcare providers.

    Dr. Jake Deutsch, a clinician in the US who also consults for Oura, states that wearable data allows him to evaluate overall health more accurately. However, not all doctors believe this is consistently beneficial.

    Dr. Helen Salisbury, a general practitioner at a busy Oxford clinic, mentions that few patients initially arrive with their wearables, but she’s observed an increase in this trend, which concerns her.

    She feels that while there are times it’s beneficial, there are likely more instances where it’s not particularly useful, and she fears we’re fostering a culture of hypochondria and excessive monitoring.

    Dr. Salisbury points out that there are numerous reasons for temporarily abnormal readings, such as a sudden increase in heart rate, whether due to a normal bodily fluctuation or a problem with the device—many of which don’t need further probing.

    She worries that we may prompt individuals to continually monitor their health and consult their doctors every time a device indicates an issue, rather than waiting until they personally feel unwell.

Additionally, she raises a concern about the psychological impact of treating this data as a safety net against alarming health diagnoses: a serious cancerous tumor, for example, may not necessarily be detected by a wearable device or an app, she explains.

    What these devices do is promote healthy behaviors, but the most valuable takeaway from them is the same advice doctors have provided for years. Dr. Salisbury adds, “The fundamental actions are to increase walking, limit alcohol consumption, and strive to keep a healthy weight. That advice remains unchanged.”

    The Apple Watch is often cited as the world’s top-selling smartwatch, although its sales have experienced a recent slowdown.

    Apple has not made a statement, but the tech company utilizes real-life stories of individuals whose lives were saved due to the heart monitoring feature of the device in its marketing. Anecdotally, I’ve heard many of those narratives as well. However, I haven’t encountered reports regarding the frequency of false positives.

    When patients share their data with healthcare professionals, many clinicians prefer to try to replicate those readings using their own equipment rather than automatically trusting the information provided by the wearable.

    There are several practical reasons for this preference, according to Dr. Yang Wei, an associate professor in wearable technologies at Nottingham Trent University.

    “When you’re in a hospital setting and measuring an ECG [electrocardiogram, which checks heart activity], you don’t have to worry about battery life since the machine is plugged into an electrical outlet,” he suggests.

    “Conversely, with your watch, you can’t continuously record your ECG without quickly draining the battery.”

    Moreover, the movement of both the wearable device on a wrist and the wearer’s overall activity can introduce “noise” into the data it collects, making it less trustworthy.

    Dr. Wei refers to the ring on my finger.

    “The most accurate way to measure heart rate is from the wrist or directly from the heart,” he states. “Measuring from the finger compromises precision.”

    It’s the responsibility of software to address such gaps in data, he asserts, but there is currently no global standard for wearables regarding the sensors and software that underpin these devices, or the data itself, including how it’s collected.
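By way of illustration, the kind of cleanup such software performs might look like the following generic two-pass filter. This is a sketch of the general idea only, not any vendor's actual pipeline:

```python
# Illustrative only: a generic two-pass cleanup of noisy heart-rate
# samples, standing in for the far more sophisticated signal
# processing that commercial wearables apply.
import statistics

def smooth_heart_rate(samples, window=5):
    """Median filter to remove motion spikes, then a trailing
    moving average to steady the remaining signal."""
    # Pass 1: a median over each sample's neighbourhood discards
    # short spikes caused by movement of the device or the wearer.
    medians = []
    for i in range(len(samples)):
        lo = max(0, i - window // 2)
        hi = min(len(samples), i + window // 2 + 1)
        medians.append(statistics.median(samples[lo:hi]))

    # Pass 2: a trailing moving average smooths residual jitter.
    smoothed = []
    for i in range(len(medians)):
        lo = max(0, i - window + 1)
        smoothed.append(sum(medians[lo:i + 1]) / (i + 1 - lo))
    return smoothed

# A wrist- or finger-worn optical sensor can report spikes like the
# 150 and 140 below while the wearer is simply moving around:
readings = [62, 63, 150, 64, 62, 61, 140, 63, 62]
print([round(x) for x in smooth_heart_rate(readings)])
```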

    The accuracy of data is likely to improve with the consistent use of a device. However, there is a cautionary anecdote to consider.

    Ben Wood was spending the day out when his wife began receiving a series of concerning alerts from his Apple Watch, indicating he had been involved in a car accident. It suggested that she text him instead of calling, as he might need to keep the line available for emergency services.

    The notifications were legitimate and sent to her as his emergency contact, but in this instance, they were unnecessary. Ben was at a racetrack driving high-speed cars. He admitted he “wasn’t very gifted” at it but insisted he felt safe throughout.

    “The distinction between an incident and an alert should be handled carefully,” he noted in a blog post. “I’m interested to see how device manufacturers, emergency services, first responders, and individuals will approach this technology moving forward.”

Pritesh Mistry, a digital technologies fellow at the King's Fund, acknowledges that significant challenges exist regarding the integration of patient-generated data into healthcare systems. He adds that this discussion has been ongoing in the UK for several years without any clear resolution.

    He believes there’s “a compelling argument” for the incorporation of wearables in the UK government’s current initiative to transition care from hospitals to community settings.

    “However, without a robust technological infrastructure and adequate workforce training in terms of skills, knowledge, capacity, and confidence, I think it will be a challenge,” he argues.

    With the evolution of consumer technology, there has been a marked increase in the utilization of personal devices for various healthcare-related functions. From smartphones and tablets to fitness trackers and smartwatches, more patients are using their personal devices to monitor health metrics, access medical information, and communicate with healthcare providers.

    The Bring Your Own Device (BYOD) trend involves people using their personal devices for work and medical purposes, thereby integrating personal technology into healthcare workflows. While the adoption of BYOD offers a range of advantages, it also introduces risks, especially in terms of data security, privacy, reliability, and accuracy.

    As patients become increasingly reliant on personal devices in healthcare contexts, professionals must recognize the associated risks and advocate for the use of certified medical devices. Ensuring that medical-grade devices comply with standards for data security, accuracy, and regulatory requirements is crucial for delivering effective healthcare and safeguarding patient information.

Consumer-grade wearables, such as fitness trackers, smartwatches, and heart rate monitors, have gained substantial popularity in recent years, allowing users to monitor health metrics and pursue fitness goals. Smartwatches represent a significant segment of the expanding global wearables market. According to the Pew Research Center, about one in five U.S. adults (21%) report regularly wearing a smartwatch or fitness tracker. Looking ahead, the smartwatch market is anticipated to keep growing: Statista forecasts that global smartwatch shipments will surge from 37 million units in 2016 to over 402 million units by 2027.

    Although these devices provide valuable insights into a patient’s everyday activity levels and overall health, it’s essential to recognize the difference between consumer-grade and medical-grade wearables. Medical-grade wearables are specifically designed for clinical use, undergoing rigorous testing and certification to guarantee accuracy, reliability, and adherence to regulatory standards. They are often utilized under the guidance of healthcare professionals.

    Medical-grade wearable devices are becoming increasingly vital in contemporary healthcare. They monitor numerous vital signs and parameters, such as heart rate, skin temperature, heart rate variability, and respiratory rate. Devices that offer continuous monitoring improve data accuracy and reliability by gathering patient information over time and in various conditions.

    By acknowledging the dangers of relying solely on personal devices and stressing the importance of certified medical technology, healthcare providers can protect patient data integrity, maintain compliance with regulations, and deliver high-quality care. Educating patients about the advantages of medical-grade wearables and the possible downsides of using personal devices can further enhance the effectiveness and safety of remote patient monitoring.

    Examining the Risks and Considerations of BYOD

    In a recent study, researchers investigated the challenges faced by hospitals concerning BYOD policies. They identified concerns about inadequate device security, staff awareness, and usage guidelines. The results indicated that hospitals must educate staff, create clear policies, and implement appropriate technological solutions to effectively manage implementation challenges and adapt to the changing risks associated with BYOD.

    With the increasing prevalence of wearable technology and personal devices in the healthcare sector, it is crucial for both healthcare providers and patients to prioritize the use of medical-grade wearables for health-related purposes. Utilizing personal fitness trackers or smartwatches for medical applications poses specific difficulties.

    Precision and trustworthiness: Medical-grade wearables undergo extensive testing and certification to ensure they provide accurate and reliable measurements in clinical environments. In contrast, personal fitness trackers might not adhere to the necessary standards for medical usage, resulting in possible inaccuracies in health monitoring data. For instance, a temperature sensor on a smartwatch may deliver unreliable readings due to its placement on the body and exposure to surrounding temperatures.

    Protection of data: Personal fitness trackers often lack sufficient security measures to safeguard sensitive medical information. Consequently, the risk of data breaches or unauthorized access increases when utilizing these devices for health-related applications, which could jeopardize patient confidentiality. Many consumer-grade trackers automatically transmit data to the vendor’s cloud storage, which might not comply with HIPAA requirements.

    Adherence to regulations: Medical wearables must comply with regulatory standards designed to ensure their safety and effectiveness for health applications. In the U.S., for example, medical devices undergo evaluation by the Food and Drug Administration (FDA) prior to receiving authorization for patient use. These devices are categorized based on risk, from Class I to Class III.

    Personal fitness trackers may fail to satisfy these regulatory requirements, limiting their appropriateness for clinical use. To gain FDA clearance, medical devices must also be clinically validated to confirm their effectiveness in monitoring particular health conditions and outcomes. Conversely, personal fitness trackers may not possess clinical validation evidence, complicating the assessment of their accuracy and reliability for medical monitoring.

Compatibility and integration: Medical-grade wearables are typically designed to work seamlessly with healthcare systems and electronic health record (EHR) platforms, facilitating smooth data sharing and interoperability. Personal fitness trackers might lack such integration features, making it difficult to incorporate their data into established healthcare processes and systems.
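In practice, integration usually means translating a device reading into a format the EHR can ingest, such as an HL7 FHIR Observation resource. Below is a rough sketch of the shape of a heart-rate reading, written as a Python dictionary for illustration; the values are invented, and a real implementation would follow a full FHIR profile:

```python
# Illustrative only: roughly the shape of an HL7 FHIR "Observation"
# carrying a heart-rate reading. Values are invented for the example.
heart_rate_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",          # LOINC code for heart rate
            "display": "Heart rate",
        }]
    },
    "valueQuantity": {"value": 64, "unit": "beats/minute"},
}
```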

    Although personal fitness trackers provide ease of use and access for general health and wellness monitoring, they may not be appropriate for medical use. Providers should be cautious when evaluating personal fitness trackers for medical oversight and should emphasize the use of medical-grade wearables that comply with established safety, effectiveness, and data security standards.

    It might appear that wearable technology and personal lifestyle applications have existed for an extended period, but their popularity has only recently surged. Smartwatches, fitness trackers, smart jewelry, gaming simulators, and various health monitoring devices have become everyday necessities for many users, regardless of their technical expertise.

Wearable technology seeks to efficiently integrate data and metrics to enhance our health and well-being. This trend’s growth has led millions to enthusiastically discuss improvements in cardiovascular fitness and weight loss, among other benefits. Names such as Fitbit, the Apple Watch, and Google Glass have become synonymous with products designed to improve the lifestyles of their users.

The global market for wearable technology is projected to grow at a compound annual growth rate (CAGR) of approximately 17.6%: currently valued at around $186.48 billion, it is expected to reach $493.26 billion by 2029. This indicates that the market is ripe for further opportunities, especially as sustainability becomes a focal point for many industries; we can expect an even greater blending of fashion and technology in the near future.
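As a quick consistency check on those figures, assuming the 17.6% rate compounds over the six years from 2023 to 2029:

```latex
% Consistency check: $186.48bn compounded at 17.6% for six years
\[
  186.48 \times (1 + 0.176)^{6} \;\approx\; 186.48 \times 2.645 \;\approx\; 493.3
\]
```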

    However, while wearable devices offer numerous benefits and deserve recognition for their positive contributions, there are important considerations to keep in mind. This article examines the primary advantages and disadvantages of adopting wearable technology and outlines how to implement it thoughtfully for appropriate reasons.

    Sustainability: Smart glasses, watches, and other wearable items are emerging as innovative trends that meet both lifestyle enhancement and environmental goals. Many of these products are made from eco-friendly and biodegradable materials, and given their expected lifespan, they are less likely to contribute to electronic waste once they are no longer useful.

    Sustainable smart wearables—similar to immersive sports simulation systems and virtual reality technology—often incorporate energy-efficient features and biometric sensors that assist users in minimizing energy use and making informed decisions about their lifestyle and exercise choices.

    That being said, there are certainly initial expenses linked to their broad adoption, which means those with limited budgets may find it harder to obtain one. The prices can vary from hundreds to thousands of dollars, influenced by the brand and features. While many may perceive this cost as an investment in their health and future instead of a mere expense, it could exceed the budgets of several potential users right from the start.

    Wearable devices can effectively and independently track metrics such as steps, calories burned, heart rate, sleep patterns, and other crucial aspects of our daily health. This data enables us to make informed decisions about when to exercise or rest, ultimately empowering users to better understand their health. Numerous wearables come equipped with additional built-in functionalities like mindfulness and meditation, some of which may only be available to users with premium subscriptions.

    Concerns have been voiced regarding the precision of the health data gathered by wearable devices. Inaccurate or misleading information could deceive users about their fitness levels, significantly affecting both their mental and physical well-being. Issues of data privacy and security are also significant, especially if fitness trackers collect personal or location data that may be shared with outside parties.

    Wearable technology is enhancing accessibility to vital functions for various individuals, such as assistance for those with hearing impairments, glucose monitoring for diabetics, or fall detection systems for older adults, to name a few. These examples illustrate the benefits of wearables in helping those with additional needs. Furthermore, wearables facilitate easier access to crucial treatments for patients and allow doctors to monitor their patients remotely, enabling them to take a more proactive role in their health management.

    One concern in this area is that wearable technology can be susceptible to hacking and exploitation, putting vulnerable populations at risk. Whether users lack the necessary technical skills to safeguard personal information or have physical challenges that complicate the detection of threats, their privacy needs to be protected.

    Employers are promoting the widespread adoption of fitness technology, including wearables, to encourage healthy behaviors within their workforce. Gamification strategies that reward employees for achieving activity milestones have demonstrated high levels of engagement. This, in turn, fosters positive lifestyle changes and cultivates a stronger sense of community among teams, significantly enhancing overall workplace enthusiasm, engagement, and morale.

    Nonetheless, mandatory workplace wellness programs might intrude excessively on an employee’s personal or work time. Some employees may be uncomfortable sharing health information with anyone outside their HR department. Therefore, if you plan to implement a workplace wellness program, it’s essential to obtain full support from employees and take their preferences or concerns into account when designing it.

    Many individuals purchase wearable technology to boost productivity and efficiency in both their personal and professional lives. Wearable devices assist in achieving this goal by keeping users connected throughout their daily activities. In an age of instant access, wearable technology enables users to obtain information and data without the constant need for phones or laptops, which can be viewed as poor manners in workplace settings.

    Given this, wearable technology can also be seen as potentially distracting, particularly for devices that access the internet and send real-time notifications. Constant pings and alerts throughout the day can replicate those from a smartphone, compelling users to carefully manage their usage to avoid becoming overly dependent on the devices. Practicing mindfulness is crucial.

By weighing the advantages and disadvantages—with a focus on the people using them—it’s possible to utilize wearables as a beneficial force for employees and customers alike.

    Here are some suggestions for fostering better relationships with wearable technology, which many people tend to overlook:

    – Set appropriate daily usage limits.
    – Determine how a device may provide essential assistance for disabilities.
    – Establish clear goals linked to habits and lifestyle; if they prove too ambitious, modify them.
    – Use them discreetly during professional engagements.
    – Plan your budget—some high-end models may offer features beyond what some users actually require.

With a considerate and principled approach, wearable technology can significantly contribute to healthier and more knowledgeable lifestyles. However, issues of privacy, accuracy, and the risk of overdependence still need continuous assessment.

  • Next-Generation Geothermal Technologies Are Heating Up

    Beneath our feet lies a nearly unlimited source of energy, but while a few fortunate locations have geothermal heat near the surface, the majority of the world will need to dig much deeper. The challenge lies in how to reach adequate depths.

    There are certain places globally where energy literally rises to the surface. In Iceland, which has over 200 volcanoes and numerous natural hot springs, accessing this energy is relatively easy. The country is dotted with steaming water pools, heated by geothermal activity just below the Earth’s crust. Geysers erupt boiling jets of water and steam into the air.

    Iceland now utilizes geothermal energy to heat 85% of its homes, and 25% of the nation’s electricity is also derived from power plants that harness this underground heat. This presents an attractive opportunity—an almost limitless energy source waiting to be tapped into.

    Geothermal energy provides an essentially inexhaustible green energy option worldwide. Furthermore, it is “always on,” unlike wind or solar energy, because the heat is perpetually emitted from the Earth’s molten core and the decay of naturally occurring radioactive materials in our planet’s crust. In fact, the Earth releases such massive amounts of energy as it cools that the heat lost into space annually could satisfy the world’s total energy requirements many times over. The challenge remains in how to access that energy.

At present, only 32 countries across the globe operate geothermal power plants. There are fewer than 700 such plants worldwide, and together they produced around 97 terawatt-hours (TWh) of electricity in 2023. This amount is less than half of the electricity generated by solar energy in the US alone and falls significantly short of projections for geothermal's potential contribution to the global energy landscape. Some estimates suggest that geothermal could generate approximately 800-1,400 TWh of electricity annually by mid-century, along with an additional 3,300-3,800 TWh per year of heat.

    “The Earth itself has the potential to tackle a variety of challenges in the transition to a clean energy future,” stated Amanda Kolker, geothermal programme manager at the National Renewable Energy Laboratory (NREL) in the US, when presenting a report on geothermal energy’s potential in 2023.

However, not all countries are as fortunate as Iceland, where reservoirs of hot water at temperatures between 120-240°C (248-464°F) can be easily accessed at shallow depths. In various regions of the country, wells drilled to depths of up to 1.5 miles (2.5 km) can reach temperatures as high as 350°C (662°F). For instance, Iceland’s main geothermal site at Reykjanes has drilled exploratory wells down to 2.9 miles (4.6 km) to access superheated fluids reaching up to 600°C (1112°F). Currently, geothermal energy extraction occurs using shallower wells that tap into temperatures around 320°C (608°F) to generate 720 gigawatt-hours (GWh) of electricity yearly.

    One reason why geothermal energy is not more widely used is the significant initial investment needed for energy extraction. Additionally, physically reaching these depths has also presented challenges thus far.

    For other regions of the world to benefit from this geothermal clean energy bounty, deeper drilling is essential to access the necessary temperatures for electricity generation or large-scale heating for nearby communities.

Across much of the globe, temperatures typically increase by 25-30°C (45-54°F) for every kilometer one descends through the Earth’s crust. For instance, in the UK, the subsurface temperature at roughly 5 km (3 miles) deep is about 140°C (284°F), according to the British Geological Survey.
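The UK figure follows from the gradient itself. As a minimal worked estimate, assuming a surface temperature of roughly 10°C and a mid-range gradient of 26°C per kilometer:

```latex
% Assumed: surface temperature ~10 C, mid-range gradient 26 C/km
\[
  T(5\,\mathrm{km}) \;\approx\; 10\,^{\circ}\mathrm{C}
    + 26\,\tfrac{^{\circ}\mathrm{C}}{\mathrm{km}} \times 5\,\mathrm{km}
  \;=\; 140\,^{\circ}\mathrm{C}
\]
```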

However, if one digs deeply enough, it is feasible to reach a point where water temperatures exceed 374°C (705°F) at pressures greater than 220 bars (with one bar representing average pressure at sea level). This combination of temperature and pressure produces an energy-rich state known as supercritical, in which water is neither liquid nor gas. The hotter and more pressurized it becomes, the more energy it holds.

    In fact, a single superhot geothermal well could produce five to ten times the energy generated by current commercial geothermal wells, according to the NREL.

    One significant obstacle, however, is that conventional rotary drills—even those equipped with diamond tips—are poorly suited for reaching the depths required to access these temperature levels. In the enigmatic deep underworld, characterized by uncertain geology, extreme temperatures, and immense pressures, drill components frequently fail, and preventing holes from becoming blocked presents a constant struggle.

    In 2009, a team involved in the Iceland Deep Drilling Project accidentally reached supercritical conditions when they drilled into a magma chamber located roughly 1.2 miles (2km) beneath the surface at the Krafla volcano. The steam released from this well was extremely hot and acidic, complicating its usability. The intense pressures and temperatures made it challenging to manage, requiring intermittent discharges for about two years until a valve failure led to sealing the hole.

    Deep drilling is often a costly and time-intensive process

The deepest hole ever bored by humans dates back to the Cold War, during which there was competition between superpowers to drill as deeply as possible into the Earth’s crust. The Soviets succeeded in drilling through 7.6 miles (12.2 km) of rock, creating the Kola Superdeep Borehole on the Kola Peninsula in the Arctic Circle. They spent nearly two decades reaching that depth, which still stands as the greatest depth humans have reached in the Earth.

    The National Renewable Energy Laboratory (NREL) estimates that drilling a 1 km deep well costs about $2 million (£1.57 million), while drilling four times that depth can range from $6 million to $10 million (£4.7 million to £7.87 million) with current technology.

    However, deep geothermal energy has the potential to offer significant cost savings in comparison to traditional geothermal systems, thanks to the elevated temperatures and pressures accessible deeper in the Earth’s crust. Some research indicates that deep geothermal energy could provide heating for communities at prices comparable to other heating methods, such as gas, but with reduced greenhouse gas emissions.

    With this in mind, innovative researchers and companies are exploring new drilling methods and technologies to create some of the deepest holes ever in order to harness geothermal energy in regions previously thought unsuitable.

    Quaise Energy, a spin-off from the Massachusetts Institute of Technology (MIT), aims to drill to depths of 12 miles (20 km) to access temperatures exceeding 500°C (932°F). They are employing a tool based on years of study into nuclear fusion technology. “While others are using traditional drills, we are introducing microwaves into the ground for the first time,” states co-founder Matt Houde.

    He and his team are testing millimeter-wave directed energy beams that can vaporize even the toughest rock. This technology directs a high-powered radiation beam, similar to microwaves but at a higher frequency, onto a rock segment, heating it up to 3,000°C (5,432°F) to melt and vaporize it. By directing the beam to penetrate the rock, it allows for hole creation without the debris and friction associated with conventional drilling methods.

    “Millimeter-wave drilling can function largely independently of depth,” Houde explains. “Additionally, this millimeter-wave energy can pass through dirty and dusty environments.”

    This technology is derived from nuclear fusion plasma experiments conducted by Paul Woskov, an engineer at MIT’s Plasma Science and Fusion Center. Since the 1970s, millimeter-wave directed energy has been investigated as a means to heat plasma in nuclear fusion reactors, but a few years ago, Woskov discovered an alternative application for the technology and began using millimeter-wave beams produced by a gyrotron to melt through rock.

    So far, the technology has only undergone laboratory testing, achieving shallow drilling in relatively small rock samples; however, the company claims a drilling rate of about 3.5 m (11.5 ft) per hour. Although this is slower compared to traditional methods, it offers other advantages, as the “drill bit” does not physically grind through the rock, so it should not wear down or require replacement. Quaise Energy is currently in the final stage of lab testing for millimeter-wave technology and plans to commence field trials in early 2025.
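Some rough, assumed numbers show why progress is measured in single meters per hour. Taking a basalt-like density of about 2,900 kg/m³ and a specific heat of roughly 1 kJ/(kg·K), merely heating the rock through some 3,000°C demands, per cubic meter:

```latex
% Assumed round values: basalt-like density 2,900 kg/m^3, specific
% heat ~1 kJ/(kg K), heated through ~3,000 K; latent heats excluded.
\[
  E \;\approx\; 2900\,\tfrac{\mathrm{kg}}{\mathrm{m}^3}
      \times 1\,\tfrac{\mathrm{kJ}}{\mathrm{kg\,K}}
      \times 3000\,\mathrm{K}
  \;\approx\; 8.7\,\mathrm{GJ/m^3}
\]
```

At 3.5 m per hour in a hypothetical 20 cm-diameter bore, that is roughly 0.11 m³ of rock an hour, implying a continuous beam power of at least a few hundred kilowatts before the latent heats of melting and vaporization are paid; gyrotrons built for fusion research operate at this scale and above.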

    Nonetheless, transferring millimeter-wave drilling technology from the lab to full-scale operations presents challenges.

“They have never been utilized in the deep high-pressure subsurface environment before,” Woskov comments. “The changes resulting from intense energy-matter interaction during drilling necessitate a new learning curve.”

    A Slovakia-based company, GA Drilling, is investigating an alternative high-energy drilling technology designed to penetrate the Earth’s crust. They are employing a pulse plasma drill that utilizes very brief, high-energy electric discharges to break apart rock without melting it. This method prevents the creation of thick molten rock, which can be challenging to remove and may hinder further drilling. “As the process rapidly disintegrates the rock with brief shocks, there isn’t enough time for melting to occur—thus, the frequency of needing to pull up and change the drill bit is significantly lower,” states Igor Kocis, the chief executive and chairman of GA Drilling. “Our current development program aims for depths of five to eight kilometers (3-5 miles)—and eventually over 10 kilometers,” he adds. “These depths will provide nearly universal access to geothermal energy.”

    Research into pulse plasma drills, which use extremely short energy pulses to fragment rock with ionized gas reaching temperatures of up to 6,000°C (10,832°F), is being pursued by a European consortium led by the Geothermal Energy and Geofluids (GEG) group, along with partners from Germany and Switzerland.

    GA Drilling has also collaborated with Konstantina Vogiatzaki, an associate professor of engineering science at the University of Oxford, to apply advanced mathematical techniques to manage supercritical fluids when accessing deep Earth energy sources through plasma drilling. “We focused on determining the ideal combustion system for a full-scale drilling tool, paving the way for better control of ultra-high pressure combustion via plasma drilling,” Vogiatzaki explains.

    Others are exploring methods beyond our own planet to facilitate deeper drilling. Technologies initially developed for planetary exploration missions on the intense surface of Venus, where temperatures can soar to 475°C (887°F), are being repurposed by companies in the geothermal drilling sector. Ozark Integrated Circuits, an electronics firm located in Fayetteville, Arkansas, has been modifying circuits to endure extreme temperatures, suitable for use in deep Earth geothermal drilling rigs.

    In its efforts, the U.S. National Renewable Energy Laboratory (NREL) has implemented artificial intelligence to analyze complex subterranean conditions to identify optimal drilling sites for supercritical water and to help foresee and recognize issues with drills before they escalate into significant problems.

    Some companies are already advancing in deep Earth exploration. The geothermal firm Eavor stated to the BBC that in 2024, it achieved a depth of three miles (5 kilometers) with two vertical wells at a site in Geretsried, Bavaria, Germany. It is utilizing two of the largest land-based drilling rigs in Europe to develop a commercial-scale facility in Geretsried that aims to extract geothermal heat by circulating water in a closed-loop system referred to as the Eavor Loop. This system operates similarly to a large radiator, where cold water in the loop is heated underground and then returned to the surface for electricity generation and distribution to nearby homes through a district heating network. Eavor anticipates beginning energy production at the site in early 2025, according to John Redfern, the company’s CEO and president.

    “Our technology aims to reach drilling depths of up to 11 kilometers (6.8 miles) in the future,” remarks geologist and co-founder of Eavor, Jeanine Vany. “I am confident we can make significant strides in tapping superhot rock within the next three to five years.”

    The closed-loop design also helps mitigate some contamination issues associated with extracting superheated water from deep geothermal wells, as highlighted by the Iceland Deep Drilling Project in 2009. It may also reduce emissions of harmful gases such as hydrogen sulfide, which can be a byproduct of open-loop geothermal systems.

    Vany further emphasizes that deep geothermal energy requires minimal surface space, indicating that it could be integrated into urban settings in the future.

    However, there are additional challenges that need to be addressed. It remains uncertain how simple it will be to maintain deep geothermal wells and prevent them from becoming obstructed.

    The pursuit of deep geothermal energy may also revitalize aging fossil fuel power plants, as nations aim to phase out their traditional carbon-emitting energy sources. Retrofitting old coal power plants into geothermal facilities could provide these steam-powered generators with a renewed purpose and facilitate the swift establishment of geothermal plants by leveraging existing electricity transmission infrastructure. Woskov has pinpointed an unused coal power facility in upstate New York, which he hopes to reactivate before the decade concludes, to generate electricity from subsurface heat.

    There is a certain poetic element to this transition—a power station that once operated on a polluting fuel extracted from the earth finding new vitality in the clean energy movement with a source from deeper beneath the surface. The looming question remains—will they manage to drill deep enough?

  • Virtual Reality (VR) has had a major impact on the healthcare sector

    Virtual Reality (VR) has significantly influenced the healthcare industry, with extensive applications in both training and patient education and care.

    In the realm of psychology specifically, numerous studies indicate that VR exposure therapy can provide various benefits for patients.

    But what precisely does VR exposure therapy entail, and how is it implemented?

    What is VR Exposure Therapy?

    To comprehend what Virtual Reality Exposure Therapy (VRET) is, we first need to break down the term into its two parts: VR and exposure therapy.

    Virtual Reality (VR)

    VR refers to computer-generated technology designed to create a simulated environment. Utilizing VR headsets, a user finds themselves immersed in a particular 3D world that they can engage with.

    Within this environment, many senses can be activated (sight, sound, touch, and occasionally smell), which helps to fully immerse the user in the artificial setting.

    Exposure Therapy

    The American Psychological Association defines exposure therapy as a psychological treatment developed to assist individuals in confronting their fears.

    Individuals with fears (such as heights, flying, or spiders) often avoid situations or activities that trigger these fears. Although this may temporarily alleviate anxiety, over time, the fear may intensify.

    In exposure therapy, a psychologist establishes a secure environment that incorporates those fears, enabling the patient to confront them and ultimately lessen avoidance and anxiety.

    VRET

    Now that we grasp the individual meanings of VR and exposure therapy, we can understand that VRET is a form of exposure therapy that employs VR technology to help expose patients to a safe environment where they can face and diminish their fears.

    How do patients begin with VRET?

    Typically, patients start VRET by getting acquainted with their therapist and discussing in detail what led to their trauma. The therapist will then create a customized VRET environment designed specifically for that patient.

    To commence the therapy, patients will don a VR headset featuring a simulated environment that replicates their trauma. There may be sounds, sights, vibrations, or smells aimed at re-creating the experience and eliciting an emotional reaction.

    This process exposes patients to what they may be trying to evade. Through this method, they can confront the situations that instill the most fear in them.

    Following the session, patients engage in a discussion about their experience with their therapist, who will gain insight into their trigger points and how to best assist them in recovery as they move forward.

    Is it effective?

    VRET is still a relatively recent form of therapy, so research continues regarding the complete advantages of this approach. Nonetheless, several studies have already demonstrated its potential effectiveness.

For instance, Jimmy Castellanos, a Marine Corps veteran, experienced post-traumatic stress disorder for an extended period after his service in Iraq. His psychiatrist suggested this method, which virtually transported him back to the traumatic memory repeatedly until his triggers ceased to generate anxiety.

    Castellanos remarked on the experience:

“It was an entirely different experience. I don’t recall having a physical reaction… In just 13 weeks, I had transformed who I had been for the prior ten years. Before the treatment, 80-90 percent of my dreams were related to Iraq. Now, I can’t remember the last time I had one. I lead a completely different life now.”

It is well known that soldiers returning from combat zones frequently suffer from PTSD. Recent advancements in VR technology have enabled these veterans (and many patients dealing with PTSD, anxiety, etc.) to finally receive the assistance they need and deserve.

    Why is it effective?

    VRET has demonstrated remarkable benefits in addressing various disorders, particularly PTSD, anxiety, and phobias.

Regrettably, these disorders are currently on the rise; the CDC has reported a notable increase in anxiety disorder symptoms, and many healthcare professionals in the U.S. have noted rising PTSD rates amid the ongoing COVID-19 pandemic.

    The effectiveness of VRET lies in the fact that individuals often develop anxiety and avoid situations that may remind them of a traumatic event. However, VRET encourages them to confront such situations.

    For example, a war veteran with PTSD triggered by military combat might react strongly to the sound of fireworks. VRET allows them to face such triggers in a controlled setting. With prolonged exposure, they learn coping mechanisms for the anxiety and reframe their thoughts and feelings regarding a particular event.

    Ultimately, the patient becomes accustomed to the triggers, embraces the experience, and their anxiety or stress response diminishes in intensity.

    The same principle applies to using VRET for other forms of PTSD, anxiety, and phobias.

    VRET vs. In Vivo Therapy

    Historically, many psychologists have utilized “in vivo” therapy, which involves guiding patients through exposure-based activities face-to-face. For instance, an individual dealing with agoraphobia may be taken to a public location to help confront their fears, or someone who has a fear of flying might visit the airport with their therapist to replicate the experience of boarding a plane.

While this therapy can be effective, it often depends on the patient’s mobility and their access to environments where they can face their fears without becoming overwhelmed, which makes finding appropriate settings challenging. In contrast, VRET enables therapists to adjust the exposure intensity according to the patient’s circumstances, supporting them in gradually confronting their fears over time. Furthermore, this type of therapy can occur remotely from any location, broadening access to patients who would struggle to take part in in vivo therapy.

    VRET represents a promising new approach for various mental health challenges, and this groundbreaking technology is likely to positively influence the lives of many patients.

    Augmented reality, or AR, involves overlaying digital information onto real physical objects or environments—think of the Pokemon Go phenomenon from a few years ago.

    However, AR can serve more purposes than just capturing animated characters in real life. In the medical field, it has applications in training, educating patients, assisting during surgeries, and other related functions.

    Advances in both hardware and software have paved the way for numerous new and cost-effective applications of AR in healthcare, even for small medical practices. AR tools can be utilized through specific headsets, special eyewear like Google Glass, smartphones or tablets, or other specialized AR devices.

    Let’s explore some of the ways AR can enhance healthcare in medical education and practice, as well as in consumer health applications.

    AR for Medical Students and Practitioners

    For medical professionals—including surgeons and nurses—understanding a patient’s anatomy is crucial before performing any medical procedure. AR can assist by showing visualizations of what lies beneath the skin, increasing accuracy in injections or incisions, or simply providing a clearer view of human anatomy.

At Case Western Reserve University’s medical school, for example, students can utilize Microsoft’s HoloLens to observe a large 3D model of the human body, allowing them to navigate holographic representations of various tissue layers, muscle, and bone, thus enhancing their anatomical knowledge beyond what’s available during real procedures.

    Nurses can employ AR handheld scanner technology to visualize their patients’ veins, making it easier to locate the vein for blood draws or vaccinations. This innovation saves time in vein identification while ensuring patient comfort throughout the process.

    Surgeons can leverage AR to gain an augmented sense of vision. By using AR headsets featuring eye displays that project images of the patient’s internal anatomy derived from CT scans, they can obtain a much clearer understanding of the underlying structures. At Johns Hopkins, neurosurgeons have implemented this technology for tasks such as inserting screws during spinal fusion and excising cancerous tumors from patients’ spines.

    “When augmented reality is used in the operating room, it functions like having a GPS navigator before your eyes in an intuitive manner, eliminating the need to glance at a separate display to check the patient’s CT scan,” explains Timothy Witham, M.D., director of the Johns Hopkins Neurosurgery Spinal Fusion Laboratory and a professor of neurosurgery at the Johns Hopkins University School of Medicine.

    AR has also proven advantageous in allowing doctors to minimize their teams to limit potential COVID-19 exposure. Imperial College Healthcare NHS Trust used Microsoft HoloLens 2 and Dynamics 365 Remote Assist to provide doctors with hands-free video consultations with other specialists during procedures while enabling them to access medical notes and X-rays within their direct line of sight.

    “This means that all the information and specialist care you require at the patient’s bedside is readily available, all through one headset,” stated Dr. James Kinross, a surgeon and senior lecturer at Imperial College.

    Although the technology is still in its early stages, it is likely that medical schools and practices worldwide will harness the advantages of AR-assisted surgeries and other medical procedures. AR provides access to in-depth insights into patient medical data and anatomical information, which medical professionals and students can learn from and rely on during their operations. It represents a cost-effective and convenient means of gaining “x-ray vision” to enhance their performance in their respective tasks.

    AR for patient care and education
AR offers numerous advantages for patients as well.

    For instance, during a Google Glass trial with the Australian Breastfeeding Association, nursing mothers wore Google Glass while consulting with virtual lactation consultants. This allowed the consultants to view exactly what the mothers were experiencing and provide valuable feedback and advice to enhance their nursing sessions—without requiring the mothers to set their babies down.

    Patients can enhance their understanding of drug interactions through an app that scans drugstore shelves and generates alerts about which over-the-counter medications might negatively interact with their existing prescription drugs. The app, offered by Cigna, aids patients in reducing anxiety, preventing adverse reactions, and boosting the efficacy of their medications.

    Lastly, AR can encourage individuals to prioritize their health by gamifying physical activity. Applications like Pokemon Go and Ghostbusters World offer a fun and engaging experience that promotes increased walking or running as part of daily routines.

    The augmented reality market in healthcare is currently expanding at a compound annual growth rate of 32.9%. As hardware and application developers create more affordable AR technologies, and as medical practitioners, educators, and consumers recognize their value, AR is expected to play a larger role in our healthcare experiences, both at home and during office or operating room visits.

    Researchers are continually investigating and improving virtual reality exposure therapy for mental health care, specifically in treating PTSD, anxiety, OCD, and various other mental health issues.

    Virtual reality is becoming an increasingly important tool across different industries, and healthcare is no exception. The introduction of virtual reality in surgical training, pain management, management of neurological disorders, pediatrics, and mental health care has yielded distinct and multifaceted advantages for the healthcare system. As research into virtual reality in healthcare progresses, an increasing number of mental health professionals are considering its role in exposure therapy for treating fear-based or anxiety-related conditions.

    Exposure therapy, which dates back to the 1900s, has been extensively utilized to manage and treat mental health disorders. Despite its proven effectiveness, there are many limitations in terms of accessing and initiating exposure therapy, as well as maintaining its continuity. With personal and safety obstacles posing challenges, researchers have turned to innovative virtual reality technologies to address some of these issues. However, even the most advanced technologies face their own set of challenges.

    Exposure Therapy

    Exposure therapy is utilized to treat a variety of mental health disorders, including phobias, panic disorders, social anxiety, generalized anxiety, obsessive–compulsive disorder (OCD), and PTSD.

    The American Psychological Association (APA) identifies four primary categories of exposure therapy: in vivo exposure, imaginal exposure, interoceptive exposure, and virtual reality exposure.

    In vivo exposure involves real-life, direct interaction with a specific situation or activity that may trigger symptoms. For example, someone with acrophobia, or fear of heights, might go on a rollercoaster or ascend a mountain. A person afraid of public speaking may be asked to deliver a speech.

    Imaginal exposure requires patients to vividly visualize the situation or object that provokes a fear response. For instance, someone with claustrophobia may need to imagine being in a confined space.

    Interoceptive exposure therapy involves exposing the patient to a harmless physical sensation to help them understand that this feeling does not signify danger.

    Overall, exposure therapy can be a complex treatment avenue because, even with appropriate diagnosis and support, fear can lead patients to hesitate or avoid treatment.

    In addition to the challenges associated with starting treatment, realistic exposure therapy may not always be feasible. For instance, in vivo exposure could be dangerous or inaccessible, while the traditional alternative, imaginal exposure, can be difficult to regulate.

    Virtual Reality Exposure Therapy

    Perhaps the most innovative type of exposure therapy is virtual reality exposure therapy (VRET). VRET leverages VR technology to create an entirely virtual, immersive experience, providing exposure therapy in the comfort of an office, home, or healthcare facility.

    According to the APA, virtual reality therapy is “a form of in vivo exposure therapy in which clients are active participants in a three-dimensional computer-generated interactive environment that allows them a sense of actual involvement in scenarios related to their presenting problems.”

    VRET is a form of exposure therapy that utilizes computers to create virtual settings, which users experience through virtual reality headsets or head-mounted displays (HMDs). Patients may find VRET to be more manageable and appealing compared to other types of exposure therapy.

    An article in Campbell Systematic Reviews states, “The primary goal of VR is to substitute sensory experiences from the actual world and establish a sense of presence for the user in the virtual realm. To engage with the user in real-time, the VR system gathers data about the user’s position and head movements using sensors and input tools like a head tracking system or joystick.”
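In outline, that sense-update-render cycle is simple. Below is a minimal sketch in Python; all class and function names here are placeholders for illustration, not any real VR SDK:

```python
# Minimal, generic sketch of a VR sense-update-render loop.
# HeadTracker, Renderer, read_head_pose, etc. are placeholders.
import time

class HeadTracker:
    def read_head_pose(self):
        # A real HMD returns position + orientation from its sensors;
        # this stub returns a fixed pose.
        return {"position": (0.0, 1.7, 0.0), "orientation": (0, 0, 0, 1)}

class Renderer:
    def set_camera(self, pose):
        self.camera_pose = pose          # viewpoint follows the head

    def render_scene(self):
        pass                             # a real renderer redraws here

def run(tracker, renderer, fps=90, frames=3):
    # VR systems target high, steady frame rates to preserve the
    # user's sense of presence in the virtual environment.
    for _ in range(frames):
        start = time.monotonic()
        pose = tracker.read_head_pose()  # 1. sense head movement
        renderer.set_camera(pose)        # 2. update the viewpoint
        renderer.render_scene()          # 3. redraw the frame
        time.sleep(max(0.0, 1.0 / fps - (time.monotonic() - start)))

run(HeadTracker(), Renderer())
```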

    Psychological Assessments

    According to research published in Dialogues in Clinical Neuroscience, virtual reality technology opens up a unique opportunity for deeper mental health evaluations by immersing patients in real-life scenarios.

    While traditional psychological assessments have evolved since their inception, they fall short in accurately reflecting the daily experiences of patients. Limitations in live exposure lead to difficulties in precisely evaluating anxiety, PTSD, phobias, and other mental health issues, potentially affecting recommended treatment strategies.

    Using VR assessments, mental health practitioners can glean insights into a patient’s psychiatric condition through virtual exposure. For instance, a study featured in the Annals of General Psychiatry found that both real and virtual images of food triggered similar reactions in people with eating disorders, indicating that VR might offer reliable assessments across various mental health issues.

    Phobias

The first demonstration of VRET’s effectiveness, reported by Barbara Rothbaum, PhD, in the American Journal of Psychiatry, found that it helped patients overcome a fear of heights.

    A study in Cognitive Behavior Therapy examining VRET for individuals with public speaking anxiety concluded that VRET can be an essential therapeutic tool when implemented correctly and as part of the suitable care regimen.

    Anxiety

    As noted in Campbell Systematic Reviews, VRET has been investigated as an adjunctive therapy combined with traditional cognitive behavioral therapy for individuals with social anxiety disorder (SAD).

    Research has further compared VRET as a standalone mental health intervention for anxiety, highlighting similar outcomes between VR therapy and conventional treatments.

    Cognitive behavioral therapy (CBT) is the standard treatment for SAD and addresses other prevalent comorbidities such as depression. According to the systematic review and meta-analysis featured in Campbell, many individuals with SAD delay or avoid treatment due to high costs, extensive travel requirements, and other obstacles.

    Post-Traumatic Stress Disorder

    An article from the University of Central Florida (UCF) discusses the potential of VR exposure therapy in addressing post-traumatic stress disorder (PTSD). In addition to facilitating straightforward assessments, a publication by Albert Rizzo in the Annals of the New York Academy of Sciences examined the capacity of VRET to replicate combat experiences for soldiers who served in Iraq or Afghanistan.

    Rizzo emphasizes that a customized approach to exposure therapy for combat-related PTSD could yield more patient information and assist in tailoring treatments to their specific experiences, enabling healthcare providers to evaluate PTSD symptoms and adjust medications or therapeutic strategies as needed.

Despite evidence from individual studies showing the efficacy of VR technology, a systematic review and meta-analysis published in the International Journal of Environmental Research and Public Health does not indicate a robust effect of VRET on PTSD symptoms.

    Findings suggest that gradually increasing stimuli throughout a session, rather than in response to the patient, diminishes the effectiveness of virtual reality exposure therapy for PTSD patients.

    “Unfortunately, standard VRET involves increasing the intensity and frequency of trauma-related stimuli as the session continues, rather than tailoring this to the subject’s reactions. This approach may hinder full immersion for PTSD patients, as trauma-related stimuli are not presented with respect to their responses,” researchers noted in the article.

    An alternative to the conventional progression of VRET is VR-based graded exposure therapy (VR-GET). VR-GET is a revised version of VRET treatment that observes a patient’s reactions during PTSD therapy. By assessing a patient’s physiological and emotional responses, the therapist can adjust treatment protocols accordingly.
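In sketch form, the contrast with standard VRET comes down to a single gating condition. A minimal illustration in Python, with an invented arousal threshold and placeholder inputs rather than anything from a real clinical system:

```python
# Illustrative contrast only; thresholds and step sizes are invented,
# and a real system would use clinician-supervised monitoring.

def standard_vret_step(intensity, step=0.1):
    # Fixed schedule: stimulus intensity rises as the session
    # continues, regardless of the patient's reaction.
    return min(1.0, intensity + step)

def vr_get_step(intensity, heart_rate_bpm, calm_threshold=90, step=0.1):
    # Graded exposure: escalate only once measured arousal has
    # settled below a chosen threshold; otherwise ease off so the
    # patient stays within tolerable limits.
    if heart_rate_bpm < calm_threshold:
        return min(1.0, intensity + step)
    return max(0.0, intensity - step / 2)

# Example: a patient whose heart rate stays elevated is held back
# rather than pushed onward.
print(vr_get_step(0.5, heart_rate_bpm=110))  # -> 0.45 (eases off)
print(vr_get_step(0.5, heart_rate_bpm=80))   # -> 0.6  (escalates)
```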

    Limitations

    Although numerous potential benefits of VRET are recognized, there are also notable limitations. One significant barrier is the cost of VR technology, which can be prohibitive for patients, clinicians, or healthcare systems lacking the financial means for high-tech headsets and other necessary VR components.

    The use of virtual reality (VR) in clinical practice faces limitations beyond financial costs for researchers and clinicians. The Dialogues in Clinical Neuroscience states, “The biggest challenge to implementing VR in clinical settings currently is the scarcity of evidence-based VR programs that can be easily purchased and utilized by clinicians and researchers. Several laboratories globally are creating their own software and conducting tests, but these solutions are not yet available for public purchase. The limited commercially available products developed by software firms have not been evaluated to determine their safety and effectiveness.”

    Furthermore, these virtual solutions present challenges regarding the time, funding, and resources necessary for the upkeep and enhancement of software and hardware. The possibility of technological failures also poses a significant risk for patients with delicate conditions, like those with panic disorders.

    Additionally, ethical issues relating to data security, privacy, confidentiality, and technological hurdles exist.

    While VR is not a complete substitute for psychotherapy or other psychiatric treatments, it may offer an innovative approach to various conditions; however, further research, development, and incorporation into the healthcare system are vital for ensuring safe, effective, and affordable care.

    VR therapy employs a computer-generated environment as a treatment tool. An individual might use it to practice skills, confront fears in a secure setting, or build confidence in social situations.

    VR therapy is not intended to replace traditional treatments for mental health disorders. Instead, it is often utilized by clinicians as a supplementary intervention.

    For instance, a therapist might integrate VR into cognitive behavioral therapy (CBT), enabling a client to practice new skills in a more controlled setting compared to real life.

    Numerous studies indicate that virtual reality therapy can effectively manage various mental health issues, including anxiety, depression, post-traumatic stress disorder (PTSD), and phobias.

    VR therapy leverages virtual environments and scenarios as therapeutic tools. A user may don a headset or utilize a device to immerse themselves in the virtual realm and engage with it.

    VR technology allows for the simulation of a vast array of environments and situations, making it advantageous for creating scenarios that are difficult to replicate in the real world or that may be too intimidating or hazardous.

    Therapists might employ VR to help clients navigate real-life challenges, revisit past experiences, or assist individuals in facing their fears in a controlled manner.

    Researchers published the first study on VR therapy over 25 years ago, and as technology has advanced, interest in this therapeutic tool has grown.

    How does VR therapy function?

    VR therapy operates by providing individuals with the opportunity to enact, practice, or revisit situations in a safe environment. This approach may:

    • teach skills
    • alleviate fears
    • enhance confidence
    • assist in processing past events

    By eliminating risks present in the real world, VR can render frightening experiences more manageable. For instance, someone with a phobia might not feel ready to confront it in reality, but engaging with a simulation in VR could help them gradually acclimate to the feared object and understand that it does not pose a threat.

    In this manner, VR may facilitate a connection between therapy and real-life experiences.

    What conditions could benefit from VR?

    VR therapy was initially designed to address phobias, but over the years, therapists have experimented with it for a range of mental health disorders.

    Phobias

    Therapists can utilize VR for exposure therapy, which is a fundamental component of phobia treatment. This method involves slowly introducing a person to their fear in small, manageable increments, ensuring their consent.

    While exposure therapy can occur without VR, it is sometimes challenging to achieve. For example, someone who fears flying cannot simply take a brief flight and progressively build their tolerance. Additionally, encountering fears like wild animals could expose individuals to danger.

    VR broadens the opportunities for exposure therapy. A systematic review from 2022, which examined 18 studies, found that this method improved nearly all types of specific phobias addressed in the reviewed research, including animal phobias and fears related to blood or injections.

    PTSD

    Exposure therapy may also be beneficial for PTSD, but, as with phobias, controlled exposure to a traumatic event can be difficult and potentially unsafe.

    Several studies indicate that VR therapy serves as an alternative. For example, a 2019 review and meta-analysis of nine prior studies compared the outcomes of VR exposure therapy with no treatment.

    Compared to participants who received no therapy, those undergoing VR therapy reported a reduction in PTSD symptoms, with benefits persisting for at least three months following the conclusion of treatment.

    Social and emotional skills

    Individuals can practice a variety of social and emotional skills through VR therapy. For instance, they might rehearse addressing a conflict with their partner or request a raise from their supervisor. This enables them to safely experiment with new skills while under professional guidance.

    Anxiety and depression

    A review published in 2019 examined earlier studies and highlighted the potential utility of virtual reality (VR) in treating various aspects of anxiety and depression. It could:

    • assist individuals in understanding mental health
    • visualize cognitive-behavioral therapy (CBT) techniques
    • promote self-compassion
    • simulate therapies like gardening or animal-assisted therapy

    A scoping review from 2021 evaluated nine prior studies that integrated VR with CBT and concluded it could be beneficial for treating anxiety and depression.

    What is the cost of VR therapy?

    Within a therapist’s office, VR therapy typically costs about the same as conventional psychotherapy. Insurance may cover VR therapy in a clinician’s setting if the therapist is recognized by the insurance provider.

    Some therapists provide clients with VR devices to use at home to complement their sessions, while certain companies offer home VR units for self-care. Clients might rent these devices weekly, depending on the type of device.

    How can one find VR therapy?

    To explore VR therapy, an individual needs to locate a licensed psychotherapist who has access to a VR device. Online search engines and directories for therapists may assist with this process.

    VR therapy may be appropriate for individuals who:

    • experience specific phobias or fears
    • wish to practice particular skills
    • are not prepared or able to confront certain situations in real life

    Numerous companies provide home VR therapy through an app, allowing individuals to progress at their own pace. However, this format is not equivalent to traditional psychotherapy and may lack some benefits.

    When should one seek assistance?

    Individuals should seek help when any mental health concern negatively impacts their relationships, quality of life, or well-being, especially if self-care has not alleviated their symptoms. This support could be accessed through a doctor or any qualified therapist, regardless of whether they offer VR therapy.

    It is crucial to seek help if someone has thoughts of self-harm or suicide.

    Virtual reality therapy utilizes virtual reality to recreate various scenarios. This technique can assist individuals in acquiring new skills and addressing their fears in a secure setting. Initially, therapists employed it to treat phobias, but it is now used for a wide array of conditions.

    VR can create a controlled environment to tackle situations that might feel overwhelming or hazardous in the real world, which makes it valuable for exposure therapy. Nonetheless, as with any form of therapy, it is vital to receive care from a licensed and experienced provider.

    What is the experience of virtual reality exposure therapy like?

    You will spend time interacting with your therapist and discussing the events that led to your trauma. Following this, your therapist will establish the setting for your virtual reality exposure therapy (VRET). You may use a VR headset or enter a dimly lit room filled with screens that produce an immersive environment echoing what your trauma felt like.

    The experience can involve sights, sounds, smells, and vibrations to further replicate the traumatic event and emotional response. This setup aims to help you face the situations that induce fear and anxiety in a safe and monitored setting. You will review these immersive experiences with your therapist, and medications and coping skills training might also be integrated with your therapy.

    Please note: VRET may induce dizziness or headaches, particularly for individuals with brain injuries.

    What do patients say?

    “It was an entirely different experience. I don’t recall having the physiological response… In just 13 weeks, I had completely transformed from who I had been for the last ten years. Before the treatment, 80-90% of my dreams were related to Iraq. Now I can’t even remember the last time I had one. I am living in a completely new way now.” – Jimmy Castellanos, Veteran, U.S. Marine Corps

    “The layers … they just peel back and reveal your core. Initially, you resist, but eventually you break down and face it, and it’s truly incredible.” – Kevin Tergliafera, Veteran, Army National Guard

    Why does virtual reality exposure therapy work?

    After a traumatic event, some individuals develop a heightened fear response to stimuli, such as sights or sounds, that trigger memories of that trauma. This can lead them to avoid circumstances that incorporate those triggers, such as the sound and sight of fireworks for someone with PTSD from military combat. Exposure to these triggers in virtual reality lets individuals confront their fears in a controlled environment.

    Similar to traditional prolonged exposure treatment, this practice enables one to learn coping mechanisms and reevaluate thoughts regarding the traumatic incident. Ultimately, this can lead to becoming increasingly desensitized to the triggers and coming to terms with the experience. Over time, the stress responses to these triggers can diminish significantly.

    How substantial is the evidence?

    Research indicates that VRET may be effective in alleviating PTSD symptoms. Numerous studies have demonstrated that VRET is associated with a reduction in symptom severity for both PTSD and depression, and that the effectiveness of symptom relief tends to increase with the number of VRET sessions attended. These improvements have also been shown to persist over time, as seen in 3-month and 6-month follow-up evaluations.

    A randomized controlled trial confirmed these findings, indicating that patients undergoing VRET reported a decrease in symptoms related to PTSD, depression, and anger. This research concluded that VRET is most beneficial when combined with additional traditional treatment methods. Despite encouraging initial results, further research is required.

    What are the characteristics of effective virtual reality exposure therapy?

    Locate a licensed psychologist or another qualified therapist who has experience with prolonged exposure therapy, including VRET for PTSD and/or TBI. It is advantageous if they have experience dealing with your specific trauma source.

    Advantages of VRET Compared to Traditional Exposure Therapy

    There are various advantages to participating in virtual reality exposure therapy. Virtual reality acts as a link between a simulated stressful environment and the real world.

    Here are some of the advantages of VRET therapy:

    • It can be more cost-effective compared to real-life exposures: A virtual environment provides a budget-friendly and practical option, especially in situations where repeated real-life exposure may be too costly and dangerous, such as fears related to flying, heights, or wild animals.
    • Participants experience a sense of control: VRET employs specialized equipment and devices that create a highly realistic experience; however, if it becomes too overwhelming, the session can be halted at any point.
    • It can be beneficial for individuals lacking access to other treatment options: With the expansion of availability and quality of VRET, it may be possible to offer mental health care to patients who would otherwise have limited access to treatment.
    • It often yields enduring results: Several studies indicate that VRET can effectively address anxiety, PTSD, and depression, with symptoms remaining low during follow-up appointments. Although these previous findings are promising, additional research is necessary.

    Challenges & Obstacles to VRET Therapy

    While VRET demonstrates effectiveness, it remains an emerging field with several challenges and obstacles to consider.

    Here are some potential barriers to virtual reality exposure therapy you may encounter:

    • Acceptance of VR technology as a viable therapy option has been slow: Clinicians often prefer face-to-face treatments. Even when professionals are trained in VR, they seldom utilize it, partly due to misconceptions about this exposure-based method.
    • Access to and selection of VRET technology is restricted: VR software, equipment, and guidance on how to use them are not readily accessible to all therapists. Additionally, the broad array of materials can make it challenging for professionals to determine what is appropriate for themselves and their clients.
    • There is insufficient training available for clinicians, making it difficult to find a licensed provider: Despite the increasing interest in VRET, there’s a lack of training opportunities for professionals wishing to incorporate it into their practice. Furthermore, they may need to refresh their training each time new software or products become available.
    • More research is essential for understanding its effectiveness: Currently, VRET primarily addresses anxiety-related disorders such as PTSD and specific phobias. However, with the growing efficacy and popularity of VRET, more data is required to extend its benefits to other mental health conditions.

    What Equipment Is Typically Utilized in Virtual Reality Therapy?

    To partake in virtual reality exposure therapy, a provider employs programmed computers, immersive devices, and artificially created settings that replicate reality through a simulated experience. The individual undergoing virtual reality exposure therapy is fitted with a headset that grants access to the virtual environment. Contemporary virtual reality equipment has been adapted for smartphones, enabling the use of gyroscopes and motion sensors to monitor head, body, and hand positions, while therapists also track the client’s subjective units of distress.

    The equipment used during a virtual reality exposure therapy session is generally supplied by the therapist, although some individuals seeking this treatment may purchase their own headsets or goggles either through their therapist or online, with prices ranging from less than $50 to several hundred dollars.

    Virtual reality exposure therapy is suitable for various populations and ages. Besides adults, children and teenagers can be great candidates for innovative methods to develop healthy coping strategies. Research investigating the effectiveness of virtual reality exposure therapy for adolescents aged 13 to 16 who faced public speaking fears due to social anxiety showed positive outcomes in helping them manage their symptoms.

    Beyond its common applications, virtual reality is also being explored for treating sleep-wake disorders, enhancing sports performance, and addressing stress and test anxiety.

    Virtual reality has been widely employed as a prolonged exposure technique for treating PTSD in military personnel. Significant funding from military sources has facilitated numerous studies to assess the effectiveness of this method. With virtual reality exposure therapy, providers can create an immersive, 360-degree interactive computer-simulated environment.

    A meta-analysis that reviewed 14 studies involving military populations with PTSD demonstrated the high efficacy of virtual reality exposure therapy. Additionally, this therapy has been utilized in treating PTSD among military personnel through a program called Virtual Iraq. In this approach, soldiers use a head-mounted display and a gamepad to navigate a simulated Iraqi environment while traveling in a Humvee.

    Systematic exposure to these feared stimuli and settings aims to alleviate anxiety and traumatic stress symptoms. Initial findings among the first group treated revealed significant reductions in PTSD symptoms, with 75% of participants no longer meeting the DSM-5 criteria for the diagnosis. Another investigation using the Virtual Iraq framework reported approximately a 50% reduction in PTSD symptoms among veterans diagnosed with the condition.

    Virtual reality exposure therapy serves as a practical exposure therapy method for anxiety disorders, with strong empirical support. Research findings uphold the use of VRET to address anxiety and phobia symptoms. This technology-based treatment approach enables clients to cope with anxiety by facing their fears through gradual or repeated exposure.

    The ultimate aim is to alter thought patterns, behaviors, and reactions that hinder daily functioning. The feared stimuli can differ from individual to individual, but may include living creatures, objects, situations, activities, thoughts, mental images, physical sensations, or experiences. Encountering these feared stimuli can lead to the extinction of the fear response, which is beneficial compared to other forms of exposure therapy (e.g., in vivo exposure), as VRET allows access to the most feared cues.

    VRET is increasingly employed for specific phobias, such as an intense fear of animals. The virtual settings can replicate feared animals or insects like spiders, snakes, or roaches. Recently, it has also been adapted for treating fear of public speaking, fear of heights, addiction, bullying, claustrophobia, depression, and eating disorders.

    What contributes to the success of virtual reality exposure therapy? Virtual reality uses visual and often physical simulations to evoke sensory responses similar to real-life experiences. By integrating VR with various techniques, VRET enables individuals to confront feared stimuli and receive feedback in a secure environment, facilitating processing and diminishing established responses. Moreover, it allows individuals to gradually approach fears that might be harder to tackle in the real world. Ultimately, VRET provides a safe exposure experience that encourages individuals to challenge their reactions, process stimuli, and alleviate symptoms, enabling them to engage more fully in real-life activities.

    The effectiveness of exposure therapy, especially when combining multiple theories, can be explained through habituation, extinction, emotional processing, and self-efficacy theories. Each theory or a combination thereof can elucidate the success of virtual reality exposure therapy:

    • Habituation theory is utilized by repeatedly presenting a stimulus (for instance, an individual who has been attacked in a park and now avoids parks utilizes VR to be immersed in a virtual “park” environment) to lessen anxiety and boost familiarity.
    • Extinction theory is employed to diminish the conditioned response (such as fear, avoidance, anxiety) by weakening the reinforcement of an unconditioned stimulus (the experience of being attacked in the park); this association is weakened through repeated exposure to a conditioned stimulus (the park) without the occurrence of the unconditioned stimulus.
    • Emotional processing is implemented by repeatedly facing a stimulus (the park) to confront the response (panic, anxiety) and the unhealthy beliefs (for example, “I’m too weak to defend myself, I’m foolish for not anticipating that”) that were initially ingrained in one’s memory.
    • Self-efficacy is fostered by acquiring techniques to handle or master a fear-inducing situation, which affects a fear or anxiety response; by mastering these skills, one acknowledges their ability to manage a frightening circumstance and can use this knowledge with similar stimuli.

    What to Expect at Your First Session

    Prior to commencing virtual reality exposure therapy, the therapist will conduct an initial consultation to evaluate whether a prospective client is suitable. Once an individual is considered eligible for VRET, the therapist will administer a biopsychosocial assessment during the first session, allowing them to gather comprehensive information about the client and their therapy objectives before entering the treatment phase.

    After this assessment, the next step is treatment planning, which includes providing fundamental education about the treatment, establishing expectations, and giving the client an opportunity to ask questions or voice any concerns. Each therapist may have different operating procedures, but they will typically include many of the standard processes discussed here. The treatment process then commences and can occur either in person or through virtual means.

    What Is a Typical VRET Session Like?

    During the exposure therapy session, where the therapist may address a specific phobia or trauma, the client will encounter exposure to feared stimuli or environments. By using VR equipment such as a headset or goggles, the client will have direct access to the simulated or artificial environment while gradually increasing the intensity of their exposure to the stimuli.

    Thanks to the flexibility in regulating the simulated setting, the individual undergoing treatment should recognize that the therapist can lessen or eliminate exposure to the fear at any moment. With the inclusion of biofeedback equipment, physiological responses can be monitored and recorded using sensors throughout treatment. If the session is being conducted virtually, all that is required are the headset or goggles and access to a smartphone, since the virtual environment is navigated through an app on the individual’s phone, which the provider controls.

    VRET Examples

    While the fundamental procedures of VRET will be consistent for everyone, there may be some differences based on the client’s experiences and the specific VRET equipment being utilized.

    Here are a few instances of VRET in practice:

    VRET for a Phobia

    Dan, a 31-year-old man, recently purchased a house and soon discovered it was infested with spiders. Dan suffers from a severe phobia of spiders, often resulting in panic attacks. He moved in with his parents after scheduling pest control for his new home but has struggled to go back and even see how the house looks due to his fear and anxiety surrounding the spiders.

    Dan consults a VRET therapist to address this issue, as he wishes to return to his new residence. He and his therapist begin with an exposure hierarchy, listing his fears and ranking how comfortable he feels with each, and they decide on habituation and self-efficacy techniques.

    In each session, he wears VR goggles and biofeedback sensors because of his panic attacks and observes virtual spiders approaching him. He engages in this for an extended duration in each session while his therapist monitors his sensors. Following each exposure, they discuss the emotions that arise, partake in educational activities and coping practice, and occasionally repeat the exposure. Eventually, Dan feels comfortable enough to visit his home and, after 2 months, succeeds in moving back in with ongoing therapy.

    VRET for PTSD

    Jane, a 29-year-old woman, has recently returned from her third year-long deployment in an active combat zone. Although she adjusted well after her first return, she has recently started experiencing symptoms of post-traumatic stress disorder (PTSD). She discussed her symptoms frequently during her debriefing and, even 6 months later, continues to face flashbacks, hypervigilance, nightmares, irritability, and challenges at work and with her family.

    Jane began working with a VRET therapist knowledgeable in military matters to start addressing some of her symptoms, hoping to reclaim her life. Loud noises triggered her significantly, often causing reactions that others saw as exaggerated. Together, Jane and her therapist commenced sessions using Virtual Iraq VRET, employing a head-mounted display and game pad, gradually increasing the duration while identifying specific triggers and applying coping strategies and skills as they arose.

    Over time, Jane managed to return to her job and developed an action plan with her employers and family for managing loud noises and particularly disruptive flashbacks.

    VRET for Social Anxiety

    Taylor, a 19-year-old, was frequently labeled as shy during high school. However, Taylor struggled significantly in crowded settings, never took part in talent shows or groups that required extensive social interaction, and often found socializing challenging. The school counselor suggested that Taylor might be experiencing social anxiety disorder (SAD), but did not facilitate any treatment.

    After a year at college, Taylor continued to face many of the same challenges and found it difficult to connect with others, but desired friendships and connection while attending school. A therapist subsequently diagnosed Taylor with social anxiety following a comprehensive evaluation and recommended VRET. In therapy sessions, Taylor was engaged in social interactions and practiced communicating with several people for longer periods, using a headset and biofeedback sensors. Taylor was also encouraged to confront or challenge negative thoughts such as “I’m an idiot” that arose during interactions, both in real life and within the VR environment.

    Eventually, Taylor formed a small circle of local friends as well as some online friends and started participating more actively in classes.

    Who Can Provide VRET?

    There is specialized training available that equips providers with the necessary skills for delivering virtual reality exposure therapy. Nevertheless, there are currently no specific certifications mandated for offering this type of treatment. Any mental health professional interested in this method can undergo training through organizations like Psious, which is a leading entity supplying VR equipment tailored for treating various mental health disorders.

    Donna Davis, Ph.D., who directs the Oregon Reality Lab in Portland, Oregon, specializes in virtual reality therapy (VRT). She clarifies that VRT takes place in a computer-generated or 3-D environment and is entirely distinct from teletherapy. While teletherapy involves virtual talk therapy (like through Zoom), VRT focuses on utilizing a virtual setting, such as a computer game or headset. It is crucial to emphasize that for it to qualify as therapy, a licensed therapist must be present. Programs or videos aimed at relaxation or enhancing meditation do not qualify as VRT since there is no therapist involved.

    A specific kind of VRT is known as virtual reality exposure therapy (VRET), which immerses individuals in a highly realistic 3-D environment. This is often accomplished using a headset, but not always. For instance, if someone has a fear of heights, the 3-D setting might feature a glass elevator to aid them in confronting their fear. VRET is also employed to support individuals with various phobias, as well as those experiencing post-traumatic stress disorder (PTSD) or victims of violence.

    However, VRT does not always reach such an immersive level. Dr. Davis mentions that another version of VRT involves conversing with a therapist while assuming an avatar’s identity in a computer-generated setting. For example, Dr. Davis has been involved with a virtual reality support group for individuals with Parkinson’s disease on the online platform Second Life, where members create 3-D characters in an alternate universe. The group has been “meeting” consistently for over a decade. “Participants in the group develop an avatar, which allows them to feel more secure when opening up, as their actual physical identity remains hidden,” she states.

    As VRT remains relatively new, there are fewer therapists trained in its application compared to more traditional therapy methods. Consequently, access can be challenging. Dr. Davis recommends searching online for clinical therapists in your area to determine whether they have been trained in VRT or VRET. Another useful resource is Virtual Reality International, which maintains a database of VRT therapists.

    How Effective Is Virtual Reality Therapy?

    Lucy Dunning, a licensed professional counselor in Marietta, Georgia, who incorporates VRET into her practice, notes that since the concept is still emerging, data regarding its long-term effectiveness is being developed. However, initial research indicates encouraging outcomes. “It has particularly demonstrated success for individuals with PTSD, anxiety, and chronic pain,” she remarks.

    Reports indicate that virtual reality therapy in the form of VRET has a success rate ranging from 66% to 90% for individuals with PTSD when combined with cognitive behavioral therapy (CBT), based on 2022 research published in JMIR Serious Games. Additionally, it has been shown to significantly alleviate pain as an alternative to medications. A study published in the Annals of Behavioral Medicine found that burn victims, when placed in a virtual snowy environment where they could interact with snowmen and throw snowballs, experienced a reduction in their physical pain of 35% to 50%. Scientific studies have also shown success in overcoming spider phobias and positive results for treating individuals with eating disorders.

    Most current research on VRT is concentrated on VRET, and less is known about the effectiveness of therapy involving avatars in a virtual world. One study published in Frontiers in Psychiatry found that using CBT in a virtual reality context effectively treats individuals dealing with depression, who may hesitate to pursue conventional therapy. Another article in JMIR Mental Health suggests that VRT could serve as an alternative treatment method to in-person therapy for individuals experiencing social anxiety.

    Are There Risks Associated with Virtual Reality Therapy?

    Virtual reality therapy has several beneficial aspects, but it also comes with downsides. Although the virtual component might enhance accessibility, using it from home necessitates a computer or smart device along with a reliable Internet connection, which may not be readily available for individuals in underprivileged areas. Some people who lack technical skills might struggle to navigate VRT, and since VRT is still fairly new, finding a qualified provider can be challenging.

    As with any form of therapy, the therapist plays a critical role in determining the treatment’s effectiveness, according to Dr. Davis. Especially with Virtual Reality Exposure Therapy (VRET) utilizing realistic scenarios, simulations that are too realistic could possibly cause distress for participants if they are not supported by a skilled therapist.

    Another important factor is the therapist’s ability to assist the individual if an issue arises. “If the therapist is located far away, or if the person receiving counseling is interacting anonymously, this introduces significant ethical concerns and issues,” she states. “In any therapy setting, safeguarding the patient is essential. Anonymity can pose a substantial risk.” As with many new technologies, ethical considerations will likely need to be addressed as this treatment becomes more common.

    The Prospects for Virtual Reality Therapy

    Dr. Davis and Dunning are enthusiastic about the potential for VRT. “The future looks promising as technology evolves to become more advanced, affordable, and accessible,” Dr. Davis notes. Dunning concurs, suggesting that its usage will expand as additional VRT platforms emerge.

    If you are dealing with anxiety, depression, PTSD, chronic pain, or wish to overcome a phobia and have an interest in technology and creativity, VRT might be a suitable option for you.

    It is crucial to consult with a provider who is specifically trained in VRT. “Take the time to investigate which clinical practices exist in your area and whether the providers possess training in VRT,” advises Dr. Davis. Before deciding to proceed, ensure you know who is facilitating the sessions. Unlike a client attending therapy, the therapist should not remain anonymous, so that you can research their qualifications and confirm that they are properly trained to assist you effectively.

    The expense associated with VRT varies based on the provider, the individual’s health insurance, and any required equipment (such as a headset) for at-home use. Some VRT sessions may be pricier than other types of therapy when considering the costs for equipment, as noted by the mental health non-profit Panic Anxiety Community Support. Thankfully, prices for VRT software are declining, making this therapy increasingly affordable.

    Therapy through virtual reality is no longer a concept of the future; it is occurring presently. As VRT is still in its early stages, there is no existing data regarding how many people currently utilize it, yet as more clinicians gain training and research increases, accessibility will grow. “There is tremendous potential for development and learning,” states Dr. Davis.

  • Fungal skin could enhance the tactile and sensory capabilities of robots

    The shiny lacquer polypore (Ganoderma lucidum) can do more than attack trees. Researchers have used the fungus to create a living robot skin and a sustainable alternative to chips and batteries.

    The more technology we use, the more questions arise about how to make it robust and sustainable. Robot vacuum cleaners, smartphones, and computer circuit boards all have to be disposed of at some point. Sustainable alternatives made from plants or fungi could help here.

    Austrian researchers, for example, have developed a circuit board based on this common tree fungus, as they report in the journal Science Advances.

    Circuit boards made from tree fungus

    Circuit boards serve as carriers for electronic components and connect them to each other using so-called conductor tracks. The board itself is made of a stable, electrically insulating material, usually plastic or silicon. Fungal material, by contrast, can be used to make biodegradable circuit boards that decompose within a few weeks.

    This is made possible by so-called fungal mycelia, the root networks of fungi, which form vast webs of fibers underground. The skin of these mycelia is what is used for the circuit boards: it is heat-resistant, robust, and flexible.

    Simple and resource-saving production

    Production begins with beech shavings, wholemeal spelt flour, gypsum, and water—and with spores of the Shiny Lackporling. The research team from Johannes Kepler University Linz allowed the mycelium to grow on it. In the final step, the skin was peeled away from the mycelium, dried, pressed, and cut to the correct size. Conductor tracks can then be added, and electronic components can be attached as with conventional circuit boards.

    According to the research team, circuit board production is more straightforward, requires less energy and water than conventional production, and does not require harmful chemicals. So far, this has produced simple and small printed circuit boards.

    The researchers also use the fungal mycelia to make batteries. In such a battery, the mycelium of the shiny lacquer polypore can serve both as the separator membrane between the poles and as the casing.

    Mushroom mycelium – a complex and adaptive network

    In addition to the properties of the mycelial skin in electronics, the mycelium can also be attractive for science. Mushroom mycelium is a living, complex, and adaptable material that forms large networks. These networks, in turn, consist of elongated cells called hyphae. The hyphae absorb water and nutrients, which is how the fungus spreads in nature.

    However, in most application approaches known so far, the fungi either die at the end of the process or are removed again. Researchers at the Swiss Federal Institute of Technology in Zurich are instead exploiting this adaptive behaviour to develop a self-healing, robust robot skin.

    Living robot skin from the 3D printer

    As the research team describes in Nature Materials, a three-dimensional grid is printed from a hydrogel using a 3D printer. The hydrogel is loaded with spores of the shiny lacquer polypore. If the framework is left at 23 degrees Celsius and a high relative humidity of 95 per cent for several days, the mycelium grows without the hydrogel drying out.

    Within 20 days, the fungal mycelia colonise the printed grid, creating robust and regenerating skin. If this is cut or punctured, it will grow back together. The metabolic activity of the mycelia and the availability of nutrients are responsible for this.
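
    As a quick reference, the growth conditions reported above can be restated as a small configuration sketch. The class and field names here are invented for illustration; only the numeric values come from the text.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MyceliumSkinProtocol:
        """Reported conditions for growing the living robot skin."""
        temperature_c: float = 23.0          # incubation temperature
        relative_humidity_pct: float = 95.0  # keeps the hydrogel from drying out
        colonisation_days: int = 20          # time for mycelia to cover the grid

    print(MyceliumSkinProtocol())
    ```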

    Robot coated with mycelium.

    The living robot skin of mycelium is soft, waterproof, regenerative and robust against mechanical influences. This means that the properties of the skin through the mycelium are comparable to some functions of biological animal skins.

    The researchers carried out tests with a gripper arm and a ball robot covered with mycelium skin. The robots successfully completed underwater tasks or were rolled over different surfaces.

    Further research approaches and possible areas of application

    Both research approaches show that the use of fungal mycelia is still in its early stages. For example, complex circuit boards will be made from smoother mycelial skin in the future, and further research is also needed to keep the metabolic activity and, thus, the living robot skin alive in the long term.

    But mushroom mycelium is also being used for research beyond electronics and robotics, for example, for sustainable insulation and building materials or for a durable leather alternative.

    A bot with wheels moves along the surface. A star-shaped soft-bodied robot flexes its five legs, shifting with an unsteady shuffle.

    While these basic robotic creations are powered by conventional electricity from a plug or battery, what makes these robots unique is that they are operated by a living organism: a king oyster mushroom.

    A team of researchers from Cornell University has modified two types of robots by integrating the mushroom’s mycelium, or rootlike threads, into the hardware, enabling the robots to sense and respond to the environment by utilizing the fungus’s electrical signals and its sensitivity to light.

    These robots represent the latest achievement in the field of biohybrid robotics, where scientists endeavor to combine biological, living materials such as plant and animal cells or insects with synthetic components to create entities that are partly living and partly engineered.

    While biohybrid robots have not yet moved beyond the laboratory, researchers aspire to see robot jellyfish exploring the oceans, sperm-powered bots delivering fertility treatments, and cyborg cockroaches searching for survivors in the aftermath of an earthquake.

    Robert Shepherd, a senior author of a study detailing the robots published in the journal Science Robotics on August 28, stated, “Mechanisms such as computing, understanding, and responsive action are accomplished in the biological world and in the artificial world created by humans, and most of the time, biology performs these tasks better than our artificial systems.”

    “Biohybridization is an effort to identify components in the biological world that we can leverage, comprehend, and control to enhance the functionality of our artificial systems,” added Shepherd, who is a professor of mechanical and aerospace engineering at Cornell University and leads the institution’s Organic Robotics Lab.

    The team initiated the process by cultivating king oyster mushrooms (Pleurotus eryngii) in the lab using a basic kit purchased online. The selection of this mushroom species was based on its ease and quickness of growth.

    They grew the mushroom’s threadlike networks or mycelium, which, according to the study, can sense, communicate, and transport nutrients, functioning somewhat like neurons in a brain. (However, it is not entirely accurate to refer to the creations as “shroom bots.” The mushroom is the fruit of the fungi, while the robots are energized by the rootlike mycelium.)

    Cultivated in a petri dish, the fungus took between 14 and 33 days to fully integrate with the robot’s framework, according to the new research led by scientists at Cornell University.

    Mycelium produces small electrical signals and can be linked to electrodes.

    Andrew Adamatzky, a professor of unconventional computing at the University of the West of England in Bristol who develops fungal computers, stated that it remains unclear how fungi generate electrical signals.

    “No one knows for sure,” mentioned Adamatzky, who was not involved in the research but reviewed it before publication.

    “Essentially, all living cells produce action-potential-like spikes, and fungi are no exception.”

    The research team encountered difficulties in engineering a system capable of detecting and utilizing the small electrical signals from the mycelia to control the robot.

    Anand Mishra, a postdoctoral research associate in Cornell’s Organic Robotics Lab and the lead author of the study, mentioned, “You have to ensure that your electrode makes contact in the correct position because the mycelia are very thin. There isn’t much biomass there. After that, you cultivate them, and as the mycelia start growing, they wrap around the electrode.”

    Mishra developed an electrical interface that effectively captures the mycelia’s primary electrical activity, processes it, and converts it into digital information that can activate the robot’s actuators or moving components.
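
    The study itself does not publish this interface, but a minimal sketch of the kind of pipeline described, reading voltage samples, counting threshold crossings as spikes, and mapping the spike rate to a bounded motor command, might look as follows. The threshold and gain values are invented for illustration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ActuatorCommand:
        speed: float  # normalized 0.0 .. 1.0

    def detect_spikes(samples: list[float], threshold_mv: float = 0.5) -> int:
        """Count upward threshold crossings in a window of voltage samples (mV)."""
        spikes, above = 0, False
        for v in samples:
            if v >= threshold_mv and not above:
                spikes += 1
            above = v >= threshold_mv
        return spikes

    def command_from_spikes(spikes: int, window_s: float,
                            gain: float = 0.2) -> ActuatorCommand:
        """Map spike rate (Hz) to a bounded actuator speed."""
        rate = spikes / window_s
        return ActuatorCommand(speed=min(1.0, rate * gain))

    if __name__ == "__main__":
        window = [0.1, 0.6, 0.2, 0.7, 0.1, 0.8, 0.3]  # one second of fake samples
        cmd = command_from_spikes(detect_spikes(window), window_s=1.0)
        print(f"actuator speed: {cmd.speed:.2f}")  # 3 spikes/s -> speed 0.60
    ```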

    The robots were able to walk and roll in response to the electrical spikes produced by the mycelia, and when Mishra and his colleagues exposed the robots to ultraviolet light, they altered their movement and trajectory, demonstrating their ability to react to their surroundings.

    “Mushrooms are not particularly fond of light,” Shepherd remarked. “Based on the variations in light intensities, you can elicit different functions from the robot. It will move more swiftly or distance itself from the light.”

    “Exciting” progress

    Victoria Webster-Wood, an associate professor at Carnegie Mellon University’s Biohybrid and Organic Robotics Group in Pittsburgh, mentioned the excitement surrounding further developments in biohybrid robotics beyond the utilization of human, animal, and insect tissues.

    “Fungi may offer advantages over other biohybrid approaches in terms of the conditions required to sustain them,” Webster-Wood, who was not involved in the research, noted.

    “If they are more resilient to environmental conditions, this could render them an exceptional candidate for applications in agriculture and marine monitoring or exploration.”

    The study highlighted that fungi can be mass-cultivated and thrive in various environmental conditions.

    The rolling robot was operated by the researchers without a tether connecting it to the electrical hardware — a notable accomplishment according to Webster-Wood.

    Webster-Wood, via email, mentioned that truly tether-free biohybrid robots are a challenge in the field and it’s quite exciting to see them achieve this with the mycelium system.

    Regarding real-world applications, Shepherd stated that fungi-controlled technology could be useful in agriculture.

    Shepherd noted that light was used as the input in this case, but in the future the input could be chemical. Future robots might sense soil chemistry in row crops and decide when to add more fertilizer, which could help mitigate downstream effects of agriculture like harmful algal blooms, according to Shepherd.

    Adamatzky emphasized the huge potential of fungi-controlled robots and fungal computing, mentioning that more than 30 sensing and computing devices using live fungi were produced in his lab. This included growing a self-healing skin for robots that can react to light and touch.

    Adamatzky, via email, explained that when an adequate drivetrain is provided, the robot can, for example, monitor the health of ecological systems. The fungal controller would react to changes, such as air pollution, and guide the robot accordingly.

    Mestre, who works on the social, ethical, and policy implications of emergent technologies, mentioned that if biohybrid robots become more sophisticated and are deployed in the ocean or another ecosystem, they could disrupt the habitat, challenging the traditional distinction between life and machine.

    Mestre stated that if these robots are released in big numbers, it could be disruptive to the ecosystem. He also emphasized the importance of considering the ethical concerns as this research continues to develop.

    Mushrooms have gained popularity as a vegan substitute for leather and are being used in high-end fashion and even in car manufacturing. Additionally, hallucinogenic varieties of mushrooms have been found to alleviate treatment-resistant depression.

    Researchers at Johannes Kepler University in Linz, Austria, have found a significant use for fungi that could potentially help mitigate global warming.

    The team, led by scientist Martin Kaltenbrunner, devised a way to use fungi as a biodegradable base material for electronics chips, as outlined in the journal Science Advances.

    Kaltenbrunner, with a focus on sustainability, material science, and engineering, explored using sustainable materials in robotics in previous research.

    In their latest research, the team looked at redesigning the substrate of electronic circuits utilizing a mushroom-based material to replace unrecyclable plastic polymers.

    The mushroom used for the experiment, Ganoderma lucidum, has a long history of use in Asia to promote health and longevity. The team was particularly interested in the skin this mushroom generates to cover its root-like network, called a mycelium.

    When the skin was dried out and tested, it was discovered that it could endure temperatures of 200°C (390°F) and acted as a good insulator. Once treated with metal and strengthened by the addition of copper, chromium, and gold, the skin could also readily hold circuit boards.

    Another positive characteristic of this remarkable fungus is its thickness, which is comparable to that of paper. Paper was considered as a potential substrate but was rejected because its production is highly water-intensive and relies on toxic chemicals.

    In contrast, the mushroom substrate could be bent up to 2,000 times without any damage and was so adaptable in shape that it sidesteps the planar-geometry constraints engineers face in chip design.

    Andrew Adamatzky, a professor in unconventional computing at the University of the West of England, stated, “The prototypes produced are impressive and the results are groundbreaking,” in New Scientist.

    Kaltenbrunner and his team anticipate that the mushroom-encased chip will be suitable for use in wearable, low-powered, and short-lived Bluetooth sensors for humidity and proximity, as well as in radio tags.

    Moreover, the mycelium’s ability to repel moisture and UV light indicates that it could potentially endure for several hundred years. The research team has also proposed a completely new concept of batteries, having successfully used the mushroom skins as battery separators and casings.

    Even more encouraging, the production of these mushrooms has minimal impact on the environment — in fact, the more CO2 available for their production, the better. The team effortlessly grew and harvested mature mycelium on beechwood in just four weeks.

    Furthermore, when these devices reach the end of their lifespan, they can biodegrade quietly in any soil and disappear in less than two weeks, presenting the kind of solution that engineers need to adopt in order to counter the unsustainable electronic consumption threatening the world.

    Introduction

    In a world affected by climate change and extensive waste production, environmental impact must be a primary consideration in technological innovations. Disposable technology, in particular, represents an increasingly large portion of our waste, accumulating over 100,000 tons per day. End-of-life consumer electronics, which are often difficult to recycle due to diverse product designs and material compositions, are typically discarded since they are cheaply produced. In addition, the unsustainable use of rare and often toxic materials poses an environmental threat when inadequately treated or landfilled.

    Designs for easily recyclable devices, the use of low-cost and renewable materials, and the implementation of biodegradable or transient systems are promising approaches toward technologies with a closed life cycle, opening up new opportunities in various fields from medicine and environmental monitoring to security and intelligence applications.

    Recent advancements in robotics focusing on safe human-machine interaction, swarm robotics, and untethered autonomous operation are frequently inspired by the diversity found in nature. The intricacy observed in nature motivates scientists from various disciplines to develop soft and lightweight forms of robots that aim to replicate or mimic the graceful movements of animals or their efficient energy management.

    In the future, the increased integration of such soft robots into our daily lives poses, akin to consumer electronics, environmental concerns at the end of their life cycle. Once again, we can derive inspiration from nature and design our creations in a sustainable manner, mitigating the issues associated with current technology. Unlike standardized industrial robots, which are already incorporated into recycling loops, bioinspired robotics will find diverse ecological applications in various niches.

    Examples range from soft healthcare machines that assist elderly individuals in their daily activities to robots that harvest produce and then decompose as compost for the next season’s plants. Ongoing demonstrations of transient behavior include elastic pneumatic actuators, in vivo-operating millibots for wound patching, robot swarms for drug delivery, and small grippers controlled by engineered muscle tissues.

    These developments benefit from extensive research efforts towards bioresorbable electronic devices, primarily explored in the biomedical sector, and sustainable energy storage technology, aiming to address environmental concerns associated with the growing demand for energy in mobile devices. The future challenge for autonomous robots will be the efficient integration of actuators, sensors, computation, and energy into a single robot, requiring novel concepts and eco-friendly solutions. Success can only be achieved by bringing together material scientists, chemists, engineers, biologists, computer scientists, and roboticists.

    Here, we present materials, manufacturing methods, and design strategies for eco-friendly bioinspired robots and their components. Our focus is on sustainable device concepts, non-toxic, and low-cost production processes, and environmentally safe materials that are either biodegradable or sourced from renewable resources, all of which address the current pressing needs. The review begins with an exploration of sustainability and summarizes various approaches that enable technology with reduced environmental impact.

    Turning our attention to soft and lightweight forms of robotics, we then compare biodegradable polymers—from elastomers to bioplastics—and regenerative resources for the primary robotic body. In each component of typical autonomous robots, we examine environmentally friendly sensors, computation, and control tools, and introduce promising options for energy harvesters and storage systems, including solar- and biofuel cells, as well as batteries. Lastly, we showcase a selection of current soft robotic demonstrations that utilize economical material approaches and degrade with a positive impact on the surroundings.

    Sustainable Approaches for Soft Robotics

    The main scientific inquiries into sustainable materials development for robotics revolve around two questions. First, can we use new materials and resources that contribute to a more sustainable future? Second, how can we utilize or modify existing materials to reduce their ecological footprint on the environment?

    Addressing the first question involves the development of high-performance materials with increased durability, materials sourced from renewable sources, or biodegradable ones, all aiming to conserve valuable resources or minimize waste. Similar objectives apply to solutions addressing the second question, which focus on fabrication processes, recycling, and product designs. Sustainability in robotics encompasses numerous facets, approaches, and solutions, which we delve into in this section, including renewable resources, recycling, and biodegradability.

    Renewable Resources

    Unlike finite resources such as fossil fuels, nuclear fuels, and rare earth metals, renewable materials are either perpetually available or naturally replenished within reasonable timeframes. In an ideal sustainable scenario, the consumption rates of material/energy equal the regeneration rate of the resources. Autonomous robotics stand to benefit from renewable resources more than other technologies, by harnessing energy from solar power or tidal waves and by replacing damaged body parts with spare parts that naturally regenerate.

    Solar power, a long-standing standard for space exploration robots, offers an inexhaustible energy supply that can be stored in a robot’s battery to provide consistent power over an extended period. The smaller and lighter a robot is, the more efficient it becomes to utilize solar power over fuel energy, as robots only need to carry collectors, not the fuel itself. For instance, extremely lightweight solar panels can deliver substantial power (23 W g⁻¹) without adding considerable weight to the robot.
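
    A quick back-of-envelope calculation shows why that specific power matters; the 5 W budget assumed below for a small untethered robot is illustrative, while the 23 W g⁻¹ figure is the one quoted above.

    ```python
    SPECIFIC_POWER_W_PER_G = 23.0  # specific power quoted for ultralight panels

    def panel_mass_for(power_w: float) -> float:
        """Panel mass (g) needed to supply a given continuous power draw."""
        return power_w / SPECIFIC_POWER_W_PER_G

    if __name__ == "__main__":
        budget_w = 5.0  # assumed draw of a small untethered robot
        print(f"{budget_w} W budget needs ~{panel_mass_for(budget_w):.2f} g of panel")
        # ~0.22 g of panel: negligible next to carrying fuel or large batteries
    ```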

    Rather than using fossil-based plastics, the robotic body can be constructed from plant-based materials. Green composite materials show promise as suitable candidates for sturdy yet lightweight components, not only for robots but also for mobile machinery in general. In the context of electric cars, lightweight natural fiber composites with adequate mechanical properties could replace dense synthetic materials for both interior and exterior components, helping to offset the increasing weight of batteries.

    To cater to the growing interaction between machines and humans, elastomers derived from biomaterials can be used to create soft grippers or (robotic) soft electronic skins (e-skins) that mimic biological designs. Carbonized biomass can be employed as an electron conductive alternative to metals in many electronic components, or it can participate in the electrochemical reactions of batteries and supercapacitors.

    However, the use of renewable materials primarily addresses resource issues rather than waste issues. For instance, vulcanized natural rubber, despite being naturally derived, does not degrade within a reasonable timeframe and necessitates waste treatment and recycling. Therefore, renewability, biodegradability, and recycling must be collectively optimized to yield a sustainable technology with a beneficial impact on resources and waste.

    Recycling

    For technologies that must meet high performance benchmarks—such as complementary metal-oxide-semiconductor (CMOS) chips or Bluetooth communication—finding renewable or biodegradable alternatives remains challenging. Thus, recycling emerges as a viable approach toward the more sustainable use of technology. It is important to view recycling as the process of transforming waste into a valuable (similar) product. Recycling also encompasses the generation of energy through waste combustion, although this is only sustainable to a certain extent, as it consumes resources and elevates CO2 emissions.

    In general, whether it’s material, device, or robot recycling, the decision is often driven by economic considerations: a product is more likely to be recycled if the cost of recycling is lower than the cost of manufacturing a new one.

    As a result, an effective recycling process must be economically viable, easily achievable technologically, integrated into closed production-recycling loops, focused on valuable materials, and requiring minimal energy. An example of efficient recycling is lead-acid batteries (such as car batteries). Due to their standardized simple design, these batteries can be easily taken apart and recycled. When technicians replace the batteries, they close the life-cycle loop by sending the worn-out batteries back to the manufacturers.

    Recycling other electronic waste (e-waste) is often challenging and not easily achievable due to the varying architecture and material composition of integrated circuits, Li-batteries, or displays. To reduce recycling costs, e-waste is sometimes sent to developing countries like Ghana, where improper e-waste processing endangers workers and residents.

    To make robotics sustainable, recycling must be considered during the design phase. A successful recycling plan necessitates the easy separability of individual robotic materials to facilitate straightforward reuse, exchange, and upgrading of robots. While this is more feasible for traditional robots, as they often consist of standardized electronic parts and actuators, it can be difficult for soft robots, which employ various actuation principles and materials. However, soft robots benefit from less complex material arrangements.

    In pneumatically driven soft robots, for instance, actuator and body form a single structure. The complexity of recycling an entire robotic body with many actuators (each comprising various components itself) is thereby reduced to recycling a single material.

    Similarly, the less stringent requirements of control feedback allow for e-skins with reduced material complexity. A beneficial approach for soft robots is to incorporate self-healing materials or concepts that autonomously restore material functionality. Tan and colleagues developed a stretchable optoelectronic material for stretchable electronics and soft robotics, with light emission and feedback sensing, which self-heals independently after being punctured.

    Another sustainable approach involves using fewer materials in the design. Autonomous robots benefit twofold from lightweight material and component designs: reduced weight extends operation time, and less material reduces the total amount of waste and thus the environmental impact. Ultimately, zero-waste robotics could be achieved with fully biodegradable materials.

    Biodegradable materials are a promising material class for sustainable technology. In the ideal scenario, a material breaks down into smaller, environmentally friendly components that are metabolized by bacteria or enzymes on timescales comparable to typical waste processing. Moreover, the degradation process should start at the end-of-life phase of a device, triggered and proceeding at a controlled rate under feasible environmental conditions. The concept of biodegradability is neither clearly defined nor handled consistently in the literature, particularly concerning multicomponent, multimaterial devices.

    For biodegradable electronics, not all components may be biodegradable, or they may degrade at different rates. Bao and colleagues distinguish between materials with transient behavior (type I), which disintegrate into sufficiently small components, and biodegradable materials (type II), which undergo complete chemical degradation into small molecules.

    Transient electronics, made from type I materials, play a significant role in the biomedical sector. Implantable or ingestible devices are designed to remain in our bodies, monitoring cardiac pressure, glucose levels, or neural activities. The degradation of these devices must be achievable under physiological conditions to create truly bioresorbable devices. Therefore, the lifetime of all materials should be limited to timescales comparable to the healing of human tissue or regeneration processes, and each degradation product must be noncytotoxic.

    Such material design also holds promise for microbots operating inside the body, for wound treatment or drug delivery applications. Outside the body, biodegradable materials enable secure systems that disappear after their operation, preventing plagiarism, espionage, or unauthorized acquisition of critical technology.

    Biodegradable robotics and electronics (type II) require the complete metabolization of all constituents. It is not enough for materials to break down into smaller units; they must be converted into biomass or gases by microorganisms. Additionally, materials that degrade into bio-derived small molecules offer intrinsic biocompatibility and recyclability, returning energy back to nature. This technology may ultimately provide solutions to critical e-waste issues while transforming conventional robotics into creative solutions that encompass the entire technology life cycle.

    To ensure the correct degradation of materials, it is crucial to accurately report the application areas, operational environments, and degradation timescales for type I or type II technology. Implanted devices should degrade under conditions similar to our body’s environment, produce-harvesting robots must decompose in organic waste and compost, and maritime fish robots need materials that disintegrate in seawater.

    Immersing a material into an unsuitable environment might not result in any degradation, even if it is labeled biodegradable. This misunderstanding is unfortunately common in reports of biodegradable materials, as illustrated by Bagheri and colleagues.

    For their study on degradation, Bagheri and co-workers immersed typical biodegradable polymers like polylactic acid (PLA), polycaprolactone (PCL), and poly(3-hydroxybutyrate) (P3HB) in seawater. Surprisingly, they discovered that these polymers hardly degrade over 400 days, with a mass loss of less than 10%. The same holds true for the elastomer Ecoflex used in the soft robotics community. Although this polymer is 100% fossil-based, it fully decomposes in approximately 80 days under industrial composting conditions.

    Cellulose, by comparison, requires about 50 days under the same conditions. In seawater, factors such as temperature, microorganisms, and oxygen availability differ significantly from those in compost, leading to a much longer degradation time for Ecoflex.

    While there are also standards for biodegradation in seawater, the most common standards that certify biodegradable polymers, particularly in packaging, target degradation in industrial composting facilities. The ISO 17088 norm, effective since 2008, is the globally applicable standard, based on the European EN 13432 and American ASTM D6400 standards. In essence, biodegradation tests monitor the CO2 evolution of polymer/compost mixtures under optimum humidity and oxygen conditions at 58 °C, with specified pass levels.
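
    To make the pass-level arithmetic concrete, the sketch below shows the calculation that underlies such respirometric tests: cumulative CO2 evolved from the polymer/compost mixture, corrected by a compost-only blank, is compared against the theoretical CO2 that full mineralization of the sample’s carbon would release. The function names and example numbers are illustrative, not taken from the standard.

    ```python
    # Arithmetic behind a respirometric biodegradation test (illustrative
    # helpers; not an implementation of the ISO 17088 procedure itself).

    M_CO2_PER_C = 44.0 / 12.0  # g of CO2 released per g of fully mineralized carbon

    def theoretical_co2(sample_mass_g: float, carbon_fraction: float) -> float:
        """Theoretical CO2 (g) if all carbon in the sample is mineralized."""
        return sample_mass_g * carbon_fraction * M_CO2_PER_C

    def biodegradation_percent(co2_test_g: float, co2_blank_g: float,
                               sample_mass_g: float, carbon_fraction: float) -> float:
        """Blank-corrected CO2 evolution as a percentage of the theoretical value."""
        return 100.0 * (co2_test_g - co2_blank_g) / theoretical_co2(
            sample_mass_g, carbon_fraction)

    # Example: 10 g of a polymer that is 60% carbon by mass.
    d = biodegradation_percent(co2_test_g=19.0, co2_blank_g=2.0,
                               sample_mass_g=10.0, carbon_fraction=0.60)
    print(f"{d:.1f}% of theoretical CO2")  # ~77%, short of a 90%-style pass level
    ```

    Certification schemes then require this percentage to exceed the specified pass level, commonly on the order of 90% within about six months, before a polymer may be marketed as industrially compostable.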

    In situations where industrial composting is not feasible, biodegradable materials need to disintegrate in less controlled environments. For instance, tech waste disposed of in household composts or in nature needs to vanish under milder conditions, yet at comparably rapid rates.

    For biodegradable materials used in electronics or robotics, additional declarations should indicate whether a robot, once its purpose is fulfilled and it reaches the end of its life cycle, can simply be discarded at the disposal site without regard for environmental conditions. Advancing materials that enable such individual-based waste management therefore requires research, standards, and specifications.

    A wheeled robot traverses the ground. A soft-bodied robotic star shifts its five legs, moving in a somewhat clumsy manner.

    These basic robotic creations would be considered ordinary if not for one distinguishing feature: they are controlled by a living organism—a king oyster mushroom.

    By integrating the mushroom’s mycelium, or rootlike filaments, into the robot’s design, researchers from Cornell University have created two types of robots that perceive and react to their surroundings by utilizing electrical signals generated by the fungus and its light sensitivity.

    These robots represent the latest achievement in the field of biohybrid robotics, where scientists aim to merge biological materials, such as plant and animal cells or insects, with artificial components to create entities that are partially alive and partially engineered.

    Although biohybrid robots have not yet left the laboratory, researchers are optimistic that future applications could include robot jellyfish exploring the oceans, sperm-driven robots delivering fertility treatments, and cyborg cockroaches searching for survivors after earthquakes.

    “Biological mechanisms, including computing, comprehension, and actions in response, exist in nature, often outperforming the artificial systems developed by humans,” stated Robert Shepherd, a senior author of a study about the robots published on August 28 in the journal Science Robotics.

    “Biohybridization endeavors to identify biological components that we can utilize, comprehend, and control to enhance the performance of our artificial systems,” added Shepherd, a professor of mechanical and aerospace engineering at Cornell University and head of the school’s Organic Robotics Lab.

    A combination of fungus and machinery

    The research team began by cultivating king oyster mushrooms (Pleurotus eryngii) in the lab using a basic kit purchased online. They selected this mushroom species because it is simple and quick to grow.

    They grew the mushroom’s threadlike structures, or mycelium, which can develop networks capable of sensing, communicating, and transporting nutrients—similar in function to neurons in a brain. (It’s important to note that referring to these as shroom bots isn’t entirely correct, as the robots derive their power from the rootlike mycelium, not the mushroom itself.)

    Mycelium emits small electrical signals and can be linked to electrodes.

    Andrew Adamatzky, a professor specializing in unconventional computing at the University of the West of England in Bristol who constructs fungal computers, stated that the exact mechanism by which fungi generate electrical signals remains uncertain.

    “Currently, nobody knows for certain,” said Adamatzky, who did not participate in the study but reviewed it prior to publication.

    “Basically, all living cells generate action-potential-like spikes, and fungi are no different.”

    The research team encountered difficulties in creating a system that could identify and utilize the faint electrical signals from the mycelia to control the robot.

    “It’s essential to ensure that your electrode is positioned correctly because the mycelia are extremely fine. There is minimal biomass present,” explained lead author Anand Mishra, a postdoctoral research associate in Cornell’s Organic Robotics Lab. “Afterward, you culture them, and as the mycelia begin to grow, they wrap around the electrode.”

    Mishra developed an electrical interface that effectively reads the mycelia’s raw electrical activity, processes it, and converts it into digital signals capable of activating the robot’s actuators or moving parts.
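
    The study’s actual interface is not reproduced here, but the general pattern it describes is easy to sketch: estimate and remove the slow baseline from the raw voltage trace, detect threshold-crossing spikes, and map the recent spike rate to an actuator command. Everything below, the names, thresholds, and the assumed 0.5 Hz maximum spike rate, is an illustrative assumption rather than the published implementation.

    ```python
    import numpy as np

    # Illustrative spike-to-actuator pipeline (hypothetical names and thresholds).

    def detect_spikes(voltage_uv: np.ndarray, fs_hz: float, thresh_uv: float = 50.0):
        """Indices where the baseline-removed signal first crosses the threshold."""
        window = int(fs_hz)  # 1 s moving average as a crude baseline estimate
        baseline = np.convolve(voltage_uv, np.ones(window) / window, mode="same")
        above = (voltage_uv - baseline) > thresh_uv
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges only

    def spike_rate_to_duty(spike_times_s: np.ndarray, window_s: float = 10.0) -> float:
        """Map the recent spike rate to an actuator duty cycle in [0, 1]."""
        if spike_times_s.size == 0:
            return 0.0
        recent = spike_times_s[spike_times_s >= spike_times_s[-1] - window_s]
        return min(1.0, (recent.size / window_s) / 0.5)  # saturate at 0.5 Hz

    # Demo on synthetic data: a noisy trace with injected spike-like events.
    fs = 100.0  # samples per second
    t = np.arange(0.0, 60.0, 1.0 / fs)
    trace = 10.0 * np.random.default_rng(0).standard_normal(t.size)
    trace[1500::1500] += 200.0  # spike-like events every 15 s
    idx = detect_spikes(trace, fs)
    print(f"{idx.size} spikes -> duty cycle {spike_rate_to_duty(t[idx]):.2f}")
    ```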

    The robots demonstrated the ability to walk and roll in response to electrical spikes generated by the mycelia, and when stimulated with ultraviolet light, they altered their gait and trajectory, indicating that they could react to their environment.

    “Mushrooms tend to shy away from light,” Shepherd remarked. “By varying light intensities, you can induce different functions in the robot. It might move faster or steer away from the light.”

    ‘Exciting’ progress

    The advancements in biohybrid robotics that extend beyond human, animal, and insect tissues are exhilarating, noted Victoria Webster-Wood, an associate professor at Carnegie Mellon University’s Biohybrid and Organic Robotics Group in Pittsburgh.

    “Fungi may offer advantages over other biohybrid strategies regarding the environmental conditions needed for their survival,” stated Webster-Wood, who was not part of the research.

    “If they can withstand environmental variations, it could make them an excellent choice for biohybrid robots used in agriculture, marine monitoring, or exploratory purposes.”

    The research highlighted that fungi can be grown in significant volumes and can prosper in a variety of environments.

    The team operated the rolling robot without a tether linking it to the electrical components — a task that Webster-Wood emphasized as particularly significant.

    “Completely tetherless biohybrid robots pose a challenge in this field,” she mentioned in an email, “and witnessing their accomplishment with the mycelium system is extremely thrilling.”

    Fungus-controlled technology could find uses in agriculture, Shepherd noted.

    “In this scenario, we utilized light as the stimulus, but in the future, it will likely be chemical. The future possibilities for robots might include detecting soil chemistry in crop rows and determining when to apply additional fertilizer, potentially alleviating the negative downstream impacts of agriculture such as harmful algal blooms,” he explained to the Cornell Chronicle.

    According to Adamatzky, fungi-controlled robots, and fungal computing in a broader sense, hold significant promise.

    He stated that his laboratory has developed over 30 devices for sensing and computing using live fungi, including creating a self-repairing skin for robots that can respond to both light and touch.

    “With a suitable drivetrain (transmission system) in place, the robot could, for instance, assess the condition of ecological systems. The fungal controller would respond to variations like air pollution and direct the robot accordingly,” Adamatzky wrote in an email.

    “The emergence of yet another fungal device — a robotic controller — excitingly showcases the extraordinary potential of fungi.”

    Rafael Mestre, a lecturer at the University of Southampton’s School of Electronics and Computer Science in the UK who studies the social, ethical, and policy implications of emerging technologies, cautioned that increasingly advanced biohybrid robots, if introduced into oceans or other ecosystems, could disrupt those environments and blur the conventional boundaries between living organisms and machines.

    “You are introducing these entities into the food web of an ecosystem where they may not belong,” remarked Mestre, who was not part of the recent study. “If they are released in significant quantities, it could be disruptive. At this time, I don’t perceive strong ethical concerns surrounding this specific research… but as it continues to evolve, it is essential to contemplate the consequences of releasing this into the wild.”

  • Innovation in a crisis: Why it is more critical than ever

    A recent analysis of scientific articles and patents from previous decades suggests that major discoveries in science are becoming less common. What has caused this shift and what is the situation in Germany?

    Throughout the past 120 years, there have been numerous groundbreaking discoveries that have had a significant impact on the world, such as the first antibiotic, the invention of the Internet, and the development of the first chemotherapy. However, a study published in the journal Nature indicates that such groundbreaking innovations are now occurring less frequently.

    The study reveals a noteworthy decrease in the innovative capacity of science. This trend, which had been previously suggested, is now confirmed by a comprehensive analysis of 45 million scientific articles and 3.9 million patents across all scientific fields. The study concludes that the proportion of groundbreaking discoveries has substantially declined from 1945 to 2010, despite a significant increase in overall research. According to science historian Helmuth Trischler, this suggests that a higher volume of research does not necessarily lead to more breakthrough innovations.

    The research team used an innovation score to evaluate whether the examined work confirmed previous theories or introduced new ones. The study found that the innovation score for scientific articles decreased by 90 percent between 1945 and 2010, and for patents it fell by 78 percent since 1980. The study’s authors highlighted the shift in language usage, noting that words such as “produce” are being replaced with words such as “improve,” and that researchers are increasingly repeating themselves.

    The study authors stated in a press release that they are observing a decreased tendency in scientific articles and patents to break away from established practices and steer science and technology in new directions. However, there is also a significant need for innovation, particularly in fields such as climate change and space research.

    Marco Tamborini, a science historian and philosopher at TU Darmstadt, commented that research teams often lack boldness. He emphasized that young scientists are under pressure to publish frequently and are compelled to structure their research questions around funding applications, leading to increasingly narrow research inquiries. Tamborini stressed the importance of a broader perspective for significant innovations and highlighted the need for young researchers to have more time for creativity. The study authors also advocate for evaluating the quality rather than the quantity of papers.

    That the decline in innovation appears across all scientific fields contradicts the speculation that many easily attainable innovations have simply already been discovered. Another contributing factor may be the review process, which can impede innovation by pitting new ideas against established research within the same field. Science historian Trischler pointed out that, on average, a research paper is reviewed by only three scientists before publication.

    The study suggests that smaller research teams are more likely to produce innovative work. While larger research groups are involved in more specialized work, smaller teams have been responsible for many groundbreaking discoveries. However, Tamborini argues that larger research teams stand a better chance of securing significant funding.

    The future of the economy depends on significant innovations, raising concerns about the decline in groundbreaking research. However, Trischler cautioned against overemphasizing the study’s findings, pointing out methodological weaknesses in the meta-analysis. He clarified that while the proportion of groundbreaking discoveries is decreasing due to a higher volume of publications, the absolute number of such discoveries has remained constant. Additionally, the calculation of the innovation score has sparked controversy.

    Trischler believes that the current situation reflects a phase shaped by past actions. He noted the existing potential for innovation, particularly in biotechnology, and highlighted the need to accelerate the translation of research into real-world applications.

    Trischler is optimistic about Germany’s innovation capacity, stating that despite having only around one percent of the world’s population, Germany accounts for approximately ten percent of global innovation activity, with some areas reaching as high as 20 percent. For instance, eleven percent of the world’s patents related to hydrogen technologies originate from Germany, making it the leader in the EU.

    While the study’s analysis only extends to 2010, Trischler acknowledges that there have been significant breakthroughs since then, such as the first direct detection of gravitational waves and the development of mRNA vaccines. Despite these achievements, he expressed concerns about the conditions for innovation in Germany.

    Data on productivity growth point to a slowdown in the U.S. innovation ecosystem. Total factor productivity, which relies on innovation, grew strongly in the mid-20th century but began to slow around 1970. That slower growth continues to this day, with growth rates lower than they were over 100 years ago, despite increased investment in scientific research.

    The reason behind this is that, since the 1970s, the U.S. innovation ecosystem has fragmented, causing a separation between corporate and academic science and making it more challenging to apply basic scientific discoveries. If we want to observe greater productivity growth, we need to consider alternative methods for translating science into invention.

    Departure of large corporations from scientific endeavors

    Before the 1970s, some major American companies invested heavily in scientific research to the point that their corporate science matched or exceeded university research in quality. DuPont’s central R&D unit, a pioneer in polymer chemistry, published more articles in the Journal of the American Chemical Society than MIT and Caltech combined in the 1960s. AT&T’s Bell Labs, renowned for the transistor and information theory, had 14 Nobel Prize winners and five Turing Award recipients among its alumni.

    By the 1980s, a mix of pressure from shareholders, increased competition, and public setbacks caused corporations to reduce their investments in science. Instead, companies began to rely on universities and small startups for fresh ideas. Bell Labs was separated from its parent company, AT&T, and placed under Lucent in 1996. Xerox PARC was spun off into a separate company in 2002. Under Louis Gerstner, IBM shifted its research focus towards more commercial applications in the mid-90s. DuPont shut down its Central Research & Development Lab in 2016.

    The decline in corporate investment in science is evident in the data: the share of research (both basic and applied) in total business R&D in the U.S. dropped from approximately 30% in 1985 to less than 20% in 2015. While the amount of basic research stagnated over the 20-year period between 1990 and 2010, total industry spending and patenting on development activities steadily increased.

    This decline in research is also reflected in data on scientific publications. A study involving 4,608 American listed firms engaged in R&D found that the number of publications per firm decreased at a rate of 20% per decade from 1980 to 2006. This trend is also evident in data on scientific awards: Another study found that Fortune 500 firms won 41% of innovation awards in 1971, but only 6% in 2006.

    This marks a significant shift in the U.S. innovation ecosystem. We’ve transitioned from an economy where major corporations conducted both scientific research and development to one characterized by a clearer division of labor, with companies focusing on development and universities concentrating on research.

    In contrast to the corporate sector, universities have continued to expand their research, a trend that gained momentum after World War II. Data from the National Science Foundation (NSF) reveals that university spending on research grew more than fourfold, from $15 billion to $62 billion between 1980 and 2015. Moreover, peer-reviewed scientific publications increased by 37% between 2003 and 2016 for universities, while those for firms declined by 12%.

    While specialization allows universities and corporations to become more proficient at producing research and developing products respectively, this division of innovative labor has made it more difficult for innovative research to translate into practical products. University science differs in nature from corporate science. Companies have access to specialized resources that universities find challenging to replicate. For instance, Bell’s invention of the Holmdel Horn Antenna, Google’s creation of the Tensor Processing Unit (TPU), and Pfizer’s use of High Throughput Screening (HTS) processes are achievements that universities or small firms would have found difficult to accomplish. These inventions demanded both scale and scope, which the individual investigator model in universities would struggle to reproduce.

    Additionally, corporate and university researchers respond to different incentives, which affects how readily their research can be translated into products. University researchers are rewarded for priority (“who discovers it first”), while corporate researchers are rewarded for the usefulness of their inventions (“does it work”). As a result, university research is more likely to be original but less likely to function as businesses intend. Inventors seem to be aware of this issue: a recent study found that a discovery published by a university research team is 23% less likely to be cited in patents than the same discovery published by a corporate research team.

    Even if universities generate relevant and applicable knowledge, it may be challenging for companies to locate it. As companies decrease investment in fundamental research, they have fewer researchers engaging with the broader academic community. This suggests that companies will increasingly struggle to access relevant research and apply it.

    Venture capital (VC) and startups have served as solutions for connecting university research with commercial applications. Emerging technologies in various fields, including molecular biology, biochemistry, integrated circuits, and personal computing, were initially overlooked by large companies until VC-backed startups developed successful products, such as microprocessors (Intel), synthetic insulin (Genentech), and the Macintosh (Apple).

    VC-backed startups bridge the gap between university science and corporate innovation. VC managers often have advanced degrees in the areas they invest in and gain commercialization experience over time. Startups also do not adhere to established business models that make traditional firms resistant to new ideas. For example, although Xerox PARC’s research was successfully commercialized by Xerox when it aligned with the company’s core business (e.g., laser printers), it failed to gain traction when it diverged from it (e.g., GUI, ethernet).

    However, VC has not been a perfect solution, as investments have mainly focused on the life sciences and digital innovation. Data from PwC Moneytree shows that information and communication technologies (ICT) and life-science startups received approximately 83% of all VC investments between 1995 and 2019. This limited funding hinders innovation in other critical sectors for advanced economies, such as energy, which relies on material sciences and nanotechnology advancements for more efficient grids and safer power generation.

    The challenges of translating scientific research beyond ICT and the life sciences

    Why do entrepreneurs in science-based fields struggle to secure funding outside of these two sectors? We believe this is due to both technical and commercial uncertainties.

    Technical uncertainty revolves around whether a technical problem can be solved using a proposed approach. Commercial risk pertains to accurately evaluating the demand for a proposed product and the likely costs of scaling up and serving the market. Most software projects face minimal technical risks, focusing instead on what should be done rather than how. Commercial risks are managed by setting commercial milestones (e.g., number of users or cost of customer acquisition). When a startup achieves a milestone, it receives additional investment to progress toward the next one.

    The life sciences entail significant technical uncertainty but low market uncertainty, given the consistent need for new medical treatments and devices. Projects in this field can be mapped to specific milestones and managed accordingly. Success in reaching a milestone allows investors to estimate the increase in value based on the potential market size.

    Balancing both types of risks may be challenging, explaining why physical sciences startups receive limited private-sector funding. The energy sector serves as an example: Thermionic energy generation, a method directly converting heat to electricity, holds the promise of significant improvements in comparison to mechanical heat engines. Despite its exploration for powering satellites in the 1960s, the technology was overlooked by investors until recently due to technical challenges – the lack of easily available microfabrication tools required for prototypes.

    Innovations in the energy sector face notable market risks, as adoption often requires changes in existing technical infrastructure, consumer behavior, and government regulation. Advancements in grid-energy storage technologies, crucial for wind and solar clean-energy innovations, depend on downstream market demand. VC funding in battery technology startups started gaining traction only in the 2000s, after the automotive sector began embracing hybrid and fully electric vehicles.

    Large corporations can often better manage these commercial and technical uncertainties than startups due to their experience in bringing products from labs to markets. They, or their partners, can also provide a source of demand and effectively coordinate changes across the entire value chain while handling regulatory challenges more adeptly.

    Where do we go from here?

    How can we connect the realms of science and application in overlooked sectors? One option involves the public sector providing funding for promising startups in the physical sciences, allowing them to focus on solving technical issues, with the hope that private capital will later step in to address commercial challenges. For example, the SBIR grant program at the Department of Energy supports renewable energy startups, and has been shown to increase patenting, revenue, and successful exit rates by alleviating financial constraints.

    More funding would help to address technical uncertainties by enabling the development of capital-intensive prototypes over extended development cycles, and validating performance for potential investors. Organizations with specific missions, such as DARPA and ARPA-E, continue to support fundamental innovations, including those that have led to the creation of the Internet, automated voice recognition, language translation, and Global Positioning System receivers.

    Another approach is to nurture scientific entrepreneurial talent. Initiatives such as Cyclotron Road at Berkeley Lab and Runway at Cornell Tech offer research fellowships for post-doctoral scientists and engineers, focusing on the transition from discovery to application. These fellows leverage national lab and academic research infrastructure to advance technologies based on their research and explore commercial viability.

    Similar programs at research institutions in the U.S., Canada, and Germany are following suit. The mentorship and equipment that scientists receive in these programs enables them to better address commercial uncertainty by becoming more attuned to consumer needs and developing viable business models.

    Although it may be tempting, there is little value in longing for the past, whether it was a golden age or not. The new innovation ecosystem holds great promise. What we need is a more effective way to leverage today’s scientific advancements and technical breakthroughs to accelerate productivity growth.

    The pace of scientific disruption and technological invention has significantly slowed down, according to scientists.

    A new study cautions that the substantial amount of new scientific and technological knowledge does not appear to be contributing to new groundbreaking discoveries and innovations.

    Researchers set out to examine whether scientific and technological progress was accelerating or decelerating. To do so, they analyzed 45 million scientific papers and 3.9 million patents spanning six decades. They utilized the CD index, a tool based on citation patterns, to produce a “disruptiveness score” indicating the extent to which a paper or patent has pushed science and technology in new directions.
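
    In the simplified form in which the CD index is often quoted (reconstructed here from the descriptions in these articles, not from the paper’s full definition, which fixes a post-publication citation window), the score of a focal work is

    ```latex
    \mathrm{CD} = \frac{N_F - N_B}{N_F + N_B + N_R}, \qquad \mathrm{CD} \in [-1, 1],
    ```

    where, among later works citing either the focal work or its references, $N_F$ cite only the focal work, $N_B$ cite both, and $N_R$ cite only the references. A score near $+1$ marks a disruptive work; a score near $-1$, a consolidating one.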

    Their objective was to utilize the extensive body of scientific research to analyze two different types of breakthroughs: those that build upon existing knowledge and reinforce our current understanding, and those that are genuinely disruptive and render old knowledge obsolete, offering researchers new avenues to explore.

    The authors discovered that new research is becoming less disruptive and aligning more closely with the existing status quo. This trend is evident across all disciplines, particularly in the social sciences and technology.

    The reasons for this shift are not definitively clear. However, the authors suggest that scientists and inventors are increasingly focusing on narrower fields for their new discoveries.

    “A healthy scientific ecosystem involves a blend of disruptive discoveries and consolidating improvements, but the nature of research is evolving,” stated Russell Funk, a researcher at the University of Minnesota and the lead author of the study. “With incremental innovations becoming more prevalent, it may take longer to achieve those crucial breakthroughs that significantly propel science forward.”

    A study detailing these findings, titled “Papers and patents are becoming less disruptive over time,” is being published in the journal Nature today.

    Science is the driving force behind our society, and the decline in truly disruptive research is a warning sign for all of us

    If we gauge it by the sheer volume of published papers, we are currently in a golden age of science. There are more scientists and publications than ever before, and while much valuable work remains underfunded, there is now more funding available than ever. Federal funding for research and development has increased from $3.5 billion in 1955 to $137.8 billion in 2020, roughly a fourfold increase even after adjusting for inflation.

    Fields such as AI and biotechnology appear to be flourishing, but outside of a few specific areas, it does not feel as though we are truly in a golden age of science. In the early 20th century, there were successive discoveries that substantially transformed our understanding of the world and revolutionized various industries: nitrogen fixation, enabling the feeding of billions; the structure of the atom and DNA; rocketry, plate tectonics, radio, computing, antibiotics, general relativity, nuclear chain reactions, quantum mechanics … the list goes on and on.

    The 20th century seems to have had more groundbreaking discoveries that changed the world compared to the present, despite the increase in scientific research. This is the main idea explored in a new Nature paper titled “Papers and patents are becoming less disruptive over time,” which aims to systematically investigate the notion that there is more science but less world-altering science.

    The study in Nature examines patents and papers to gauge how much future research builds on a specific publication or how much a particular piece of work contributes to “pushing science and technology in new directions.”

    The findings indicate that there indeed appears to be a decrease in radical innovations compared to the past.

    The question arises: are we becoming less proficient at producing transformative scientific advancements?

    This is not a new question, as previous studies have also highlighted declining research productivity in various fields. The Nature researchers sought to assess this more comprehensively by analyzing 25 million papers (1945–2010) and 3.9 million patents (1976–2010) using a new metric termed the “CD index,” which assesses whether papers predominantly “consolidate” knowledge in the field or “disrupt” the field and point towards new research paths.

    The study suggests that there is a decline in “disrupting” papers that change the field and indicate new research directions. This decline is remarkably significant.

    One possible explanation is that all the most disruptive ideas have already been discovered. At the turn of the 20th century, much foundational work remained to be done. Moreover, important discoveries nowadays tend to be made by older researchers and larger teams, possibly due to the increased time and effort required to grasp all the knowledge necessary to advance a field.

    However, this explanation seems somewhat circular. Is science slowing down because we have already discovered all the transformative and crucial things, or do we believe that we have discovered all the transformative and crucial things because scientists are not finding any new ones?

    There’s a possibility that the slowdown in science is not an inevitable natural consequence but a result of policy decisions. The process of allocating scientific grants, for instance, is flawed. Despite the ample funding available, innovative individuals with transformative ideas often struggle to secure grant funding. Additionally, obtaining funding entails navigating an increasing number of bureaucratic hurdles, with many leading scientists spending significant time writing grant proposals rather than conducting actual research.

    For instance, Kaitlyn Suski, a postdoctoral researcher in chemistry and atmospheric science at Colorado State University, expressed in a 2016 Vox survey of scientists that the pressure to publish to maintain employment and satisfy funding agencies has led to numerous mediocre scientific papers with minimal new scientific content.

    Claiming that scientific progress is slowing down because our predecessors have already exhausted all the good ideas might prevent us from recognizing that the deceleration of science could be due to our mismanagement of it. This mismanagement might lead researchers away from the most important research and toward producing small, incremental papers to satisfy funders and tenure review committees.

    The decline of science has significant societal implications that affect a wide range of areas. Groundbreaking papers often result in new innovations that boost productivity, enhance quality of life, increase wages, and save lives. Some have suggested that the slowdown in scientific innovation is a major factor contributing to the leveling off of productivity and wages in the US.

    In reality, the decrease in groundbreaking papers is likely the result of numerous factors, some of which are within our control and some that are not. Nonetheless, a recent paper in Nature makes it clear that the effects are substantial. Given that science is the driving force behind productivity and prosperity, understanding the reasons for its diminished performance is of utmost importance.

    It is a common assumption that the pace of innovation is increasing. We frequently hear about new breakthroughs in artificial intelligence (AI), materials, genetic treatments, or high-speed aircraft. Indeed, the widespread adoption of new technologies appears to have accelerated, with phenomena like smartphones and apps such as Instagram reaching one billion users much more rapidly than in the past.

    However, this conventional wisdom may be misleading. Innovation, in many respects, is actually slowing down. For example, research by Benjamin Jones has revealed that the amount of time required to make significant new discoveries in science has lengthened, as has the number of collaborators needed for groundbreaking scientific projects. Consequently, the average age of Nobel Prize winners has steadily risen.

    This slowdown may be pervasive. For instance, sectors such as aerospace still rely on technologies from half a century ago, while most cars continue to employ the internal combustion engine, which was invented in the 19th century. Last year, we published Nesta’s report on the biomedical bubble, summarizing the evidence that, over the past 50 years, biomedical research and development (R&D) productivity has consistently declined, despite significant increases in both public and private investment. New drugs now cost much more than they used to.

    More recently, a study by Nick Bloom and colleagues indicated that research productivity has been declining across both the private and public sectors. The study revealed that, despite a rise in the number of researchers and an increase in public investment in R&D, productivity growth has stagnated. The researchers found that ‘more than 20 times as many researchers are currently needed to generate roughly the same amount of productivity growth as 80 years ago.’ Furthermore, they demonstrated that this trend of declining research productivity applies to various fields, including computer chips, crop yields, and medical innovations.
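
    The headline figure implies a steady compounding decline in research productivity, which a few lines of arithmetic make concrete (the numbers below come only from the quoted claim and are otherwise illustrative):

    ```python
    # Back-of-the-envelope reading of the Bloom et al. claim: if the same
    # productivity growth now takes ~20x the researchers it took 80 years ago,
    # productivity growth per researcher has fallen by a factor of ~20.

    ratio = 20.0   # ">20 times as many researchers" (quoted claim)
    years = 80
    annual_decline = 1.0 - ratio ** (-1.0 / years)
    print(f"Implied average decline: {annual_decline:.1%} per year")
    # ~3.7% per year, compounding to a ~95% total fall over 80 years
    ```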

    Some may argue that these declining rates of return justify reducing public investment in R&D and redirecting the limited funding toward other priorities. Conversely, others may assert that, now more than ever, we need to boost R&D investments to reverse the productivity slowdown.

    Whether you lean more toward one viewpoint or the other, a crucial question that needs to be tackled is how to enhance the returns on R&D. This is a realm that should be guided by facts and evidence, rather than intuition or self-interest, and it is therefore imperative that we confront it head-on.

    So, what are the options? What could reverse this relentless decline in productivity? We propose three primary areas of action that should be explored by those responsible for managing R&D budgets at public funding agencies and large companies.

    Leverage technology

    The first step involves harnessing technologies with the potential to expedite innovation. Machine learning, in particular, could be a game-changer. According to a recent paper from the National Bureau of Economic Research (NBER), machine learning could represent the “invention of a method of invention.” If used effectively, it could significantly hasten the development of new materials or drugs. However, this will hinge on ‘having access not just to the underlying algorithms but also to large, granular datasets on physical and social behavior.’

    An analogy can be drawn to double-cross hybridization, which enabled the development of a wide array of new corn varieties tailored to different conditions. The notion is that employing AI machine learning in tasks ranging from protein folding to drug discovery could accelerate the pace of R&D and alleviate the productivity issue. However, it remains uncertain whether this will be the case, and this can only be ascertained through thorough and systematic experimentation.

    Adopt new R&D methods

    The second step entails breaking free from the conventional approaches that largely govern R&D management worldwide. Many organizations still apply methods that have seen little innovation over the past half century. We recently released a Compendium of Innovation Methods, encompassing a range of approaches from challenge prizes to crowdfunding, designed to propel R&D into the future of creativity and productivity.

    Instill a culture of experimentation

    The third proposed action calls for organizations to promote a culture of experimentation. In recognition of the uncertainty surrounding the most effective paths to increased R&D returns, fostering a mindset and organizational structure that values and supports experimentation is crucial. Embracing the principles of lean startup and agile methodologies can facilitate swift experimentation and adaptation, ultimately leading to better outcomes in R&D.

    Strikingly, however, such methods, from challenge prizes to crowdfunding, have seen only limited adoption by major funders, and there has been little experimentation with newer approaches like randomised funding.

    Another noteworthy observation is the infrequent application of the scientific method by funders on themselves – such as conducting experiments to test hypotheses, employing control groups, and gathering essential data. There are a few exceptions, such as the National Science Foundation in the US, but most organizations tend to stick with familiar methods.

    At the very least, organizations should consider diversifying their research funding approaches and implement sound experimental research designs to determine the most effective methods for different stages of the R&D process across various industries. The Innovation Growth Lab plans to venture into this sphere by leveraging its success in utilizing experimental methods in business support and innovation policy and extending them to scientific research.

    Enhancing the availability of valuable data to inform decision-making is another crucial step.

    For R&D to improve, there should be a greater focus on learning. It is imperative to open, organize, and link data on research activities, along with funding information detailing what is being funded, the funding mechanisms, and the achieved outcomes. Our work in Innovation Mapping is beginning to demonstrate how this can be achieved through advanced tools for tracking real-time innovation.

    With the advent of new technologies and search engines, such as our upcoming Clio and Rhodonite tools, it will be possible to create sophisticated real-time maps of technology evolution. Making this type of data open, easy to integrate, and analyze will be vital in making R&D more introspective, self-critical, and better equipped to test new solutions.

    The slowdown in R&D productivity has not yet become a priority on any political agenda

    Government officials and agency leaders barely mention it in their speeches. Apart from a few exceptions, agencies and funders have not yet engaged in substantial discussions about potential solutions. However, this is a problem that is likely to gain more attention in the coming years, primarily due to the glaring patterns.

    The primary question to address is whether the key stakeholders will wait for it to become a crisis, or if they will have the foresight and wisdom to take preventive action to resolve their own challenges.

    This new article falls within a realm that could be termed “innovation studies”—the examination of invention itself. I’ve always been uncertain about my stance on these pursuits. There’s a considerable amount of history to investigate, and in more recent years, a wealth of actual data (like patent filings and citations in the academic literature, etc.), and to quote Pete Townshend, “From all this you’d imagine / that there must be something learned.” Yet, as you’ll discover, I still question how readily that learning occurs.

    This new article serves as a prime illustration of the field. The researchers are reviewing the last sixty years of literature and patents (approximately 25 million papers and almost 4 million patents) and introducing a new metric, the “CD Index”:

    The basic idea is that if a paper or patent is disruptive, the subsequent work that references it is less likely to cite earlier works as well; for future researchers, the ideas that contributed to its creation are less pertinent. Conversely, if a paper or patent is consolidating, subsequent work that cites it is more inclined to also reference its predecessors; for future researchers, the foundational knowledge of this work remains relevant, or perhaps even more so.
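
    That idea translates directly into a toy calculation. The sketch below scores a focal work from the reference lists of later papers; it is a simplified reading of the index (no citation window, no weighting), run on invented data.

    ```python
    # Minimal sketch of a CD-style disruptiveness score (simplified form).

    def cd_index(focal_id: str, focal_refs: set[str],
                 citing_refs: dict[str, set[str]]) -> float:
        """Score in [-1, 1]: +1 when later work cites the focal paper but not
        its references (disruptive), -1 when it cites both (consolidating)."""
        n_f = n_b = n_r = 0
        for refs in citing_refs.values():
            cites_focal = focal_id in refs
            cites_predecessors = bool(refs & focal_refs)
            if cites_focal and cites_predecessors:
                n_b += 1
            elif cites_focal:
                n_f += 1
            elif cites_predecessors:
                n_r += 1
        total = n_f + n_b + n_r
        return (n_f - n_b) / total if total else 0.0

    # Toy example: paper "P" references {"A", "B"}; three later papers follow.
    later = {
        "X": {"P"},        # cites only the focal paper: disruptive signal
        "Y": {"P", "A"},   # cites focal work and a reference: consolidating
        "Z": {"A", "B"},   # cites only the references
    }
    print(cd_index("P", {"A", "B"}, later))  # (1 - 1) / 3 = 0.0
    ```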

    At first glance, that seems like a reasonable concept, and it turns out that when you graph the average CD scores for both papers and patents since World War II, every discipline shows a notable decrease in this “disruptiveness” score over time. Interestingly, the patent graph is much tighter and less variable compared to the open-literature graph when examining different fields. Another intriguing metric arises from observing the fraction of new word pairs in the data relative to total word pairs over time.

    The same fields (life sciences and biomedicine, physical sciences, social sciences, and a broad “technology” category) exhibit quite similar declines by this metric, and notably, all the graphs present a biphasic pattern, showing steep declines until around 1970 and then significantly slower declines afterwards. The authors do note that the number of “most disruptive” papers and patents has remained fairly steady over time. The substantial overall increase in paper and patent production allows this to align with the lower overall measures of disruptiveness.
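
    The word-pair metric is likewise straightforward to operationalize; a minimal version (naive whitespace tokenization, toy data) might look like this:

    ```python
    from itertools import pairwise  # Python 3.10+

    # For each year, the share of adjacent word pairs in titles that have never
    # appeared in any earlier year (illustrative; real pipelines normalize text).

    def new_pair_fraction(titles_by_year: dict[int, list[str]]) -> dict[int, float]:
        seen: set[tuple[str, str]] = set()
        result = {}
        for year in sorted(titles_by_year):
            pairs = [p for title in titles_by_year[year]
                     for p in pairwise(title.lower().split())]
            new = sum(1 for p in pairs if p not in seen)
            result[year] = new / len(pairs) if pairs else 0.0
            seen.update(pairs)
        return result

    print(new_pair_fraction({
        1950: ["transistor amplifier design"],
        1951: ["transistor amplifier noise", "amplifier design methods"],
    }))  # {1950: 1.0, 1951: 0.5}
    ```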

    You may already have thought of several objections and potential confounding factors, as I certainly did. To their credit, the authors have attempted to address many of these concerns. The database from which the above conclusions were drawn is the Web of Science collection, and to ensure that this isn’t an artifact of WoS indexing, they applied the same methodologies to the JSTOR database, the American Physical Society collection, and PubMed, yielding similar outcomes.

    There’s also the genuine possibility that authorship and citation standards have evolved over time, leading to shifting baselines for these measurements. The authors have devised variations of the CD Index that account for the rising citation counts found in papers over time, but they report that these produced comparable results. Similarly, assessing the CD numbers concerning various other indicator variables (such as the number of new papers, average number of authors, number of papers cited, etc.) also revealed declining disruptiveness.

    There’s also the “low-hanging fruit” argument—the notion that fundamental breakthroughs in any field can only occur once, naturally becoming rarer over time. I find it challenging to dismiss this, yet the paper indicates that the CD decline appears strikingly similar across diverse fields, which can’t all share the same low-hanging fruit (at least not to the same extent). In my view, while this last objection is valid, the phenomenon will likely apply to any field making measurable progress, albeit to varying degrees: “they should appear more different than they do if low-hanging fruit is an explanation” doesn’t, in my opinion, negate the idea that it’s probably part of the explanation.

    Another consideration in this context is the overall growth of knowledge itself.

    The authors point out that this goes both ways: as time progresses, there’s both an increase in available knowledge to build upon and more knowledge to master. The former seems beneficial for innovation, whereas the latter may hinder it. Interestingly, the authors suggest that their analysis indicates that, for open-literature papers, the positive impact of increased knowledge is somewhat more evident, while for patents, the growth of knowledge in a specific field appears to have a more inhibitive effect on disruptiveness.

    “Given these conflicting findings,” the authors explored not just the growing knowledge but also its application. They found that the diversity of cited work has been decreasing (as researchers focus more narrowly within citations), along with an uptick in citations to the top 1% most frequently cited papers or patents. The average age of cited works has been increasing, and similarly, there’s a rise in self-citation. All these trends seem to indicate that scientists and inventors are increasingly relying on familiar knowledge.

    I can only speculate, but I believe these findings are likely authentic and probably the result of multiple factors interacting simultaneously. For one reason, I’m not prepared to dismiss the low-hanging-fruit explanation as part of the narrative. Another element, I think, is the growth of research as a profession over the past 60 to 80 years. Additionally, there may be misaligned incentives within many of these research careers—the process of applying for grants, having projects approved, writing papers, seeking promotions, and navigating departmental politics does not necessarily reward innovative work in an ideal manner. The saying goes, “the nail that sticks up gets hammered down,” and these declines in the CD index might be just what we would expect as a result.

    There’s also a historical aspect that the authors may not have taken into account. All of the figures presented in this paper cover a timeframe primarily since World War II, which can be considered a starting point for the modern scientific era. However, it was also very close to a pivotal moment: the war acted as a technological turning point like no other, as well as a junction for attitudes toward and funding for research and technology.

    Innovations advanced during the war in ways that had never been seen in previous conflicts, with inventions and applications rapidly building upon each other to secure military advantages. Just to name a few off the top of my head, these included radar, proximity fuses, code-making and code-breaking, piston engine advancements followed by jet engines, drones, ballistic rockets, and of course, the atomic bomb. The evolution of aircraft from 1939 to 1945 serves as a striking illustration of this general effect.

    The insight gained by industrialized countries was that advancements in science and technology were vital for national survival, and the eventual Cold War only reinforced this with developments in space exploration, satellite and electronic reconnaissance, intercontinental ballistic missiles, computing technologies, and much more. The scientific landscape we inhabit today was shaped by the events between 1939 and 1945, and any measurements of scientific activity that began during this significant shift will inevitably be influenced by it.

    The pace of revolutionary scientific discoveries and technological advancements is diminishing, despite an increasing volume of knowledge, according to an analysis published Wednesday that examined millions of research papers and patents.

    While earlier studies have identified declines in specific fields, this investigation is the first to “clearly and convincingly document this reduction in disruptiveness across all principal areas of science and technology,” stated lead author Michael Park in an interview with AFP.

    Park, a doctoral student at the Carlson School of Management at the University of Minnesota, described disruptive discoveries as those that “diverge from established concepts” and “drive the entire scientific discipline into unexplored territory.”

    The researchers assigned a “disruptiveness score” to 45 million scientific articles published between 1945 and 2010, as well as to 3.9 million US-based patents from 1976 to 2010.

    From the beginning of these periods, research papers and patents have increasingly leaned towards consolidating or building on prior knowledge, as indicated by the findings published in the journal Nature.

    The scoring was based on how often the articles were cited in other research five years post-publication, operating under the assumption that the more disruptive the research, the less frequently prior works would be referenced.
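    Concretely, a paper counts as disruptive when later work cites it without also citing its references, and as consolidating when later work cites both. Below is a minimal sketch of that calculation, assuming the citation graph sits in plain dictionaries; the function and data layout are illustrative only, and the published CD Index additionally restricts the count to a five-year window:

    ```python
    def cd_index(focal, citations):
        """Toy CD-style disruptiveness score for one paper.

        citations maps each paper ID to the set of paper IDs it cites.
        Returns a value in [-1, 1]: +1 means later work cites the focal
        paper while ignoring its references (disruptive); -1 means later
        work cites the focal paper together with its references
        (consolidating).
        """
        refs = citations.get(focal, set())
        n_focal_only = n_both = n_refs_only = 0
        for paper, cited in citations.items():
            if paper == focal:
                continue
            cites_focal = focal in cited
            cites_refs = bool(cited & refs)
            if cites_focal and not cites_refs:
                n_focal_only += 1   # builds on the focal work alone
            elif cites_focal and cites_refs:
                n_both += 1         # cites the focal paper and its predecessors
            elif cites_refs:
                n_refs_only += 1    # bypasses the focal paper entirely
        total = n_focal_only + n_both + n_refs_only
        return (n_focal_only - n_both) / total if total else 0.0

    # Tiny example with made-up paper IDs:
    graph = {
        "focal": {"old1", "old2"},
        "a": {"focal"},          # disruptive-style citation
        "b": {"focal", "old1"},  # consolidating-style citation
        "c": {"old2"},           # ignores the focal paper
    }
    print(cd_index("focal", graph))  # (1 - 1) / 3 = 0.0
    ```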

    The most significant drop in disruptive research occurred in the physical sciences, such as physics and chemistry.

    According to senior study author Russell Funk, “the nature of research is evolving” as incremental innovations gain prevalence.

    One suggestion for the decline is that the “low-hanging fruit” in science has already been harvested.

    If that were accurate, the decrease in disruptiveness across various scientific domains would occur at differing rates, Park noted.

    However, “the declines are quite consistent in their timing and speed across all major fields,” Park stated, suggesting that the low-hanging fruit theory is likely not the cause.

    Instead, the researchers highlighted a concept referred to as “the burden of research,” which posits that the sheer volume of knowledge required to master a field leaves scientists with little opportunity to explore new frontiers.

    This results in scientists and inventors “concentrating on a narrow segment of existing knowledge, which prompts them to produce more consolidating work rather than disruptive innovations,” Park explained.

    Another contributing factor could be the “growing pressure in academia to publish extensively, as that is the benchmark for academic evaluation,” he added.

    The researchers urged universities and funding organizations to prioritize quality over quantity and to consider providing full funding for year-long sabbaticals to enable scholars to read and think more extensively.

    “We are not becoming any less innovative as a species,” Park reiterated, citing recent advancements such as the application of mRNA technology in COVID-19 vaccines and the detection of gravitational waves in 2015.

    Jerome Lamy, a historian and sociologist of science at France’s CNRS research agency, who did not participate in the study, remarked that it highlighted the increasing “ultra-specialization” and publishing pressures over the years.

    He said the global trend of academics feeling “compelled to break down their papers” to boost publication numbers has resulted in a “dulling of research.”

    In 2005, well before the emergence of Apple’s Siri and Amazon’s Alexa, two startups—ScanSoft and Nuance Communications—merged to capitalize on a growing opportunity in speech recognition. The new firm developed advanced speech-processing technology and experienced rapid growth for nearly a decade—averaging a 27% annual increase in sales. Then, suddenly around 2014, it ceased expanding. By 2019, revenues had stagnated, remaining roughly the same as in 2013. Nuance faced significant challenges as major tech companies that were once collaborators turned into competitors.

    Nuance’s situation is far from exceptional. Across all primary sectors and technology areas, startups are encountering unparalleled challenges. New companies continue to emerge to seize innovative prospects. Additionally, these firms now have access to an unprecedented influx of venture capital. Yet, the startup ecosystem is not in a healthy state. Innovative startups are now growing at a significantly slower rate compared to similar companies in the past.

    Unexpectedly, technology is a significant factor—specifically, proprietary information technology held by large companies that control their sectors. We often associate technology with disruption, where innovations from smaller, newer enterprises allow them to expand and eventually replace older, less efficient ones. However, these proprietary technologies are now hindering industrial turnover, which has sharply decreased over the past twenty years. This reduction in dynamism has extensive adverse effects on the US economy. It has slowed the expansion of innovative companies, and researchers have linked this slower growth to a substantial decline in productivity growth, impacting the entire economy, including personal incomes.

    Nuance was established in 1994 as a spinoff from SRI International (originally the Stanford Research Institute), which created speech-recognition technology for the US government. ScanSoft originated as a spinoff from Xerox. Before their merger in 2005, speech recognition was limited by computer processing power, with systems able to understand only restricted vocabularies, yet they still found utility in specific commercial areas, such as customer support via telephone and the transcription of medical records.

    By the late 2000s, circumstances had shifted. As computing power increased, Nuance succeeded in creating a significant advancement: “large vocabulary continuous speech recognition.” This innovation allowed users to speak freely about any subject, and the technology could accurately transcribe it in real time. Nuance utilized this technology in Dragon Dictation, an app that Apple highlighted when it launched the iPhone 3GS at its 2009 Worldwide Developers Conference. After Apple endorsed the product, Samsung and other phone makers sought after it, as did Google, Amazon, and Microsoft. Nuance experienced rapid growth by attracting these major clients and through millions of individual consumers buying the iPhone app, which became the leading business productivity application in the iTunes store. By 2011, Apple had released Siri, which featured Nuance technology, leading to Nuance’s revenue increasing to $1.7 billion in 2013.

    However, this growth was fleeting. Nuance wasn’t alone in recognizing that voice was set to become a primary mode of human interaction with computers and cloud services. Voice recognition evolved beyond merely dictating text to include shopping, information searches, music and video selection, appliance control, and additional functions. It was fast, hands-free, and—compared to keyboards and mice—a far more intuitive way for humans to communicate.

    Major tech companies began investing significant resources and talent into this opportunity. Apple focused on developing its own systems, Amazon pursued its Alexa voice assistant, and Google quickly followed suit with its Google Assistant and Home speakers. These firms successfully recruited talent from Nuance, absorbing many top professionals into their organizations. Amazon now employs over 10,000 engineers working on Alexa products, significantly surpassing the number of core R&D staff Nuance had at its peak.

    Beyond their financial clout, these large enterprises boasted advantages like extensive customer bases, complementary products, and vast data resources, enabling them to constantly enhance their voice-recognition technologies. Currently, there are 300 million Alexa devices in use; Google processes an average of 5.6 billion searches daily, with half of its users reporting that they use voice for searches. Amazon has developed a robust ecosystem where third-party developers contribute over 100,000 new “skills” to Alexa, covering functionalities from playing specific radio stations to telling jokes. Additionally, Amazon has licensed the Alexa far-field technology to appliance makers, who utilize it for controlling dishwashers, washers and dryers, and vacuum cleaners.

    Nuance struggled to compete in this arena. It shifted its focus to niche markets like healthcare before being acquired by Microsoft in 2021.

    The situation Nuance faced is not merely a retelling of the standard narrative of large companies outspending startups. Across various sectors, dominant firms are leveraging large-scale information systems to outmaneuver their competitors, including innovative startups. They are utilizing proprietary software to effectively manage complexity, thereby distinguishing themselves from rival firms. This strategy has enabled them to bolster their market dominance and avoid being surpassed by competitors.

    In retail, Walmart’s inventory management and logistics software enables it to offer a much wider range of products at lower prices, customize each store according to local preferences, and react swiftly as demand fluctuates and popular items arise. By utilizing vast data systems, leading financial institutions customize credit cards and home equity loans for individual consumers on an extensive scale and subsequently target their marketing efforts.

    Even the foremost waste-management companies and health insurance providers are making significant investments in proprietary software to outperform their competitors. Overall, companies (excluding those whose core product is software) now invest over $240 billion annually in their internal software, a steep rise from $19 billion in 1985. Most of that change comes from large firms: since 2000, the top four companies in each industry, ranked by revenue, have increased their investment in proprietary software eightfold, far exceeding investments from second-tier firms.

    These investments have yielded positive results. Since the 1980s, the leading four firms in various sectors have boosted their market share by 4% to 5%. My findings suggest that the majority of this growth stems from investments in proprietary software.

    This heightened dominance of leading firms corresponds with a notable decrease in the risk of being disrupted, a concern that has preoccupied corporate leaders since Clayton Christensen published The Innovator’s Dilemma in 1997. When Christensen wrote his book, disruption was becoming increasingly common. However, since around 2000—when top firms began their spending spree on proprietary systems—this trend has sharply declined.

    In a specific industry, the likelihood of a top-ranking firm (based on revenue) falling out of the top four positions within four years has decreased from over 20% to approximately 10%. In this instance, investments made by dominant firms in their internal systems primarily account for the shift. While some new technologies have disrupted entire industries—like how the internet affected newspapers or DVDs—others are now mitigating the disruption of leading firms.

    How does this occur, and why does it seem to impact so much of the economy? The answer lies in the fact that these business systems tackle a significant drawback of modern capitalism. Since the late 19th century, innovative companies have often discovered that they can attain remarkable cost savings by producing at a large scale. This transition significantly lowered consumer prices, but came with a downside: to reach those substantial volumes, products and services had to be standardized.

    Henry Ford famously asserted that car buyers could choose “any color so long as it is black.” Retail chains gained their efficiencies by offering a limited range of products to numerous stores. Financial institutions provided standardized mortgages and loans. Consequently, products had restricted features; stores had limited selections and were slow to adapt to shifts in demand; and many consumers struggled to obtain credit or received it only under costly and unsuitable terms.

    Software alters this dynamic, partly addressing these limitations, because it lowers the cost of managing complexity. With the appropriate data and organization, software enables businesses to customize products and services to meet individual needs, providing greater variety or more product features. This capability allows them to outperform competitors, securing a dominant market position. Walmart stores provide a much broader selection than Sears or Kmart stores and respond more rapidly to evolving customer requirements. Sears, once the leader in retail and now bankrupt, has been supplanted by Walmart.

    Toyota swiftly produces new models as it identifies emerging consumer trends; smaller car manufacturers lack the financial resources necessary to invest billions for such adaptability. Similarly, only Boeing and Airbus are able to construct highly intricate new jumbo jets. The leading four credit card companies have the data and systems to effectively tailor offers to individual consumers, maximizing both profit and market share; they dominate the market.

    These software-driven platforms have enabled leading firms to solidify their dominance. They have also hindered the growth of competitors, including innovative startups.

    Various pieces of evidence support the notion that startup growth has significantly slowed down. One indicator is the duration it takes for venture-backed startups to secure funding: from 2006 to 2020, the median age of a startup in the seed-round funding stage increased from 0.9 years to 2.5 years. The median age of a late-stage startup rose from 6.8 years to 8.1 years during that same timeframe. Among companies that were acquired, the average time from initial financing to acquisition tripled, growing from just over two years in 2000 to 6.1 years in 2021. The trend was similar for firms that went public. However, the clearest indication of a slowdown appears in what happens when firms become more productive.

    The defining characteristic of dynamic economies, referred to as “creative destruction” by economist Joseph Schumpeter, is that more efficient firms—those offering superior products, lower costs, or innovative business models—tend to expand more rapidly than their less effective counterparts, ultimately replacing them. However, after 2000, firms with a specific level of productivity experienced growth at just half the rate compared to those with the same productivity levels in the 1980s and 1990s.

    This indicates that productivity now has a diminished impact on growth compared to earlier periods.
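    As a purely illustrative sketch of how such a comparison could be run (this is not the authors’ code; the panel file and column names are hypothetical), one could regress firm growth on productivity interacted with a post-2000 indicator and check whether the interaction term is negative:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical firm-year panel with employment growth and productivity.
    df = pd.read_csv("firm_panel.csv")
    df["post2000"] = (df["year"] >= 2000).astype(int)

    # If creative destruction has weakened, the productivity:post2000
    # coefficient should be negative: the same productivity edge buys
    # less growth after 2000 than it did in the 1980s and 1990s.
    model = smf.ols("emp_growth ~ productivity * post2000", data=df).fit()
    print(model.params)
    ```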

    When productive firms grow at a slower pace, their chances of overtaking industry leaders and displacing them—a key feature of disruption—are reduced. Recent research I conducted alongside my colleague Erich Denk established a direct link between the diminishing effects of productivity enhancement and the increasing dominance of large firms, particularly regarding their investments in software and other intangible assets.

    Another perspective, strongly articulated by congressional investigators in hearings and a staff report released in 2020, attributes the decrease in economic dynamism to a different cause: the decline of government antitrust enforcement since the 1980s. According to this view, major firms have been allowed to acquire their competitors, leading to a reduction in competition. Such acquisitions have enhanced the dominance of these companies, especially in the technology sector, resulting in fewer new tech firms emerging and declines in venture capital funding for early-stage companies.

    Nevertheless, the entry rate of new tech firms into the market has only slightly decreased from the remarkable growth seen during the dot-com boom, and early-stage venture capital investment is at unprecedented levels, with roughly twice as many financings today as in 2006 and four times the amount invested. The challenge isn’t that large companies are blocking startups from entering markets or obtaining funding; rather, it’s that these large entities are utilizing extensive technologies that hinder startup growth. Additionally, major firms like Walmart and Amazon have primarily thrived by implementing superior business models instead of acquiring competitors. In fact, the frequency of acquisitions by dominant companies has actually diminished since 2000.

    Certainly, such acquisitions can sometimes influence the startup ecosystem. Some researchers have highlighted the existence of “kill zones,” where Big Tech makes acquisitions to stifle competition, making venture capital scarce. However, other studies show that startups often adapt by shifting their innovative efforts to different applications. Furthermore, the possibility of being acquired by a large corporation frequently encourages entrepreneurs to initiate startups. Indeed, despite the situation with Nuance, the volume of startups focused on speech recognition and natural language processing has quadrupled since 2005, with 55% of these ventures securing venture capital investments.

    The deceleration in the growth of innovative startups impacts not just a select few companies within the tech industry; the challenges facing enterprises like Nuance contribute to broader issues affecting the entire economy’s health. Research from the US Census Bureau indicates that the sluggish growth of productive firms accounts for a significant portion of the overall slowdown in growth of aggregate productivity, which gauges the output produced per person in the economy and serves as a rough indicator of economic well-being. My research has also suggested that this trend plays a role in increasing economic inequality, greater social divisions, and the diminishing effectiveness of governance.

    What steps are necessary to reverse this trend? Strengthening antitrust enforcement might be beneficial, but the shifts in economic dynamism are more closely related to advancements in technology than to mergers and acquisitions. A more fundamental issue is that the foremost new technologies are proprietary and only accessible to a limited number of large corporations. Historically, new technologies have disseminated widely, either through licensing or through firms developing independent alternatives; this has fostered enhanced competition and innovation.

    The government has occasionally aided this process. For instance, Bell Labs invented the transistor but was mandated by antitrust authorities to license the technology broadly, leading to the formation of the semiconductor industry. Likewise, IBM played a crucial role in creating the modern software industry when it began to sell software separately from hardware due to antitrust pressure.

    Currently, we observe some parallels even without government intervention. For example, Amazon has opened its proprietary IT infrastructure, giving rise to the cloud industry, which has significantly bolstered the prospects for many small startup companies. However, antitrust legislation could be utilized to encourage or require more large firms to make their proprietary platforms accessible. Additionally, relaxing restrictions imposed by noncompete agreements and intellectual-property rights on employee mobility could further promote the distribution of technology.

    Finding the appropriate balance of policies will be challenging, and it will require time—we want to avoid undermining incentives for innovation. Nevertheless, the initial step is to acknowledge that technology plays a new role in today’s economic landscape. Once a driving force for disruption and creativity, it has now taken on a different function.

  • Intel’s missed opportunity laid the foundation for the success of the British company Advanced RISC Machines (Arm)

    Arm makes its debut on the New York Stock Exchange today. The chip designer’s technology is found in practically every smartphone—Apple founder Steve Jobs also contributed to this.

    It all started with a mistake. A very expensive mistake. In 2006, Apple boss Steve Jobs asked chip giant Intel to build the processors for his first iPhone. But then-Intel boss Paul Otellini turned Jobs down; he saw little future in smartphones, dismissing them as a niche product – a misjudgment that he would soon bitterly regret.

    Arm IPO: Biggest IPO of the year

    Intel’s missed opportunity laid the foundation for the success of the British company Advanced RISC Machines (Arm). After being rejected by Intel, Jobs turned to the small Cambridge company that Apple had co-founded in 1990, and the British success story began.

    The current highlight is today’s IPO on the New York technology exchange Nasdaq. The shares were priced at $51 and are, therefore, at the upper end of the price range. This corresponds to a valuation of more than 54 billion dollars—it is the largest IPO of the year in the USA so far.

    Arm processors: cheap and energy-saving

    What many smartphone owners may not know: your daily companion most likely uses an Arm processor. More than 90 per cent of all mobile phones – from Apple, Samsung or Xiaomi – have a processor with Arm architecture. The company therefore has a virtual monopoly on smartphones. But what is the secret behind the success of Arm processors?

    The main difference between Intel and Arm processors lies in their instruction sets. Conventional Intel chips are, roughly speaking, designed to handle many complex instructions, which draws a lot of energy. The Arm architecture, on the other hand, relies on a small set of simple commands. Although this reduces computing power, it makes the processors particularly energy-efficient – meaning smartphone batteries last longer. Arm processors, which are also significantly cheaper, are used in game consoles, printers, routers and TV set-top boxes as well.

    Successful with chip licenses for smartphones

    Arm doesn’t even build the processors itself. Arm is a pure design company: a chip-design licensor without factories that only markets IP (intellectual property) for processors. Arm sells these designs to chip manufacturers such as Qualcomm, which then build Arm-based smartphone processors. The licensee pays Arm a fee for each chip produced.

    Artificial intelligence as a growth driver?

    However, Arm CEO Rene Haas moved away from this strategy before the IPO. In an interview with the Bloomberg news agency, Haas explained that in the future Arm will provide not just processor designs but complete IC designs, on the basis of which the chips can then be manufactured directly in the factories. These are integrated circuits (Integrated Circuit – IC) tailored to the specific requirements of the respective application; Arm differentiates between mobile devices, the automotive industry, cloud computing, the Internet of Things and artificial intelligence.

    Some market experts see great opportunities for the British group, particularly in artificial intelligence. This is especially true since the traditional mobile communications business is weakening, and the world’s largest smartphone market, China, is “particularly vulnerable to economic and political risks.” Arm itself points this out in the stock market prospectus.

    Arm – a second Nvidia?

    Arm’s future prospects on the stock market largely depend on whether the group convinces the market that it is one of the big players in the AI trend. “If the hype surrounding artificial intelligence and the demand for the securities of competitor Nvidia are an indication, only those who received the shares at the issue price can consider themselves lucky,” explains Jürgen Molnar, capital market strategist at RoboMarkets.

    ARM and Intel processors currently dominate the majority of our electronic devices. Despite serving the same purpose, the processors are constructed on distinct architectures, impacting program development and chip production.

    ARM processors are commonly found in handheld devices, such as smartphones, while Intel processors are used in desktop and laptop computers. Nevertheless, various computer manufacturers are now introducing ARM-powered computers. For example, Apple’s M1 and M2 processors power several Mac and MacBook models.

    So, what sets ARM and Intel processors apart?

    Arm vs. Intel: Technology Companies Dominating the Digital World

    Arm and Intel are prominent technology companies producing different microprocessors for different markets. While Intel manufactures high-performance processors for personal computers and laptops, Arm primarily designs ARM processors for low-power devices like smartphones, tablets, wearable tech, and smart home devices.

    It’s important to clarify that Arm is the company, while ARM is the processor. When referring to the company that designs microprocessors, it’s called Arm. However, when discussing the processor made by Arm, it’s referred to as ARM.

    In contrast to Intel, which both designs and manufactures its chips, Arm designs ARM processors and licenses these designs to various companies. As a result, numerous ARM-based smartphone processors are produced under different brands, such as Qualcomm’s Snapdragon, MediaTek’s chips, Samsung’s Exynos, and Google’s Tensor.

    In 2012, Intel attempted to produce smartphone processors through its Intel Atom SoCs, but the endeavor was largely unsuccessful. Subsequently, the company shifted its focus away from handheld processors and continued concentrating on the PC market.

    On the other hand, Arm achieved some success in designing processors for low-powered laptops and single-board computers but has yet to establish a significant presence in the desktop PC market. Nevertheless, it’s worth noting that Apple’s M1 chips, which are built on the Arm architecture, have proven to be capable processors, demonstrating that ARM processors can compete against Intel and AMD’s x86 chips.

    Differences Between ARM and Intel Processors

    ARM processors are predominantly used in mobile devices, while Intel processors are primarily used in personal computers. Even though both Intel and Arm may design processors for both markets, their differences make them more suitable for specific applications. Although both processors serve as the brains of electronic devices, they differ in several ways; some of the key disparities include the following:

    1. Instruction Set Architecture (ISA)

    The predominant use of different ISAs largely explains why ARM processors are primarily used for handheld devices and Intel processors for personal computers.

    There are two types of ISA: Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC). CISC is designed for microprocessors to efficiently handle complex tasks and calculations at the expense of power efficiency. Conversely, RISC is designed for processors to handle simpler instructions with high power efficiency.
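    As a toy illustration of the distinction (a pedagogical sketch only; real ISAs differ in encoding, pipelining, and memory access, not merely instruction count), the same work can be expressed as one complex memory-to-memory instruction or as a sequence of reduced load/compute/store steps:

    ```python
    # Toy instruction interpreter: "add_mem" stands in for a CISC-style
    # memory-to-memory operation; load/add/store mimic a RISC-style sequence.
    def run(program, regs):
        for op, *args in program:
            if op == "load":
                regs[args[0]] = memory[args[1]]
            elif op == "add":
                regs[args[0]] = regs[args[1]] + regs[args[2]]
            elif op == "store":
                memory[args[1]] = regs[args[0]]
            elif op == "add_mem":  # one complex instruction does it all
                memory[args[0]] = memory[args[1]] + memory[args[2]]
        return regs

    memory = {"x": 2, "y": 3, "z": 0}
    run([("add_mem", "z", "x", "y")], {})  # CISC-style: one instruction
    memory["z"] = 0
    run([("load", "r1", "x"), ("load", "r2", "y"),
         ("add", "r3", "r1", "r2"), ("store", "r3", "z")], {})  # RISC-style: four
    print(memory["z"])  # 5 either way; only the instruction granularity differs
    ```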

    Intel processors utilize their x86 architecture based on the CISC ISA, while ARM processors use their ARM architecture, which employs the RISC ISA. Both x86 and ARM ISAs are closed-source ISAs, and designing processors using these architectures requires a license from Intel or Arm.

    2. Software Compatibility/Support

    Due to the differing design philosophies (RISC and CISC) of the x86 and ARM ISAs, software developed for Intel chips cannot be processed natively by an ARM processor, and vice versa. Because the vast majority of programs are developed for one specific architecture, consumers tend to choose a company’s processor for the multitude of applications already available on it.

    If you’ve recently looked for a new desktop CPU, you’ve probably noticed that your only choices are either an Intel or an AMD processor. This is because the majority of computer programs are designed for x86 processors. As only Intel and AMD (which has a license to use x86) can run these programs natively, they have effectively created a duopoly in the PC processor market.

    Conversely, ARM processors dominate the smartphone processor market because smartphone operating systems and applications have been specifically programmed to run on ARM processors. While other RISC-based processors compete with ARM, the abundance of applications already available for ARM is a significant reason for their use in almost all modern smartphones today.

    To gain traction in the PC market, Arm must persuade developers to either port or develop applications from scratch to run natively on its ARM processors. Despite Microsoft Windows already supporting ARM processors, ARM must still find a way to run the everyday software that users rely on to convince the public to opt for ARM processors in their PCs.

    3. Performance

    In terms of CPU performance, Intel processors generally exhibit superior raw performance compared to ARM processors. However, ARM processors are designed for energy efficiency, making them suitable for devices requiring lower power consumption.

    At the same time, ARM processors feel responsive in everyday use because they are designed for simpler tasks that do not require the high clock speeds of their Intel equivalents.

    4. Efficiency

    While they may lack raw performance, ARM processors have an advantage over Intel in terms of efficiency.

    CISC-based processors like Intel’s x86 chips heavily rely on hardware to handle intricate tasks. High-performance processors need substantial power, sufficient cooling, and complex manufacturing, making them quite expensive.

    In contrast, ARM-based processors are optimized in both hardware and software, making them highly efficient processors that require very little power. Requiring minimal energy to operate also means relatively lower heat, allowing ARM CPUs to be embedded in a System on a Chip (SoC). An SoC combines the CPU, GPU, RAM, DSP, and telecommunication modems in one die or chip. Combining all the essential solid-state components into one chip makes an ARM processor highly efficient, because data does not have to travel far between the components of the SoC.

    ARM vs. Intel: Which Processor Is Better?

    Since Intel and ARM produce different types of processors for different markets, it is challenging to compare the two in terms of performance and efficiency. Although complex tasks call for a higher-performing processor, that does not necessarily mean a CISC-based chip will outperform a highly efficient RISC-based processor designed for PCs.

    However, Apple’s ARM-based M1 and M2 computers are highly efficient and relatively powerful. Numerous tests have shown that even its base-model M1 MacBook Air can outperform CISC-based Windows laptops of similar specifications.

    Still, since most computers run Windows, many developers find it more profitable to create apps for CISC processors. This is also why Macs typically have fewer options when it comes to apps and AAA games.

    This, however, is not the case when it comes to smartphone and tablet processors. Although Intel’s Atom SoCs were power efficient, they were not as good as their ARM counterparts in terms of performance per watt. For this reason, ARM processors remain superior in the smartphone market.

    ARM-Based Chips Are Slowly Gaining an Edge over Intel’s CISC CPUs

    Just a decade ago, even Intel’s entry-level CISC-based processors were much more potent than top-of-the-line ARM-based chips. However, this is no longer the case. Apple has demonstrated with its M1 and M2 chips that ARM-based CPUs can be as capable, if not more capable, than Intel’s competing offering.

    Nevertheless, this does not mean that the CISC-based x86 chip is on its way out. Given the large number of computers that run these chips, it will take time before ARM chips supplant Intel’s x86 offerings. But as ARM processors become more powerful and affordable, their adoption will increase, and they may soon power most computers on the market.

    What is an Arm processor?

    An Arm processor is one of a family of central processing units (CPUs) based on the reduced instruction set computer (RISC) architecture for computer processors. Arm Limited, the company behind the Arm processor, designs the core CPU components and licenses the intellectual property to partner organizations, which then build Arm-based chips according to their own requirements. Arm Limited does not manufacture or sell any chips directly.

    Acorn Computers initially developed the Arm processor in the 1980s.

    Arm Limited provides designs for both 32-bit and 64-bit RISC multicore processors. The processors use a much simpler instruction set than their Intel counterparts, which are based on complex instruction set computing (CISC) architecture. The two types of processors also utilize different methods to optimize performance and increase efficiency. For example, Intel employs a hardware approach to maximizing performance, while Arm opts for a software approach.

    Arm processors can execute many more millions of instructions per second than Intel processors. By eliminating unnecessary instructions and optimizing pathways, an Arm processor can deliver excellent performance while using much less energy than a CISC-based processor. The decrease in power also means that Arm CPUs generate less heat. This does not mean that Arm processors are inherently superior to Intel processors, only that they are better suited to specific use cases.

    Arm processors are extensively used in consumer electronic devices such as smartphones, tablets, wearables, and other mobile devices. They also find use in a wide range of sensors and internet of things devices. According to Arm Limited, the company’s partners have shipped more than 215 billion Arm-based chips over the past three decades.

    The reach of Arm processors is expanding. Historically, Arm processors were mainly used in smaller devices like smartphones and sensors. However, there is a shift as Arm processors are now being integrated into device categories that were traditionally dominated by Intel and to a lesser extent, AMD. For instance, Microsoft offers Arm-based versions of its Surface computers and Windows editions designed for Arm-based PCs.

    Arm processors are also utilized in numerous Chromebook laptops, and Apple has introduced several computers equipped with the Arm-based M1 chip. The new MacBook Pro systems from Apple, powered by the M1 chip, have established a new standard for laptop performance and battery life in the industry.

    Arm processors are now making their way into the server market. While this isn’t a new initiative, its adoption has been gradual. Nevertheless, enterprises are starting to take notice of the Arm architecture due to its potential to provide the best performance-per-watt among enterprise-class CPUs.

    As workloads grow in both size and complexity, they demand more energy for processing and for maintaining safe operating temperatures for the underlying hardware. This not only poses a financial concern but also becomes an issue for organizations that are transitioning towards more sustainable data centers.

    In contrast to scaling up with x86-based processors, an Arm server might employ hundreds of smaller, less advanced, and low-power processors that collaborate on processing tasks. This is sometimes referred to as scaling out. However, even when scaled out, these processors consume less energy and generate less heat than x86 servers, presenting a potential solution for addressing future energy considerations.

    Although Arm-based servers currently represent only a fraction of today’s data center systems, they are progressively gaining ground. For example, Amazon recently introduced the third generation of its Arm-based AWS Graviton processors, which boast up to 25% better compute performance than the previous generation and twice the cryptographic workload performance. These Graviton3 chips are now powering the AWS EC2 C7g instances.

    Ampere has also revealed the first 80-core Arm-based 64-bit server processor, targeting workloads such as artificial intelligence, data analytics, web hosting, and cloud-native applications. Arm-based processors are also being utilized in some of the world’s fastest supercomputers, garnering increasing recognition as a result.

    Meanwhile, Arm Limited continues its efforts to make headway into the data center. For example, its Neoverse chips are designed to deliver the performance and energy efficiency required to support cloud, edge, and 5G workloads now and in the future.

    Recently, there has been a noticeable rise in the prevalence of Windows laptops (and other devices) that utilize ARM processors instead of traditional CPUs (central processing units). However, ARM processors have been a significant part of many people’s electronic devices for over twenty years, indicating that ARM technology is not a recent development.

    Despite this, there is still a lack of widespread understanding of this processor technology. What exactly characterizes ARM? What kinds of devices make use of ARM? How do ARM processors differ from CPUs? Why is the adoption of Windows on ARM considered significant? Why is ARM frequently associated with Apple and Qualcomm? Let’s delve into these questions.

    Note: There are technical discussions on how to refer to RISC technology. This page uses “ARM” to refer to the general computing architecture technology and the original company name, whereas “Arm Ltd.” or “Arm” is used to denote the current company names.

    What is an ARM processor?

    “ARM” or “Arm” refers to a type of RISC instruction set architecture (ISA). An ARM processor simplifies instructions for the computer to maintain efficient system operation.

    ARM processors are typically less powerful than traditional CPUs, yet they also require less power to operate. Many companies opt to utilize ARM-based processors to create lightweight devices with extended battery life and reliable, well-balanced performance.

    What is an ARM-based RISC?

    A reduced instruction set computer (RISC) denotes a computer architecture that streamlines instructions sent to a computer, enabling efficient system operation. In this context, an ARM-based RISC represents a type of RISC computer processor architecture designed to operate with an ARM SoC (system on a chip).

    What sets ARM apart from other processors?

    CPUs from Intel and AMD are supplied and integrated into OEM computers (manufacturers such as Dell, Lenovo, etc.) as is, with no modifications to these processors by the OEMs apart from adjustments to power draw/thermal envelope, known as TDP.

    Intel and AMD design their own x86-x64 processors (CPUs), yet only Intel proceeds to manufacture them in its fab, which accounts for Intel’s significant influence. Conversely, AMD relies on a company like TSMC to manufacture its chips.

    Meanwhile, companies like Qualcomm or MediaTek procure licenses from Arm Ltd. to utilize core ARM CPU components and architecture. With these licenses, manufacturers have the legal freedom to modify or adapt ARM architecture or integrate an ARM processor into the chip. Put differently, while Qualcomm proceeds to design certain components of the SoC, the foundation of the processor stems from Arm Ltd., which conceived and licensed it.

    Once Qualcomm or MediaTek finalizes an ARM chip, it goes to a company like TSMC for manufacturing.

    (That said, Qualcomm’s new Snapdragon X Elite and Snapdragon X Plus chips are, for the first time, built around cores designed in-house rather than core designs licensed from Arm Ltd.)

    Due to this distinction, an ARM processor is termed an SoC because it encompasses a CPU, GPU, and, in current times, an NPU and other components such as a 5G modem. Essentially, these chips comprise multiple components on a single “board” instead of being solely a CPU.

    Arm processors tend to exhibit more focused functionality compared to CPUs, and they often do not match CPUs in terms of power. Nevertheless, ARM processors generally demand less power to operate, thereby allowing devices utilizing ARM processors to sustain longer battery life.

    What distinguishes ARM64 from x64 and x86?

    In the realm of CPUs, x86 and the newer x64 represent the most prevalent architecture types used in laptops. While x86 can only accommodate a maximum of 4GB RAM, x64 can access a larger capacity, rendering it more potent. On the other hand, AArch64 (or ARM64) serves as a 64-bit extension of ARM architecture with the capability to access more than 4GB RAM. Nevertheless, it does not match the power of x86 or x64 architecture. ARM64 has traditionally been employed in smartphones and smaller devices, but its utilization in laptops and tablets is on the rise.
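    The 4GB figure follows directly from pointer width, as a quick back-of-the-envelope check shows:

    ```python
    # A 32-bit pointer can address 2**32 distinct bytes:
    print(2**32)          # 4294967296 bytes
    print(2**32 / 2**30)  # 4.0 GiB, the classic x86 ceiling

    # A 64-bit pointer (x64, ARM64) lifts the theoretical limit to 2**64 bytes:
    print(2**64 / 2**60)  # 16.0 EiB, far beyond any installed RAM today
    ```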

    Why is the adoption of Windows on ARM significant?

    In recent years, Apple, Microsoft’s foremost competitor, has achieved substantial success with MacBooks and iPads that utilize the company’s M1, M2, M3, and recently announced M4 ARM chips. Simultaneously, the Microsoft Surface series of devices has also garnered impressive sales with its SQ1, SQ2, and SQ3 ARM chips. However, numerous competitors offering Windows on ARM are now challenging macOS on ARM.

    Historically, Windows software has been designed for use on x86 and x64 processor computers, but the significantly more energy-efficient Arm SoCs (Systems on a Chip) have demonstrated superior battery life compared to traditional CPUs for smartphones and laptops. In essence, introducing Windows on ARM has empowered various manufacturers like Dell, Lenovo, and others to manufacture dependable Windows laptops delivering commendable performance alongside longer-lasting batteries in comparison to other CPU laptop options.

    It is currently rumored that Microsoft is developing a new Windows build (code-named Germanium) tailored around the Qualcomm Snapdragon X Elite, another ARM processor. Therefore, Windows shows no signs of veering away from ARM processors in the near future. If anything, the era of Windows on ARM is gaining momentum.

    What is a Snapdragon X SoC?

    A Snapdragon X SoC is a System on a Chip designed for Windows on ARM devices, featuring a 10-core Oryon CPU, an Adreno GPU, and a Hexagon NPU. You can find more information about it in our Snapdragon X Plus guide.

    While ARM processors and devices are currently in the spotlight, this technology is not new. The ARM company dates back to 1990, when ARM (originally standing for Acorn RISC Machine) was established through a joint venture involving Apple, Acorn Computers, and VLSI (thanks CNBC). Today, the company operates independently as Arm Ltd.

    Instead of manufacturing its own CPUs like AMD and Intel, Arm Ltd. licenses core CPU components and ARM architecture. Companies that acquire these licenses can legally use these components and chip architecture to create more customized processors for their specific requirements.

    ARM processors were utilized in devices from Nokia and Texas Instruments in the 1990s and later became the dominant architecture for touchscreen phones in the 2000s and 2010s. Notably, the first iPhone (2007) utilized ARM technology, as did many Motorola, HTC, and Samsung phones. Beyond mobile devices, Nintendo incorporated several ARM processors in its gaming handhelds, while Raspberry Pi also leverages them. Therefore, even if you are unfamiliar with ARM-based processors, you have likely encountered or utilized an ARM device at some point in your life.

    The initial Windows on ARM processor was the NVIDIA Tegra, which was introduced in 2012. Today, several major tech companies collaborate on ARM-based chips, including Intel, AMD, Qualcomm, Dell, Lenovo, Apple, Microsoft, and others. Although ARM-based processors have not historically been exceptionally powerful, this is not their primary focus. Companies continue to engage with ARM technology because it significantly enhances battery life while delivering balanced device performance. Hence, it is evident why smartphones and smaller devices that prioritize portability (such as the Microsoft Surface line) incorporate this technology.

    The most recent ARM architecture is Armv9, which made its debut in 2021.

    The ongoing rivalry between Apple and Qualcomm

    According to recent benchmarks shared by Qualcomm, the ARM64 Snapdragon X Plus surpasses Apple’s ARM M3 chip in speed. Importantly, this is not solely due to Qualcomm’s processor having more cores. However, the recently announced M4 is expected to outperform the Snapdragon X Elite, indicating that both are poised to deliver substantial performance.

    Some individuals argue that it is unfair to compare Apple’s M Series chips with Qualcomm’s Snapdragon X, and I will explain why this notion is unfounded. For an in-depth understanding, I recommend watching Snazzy Labs’ YouTube video. In brief, the significant move made by Apple in 2020, shifting from Intel’s x86 chips to developing its own ARM-based M1 chip, garnered considerable attention. The M1 chip became associated with lightweight laptops offering excellent battery life and commendable performance. Subsequently, the company progressed to introduce the “M” series ARM Apple Silicon with the M2 and M3 chips.

    The M Series Apple Silicon, in some aspects, shares similarities with Snapdragon X Elite, not only because they are both ARM processors but also because they were initially developed by some of the same individuals.

    Importantly, it is practically impossible to discuss Apple ARM processors without bringing up Qualcomm. In 2019, a trio of former Apple employees—Gerard Williams III (Apple’s former chief architect for iPhone chips), John Bruno (former system architect at Apple), and Manu Gulati (former lead SoC architect at Apple)—leveraged their knowledge of ARM processors to establish a new company named Nuvia. Here, they continued to advance their designs beyond Apple’s constraints.

    Their achievements led to the acquisition of Nuvia by the semiconductor manufacturer, Qualcomm, in 2021. Interestingly, Arm sued Qualcomm over its acquisition of Nuvia, alleging that the deal violated Nuvia’s licensing agreements with Arm Ltd.

    The underlying takeaway is that the trio who departed from Apple have managed to create ARM processors that outpace those developed by Apple itself.

    While comparing Apple and Qualcomm ARM processors can be intriguing, Qualcomm’s primary competition is not Apple but rather AMD and Intel. Presently, Qualcomm is the most significant competitor in the ARM processor arena.

    There was a time when Intel held a commanding position as the primary chipmaker in the United States. However, the landscape has changed, with AMD emerging as a robust CPU competitor often surpassing Intel’s CPU performance while offering products at relatively lower prices. Surprisingly, Qualcomm currently surpasses Intel in terms of company size (thanks CNBC).

    AMD and Qualcomm have dedicated significant effort in recent years to develop NPUs (neural processing units) to keep up with the increasing demand for AI (read more in my NPU guide). In contrast, Intel has only recently entered the NPU market with the introduction of its Intel Core Ultra line this year. However, Intel’s AI processors have not demonstrated the same level of performance as those from AMD or Qualcomm. Even Qualcomm’s own benchmark shows that Snapdragon X Elite outperforms Intel Core Ultra.

    Despite Intel’s strong reputation, the company is losing ground to its competitors who offer superior performance. Over time, Qualcomm’s ARM technology is expected to advance further, possibly leading to a significant increase in powerful Windows on ARM devices.

    This is not an exhaustive list of ARM processor devices, but rather a brief overview of the kinds of devices (past and present) powered by ARM.

    • Microsoft Surface devices, Apple iPad, Apple M-Series MacBooks, Lenovo ThinkPad laptops, Dell Inspiron laptops
    • Apple iPhone, Apple iPod, Motorola phones, Nokia phones, Samsung phones, HTC phones, Blackberry phones
    • Raspberry Pi, Roku 2
    • Game Boy Advance, Nintendo DS, Nintendo DSi, Nintendo 3DS, Nintendo Wii U, Nintendo Switch
    • Garmin Navigation Devices, TomTom navigation devices
    • HP Calculators, Texas Instruments calculators

    There is much to learn about ARM processors, so here’s a comprehensive ARM FAQ to address any additional queries you may have.

    What is the full form of ARM?

    ARM originally stood for Acorn RISC Machine, named after the British tech company Acorn Computers, which was involved in its creation. Later, ARM became recognized as the acronym for Advanced RISC Machines. Today, the company behind this processor architecture is known as Arm Ltd.

    SoC vs processor vs CPU: What’s the distinction?

    A CPU, also known as a processor, is the set of hardware components that collaborate to process fundamental instructions for a computer system. An SoC, on the other hand, packages a processor together with other components, such as a GPU and a modem, on a single chip, offering balanced performance and greater efficiency rather than maximum power. Both SoC and CPU have their respective uses, and one might be better suited for specific tasks than the other.

    What are the constraints of Windows on ARM?

    In the past, Windows on ARM was quite restricted, but that is no longer the case. Previously, many drivers and programs were often not compatible with ARM, but now the majority, if not all, of your apps will function on Windows on ARM laptops. The only potential exception to this could be smaller developers who have not released ARM64 drivers for their software.

    Why is Windows on ARM not as good?

    Though this was once a prevalent sentiment, it is worth addressing. At one time, Windows on ARM was much more limited than on a traditional Windows laptop because numerous programs lacked ARM64 extensions. However, this has changed over the years. Nowadays, most, if not all, of your apps can be run on a Windows on ARM device.

    Why is ARM so popular?

    ARM architecture empowers computer manufacturers to exercise more control over their products. From a consumer perspective, ARM processor devices are favored for their lightweight designs, excellent battery life, and impressive performance.

    Is Windows compatible with ARM?

    Yes. ARM devices can run Windows 10 and Windows 11 with ease.

    Should “ARM” be capitalized?

    Both “ARM” in all caps and “Arm” with only the first letter capitalized are acceptable, depending on the context. ARM originally stood for Acorn RISC Machine, so many tech writers write the term in all caps when discussing the technology in general. However, the company Arm Limited currently capitalizes only the first letter of its name. Additionally, the 64-bit extension of the ARM architecture is written in all caps, “ARM64,” while specific computing technologies are written with the first letter capitalized, such as “Arm Cortex-X1” or “Armv9-A.”

    It can be quite confusing. However, as a general rule, “ARM” is used when referring to the original company, specific extensions/drivers, or the technology in general, while “Arm” is appropriate when discussing the modern company or its specific computing technologies.

    Will ARM replace x86?

    Both x86 and ARM have their advantages, and while they may end up serving different purposes over time, it is unlikely that one will dominate the other in the next decade.

    Is Raspberry Pi based on ARM?

    Yes, the initial Raspberry Pi utilized a Broadcom BCM2835 SoC with an ARM processor. Since then, multiple Raspberry Pi models have also incorporated ARM processors. The latest Raspberry Pi 5 features a 64-bit 2.4 GHz quad-core ARM Cortex-A76 processor.

    Arm sued Qualcomm in 2022 over its purchase of Nuvia

    Arm has intensified its conflict with Qualcomm, two years after initiating legal action against its former close ally. According to Bloomberg, the British semiconductor firm has revoked the architecture license that permitted Qualcomm to utilize its intellectual property and chip design standards. As noted by the news source, Qualcomm, similar to numerous other chip manufacturers, relies on Arm’s computer code, which chips need in order to run software such as operating systems. Reportedly, Arm has issued Qualcomm a 60-day cancellation notice — if an agreement isn’t reached in that timeframe, it could significantly affect the finances and operations of both firms.

    The SoftBank-backed semiconductor company took legal action against Qualcomm in 2022 after the latter acquired a company called Nuvia, which was another of its licensees. Arm claimed that the US firm failed to secure the required permissions to transfer Nuvia’s licenses. Consequently, Nuvia violated their agreement, and Arm asserted it had terminated those licenses, as outlined in its lawsuit. Qualcomm has been implementing technology developed by Nuvia in chips designed for AI PCs, such as those produced by Microsoft and HP. However, Arm demands that Qualcomm cease using technology developed by Nuvia and eliminate any Arm-based technology created before the acquisition.

    Should the firms fail to resolve the matter within the next 60 days, Qualcomm may need to cease sales of the majority of the chips that contribute to its $39 billion revenue, according to Bloomberg. The US chipmaker perceives this as a strategy by Arm to intimidate its business and secure greater royalties, with its spokesperson stating to Bloomberg and the Financial Times: “This is just more of what we’ve seen from Arm — more unfounded threats aimed at pressuring a long-term partner, interfering with our leading CPUs, and raising royalty rates despite the broad rights granted under our architecture license.” Qualcomm also charged Arm with trying to obstruct the legal process, describing its basis for terminating the license as “completely baseless,” and expressed confidence that its “rights under the agreement with Arm will be upheld.”

    In response, an Arm representative remarked: “In light of Qualcomm’s continuous significant violations of Arm’s licensing agreement, Arm has no alternative but to initiate formal proceedings requiring Qualcomm to address its breach or risk the cancellation of the agreement. This action is essential to safeguard the exceptional ecosystem that Arm and its valued partners have developed over more than three decades. Arm is fully prepared for the trial scheduled for December and remains assured that the Court will rule in Arm’s favor.”

    As reported, Arm has provided Qualcomm with a mandatory 60-day notice concerning the termination of the licensing agreement. This contract previously allowed Qualcomm to create its own chips utilizing Arm’s proprietary standards.

    Arm chose not to comment on the situation, while Qualcomm did not reply to a comment request from Reuters outside regular business hours.

    This announcement aligns with an ongoing legal struggle between the two technology companies, set to commence in federal court in Delaware this December.

    The British firm, predominantly owned by Japan’s SoftBank Group, initiated legal action against Qualcomm in 2022 for allegedly failing to negotiate a new licensing agreement following its acquisition of another Arm licensee.

    The lawsuit focuses on technology that Qualcomm acquired from Nuvia, a startup established by former Apple chip engineers, which Qualcomm purchased for $1.4 billion in 2021. Qualcomm intended to utilize Nuvia’s technology to rival Apple by developing chips capable of competing with the Apple M-series processors and challenging the desktop market led by Intel and AMD.

    While the acquisition was initially perceived as routine, Arm contends that Qualcomm’s use of Nuvia’s designs infringes on the licensing agreement. The licenses granted to Nuvia were specifically designated for a startup and could not be directly employed by Qualcomm without Arm’s consent.

    Arm has previously stressed that, “Arm is pursuing this claim to protect Arm, our partners, and the unmatched ecosystem we have collaboratively built. Arm and its partners have invested billions of dollars in creating industry-leading intellectual property. Because Qualcomm sought to transfer Nuvia licenses without Arm’s authorization, a standard restriction under Arm’s licensing agreements, Nuvia’s licenses were annulled in March 2022. Prior to and following that date, Arm made numerous good faith attempts to reach a resolution. Conversely, Qualcomm has breached the terms of the Arm licensing agreement by continuing development under the revoked licenses. Arm felt compelled to take legal action against Qualcomm and Nuvia to safeguard our intellectual property, our business, and to ensure customers can access legitimate Arm-based products.”

    Arm has allegedly given Qualcomm the mandatory 60-day cancellation notice. This license enables Qualcomm, based in San Diego, to create its own silicon according to Arm’s chip standards.

    The Bloomberg report emerged just as Qualcomm unveiled its latest mobile and automotive chips at its annual Snapdragon Summit in Hawaii. These chips mark significant milestones for Qualcomm, introducing new generative AI features for smartphones such as the Xiaomi 15 and Asus ROG Phone 9, while car manufacturers Mercedes-Benz and Li Auto have pledged to use the AI-capable automotive chips in their forthcoming vehicles.

    At the time of publication, Qualcomm had not confirmed that these chips were developed based on Arm standards, but the Oryon CPUs central to both product lines are implicated in the lawsuit. The company has contested Arm’s cancellation in an official statement.

    “This situation reflects a continuation from ARM — more unfounded threats intended to coerce a longstanding partner, disrupt our performance-leading CPUs, and raise royalty rates despite the extensive rights granted under our architecture license. With a trial quickly approaching in December, Arm’s desperate maneuver seems aimed at undermining the legal proceedings, and its termination claim is entirely unfounded,” stated Qualcomm. “We are confident that Qualcomm’s entitlements under its arrangement with Arm will be upheld. Arm’s anticompetitive behavior will not be accepted.”

    Initially, Arm refrained from commenting but later issued a statement indicating its readiness for the December trial, which will seek to resolve competing lawsuits between the two companies.

    “In light of Qualcomm’s numerous significant violations of Arm’s license agreement, Arm has been compelled to take formal action requiring Qualcomm to rectify its breach or face the end of the agreement. This step is essential to safeguard the unparalleled ecosystem that Arm and its esteemed partners have cultivated over more than 30 years,” Arm’s statement indicated.

    The forthcoming trial scheduled for December coincides with Arm’s 60-day deadline. Arm has alleged that Qualcomm failed to renegotiate a deal after its 2021 acquisition of an Arm licensee, the chip startup Nuvia, and used Nuvia’s chip designs in upcoming products. Qualcomm maintains that its existing Arm agreement covers Nuvia, according to Bloomberg. Arm initiated a lawsuit, and Qualcomm responded with a counter-suit.

    This scenario results in detrimental outcomes for all parties involved: Arm, Qualcomm, Microsoft, and most Android manufacturers.

    As it currently stands, this represents a no-win situation for every participant, and it carries broader implications for the tech industry. The core of the dispute is money. Nuvia paid Arm a reduced licensing fee while developing the predecessor of the Oryon cores, which are now used in multiple Windows on Arm devices and are beginning to appear in smartphones. Qualcomm, in turn, pays a higher licensing fee to use Arm’s Cortex architecture for its mobile chips. Upon acquiring Nuvia, Qualcomm assumed that the licensing transferred with the purchase, effectively obtaining a discount compared to what it would have paid Arm directly.

    With the increasing success of Windows on Arm, Arm (justifiably or not) believes it deserves a larger share, more in line with the fees Qualcomm currently pays for its mobile chips. Should Arm revoke the license for Cortex or any other intellectual property it uses, Qualcomm would find itself in a precarious position. Although Microsoft wants Windows on Arm devices, that segment is a small portion of the processors Qualcomm manufactures; the smartphone and tablet sectors rely predominantly on Qualcomm’s offerings, making this a situation that transcends a simple corporate rivalry.

    The final element of this losing equation is that while losing access to Arm’s IP would have severe repercussions for Qualcomm, the relationship is not one-sided. In the last fiscal year, Qualcomm accounted for 10% of Arm’s total revenue (via Barrons). Against Arm’s overall financial picture, that is enough to make the difference between ending the year in profit or in loss (via Yahoo).

    This kind of posturing seems illogical. Qualcomm will likely have to incur some additional licensing fees as a result of the court case regardless, so why provoke the situation? It has evolved into a case of (almost) mutually assured destruction, especially if one of Arm’s licensees exits the scene.
    Arm currently doesn’t make products; it creates designs and ideas that its licensees can turn into products. I can’t imagine that any of Arm’s other business partners are looking at this situation warmly. It’s one thing to take another company to court; that’s part and parcel of the cost of doing business. It’s another thing altogether to be going to court and also threaten to remove the very thing you’re going to court to address, on the same day you’re due in court.

    We can only hope that cooler heads will prevail in the days remaining before the court date and before the threatened revocation of Qualcomm’s licensing deals takes effect. Over the years, we’ve reviewed hundreds or possibly thousands of Qualcomm-powered devices, all using Arm IP, and for the most part, we loved them. Qualcomm losing the ability to use Arm’s architecture would be on a similar scale to losing AMD because Intel refused to renew the x86 license that’s necessary to make Ryzen, Threadripper, Epyc, and custom silicon for gaming consoles. Except, in that situation, the government would step in, as it has before when Intel flexed its muscles with monopolistic practices.

    Arm actually going through with its threat would be the biggest self-own in tech

    It’s challenging to be a pure IP company that thrives on the success of the companies to which you license your hard work and engineering know-how. At least, it would be if you were any other company, but things are a little different when you’re used in 99% of the world’s smartphones, almost everything Apple makes, and a growing sector of Windows devices. Arm might manage to survive without Qualcomm, but it’d be close, and Qualcomm is almost gone without access to Arm’s IP. Not to mention the hundreds of companies that use Qualcomm’s Arm-based products in their own products. Losing either Arm or Qualcomm would be a huge blow to the technology sector, one that it would take many, many years to recover from.

    While there are other CPU architecture alternatives, Arm has a virtual monopoly in the mobile space. Qualcomm, Samsung, Apple, and others license IP from Arm to make their custom cores, so the chances are high that your smartphone uses Arm IP. This is monopolistic behavior, and the statement Qualcomm put out in response to the 60-day timeline says as much:

    This is more of the same from ARM – more unfounded threats designed to strongarm a longtime partner, interfere with our performance-leading CPUs, and increase royalty rates regardless of the broad rights under our architecture license. With a trial fast approaching in December, Arm’s desperate ploy appears to be an attempt to disrupt the legal process, and its claim for termination is completely baseless. We are confident that Qualcomm’s rights under its agreement with Arm will be affirmed. Arm’s anticompetitive conduct will not be tolerated.

    Of course, Arm released its own statement not long afterward, sticking to its guns about the 60-day termination notice:

    Following Qualcomm’s repeated material breaches of Arm’s license agreement, Arm is left with no choice but to take formal action requiring Qualcomm to remedy its breach or face termination of the agreement. This is necessary to protect the unparalleled ecosystem that Arm and its highly valued partners have built over more than 30 years. Arm is fully prepared for the trial in December and remains confident that the Court will find in Arm’s favor.

    Note that the company references the upcoming court date in the statement, which is a bold move when they’re essentially intimidating the other party involved.

  • SpaceX in particular has now overtaken the Europeans with significantly cheaper technologies

    In 2024, NASA is planning an important manned test flight to the Moon. ESA wants to become more independent again with its new Ariane 6 rocket. An overview of the space travel highlights in 2024.

    The new heavy-lift rocket Ariane 6 is finally scheduled to launch in summer 2024 – four years later than planned. The pressure is high: the European Space Agency (ESA) currently no longer has a heavy-lift rocket of its own. The last rocket of the Ariane 5 generation took off in July 2023. SpaceX in particular has now overtaken the Europeans with significantly cheaper technologies.

    With the new Ariane 6, Europe wants to secure its own direct access to space for the future. The complete upper stage of the new rocket type was built in Bremen. The main engine and the upper-stage engine were tested by the German Aerospace Center in Lampoldshausen, Baden-Württemberg. Due to the coronavirus pandemic and technical difficulties, the maiden flight into space had to be postponed repeatedly; it is currently scheduled for summer 2024.

    Mars moon mission to send rocks to Earth

    The Japanese space agency JAXA will send a probe to the Martian moon Phobos in September 2024. After three years of flight, the probe will drop a rover onto Phobos from a height of about 100 meters. The low gravity of the small Martian moon makes this a particular challenge: the rover must move very slowly and carefully to avoid accidentally launching itself back into space.

    The rock samples collected by the rover are to be flown back to Earth later. If this succeeds, it would be a first, and also a test with a view to neighboring Mars. Rovers have already analyzed Martian rocks on site there, but samples have not yet been returned to Earth. However, initial plans for such a return mission are already in place: ESA and NASA are planning a Mars Sample Return mission by 2033. According to current plans, China would like to launch such a return mission even earlier.

    Hera mission to investigate asteroid defense

    How can the Earth be protected from an asteroid in an emergency? ESA’s new Hera mission is designed to investigate this question in more detail and will therefore fly to the asteroid Didymos and its moon Dimorphos in 2024. In 2022, NASA crashed a probe into the small moon Dimorphos; the impact changed Dimorphos’ orbit around Didymos.

    The Hera probe will now examine the crater created by the impact in more detail. The results could help avert an asteroid impact on Earth in a future emergency and inform further missions to test defense techniques. The launch of the space probe is planned for October 2024.

    Important test for a later moon landing

    In a few years, people will land on the Moon again. Preparations for NASA’s Artemis III mission are becoming more and more concrete. In November 2024, an important test flight will take place – the Artemis II mission. Four astronauts, three men and one woman, will fly to the Moon in the Orion spacecraft. A landing is not planned; the crew will orbit the Moon.

    Depending on how the mission goes, the crew will return to Earth after eight to 21 days. If everything goes well, people could land on the Moon again in the coming years, more than 50 years after the first manned Moon landing. The Apollo crews landed near the Moon’s equator, where landing is comparatively easy. At the south pole, however, the landscape is rugged. So far, NASA has selected 13 possible landing regions around the Moon’s south pole.

    The first manned test in November 2024 is considered an important milestone for the return to the Moon. Five manned test flights were completed before the first Apollo landing on the Moon. In the series of Artemis missions, only one manned test flight is currently planned before the first landing attempt, but more could follow.

    NASA mission searches for ice on the moon

    The VIPER rover will also be sent to the moon at the end of 2024. It is to land near the south pole of the Earth’s satellite and examine the Nobile crater. This lunar crater was probably formed by a collision with another celestial body. In this very icy region with temperatures below minus 200 degrees Celsius, NASA is looking for frozen water, not only on the lunar surface, but also in the underlying soil layers. The data collected will be used to create a map of the available resources on the moon. According to NASA, the findings could help in the future to use the water reserves on the moon for longer manned missions and the construction of a lunar base.

    The Chinese mission Chang’e 6 also wants to go to the Moon. The lander is to take two kilograms of soil samples from the Moon and bring them back to Earth after two months. Experts want to further unravel the history of the Moon’s formation by analyzing the rock – and they hope to find new answers to the question of how the entire solar system came into being.

    Is there life on Jupiter’s moon Europa?

    The Europa Clipper space probe will fly to Jupiter’s icy moon Europa at the end of 2024. The NASA probe is intended to examine Europa’s thick ice crust and, above all, the ocean believed to be hidden beneath it, in which life is theoretically possible. The high radiation dose near Jupiter makes the mission a challenge, so the probe’s sensitive instruments are specially shielded. The probe will fly close to Europa 45 times in order to scan almost the entire moon; after that, its fuel will be used up or the radiation dose will have destroyed it.

    A launch towards Venus is also planned for 2024. Shukrayaan-1 from the Indian Space Agency is intended to closely analyze the atmosphere of Venus and the volcanic activities on its surface.

    In addition to the fixed plans, surprises are still possible when looking into space in 2024. Not least thanks to the James Webb Space Telescope, which is constantly providing new data and producing new research results.

    “Hubble” observations

    Black hole ejected from galaxy

    According to researchers from Baltimore, a black hole with a mass of more than a billion suns has shot out of its home galaxy. It is now racing into the vastness of space at millions of kilometers per hour.

    A gigantic gravitational wave tsunami has hurled a black hole out of the heart of its home galaxy. That is at least what astronomers have concluded from observations with the Hubble Space Telescope. The black hole, with a mass of more than a billion suns, is shooting out of its home galaxy at a speed of 7.6 million kilometers per hour.

    Scientists led by Marco Chiaberge from the Space Telescope Science Institute in Baltimore used the space telescope to study a galaxy about eight billion light years away.

    A light year is the distance that light travels in one year and corresponds to almost ten trillion kilometers.
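
    As a quick check of that figure (our own arithmetic, not part of the original article), multiplying the speed of light by the number of seconds in a year gives:

    ```latex
    % one light year = speed of light x one year
    1\,\mathrm{ly} = c \cdot t
      = 299\,792.458\,\tfrac{\mathrm{km}}{\mathrm{s}} \times 31\,557\,600\,\mathrm{s}
      \approx 9.46 \times 10^{12}\,\mathrm{km}
    ```

    That works out to the “almost ten trillion kilometers” quoted above.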

    Brighter than an entire galaxy

    The galaxy belongs to a galaxy cluster – and the astronomers were looking for signs of merging galaxies. To their surprise, they observed a so-called quasar in the outskirts of the galaxy they were studying. Quasars are usually the cores of active galaxies and are powered by a central black hole.

    The black holes themselves cannot be observed, but quasars usually shine brighter than an entire galaxy. “Black holes live in the centers of galaxies, so it is unusual to observe a quasar that is not in the center,” explains Chiaberge. The researchers calculated that this quasar, with its supermassive black hole, is around 35,000 light years away from the center of its galaxy. That is further than the distance from the Sun to the center of our home galaxy, the Milky Way. The speed of the quasar, which carries the catalog number “3C 186”, suggests that it will leave its galaxy in about 20 million years and then wander through space.
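
    A rough plausibility check of that 20-million-year figure (our own back-of-the-envelope arithmetic, using the round numbers from the article):

    ```latex
    % distance covered in 20 million years at 7.6 million km/h
    d = 7.6\times10^{6}\,\tfrac{\mathrm{km}}{\mathrm{h}}
        \times 8766\,\tfrac{\mathrm{h}}{\mathrm{yr}}
        \times 2\times10^{7}\,\mathrm{yr}
      \approx 1.3\times10^{18}\,\mathrm{km}
      \approx 1.4\times10^{5}\,\mathrm{ly}
    ```

    That is on the order of a large galaxy’s diameter, so an escape within roughly 20 million years is at least self-consistent.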

    To give such a massive black hole this speed, the energy of 100 million supernovae exploding simultaneously is needed, the astronomers explain. The scientists studied the unusual system with numerous telescopes. “When we combined the observations from Hubble, the Chandra X-ray Observatory and the Sloan Digital Sky Survey, they all pointed to the same scenario,” reports Chiaberge.

    Merged with a neighboring galaxy?

    The researchers assume that the home galaxy of the unusual black hole actually merged with a neighboring galaxy. In the process, its two supermassive black holes in the center also merged. However, this did not happen all at once. Instead, the black holes orbit each other ever more closely and quickly, emitting spiral gravitational waves – similar to how a rotating lawn sprinkler distributes water. If the two black holes do not have the same mass and the same rotation rate, the gravitational waves can be more concentrated in one direction. As soon as the black holes finally merge, they stop producing gravitational waves.

    The recoil then hurls the merged black hole in the opposite direction to the previously bundled gravitational waves. According to the astronomers, this is exactly what happened with the gigantic black hole in the heart of “3C 186”. The only other explanation is that the quasar is actually located somewhere far behind the observed galaxy in another galaxy. However, there is no trace of such a background galaxy in the observations in numerous wavelength ranges.

    “2023 BU” recently discovered

    Asteroids come unusually close to Earth

    As big as a truck and closer than almost any asteroid before it, “2023 BU” will race past the Earth at an altitude of just 3,600 kilometers above the southern tip of South America. According to NASA, there is no danger of an impact.

    An asteroid will pass unusually close to Earth on Friday night. The celestial body, known as “2023 BU”, will reach the point in its orbit closest to Earth early Friday at 1:27 a.m. German time, according to the US space agency NASA. The asteroid, which is about the size of a delivery truck, will then rush past the southern tip of South America at an altitude of just 3,600 kilometers.

    According to NASA, there is no risk of an impact. For comparison: geostationary satellites are around 35,000 kilometers away from Earth, and the ISS space station is around 400 kilometers away. However, it will most likely be one of the closest approaches to Earth ever recorded, according to NASA.

    Even if the space rock came a lot closer to Earth, nothing would probably happen. According to researchers, most of the asteroid would burn up in the atmosphere. Some of the larger pieces could fall as meteorites.

    Discovered just a few days ago

    The celestial body was discovered just a few days ago by amateur astronomer Gennady Borisov at the Margo Observatory in Crimea.

    Borisov made a name for himself in 2019 when he used a self-built telescope to discover a comet that crossed our solar system on its journey from the depths of space. The comet 2I/Borisov attracted worldwide interest among astronomers at the time.

    Technical problems with US moon landing mission

    For the first time in over 50 years, a mission to the moon has been launched in the USA. However, the private lunar module has technical problems. According to operator Astrobotic, the module was unable to take the desired position in relation to the sun.

    Shortly after the launch of a US mission with the aim of making the first successful commercial landing on the moon, a malfunction occurred, according to the company. Initially, the launch went as planned and the systems worked as expected, said Pittsburgh-based company Astrobotic.

    “Unfortunately, there was a malfunction that prevented the lander from taking up a stable position facing the sun.” The team is reacting to the situation and will pass on further information as soon as it is available, the company said. The “Peregrine” lander was launched this morning on board a “Vulcan Centaur” rocket made by ULA from the Cape Canaveral spaceport.

    The capsule was supposed to land on Earth’s satellite at the end of February in an area called Sinus Viscositatis (Bay of Stickiness), completing the first – unmanned – US moon landing since the Apollo mission over 50 years ago.

    The cargo flight to the Moon was the first of several of its kind planned for 2024. The lunar lander required for this was financed by the American space agency NASA, but is a development and property of the private US space company Astrobotic. The lander “Peregrine” has scientific equipment, technical experiments and commercial cargo from several countries on board.

    What exactly is “Peregrine”?

    The lander “Peregrine” – named after the peregrine falcon – is essentially a delivery truck for the Moon. This spacecraft cannot transport people, but it can carry scientific measuring instruments, technical equipment, robots or even objects that companies want to place on the Moon for a variety of reasons and business interests.

    The lunar lander is two meters high and 2.5 meters wide. “Peregrine” lands on four legs, slowed down by rocket engines. These legs carry a platform on which a total of 21 different “packages” are mounted.

    But unlike the parcel delivery service on Earth, the cargo items are not distributed after landing, but remain on board or are placed on the lunar surface in the immediate vicinity of the lander – small robots and rovers, for example.

    Peregrine stays on the moon

    Another difference from a delivery truck on Earth: Peregrine is a single-use transporter. The vehicle remains on the lunar surface after landing and does not return to Earth. A shuttle service between Earth and the Moon is therefore not possible.

    An earlier mission failed over an incorrect altitude calculation

    In April 2023, a Japanese company failed in a similar mission. The company Ispace stated that the reason was an incorrect calculation of the lander’s height during the landing attempt. During the “Peregrine Mission 1”, private individuals were able to buy space to transport material to the moon in the lander, which is 1.9 meters high and has a diameter of 2.5 meters.

    The US space agency NASA, which has several instruments on board for the journey, also wants to use the mission to prepare its own expeditions to Earth’s companion. Among other things, NASA wants to investigate the Moon’s exosphere. In addition, thermal properties and the hydrogen content of the material on the Moon’s surface (regolith) are to be investigated.

    NASA mission “Artemis II”

    Three men and a woman fly to the moon

    A good 50 years after the last “Apollo” mission, NASA wants to send people to the moon again. The US space agency has now announced who will take part in the “Artemis II” mission at the end of 2024.

    The US space agency NASA has named four astronauts for the “Artemis II” mission, which is to fly around the Moon next year: NASA astronaut Christina Koch from the USA, her US colleagues Victor Glover and Reid Wiseman, and Canadian Jeremy Hansen. They would be the first people to travel near the Moon since the astronauts of the last “Apollo” mission set foot on the Earth’s satellite in 1972, NASA said when announcing the participants.

    Koch, an engineer who already holds the record for the longest uninterrupted space flight by a woman, was named as a mission specialist, along with US Navy officer Glover, who was selected as the “Artemis II” pilot. Glover would be the first black astronaut on a lunar mission.

    Hansen is the first Canadian ever selected to fly to the Moon. Wiseman, a veteran of the International Space Station, was named commander of the Artemis II mission.

    Launch scheduled for late 2024

    The “Artemis II” mission is currently planned for November 2024. It will be the first manned “Artemis” launch after the successful test of the “Artemis I” mission in December. The unmanned “Orion” capsule traveled around 1.4 million miles (about 2.25 million kilometers) through space, flew around the Moon and landed in the Pacific Ocean after around 26 days in space, collecting important data along the way.

    NASA wants to go back to the moon

    The unmanned test mission was considered an important step towards the return of humans to the moon, with the long-term goal of a trip to Mars. Around a year after “Artemis II”, another manned flight, including a moon landing, is to follow with “Artemis III”.

    NASA plans to land astronauts on the lunar surface later this decade to establish a permanent outpost that will provide a springboard for future Mars exploration. The European Space Agency (ESA) and space agencies from several other countries are involved in “Artemis.”

    Editor’s note: An earlier version of the text referred to a “colored astronaut.” We have replaced the word “colored” with “black.”

    NASA mission “Artemis II”

    “To the moon, to Mars and beyond”

    NASA has introduced four astronauts who will fly to the moon next year. As the crew of the “Artemis II” mission, they will orbit the moon for ten days – and prepare for a manned landing.

    It was a huge show that NASA put on at the Johnson Space Center in Houston, Texas. It was ceremoniously revealed that four astronauts will fly to the moon in the Orion space capsule at the end of next year.

    Three Americans, including a woman and an African American for the first time, and a Canadian will spend about ten days flying around the Moon. They will also be the first people in over 50 years to see the Moon up close – all in preparation for the next manned landing.

    Test mission before landing

    “The Artemis II crew represents thousands of people who are working tirelessly to take us to the stars,” said NASA chief Bill Nelson in Houston. “The world’s largest and most powerful rocket will catapult them into space. They will fly to the moon,” Nelson continued.

    Like “Artemis I”, when the Orion space capsule was launched into space unmanned as a test, “Artemis II” is also a test mission. The astronauts will not only collect data but also put Orion’s systems through their paces.

    “We will hear: ready for launch. And then we will fly into orbit for eight minutes on the Space Launch System rocket. During that time we will test all of Orion’s systems and observe how it maneuvers in space. And then – if everything goes well – we will go to the Moon,” said experienced astronaut Christina Hammock Koch. Like her two US colleagues, the engineer has already spent time on the International Space Station ISS.

    Preparations for “Artemis III”

    The actual lunar landing mission, “Artemis III”, is scheduled to begin at the end of 2025. According to current plans, the astronauts will then land with the space capsule at the Moon’s south pole – a place where there is ice, and therefore water, and thus an area that, according to NASA, could be the best site for a permanent lunar base.

    From there, the space agency wants to undertake further missions into space – for example, to Mars. At least NASA boss Bill Nelson is already dreaming of it: “Together we will usher in a new era of research: the Artemis generation. Together we will set off. To the moon, to Mars and even further.”

    The way back to the moon

    Findings of the “Artemis I” mission

    Three months ago, the “Orion” space capsule of NASA’s “Artemis I” moon mission landed in the Pacific Ocean. The test flight is considered an important step on humanity’s path back to the moon. NASA is now already planning the manned follow-up mission.

    After the mission is before the mission: The scientists at NASA, its partner agencies and industrial partners hardly had time to rest after the successful landing of “Artemis I”. Preparations for “Artemis II” are underway. This follow-up mission will be the first time that humans will be able to fly to the moon since the “Apollo” flights in the 1960s and 1970s.

    Good cooperation

    Thomas Zurbuchen, the former head of science at NASA, summed up in an interview with tagesschau.de: “‘Artemis I’ was an incredible success. It was a mission with a lot of risk. The rocket had never flown in this configuration before, and neither had the ‘Orion’ capsule with the European Service Module. It all worked out.”

    NASA broke new ground in several respects with the Artemis program: For the first time, the United States relied on another space agency, the European Space Agency (ESA), to build a very important and critical part of a manned spacecraft.

    In Europe – at Airbus in Bremen – a key element of the “Orion” spacecraft is being assembled: the European Service Module (ESM). It provides the propulsion for “Orion” and supplies the astronauts with electricity, air and water. The ESM module is the heart of the spacecraft, and the pressure on those responsible in Europe was accordingly great.

    European balance sheet

    The results are also very positive for them. At a press conference in Bremen in February, Airbus announced that the first evaluation of the test data had shown that the “Orion” spacecraft had consumed much less fuel and electrical energy than previously assumed. Marc Steckling, the head of Space Exploration at Airbus: “We used the first mission to put the module through its paces and found that it generated 15 percent more electricity because the solar cells are very good. And we found that the propulsion system is very efficient. We consumed almost two tons – around 20 percent – less fuel.”

    This will enable future missions with a longer duration or with more cargo, for example in the case of module transports to build the so-called “Lunar Gateway”, an intermediate station in lunar orbit. It is to be built in the 2020s and will be a kind of “bus stop” in space on the way to the lunar surface.

    Stress test for “Artemis I”

    Steckling continues: “We flew more maneuvers to simulate situations that do not normally occur. For example, we completely changed the angle of the solar cells in relation to the sun. ‘Artemis I’ was of course the obvious choice because there were no astronauts on board and the system could be put through its paces.”

    “We flew longer, faster, with greater radiation exposure and many other things. The goal was to punish the system to make sure that ‘Artemis II’ really works because there are people on it,” adds former NASA science chief Zurbuchen.

    Airbus delivered the module for the “Artemis II” mission to Florida in October 2021. It is now being tested and integrated at the Kennedy Space Center.

    Astronauts to the moon

    It is not yet clear which astronauts will be on board the “Artemis II” mission. However, it is assumed that no one from Europe will be among them, as is the case with “Artemis III”. This mission is scheduled to launch in 2025 at the earliest and will not only orbit the moon, but also land on the lunar surface. NASA wants to bring the first woman and the first “person of color” to the moon.

    ESA astronaut Alexander Gerst is considered a hot candidate for the first European moon landing. He was in Cape Canaveral when “Artemis I” was launched in November. He was fascinated by the force with which the so-called SLS moon rocket, the most powerful rocket ever built, took off. At the Airbus press conference, he talked about his impressions: “You are several kilometers away and the sound wave takes about 20 seconds to reach you. You see the rocket launch before you hear it. And the seismic wave is even faster than sound. You feel the earth shaking before the sound wave reaches you. It was an impressive thing to see the most powerful rocket that has ever flown take off.”

    NASA checks

    According to Zurbuchen, on the return flight of “Artemis I”, when the space capsule re-entered the Earth’s atmosphere, the heat shield on the underside of the capsule melted somewhat more than expected: “We have sensors on the inside that showed the heat distribution and where erosion occurred. Now we need to understand exactly what happened. That’s one of the things we’re working on at the moment.”

    Protection for astronauts from radiation

    Another is the evaluation of a radiation experiment inside the capsule, which is still ongoing. There have been no astronauts on board “Orion” yet; instead, two measuring dummies flew. In the international research project MARE (Matroshka AstroRad Radiation Experiment), led by the German Aerospace Center (DLR), thousands of detectors were installed in the two “phantoms” to measure radiation in space and its effects on the human body.

    A DLR team read some of the detectors in the USA in January after the dummies were removed from the space capsule. They are now being analyzed. The results are important for better protecting astronauts on future manned missions. Such findings from “Artemis I” pave the way for follow-on missions back to the Moon.

    Russian space probe

    “Luna-25” crashed on the moon

    Setback for Russian space travel: The space probe “Luna-25” crashed shortly before the planned landing on the moon. This was announced by the space agency Roscosmos. “Luna-25” was launched into space over a week ago.

    Russia’s first moon mission since the end of the Soviet Union has failed. The probe “Luna-25” hit the lunar surface after an “unplanned situation” and ceased to exist, the Russian space agency Roscosmos announced in Moscow. The reasons for this are being clarified, it said.

    “Luna-25” is part of the Russian lunar program, which envisages the construction of its own space station there by 2040. The probe was launched into space more than a week ago and entered orbit around the Moon last Wednesday. Since then, the 1,800 kilogram spacecraft has been looking for a suitable landing site, according to Russian sources.

    On Saturday, Roscosmos reported an “unusual situation” on board “Luna-25” that made planned maneuvers impossible. The lander was scheduled to land at the Moon’s south pole on Monday. This region is of particular interest to scientists because parts of it are permanently in shadow and may therefore harbor frozen water. On Thursday, Roscosmos presented a photo of the Moon’s surface taken by the probe.

    “Luna” was actually supposed to have been under way long ago. The first planned launch date of a lunar probe was in 2012. Last year, a launch was again planned for May, but was then delayed once more due to technical problems. Roscosmos originally worked with the European Space Agency (ESA) on the lunar program. However, after Russia’s invasion of Ukraine in February 2022, ESA ended its cooperation with Moscow.

    Roscosmos said the expedition was designed to demonstrate that Russia is capable of putting a payload on the moon and to ensure that it has guaranteed access to the lunar surface. The sanctions imposed after the invasion of Ukraine are making it difficult for Russia to access Western technology.

    Incident involving space probe “Luna-25”

    Russia’s space agency has reported an “unplanned situation” with its space probe. Specialists are now working to determine the cause. “Luna-25” is actually supposed to land on the moon in two days.

    According to information from Moscow, two days before the planned landing on the moon, there was an unexpected incident on the Russian space probe “Luna-25”. In preparation for the imminent landing on the lunar surface, the probe was scheduled to enter a new orbit of the Earth’s satellite on Saturday at 2:10 pm Moscow time (1:10 pm CEST), the Russian space agency Roscosmos announced.

    “During the operation, an unplanned situation arose on board the automatic station, which did not allow the maneuver to be carried out under the specified parameters,” it said. Specialists are now investigating the incident and determining the cause. No further details were given. The probe is scheduled to land on the Moon next Monday. It is still unclear whether this schedule will hold.

    Probe launched a week ago

    “Luna-25” is part of the Russian lunar program, which plans to build its own space station there by 2040. The probe was launched into space more than a week ago and entered orbit around the Moon last Wednesday. Since then, the 1,800-kilogram spacecraft has been searching for a suitable landing site, according to Russian sources. On Thursday, Roscosmos presented a photo of the lunar surface taken by the probe.

    “Luna” was actually supposed to have been under way long ago. The first planned launch date of a lunar probe was in 2012. Last year, a launch was again planned for May, but was then delayed once more due to technical problems. Roscosmos originally worked with the European Space Agency (ESA) on the lunar program. However, after Russia invaded Ukraine in February 2022, ESA ended its cooperation with Moscow.

    Last flight of Ariane 5

    Success story with a bumpy start

    The European carrier rocket Ariane 5 has launched for the last time. On board is the German satellite “Heinrich Hertz”. In its almost 30-year history, the rocket has flown many important missions. A look back.

    The 117th launch marked the end of the era of the European Ariane 5 launch vehicle. During the night, Ariane 5 took off from the Kourou spaceport in French Guiana for the last time, carrying two satellites into space.

    A good opportunity to look back at the history of Ariane 5 – a story that did not begin very successfully.

    Maiden flight of Ariane 5 rocket failed

    On June 4, 1996, the first Ariane 5 took off from Kourou on its maiden flight. But just 40 seconds after takeoff, the rocket exploded over the rainforest. The cause: a programming error that caused the data to falsely show a major change in the rocket’s course. The flight computer then initiated an extreme corrective maneuver in which two boosters broke off from the side of the rocket and the rocket initiated its self-destruct mechanism.

    The second launch was also only a partial success. The upper stage of the rocket did not reach full power and the satellites on board the Ariane 5 entered an orbit that was too low.

    Carrier for many scientific missions

    After that, however, the success rate rose sharply. From 1998 onwards, Ariane 5 launched a number of satellites into space, including important scientific missions. One of these is “Rosetta”. This mission was the first time that a probe landed on a comet. Since 2018, the “BepiColombo” probe has been on its way to the planet closest to the sun, Mercury. It is scheduled to arrive there in 2025 and collect information about the planet’s surface, magnetic field and core.

    At the end of 2021, an Ariane 5 sent the James Webb Space Telescope on its journey with such precision and fuel economy that the telescope’s mission life was probably more than doubled. And “Juice”, a probe that will search for signs of life on Jupiter’s icy moons, was also safely launched into space by an Ariane 5 in April 2023.

    A German satellite at its last launch

    With its last launch, Ariane 5 will put the communications satellites “Syracuse 4B” and “Heinrich Hertz” into Earth orbit. “Heinrich Hertz” is the first communications satellite in twenty years to be developed entirely in Germany. It is about the size of a minibus and is expected to spend 15 years in space.

    During this time, “Heinrich Hertz” will test new technologies for satellite-based communication in space. This is because there is a high level of radiation in orbit and temperatures fluctuate between extreme values. There are around 20 experiments on communication, antenna and satellite technology on board the satellite.

    In addition, “Heinrich Hertz” will serve as a relay station. It can forward signals from other satellites and thus maintain their contact time with ground stations for longer.

    The satellite is named after the German physicist Heinrich Hertz, who was the first to detect electromagnetic waves in Karlsruhe in 1886, thus helping modern communications technology to achieve a breakthrough. His name is best known for the unit “Hertz” named after him, which is used to measure frequencies.

    An intelligent satellite

    The special thing about the “Heinrich Hertz” satellite is that it can continue to develop in space – it can be reprogrammed, so to speak. Normally, the functions of satellites cannot be changed once they have been put into orbit. However, “Heinrich Hertz” can, for example, change its transmission and reception frequencies, redirect its antennas and thus adjust the region from which data can be received and sent.

    “With this great flexibility, it is also possible, for example, to support disaster scenarios,” says Jens Müller from the TU Ilmenau, who helped develop the satellite’s telecommunications component. “Take the flood disaster in the Ahr Valley: one of the first things lost there was the telecommunications infrastructure.” In such cases, flexibly deployable satellites such as “Heinrich Hertz” could help to quickly rebuild communication channels.

    Ariane 6 ready for use by the end of 2023

    The launch of “Heinrich Hertz” marks the end of the Ariane 5 era. Its successor, the Ariane 6, is expected to be ready for launch at the end of 2023. According to the German Aerospace Center (DLR), the new rocket will be more powerful, more flexible and more cost-effective than the previous version.

    In order to transport both lighter payloads and many small satellites into space, two different models of the Ariane 6 were developed: one with two and one with four solid-fuel boosters. This will enable the new launch vehicle to transport up to eleven tonnes of payload into geostationary orbit – twice as much as the Ariane 5. With the Ariane 6, the European space community wants to secure its independent access to space and continue to transport its own missions into Earth’s orbit.

    ESA astronaut Gerst

    “Each of us has a dream”

    In 2008, Alexander Gerst was one of 8,413 applicants for admission to the ESA astronaut corps. In an interview with tagesschau.de, he promotes this unusual profession and encourages interested parties to follow his example.

    About: ESA astronaut Alexander Gerst

    Alexander Gerst has been a member of the ESA astronaut corps since 2009. In 2014 and 2018 he flew to the International Space Station ISS on board a Russian Soyuz rocket. Gerst was born in Künzelsau in 1976 and is a geophysicist, volcanologist and astronaut.

    tagesschau.de: Mr. Gerst, the ESA is looking for new astronauts. Who is this job for?

    Alexander Gerst: I think you should apply if you have always dreamed of seeing the Earth from the outside, if you are curious and enjoy technology and adventure. In my view, the main thing when applying is not how good your chances of getting the job are. We all have a dream and we owe it to our 80-year-old selves to give that dream a fair chance. I did that twelve years ago and it was worth it. In any case, you will learn a lot about yourself in a selection process like this. It is an exciting adventure to be a part of.

    Next destination: the moon?

    tagesschau.de: When you applied to ESA in 2008, a flight to the International Space Station (ISS) was the most likely mission. The future generation of astronauts has many other options open to them.

    Gerst: The destinations we will soon be flying to will become more diverse. In the future, the Moon will be added, and later Mars. First, however, the new astronauts will go into Earth orbit, perhaps to the ISS, to gain experience there. Missions to the Moon are much more complex, which is why this experience is important. The first missions to the Moon will probably be carried out by the current experienced astronauts in the corps, and the new colleagues will gradually join them. That is a great prospect for European space travel.

    tagesschau.de: Does that mean that we will soon see you in space again?

    Gerst: One thing you always ask yourself as an astronaut is: where is the next flight going and when will it be? That is an uncertainty you have to live with. When I was selected, I didn’t know when my first mission would be. I was then extremely lucky that it came soon, and even a second one. I am still active in the corps, and the obvious prospect is of course a moon mission. First of all, however, we have to build the Lunar Gateway together with our international partners. This is a space platform that orbits the moon as a base for missions to the lunar surface. As a representative of the ESA astronaut corps, I am involved in the planning and construction. The project is in full swing.

    I also lead a team at ESA that is developing future strategies for the exploration of Earth’s orbit. Part of being an astronaut is that, even if you are not specifically nominated for a mission, you keep yourself fit in all areas. I do sport, of course, and have training sessions in between, for example flight training and underwater training with the spacesuit. So I don’t get bored on Earth either.

    “This is a huge privilege”

    tagesschau.de: You were on the ISS for six months each in 2014 and 2018. What was different about the missions?

    Gerst: When you fly into space for the first time, it’s a special flight because you are crossing a boundary that you have never crossed before. You don’t know how you will function beyond this boundary, whether you are made for space or not.

    A second mission is completely different: you approach it more relaxed because you have a lot more experience and you know what you can do. That’s why you often get a more complex mission the second time around, because you have more mental capacity free and can therefore get a little further out of your comfort zone. That’s why I was nominated as commander of the ISS. It’s a complex job; you have to look after your crew and training team long before the mission, and you bear a lot of responsibility for the success of the mission. And on my second mission I launched into space with two crewmates, a woman and a man, neither of whom had ever been in space before.

    This gradual build-up of mission complexity is also the reason why experienced astronauts are selected for flights to the Moon: they have already shown that they can work well up there and that they are ready for a more complex mission with greater uncertainties, which of course also entails a somewhat higher risk.

    tagesschau.de: What excites you most about the job of an astronaut?

    Gerst: I was fascinated by many different aspects: There is the international collaboration with very inspiring individuals, the mission teams, scientists from different countries who come together in a large project. But what is even more amazing for me is the perspective you gain as an astronaut: the view from the window of the spaceship onto our planet, combined with the understanding of where you are at the moment. This “worldview” is then, in the truest sense of the word, an outside perspective on our world. That is a huge privilege, I thought it was great.

    ISS anniversary

    20 years of human outposts in space

    The ISS has been permanently inhabited by astronauts since November 2, 2000. Researchers praise the multinational project. But the ISS will only serve as an outpost for humanity for a few more years.

    On cloudless nights, the ISS can be seen flying in the sky with the naked eye: a bright dot that circles the planet Earth about every 90 minutes. With its solar panels, the flying laboratory is about the size of a football field.
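
    The roughly 90-minute figure follows directly from the station’s altitude. A back-of-the-envelope Kepler calculation (our own numbers: altitude about 400 km, Earth radius 6,371 km, gravitational parameter 3.986 × 10^14 m³/s²) gives:

    ```latex
    % circular-orbit period for a = 6371 km + 400 km
    T = 2\pi\sqrt{\frac{a^{3}}{\mu}}
      = 2\pi\sqrt{\frac{(6.771\times10^{6}\,\mathrm{m})^{3}}{3.986\times10^{14}\,\mathrm{m^{3}/s^{2}}}}
      \approx 5.5\times10^{3}\,\mathrm{s} \approx 92\,\mathrm{min}
    ```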

    Twenty years ago, the first long-term crew docked with the ISS, two Russian cosmonauts and a US astronaut. Since then, many multinational teams have followed; for ESA, the ISS has become “a true outpost of humanity”. It is jointly operated and financed by the space agencies of the USA, Russia, Canada, Japan and the European Space Agency (ESA).

    German astronauts on the ISS

    Among the ISS astronauts from 21 nations, there have been three Germans so far: Thomas Reiter, Hans Schlegel and Alexander Gerst. Both Reiter and Gerst were part of long-term crews, meaning they spent several months on the ISS. Reiter was also the first European long-duration flyer, in 2006. He remembers:

    “Living and working together on the ISS – at the forefront of science, representing the scientific teams on Earth – is fantastic. But then being able to go out and work on the ISS is incomparable. The spacewalks are the highlight.”

    What the MIR did not have to offer

    For Reiter, his stay on the ISS was already his second visit to a space station. In 1995, the ESA astronaut had flown to MIR, the space station built by the Soviet Union, which orbited the Earth from 1986 to 2001. He compares:

    The biggest difference to MIR is simply the space you have up there. The ISS is much more spacious, has more modern technology and allows you to conduct research more efficiently. At the MIR station, everything was very cramped. We had to assemble the equipment we needed to carry out the experiments and then put it away again after work.

    Still in operation for ten years

    If it runs smoothly from a technical perspective, the ISS could remain in operation for at least another ten years. During this time, it will be used increasingly commercially, giving private industry the opportunity to conduct experiments in zero gravity in its own modules.

    With many industrial partners, the research capacities on the ISS can be optimally utilized and even expanded, explains ESA astronaut Matthias Maurer, who is scheduled to be the next German to fly to the ISS in 2021. But, says Maurer: “This is always just a supplement to the current program. I don’t think we’ll see the classic providers, i.e. the space agencies, disappear.”

    The successor has already been chosen

    The ISS will be taken out of service by 2030 at the latest, and a new space station will be launched by then: the Lunar Orbital Platform-Gateway. ESA astronaut Alexander Gerst explains: “This is a small space station that humanity is now planning and building. It will orbit the moon.”

    The station, which will also be operated by the ISS partners, will not be permanently manned. Among other things, it will serve as a stopover for new manned lunar missions, a kind of bus stop in space.

    Gerst said the new project has gotten off the ground and believes that understanding and acceptance of space travel has grown in recent years:

    “I think a lot of people now really understand what we’re doing out there: exploring our cosmic environment and bringing scientific knowledge back to Earth to make life better down here, for example, researching diseases or new materials.”

    China also wants to put a space station into orbit in the next few years. Cooperation with China on the ISS failed in the 1990s due to a veto by the USA.

    World politics leaves its mark

    While the international astronaut teams worked well together on the ISS, things were sometimes very turbulent down on Earth. During the Ukraine crisis in 2014, for example, the USA partially suspended its space cooperation with Russia. But NASA stressed that there should be no compromises when it comes to the operation of the ISS.

    This was certainly also due to the fact that the USA itself was not able to send astronauts into space at that time. After the end of the Space Shuttle program in 2011, the Americans were dependent on the Russian Soyuz capsules.

    This prompted the then Russian Deputy Prime Minister Dmitri Rogozin to comment that the USA would have to bring its astronauts “to the ISS on a trampoline” without Russia.

    Half a year ago came a belated reply from the USA to Moscow: after the successful flight of the private US company SpaceX to the ISS, SpaceX boss Elon Musk pointedly said: “The trampoline works.”

    Anti-Asteroid Program

    Forward defense in space

    Hundreds of asteroids are on a collision course with the Earth. It’s high time for a defense program. ESA and NASA are now going on the hunt together. The contracts have now been signed.

    The probability of winning the lottery on a Saturday is 1 in 140 million. “I don’t play the lottery,” says Rolf Densing, head of the ESA control center in Darmstadt. “But if these were chances of winning, you would have to play.”

    On his monitor he sees a list of asteroids and the probability that the Earth will be hit by each of them. For example, there is the asteroid “2010RF12”. Probability of being hit is 1 in 14 in 2095. Or “2018JD”: Probability of being hit is 1 in 675 in 2067. Or “2008JL3”: Probability of being hit is 1 in 6993 in 2027. The list goes on and on.

    ESA scientists are currently observing more than 1,000 asteroids that are on a potential collision course with Earth. Most of the impacts are only likely to occur in the coming decades. But one thing is clear: something is coming for Earth.

    Injuries and damage in Chelyabinsk

    People in Chelyabinsk, Russia, have already experienced this. On a winter’s day seven years ago, an approaching asteroid broke apart over the city. The shock wave shattered around 200,000 square meters of glass. More than 1,000 people were injured. This event has shown once again that it is not a question of “if”. Rather, the question is when the next impact will occur, says ESA specialist Rolf Densing.

    Course: Asteroid Dimorphos. Arrival: 2022

    Together with NASA, ESA is now launching an asteroid defense program. The corresponding contracts for the European part of the mission were signed today. An American probe will fly to the asteroid “Dimorphos” next summer. It is scheduled to hit the asteroid in autumn 2022. The plan is to knock the asteroid, which has a diameter of around 160 meters, off course. A European probe will then evaluate the experiment.
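
    What such a kinetic impactor can achieve follows from conservation of momentum. A back-of-the-envelope sketch in Python – the impactor mass, impact speed, momentum factor and asteroid density below are illustrative assumptions, not mission data; only the 160-meter diameter comes from the text:

    ```python
    import math

    # Kinetic impactor: momentum conservation gives the velocity change
    # delta_v = beta * m * v / M, where beta models extra push from ejecta.
    probe_mass_kg = 570.0      # assumed impactor mass
    impact_speed_ms = 6_000.0  # assumed impact speed
    beta = 1.0                 # momentum-enhancement factor (ejecta ignored)

    diameter_m = 160.0         # from the text
    density_kg_m3 = 2_000.0    # assumed bulk density of the asteroid
    volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3
    asteroid_mass_kg = density_kg_m3 * volume_m3

    delta_v_ms = beta * probe_mass_kg * impact_speed_ms / asteroid_mass_kg
    print(f"asteroid mass ~{asteroid_mass_kg:.1e} kg")
    print(f"velocity change ~{delta_v_ms * 1000:.2f} mm/s")  # ~0.8 mm/s here
    ```

    Even a nudge of under a millimeter per second, applied years before a predicted encounter, adds up to a shift of tens of thousands of kilometers along the orbit – which is why the follow-up probe must measure how much momentum the impact actually transferred.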

    Fine work in the vicinity of a small celestial body – this is what Europeans have been experts at since the “Rosetta” mission. For ten years, ESA navigated the “Rosetta” probe seven billion kilometers through space in order to then land on a comet and study it. A pioneering mission that ended four years ago.

    Division of labor in space

    The former head of ESA mission operations, Paolo Ferri, said: “The Americans are our masters when it comes to landing on large planets. But we are the only ones who have experience of flying to small celestial bodies such as asteroids and carrying out investigations there.” Division of labor in space, then. The Americans direct a probe into an asteroid; the Europeans then head towards it to examine the crater and determine whether the impact was enough to throw the asteroid off course.

    The probe is being built in Bremen

    Rehearsals are being carried out on a celestial body that cannot pose a threat to the Earth. Dimorphos flies about eleven million kilometers past the Earth. Things have to move quickly now. After the Americans bombard the asteroid, the Europeans want to launch the control mission in 2024. The probe, which bears the name of the Greek goddess Hera, is being built in Bremen. It will be no bigger than a desk and will have small mini-satellites on board that can examine the asteroid up close. An asteroid on a collision course: Americans and Europeans want to rehearse together for an emergency. “We are currently in the middle of the Corona crisis. We don’t need a second crisis of this magnitude,” says Densing.

  • Europe is investing a record amount in space travel. The ESA has set itself some ambitious goals

    Europe is investing a record amount in space travel. The ESA has set itself some ambitious goals: preventing asteroid impacts, embarking on a moon mission and clearing away space debris.

    It was a good 20 years ago that US actor Bruce Willis set off into space: In the Hollywood blockbuster “Armageddon”, an asteroid is racing towards the earth. The only way to stop it is to fly to it and blow it up.

    What was science fiction back then could soon be reality. The European Space Agency ESA approved the so-called HERA mission, an asteroid defense mission led by Germany, at its Council of Ministers conference at the end of November.

    Protection against asteroid impact

    The ESA, together with the US space agency NASA, will investigate how humanity can protect itself from an asteroid impact on Earth: “We know that an asteroid will hit the Earth again at some point, and we have to be prepared for that moment. So far, no one has seriously tried to deflect an asteroid to find out exactly what happens then,” says Thomas Jarzombek, the German government’s coordinator for aerospace, explaining the ambitious project.

    What is the ESA?

    European Space Agency

    Based in Paris, France, founded in 1975

    Coordinates the space activities of the 22 member states

    Record budget for the next three or five years: 14.4 billion euros, with Germany being the largest contributor with 3.3 billion euros

    ESA’s priorities for the next three years: “Earth observation”, “space transport” and “human and robotic exploration”

    In June 2021, a NASA probe will fly to the double asteroid Didymos/Didymoon and impact there in October 2022. An ESA probe will then set off and measure the asteroid and the resulting crater. The mission is intended to find out whether asteroids can be diverted from their trajectory.

    Cleaning up space junk

    In addition to asteroid defense, the ESA is also tackling the issue of space debris. Former astronaut and ESA coordinator Thomas Reiter says: “For the first time worldwide, we will begin to clear away the garbage, because there is more and more space debris flying around up there. It’s time to clean up, to put it very carefully.”

    Robotic moon mission

    Another flagship project is the robotic moon mission. An Ariane rocket with a landing module will fly to the moon. “This is a technically demanding project because so far there are only three nations that have really managed to land on the moon in a controlled manner – the USA, Russia and China,” says Jarzombek. “This could also contribute to the Americans’ moon mission. Everyone is interested in finding out where the interesting places on the moon are for later research.”

    It is still unclear when the robotic ESA mission will set off for the moon. The Americans want to start manned lunar flights again with the Artemis program from 2024. “The goal is not, as with the Apollo missions, to fly to the moon, bring back rocks and that’s it. This time it’s about making it sustainable,” explains Reiter. That’s why cargo transporters are needed to get scientific equipment to the lunar surface and conduct research there on a permanent basis.

    To the moon and fill up

    “From the moon, with one sixth of the gravity of the earth, it is easier to travel into the depths of our solar system as well as to supply fuel to satellites in geostationary orbit,” says Reiter.

    ESA astronaut Matthias Maurer, who will be the fourth German to fly to the ISS in the coming years, adds: “In the future, you could fly to the moon with a smaller rocket, fill up at the gas station there and then take off, for example towards Mars.”
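
    The refueling argument rests on the Moon’s much weaker gravity. A quick illustration with standard textbook constants (the numbers are not from the article):

    ```python
    import math

    # Escape velocity: v_esc = sqrt(2 * mu / r).
    BODIES = {
        "Earth": (3.986004418e14, 6_371_000.0),  # (mu in m^3/s^2, radius in m)
        "Moon": (4.9048695e12, 1_737_400.0),
    }

    for name, (mu, r) in BODIES.items():
        v_esc = math.sqrt(2 * mu / r)
        print(f"{name}: escape velocity ~{v_esc / 1000:.1f} km/s")
    # Earth ~11.2 km/s vs. Moon ~2.4 km/s: a rocket refueled on the Moon
    # needs far less propellant to head onward, for example towards Mars.
    ```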

    Earth observation by satellite

    The largest share of the new ESA budget, around 2.5 billion euros, will go to earth observation using satellites. “Especially in times of climate change, it is important to better understand what is happening in the atmosphere and at the polar caps,” explains Jarzombek.

    Much of the data from ESA satellites on weather and climate is already available free of charge on the Internet. It is used by start-up companies that are developing satellites with special sensors, says Jarzombek.

    “For example, there is a company that supports forest fire fighting. They are developing their own satellites with very special infrared optics. In addition to their own satellite data, however, they need further information, which is why they take additional photo data from ESA.”

    New satellites and services for climate monitoring, agriculture and disaster prevention are expected to begin operations in the next few years.

    Mass ejections from the Sun

    In addition to the weather on Earth, ESA will also be increasingly studying the weather in space. One mission is looking into the so-called coronal mass ejections of the sun – charged particles that are dangerous for the Earth: “The sun occasionally hurls electrically charged particles towards the Earth, which can have enormous consequences for our modern industrial societies,” explains Reiter. The result could be disruptions in radio traffic and even blackouts in the power supply.

    That is why a mission towards the sun is to be launched in a few years: “We want to fly to a position from which we can look at the sun and the earth from the side. From this perspective, we can see immediately when a coronal mass ejection moves towards the earth,” says Reiter.

    ESA mission

    A garbage disposal for space

    The European Space Agency (ESA) is launching the world’s first cleanup mission in space. “ClearSpace-1” is designed to remove debris. It will start in 2025.

    Space may be empty of air, but it is home to a lot of garbage. Tens of thousands of pieces of debris from old satellites or rockets are flying around in space. These pieces can fall to Earth or cause collisions. A cleanup mission is intended to reduce the risk of accidents. It is set to begin in 2025, as the ESA announced in Darmstadt. This was decided by the European space ministers at their most recent conference in Seville, Spain. In Germany, Federal Minister for Economic Affairs Peter Altmaier is responsible for space travel.

    In collaboration with a commercial consortium, a Swiss startup has now been commissioned to realize “ClearSpace-1”. ESA experts will be monitoring the project from Darmstadt, and it is scheduled to start in March next year.

    Business with a future

    “This example will set a precedent,” said ESA Director General Jan Wörner. He sees a future market in the removal of space debris. Space is infrastructure that is used every day for a variety of applications, said Wörner. Protecting this infrastructure is of great value.

    Lack of space in endless expanses

    Where space is already tight today, it could soon become even tighter. Experts expect thousands more satellites to be launched in the coming years. This increases the likelihood of collisions. According to the ESA, the aim of the “ClearSpace-1” mission is for a space probe to capture an upper stage of an old ESA rocket with four grippers and steer it back into the Earth’s atmosphere to burn up. Later, other space probes will capture several large pieces of debris.

    Donor Germany

    The Federal Republic is contributing twelve million euros to “ClearSpace-1”. In total, the ESA member states have decided to invest 14.4 billion euros in European space travel in the coming years. Germany is increasing its total contribution from 1.9 to 3.3 billion euros compared to the 2016 ESA conference, making it the largest contributor.

    ESA Mars mission launched

    And the red planet beckons forever

    ExoMars is running according to plan: A “Proton” rocket with a satellite and landing unit was launched from Baikonur towards Mars. In 217 days, the satellite will be at the edge of the Martian atmosphere. Sebastian Kisters explains the special features of the ESA mission.

    What does ESA want on Mars?

    More than 40 space probes have already been sent to Mars. But one question is still unanswered: Is there life on our neighboring planet? The only thing that is certain so far is that there is methane in the Martian atmosphere. On Earth, the gas is mainly released by living things. However, it could also be of volcanic origin. The “ExoMars” mission is now intended to provide clarity. “For the first time, there are instruments that are capable of doing this,” says Paolo Ferri, the head of ESA mission operations.

    On Monday at 10:31 am Central European Time, the European Space Agency (ESA) launched a satellite with a landing unit into space. Together they weigh over four tons. This means that one record is already certain: it is the heaviest Mars mission of all time. With fuel and all ignition stages, the rocket has a launch weight of 697 tons.

    It will take the satellite 217 days to reach the edge of the Martian atmosphere. Various instruments will then examine its composition. The most exciting part of the mission will begin on October 19 at 4:45 pm. The landing module will then race towards the red planet at a speed of 5.8 kilometers per second. A parachute and three jets will slow it down shortly before impact. The landing will be at a maximum speed of five meters per second, or about 18 km/h. If the maneuver is successful, it will provide crucial information for a second part of the mission.

    In 2018, ESA plans to launch a rover with a drill to Mars. So far, robots have only scratched the surface of the planet. In three years, however, samples from a depth of two meters will be available. If there was ever life on Mars – or still is – it would only be found at this depth. On the surface, intense UV radiation is likely to have destroyed all life.

    Who controls the mission?

    What Houston is for NASA, Darmstadt is for ESA. The rapid maneuver is controlled from the European Space Operations Center in southern Hesse. However, control is relative in this mission. Each control command takes over nine minutes to reach the probe from Darmstadt, over 75 million kilometers away. For comparison: the moon is 384,000 kilometers from Earth.
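
    The delay is simply distance divided by the speed of light, and it changes constantly because the Earth–Mars distance itself varies between roughly 56 and 400 million kilometers. A minimal sketch; note that a one-way delay of over nine minutes corresponds to a larger separation than 75 million kilometers:

    ```python
    # One-way light-time delay for a radio command.
    C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

    def one_way_delay_minutes(distance_km: float) -> float:
        return distance_km / C_KM_PER_S / 60

    for d_km in (75e6, 165e6, 384_000):
        print(f"{d_km:>13,.0f} km -> {one_way_delay_minutes(d_km):.2f} min")
    # ~4.2 min at 75 million km, ~9.2 min at 165 million km,
    # and only about 0.02 min (1.3 s) for the moon distance in the text.
    ```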

    Why is the launch taking place in Baikonur?

    NASA actually wanted to take part in the mission. But in 2012 it became too expensive for the Americans. ESA was able to gain the Russian space agency Roskosmos as a new cooperation partner. So now the launch will take place from the legendary spaceport in Baikonur. On October 4, 1957, the Russians launched the first satellite into space from Baikonur: “Sputnik”, Russian for “companion” or “traveling companion”. The first victory in the race into space went to the Soviets; the West suffered the “Sputnik shock”. The first person was also launched into space from Baikonur: Yuri Gagarin, on April 12, 1961.

    Baikonur had two advantages for the Soviets: the dry desert climate and the location. Rockets that were launched in a northeasterly direction initially flew several thousand kilometers over Soviet territory. The city was long one of the most secret places in the world. To keep spies away, Baikonur was long marked 380 kilometers to the west on Soviet maps.

    Today the spaceport is located on Kazakh territory. Russia transfers over 100 million dollars a year to be allowed to continue using the site. The place is no longer secret: astronauts and satellites from all over the world begin their journey into space here.

    How does the launch go?

    At 10:31 am the earth shakes in the Kazakh desert. The 697-ton “Proton” rocket takes off. It goes straight up for the first kilometer. The rocket needs 22 seconds to do this. One minute after takeoff it has already reached a height of ten kilometers and is traveling at 1,500 km/h towards Mars. Another minute later the “Proton” is already 45 kilometers from Earth.

    The “Proton” was long considered a reliable workhorse among launch vehicles. By the turn of the millennium, there had been more than 250 launches. The success rate was 96 percent. In recent years, however, there have been repeated problems. In 2013, a satellite mission was history seconds after launch; the rocket exploded. In 2014 and 2015, satellites aboard “Proton” rockets also failed to reach their targets. They crashed from a great height.

    Experienced scientists are also relying on lucky charms in the control center in Darmstadt in the coming hours. They believe the decision to fly with the Russians was the right one in any case. Paolo Ferri, ESA’s head of mission operations, says: “I believe science can help solve political problems. We are showing that countries can and must cooperate. This is a sign that politicians must follow.”

    When will the first person fly to Mars?

    The technology is there. “But the costs and risks make it impossible to send people to Mars at the moment,” says Paolo Ferri, head of ESA mission operations. The biggest problem is radiation in space. Astronauts cannot yet be adequately protected from it on a long journey to Mars. They would probably die quickly of cancer.

    ESA contract for Swiss startup

    The junk in orbit should be removed

    Thousands of remnants of decommissioned rockets and satellites are whizzing around the Earth. The Swiss startup ClearSpace is now to become the world’s first company to dispose of debris in space on behalf of the European Space Agency.

    After almost sixty years of space travel with several thousand launches, a veritable cloud of junk is now orbiting the Earth. Old rockets, parts of satellites and abandoned spacecraft are buzzing around in orbit. In the early days of space travel, no one thought about this, says Luc Piguet.

    The 48-year-old electronics engineer is one of the two bosses of the Swiss company ClearSpace, which was commissioned by the European Space Agency (ESA) to carry out the world’s first space cleanup mission. The garbage is a danger, says Luc Piguet. “When two objects that no longer function collide, it generates a very large amount of junk in space, and that is a risk for all missions we have in the future.”

    Scrap burns up in the atmosphere

    Cleaning up is urgently needed. There are now around 42,000 objects flying through space. Like projectiles, they can be dangerous for satellites or space stations. The company ClearSpace was founded as a spin-off of the EPFL technical university in Lausanne. The name ClearSpace means “free space”.

    ClearSpace wants to help make the orbit free of junk again, says the 48-year-old electronics engineer. “What we’re actually doing is taking these objects that no longer work out of space. We pick them up and bring them back into the Earth’s atmosphere, where they burn up,” explains Piguet.

    Braking maneuver is complicated

    For the cleanup operation, a satellite called “ClearSpace-1” will be built and sent into space to collect the garbage. The device will find objects and slow them down. “We will reduce the speed so that it comes back down into the Earth’s atmosphere,” says Piguet.

    In principle, it is known where the debris is in orbit. This is made possible by a database that is fed with coordinates from radar monitoring. The information is accurate to within about two to three kilometers. The satellite must determine the exact position of the objects itself and then seize them for the braking process.

    “This is one of the biggest challenges of the mission,” explains Piguet. Since there is no gravitational pull, there is no frictional resistance between the objects. “A very small touch can trigger large movements. And when the object is moving, it becomes complicated to even touch it.”

    Hope for a new market

    The subsequent burning up of the scrap pieces on entry into the Earth’s atmosphere, however, is relatively problem-free, because new satellites in particular are specially designed to burn up completely in a kind of shooting-star shower.

    The fact that ESA has awarded the 100 million euro cleanup contract to the Swiss company ClearSpace is a real accolade for the company, which has around 30 employees. “This is fantastic for us,” says Piguet. However, the Swiss cannot handle the task alone, so they are working with major partners – such as Airbus.

    Everyone involved hopes that in the long term a market will emerge for scrap disposal in orbit. Whether the system actually works will be seen from 2025 onwards. That is when the “ClearSpace-1” satellite is due to be sent into space.

    Space law

    Vacuum, but not lawless

    There’s a lot going on in space: celestial bodies, hundreds of satellites and, last but not least, a lot of junk. That’s why rules apply even in the endless expanse. An ESA conference is now discussing who is allowed to do what in space.

    What international agreements are there?

    Space does not belong to the sovereign territory of any state. Therefore, five United Nations agreements form the basis of space law. The most important is the Outer Space Treaty of 1967, which more than 100 states have ratified. Among other things, it regulates that every nation has free access to space and that no country may make territorial claims on other celestial bodies.

    The Moon Treaty of 1979 declares the moon and other celestial bodies to be the common heritage of mankind. However, it has so far been ratified by less than 20 states, partly because of possible economic interests in the extraction of raw materials in space.

    What rules apply to raw materials in space?

    Many celestial bodies are brimming with useful resources, such as rare metals. The USA and Luxembourg have already passed laws that potentially declare raw materials mined in space to be their property.

    According to Stephan Hobe, head of the Institute for Air Law, Space Law and Cyber Law at the University of Cologne, this is “simply and plainly illegal”. One can only pass laws on something that one has control over. And according to the Outer Space Treaty, celestial bodies and their raw materials are not included.

    What happens to space junk?

    There is currently no agreement that obliges spacefaring nations to avoid waste or to dispose of the waste that has already been generated. In recent years, awareness of the problem of waste in space has grown, but legal regulation is still a major challenge, says space lawyer Hobe. It would have to retrospectively define responsibility for actions that were previously permitted. “At the moment, no one is responsible for the waste in orbit.”

    According to the European Space Agency (ESA), the space institutions have agreed – without any legal obligation – to keep particularly important orbits free of debris. At an ESA conference in Darmstadt, lawyers, engineers and business representatives are now discussing further regulations on space debris and safety.

    Is there a space traffic regulation?

    There are currently no right-of-way or other traffic rules for spacecraft, explains Holger Krag, head of the ESA Space Debris Office. “A healthy pragmatism has developed in operations.” Around 95 percent of potentially dangerous encounters are with inactive objects such as debris or deactivated satellites, so the question of right-of-way does not arise. In the remaining cases, the operators of the respective satellites coordinate with each other, reports Krag.

    Who is liable for damage caused by space travel?

    The UN agreements concerning the legal status of space oblige states to be liable for damage caused by their space activities. A distinction is made between two cases: In the case of damage in space, for example the collision of two satellites, a state is only liable if it can be proven that it was at fault for the accident. If there is damage on earth – for example if a satellite crashes – the state that operates the satellite is always liable.

    Can space be used for military purposes?

    Space is largely demilitarized. This means that no weapons may be stationed in Earth’s orbit or on other celestial bodies. This is stipulated by UN agreements. However, these regulations are now considered to be outdated.

    Because the US government is pushing ahead with its plans to create its own armed forces in space, concerns are growing about an arms race in space. In 2014, Russia and China therefore presented a draft treaty to the UN Disarmament Conference in Geneva that would ban certain weapons. But because the development of weapons is progressing so rapidly, many believe that this is not enough – also because harmless satellites can be turned into weapons if used with malicious intent.

    Who is allowed to conduct research in space?

    For peaceful purposes, every nation may build stations in orbit and on other celestial bodies. The principle of mutual openness applies: no state may completely seal off its research station from other states.

    How is space law enforced?

    There is no separate space court. States could appeal to the International Court of Justice in The Hague in the event of violations of the UN space treaties. Companies or private individuals could, under certain circumstances, bring claims against third parties in national civil courts – provided that there is a legal regulation governing the dispute.

    US space company commissioned

    NASA is developing suits for the moon

    The private company Axiom Space is producing new space suits for $228.5 million. NASA commissioned the company because it wants to send people back to the moon with the Artemis mission.

    The private US space company Axiom Space is to develop a space suit system that NASA plans to use for its first moon landing in over 50 years. As part of the “Artemis” program, the “Artemis III” mission is to fly people to the moon and land on the Earth’s satellite in 2025 at the earliest.

    Axiom Space, based in Houston, Texas, will be responsible for the design, development, certification and ultimately production of the space suits and associated equipment, the US space agency said.

    The contract also includes testing the suits in a “space-like environment.” However, acceptance and approval ultimately rests with NASA itself.

    NASA struggles with technical problems

    Axiom made headlines in April with the first private charter flight to the International Space Station (ISS). Three private individuals flew into space in a “SpaceX” space capsule with an experienced pilot – ticket prices were reportedly 50 million US dollars.

    Meanwhile, NASA’s “Artemis” space program is struggling with problems. The launch of the “Artemis I” mission has already been aborted twice due to technical problems with the newly developed SLS rocket.

    Radiation experiment

    Dolls in Space

    Two dolls flew into space and back in a spaceship as part of a radiation experiment. Thomas Berger from the German Aerospace Center explains why the Artemis I mission is important for the future of manned space travel.

    tagesschau.de: In December, the two manikins landed safely back on Earth in the “Orion” space capsule. How happy were you?

    Thomas Berger: Very happy. The two dolls, which we scientists have called “Helga” and “Zohar”, traveled more than two million kilometers in space over around 25 days. Orion is what is known in English as a “human-rated” spaceship – a spaceship built for humans – and it has traveled farther from the Earth than any such spaceship before. After landing in the Pacific, Orion was brought back to NASA, to the Kennedy Space Center in Florida. Then “Helga” and “Zohar” were removed from the spaceship by our NASA colleagues and arrived safely at the DLR in Cologne a good two weeks ago.

    Dr. Thomas Berger is head of the “Biophysics” working group at the German Aerospace Center in Cologne. His research focuses on radiation protection and the development of radiation detectors.

    tagesschau.de: Thousands of detectors were built into the two measuring dummies. They measured how much radiation affected the dummies in space. How did that work?

    Berger: The dolls are made up of 38 plastic discs that have different densities and thus replicate the different organs of the human body. We measure the radiation with active and passive detectors: The active measuring devices are battery-operated. They save the measured radiation dose at a specific point in time – every five minutes in the case of our detectors.

    This means that after evaluating the measurement data, we can say how high the radiation dose was in free space at a specific point in time, for example when Orion flew past the moon. The active detectors, therefore, provide a time-resolved distribution of the radiation over the entire mission. That is the advantage of these measuring devices.

    tagesschau.de: At which points on the dolls did the detectors measure the radiation?

    Berger: Several active measuring devices were installed in the lungs, stomach, uterus and spinal cord. These are the most radiation-sensitive organs in the human body and therefore, after the evaluation, we can say, for example, how high the radiation dose in the lungs was on the fifth day of the mission.

    6000 detectors in each doll

    tagesschau.de: And what was the purpose of the passive detectors?

    Berger: We installed around 6,000 passive detectors – these are small crystals – in each phantom, so a total of 12,000. 80 percent came from the DLR, 20 percent from NASA. The entire spacecraft was equipped with detectors like a Christmas tree. The crystals are called thermoluminescence detectors. In principle, they store the ionizing radiation that hit them in space in their crystal lattices. When we heat them for analysis, they emit the stored energy in the form of light, and this light can be measured. The light emitted by the crystals is proportional to the radiation dose that they absorbed over the course of the experiment. This means that we can use these crystals to map the total dose over the mission and ultimately obtain a three-dimensional radiation distribution.
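
    The readout principle Berger describes – stored energy released as light, proportional to the absorbed dose – can be summarized in a few lines. A hedged sketch; the calibration factor and the count values are invented for illustration and are not DLR or NASA data:

    ```python
    # Thermoluminescence readout: the light emitted on heating is proportional
    # to the absorbed dose, so dose = (signal - background) / calibration.
    CAL_COUNTS_PER_MGY = 1.0e4  # hypothetical calibration: counts per milligray

    def dose_mgy(glow_counts: float, background_counts: float = 0.0) -> float:
        """Convert integrated glow-curve light counts to absorbed dose (mGy)."""
        return (glow_counts - background_counts) / CAL_COUNTS_PER_MGY

    # One crystal per position yields a coarse 3D map of the dose distribution.
    readings = {"lung": 2.4e5, "stomach": 2.1e5, "spinal cord": 1.9e5}
    for position, counts in readings.items():
        print(f"{position}: {dose_mgy(counts, background_counts=5e3):.1f} mGy")
    ```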

    tagesschau.de: What are these crystals made of?

    Berger: The crystals are made of lithium fluoride. These detectors have been used in radiation protection for ages. For example, if you work in a nuclear power plant or in radiology in a hospital, you also wear dosimeters that measure your radiation dose. These crystals are also built into them.

    Test dolls have women’s bodies

    tagesschau.de: The test dummies you developed are women’s bodies. Why?

    Berger: On the one hand, there are now more female astronauts, which is absolutely to be welcomed. Unfortunately, however, the general radiation-induced cancer risk is higher for women than for men. The body’s overall cancer risk is made up of organ cancer risks, and for women, breast cancer is an additional risk. And that is why the overall risk is higher for women than for men. There are not yet so many basic measurements for the female body, and that is why we flew these two female bodies.

    tagesschau.de: One of the two dolls wore a radiation protection vest during the flight. What insights are you hoping to gain?

    Berger: The doll “Zohar”, which is financed by the Israeli Space Agency (ISA), wore a 26-kilogram vest, the AstroRad vest, which was developed by the Israeli company StemRad and is made of high-density polyethylene. The basic idea behind this vest is that it should give astronauts, when they fly in space – as part of the Orion mission or on future exploration missions to Mars – additional radiation protection, especially in the event of a solar flare. Last week, for example, the northern lights could be seen in northern Germany.

    This is a clear sign that charged particles were emitted by the sun, which then created the northern lights by interacting with the molecules in the atmosphere. Then you see the beautiful lights. But at the same time, these particles emitted by the sun pose a high risk: astronauts could get radiation sickness. This vest was developed to provide additional protection against this dose of radiation. Since only one of the two dolls wore this protective vest, we can see what the vest really does.

    Analysis of results is ongoing

    tagesschau.de: When do you expect to see the results of your evaluations? After all, people are supposed to be on board the Orion spacecraft for Artemis II in 2024.

    Berger: We read the active detectors in January at the Kennedy Space Center immediately after the handover, so we saved the data for posterity and science, so to speak. We are currently working on the analysis. The passive detectors, which were mainly provided by the DLR, will certainly keep us busy for the next six months or a year.

    Former astronaut has an accident

    “Earthrise” photographer Anders dies

    “Earthrise” is the name of one of the most famous photos of the Earth. It was taken by astronaut William Anders on Christmas Eve 1968. The Apollo 8 participant has now died in an accident.

    Former US astronaut William Anders, who took one of the most iconic photos in space history, has died. He died when a small plane he was piloting crashed into the sea northwest of the US coastal metropolis of Seattle, US media reported, citing his son Greg. Anders was 90 years old. He was flying alone in the plane, it was said. The crash is being investigated by the authorities.

    A special photo on Christmas Eve

    In 1968, Anders was part of Apollo 8, the first manned flight to the moon, although it did not land. On Christmas Eve, he happened to take the photo “Earthrise,” which shows the moon in the foreground and the Earth rising in the background. Along with Anders, the astronauts Frank Borman and James Lovell were on board.

    During their mission, the astronauts were supposed to take photos of the moon. “I always call it ‘ironic,’” Anders told the Seattle Times. “We went there to discover the moon. But what we really discovered is the Earth.”

    The US space agency NASA later published Anders’ photo under the simple number “AS08-14-2383”. It changed humanity’s view of the earth and became a symbol of the earth’s fragility. It is sometimes seen as the trigger for the environmental movement.

    “Here we are, on an insignificant planet”

    It was taken with a Hasselblad camera. Anders himself was dissatisfied with the quality of the picture. It wasn’t completely sharp, he told the Seattle Times. But the sight of the small, blue globe, half in shadow behind the horizon of the gray moon, also changed him and his way of thinking.

    Here we are, on an insignificant planet orbiting a not particularly significant star in a galaxy of millions of stars, which is not significant when there are millions and millions of galaxies in the universe -so are we really that significant? I don’t think so.

    “Anders has given humanity one of the most profound gifts an astronaut can give,” commented NASA chief Bill Nelson via the online platform X.

    Career after space career

    After his career at NASA, Anders worked as a presidential advisor, US ambassador, and for various nuclear energy and aviation companies. He most recently lived on the San Juan Islands on the northwest coast of the USA. Anders was married and had six children.

    NASA astronaut

    Apollo 8 commander Frank Borman dies

    With his crew, he completed the first mission around the moon – and thus paved the way for the landing on the Earth’s satellite a few months later: Now NASA astronaut and Apollo 8 commander Frank Borman has died.

    The US astronaut and commander of the Apollo 8 mission Frank Borman has died. Borman died on Tuesday in the US state of Montana at the age of 95, the space agency NASA announced on Thursday (local time). “Today we remember one of NASA’s finest. Astronaut Frank Borman was a true American hero,” said NASA Director Bill Nelson.

    From the Cape Canaveral spaceport, Commander Borman and his colleagues James Lovell and William Anders launched to the moon on December 21, 1968 as part of the Apollo program, paving the way for the moon landing a few months later.

    Christmas message from lunar orbit

    The trio’s journey lasted three days, and on Christmas Eve Apollo 8 entered lunar orbit. The astronauts sent a Christmas message with Bible quotations to Earth via live broadcast. Mission leader Borman concluded by saying: “And from the crew of Apollo 8, we close with good night, good luck, and Merry Christmas, may God bless you all – all of you on the good Earth.”

    Apollo 8 orbited the moon ten times. On the fourth orbit, Anders took the iconic “Earthrise” photo, which showed the Earth rising above the lunar horizon. Borman later described what humanity’s home planet looked like from a distance.

    We were the first people to see the world in its majestic totality, a tremendously emotional experience for each of us. We said nothing to each other, but I was sure our thoughts were identical – they were of our families on this spinning globe. And perhaps we shared another thought I had: This must be what God sees.

    Later head of the Apollo program

    A native of Indiana, Borman developed a passion for airplanes at the age of 15, which eventually led him to the Air Force and later to NASA.

    From 1950 he was a fighter pilot and later also an assistant professor of thermodynamics at the West Point Military Academy. In 1967 he was a member of a committee that investigated the Apollo spacecraft fire that killed three astronauts. He was later appointed director of the Apollo program and led the team that redesigned the Apollo spacecraft.

    Borman with numerous honors

    After leaving the Air Force in 1970, Borman became a consultant to Eastern Air Lines before holding various senior positions there and becoming its president. Borman also served as a special envoy to the U.S. President.

    He received numerous awards, including the Congressional Space Medal of Honor. In 1990, Borman was inducted into the International Aerospace Hall of Fame and in 1993 into the US Astronaut Hall of Fame. Borman was married and had two sons and four grandchildren.

    Rocket from SpaceX

    “Starship” test flight successful on fourth attempt

    Transport to the Moon – and Mars?

    It worked on the fourth attempt: the giant rocket “Starship” has returned to Earth after a successful test flight. It is expected to fly humans to the moon for the first time in 2026.

    The US space company SpaceX has successfully completed a test flight of its mega rocket “Starship” for the first time on its fourth attempt. The 120-meter-long spacecraft successfully splashed down in the Indian Ocean after a flight around the Earth for about an hour, as the company of multi-billionaire Elon Musk announced on the online service X.

    The rockets were lost during the first three test flights. “Splash down confirmed! Congratulations to the entire SpaceX team for an exciting fourth test flight of Starship!” the company said. The landing on the sea was also shown in a live stream.

    The rocket was launched in the morning (local time) from the Starbase spaceport in Boca Chica, Texas. According to SpaceX, the focus of the fourth test was on the controlled and safe return of the rocket to Earth and its reusability.

    SpaceX hopes to eventually take people to Mars with the “Starship” rocket. Before that, the new mega-rocket, which is supposed to be completely reusable, will transport NASA astronauts to the moon.

    In 2026, the US space agency plans to use the “Starship” rocket to bring astronauts to Earth’s satellite for the first time in more than 50 years. For the mission called “Artemis 3”, the new rocket must be able to be brought into orbit and then refueled with several “Starship tankers” for its long journey.

    Three failed attempts

    During its first test flight in April 2023, the “Starship” rocket staggered and blew up in the air. During the second attempt in November 2023, the rocket exploded after separating from the propulsion stage following a successful launch.

    The third test flight took place in mid-March 2024. The rocket traveled a significantly longer distance than in the first two test flights before SpaceX reported its “loss” 49 minutes after launch.

    Explosions shortly after takeoff

    Second Starship test also ends prematurely

    One day, “Starship” rockets will take people to the moon and beyond. But SpaceX’s second test flight also went differently than hoped: Both rocket stages exploded shortly after launch.

    The largest rocket system ever built in the history of space travel was also unable to complete a second test flight. The unmanned 120-meter-high rocket took off from SpaceX’s Starbase spaceport in the US state of Texas. However, eight minutes later, both rocket stages exploded, SpaceX reported.

    A few minutes after launch, the two rocket stages separated. The lower rocket stage exploded shortly afterwards and did not land in the Gulf of Mexico as planned. The upper rocket stage continued to fly for a while and apparently also reached space for the first time. But then contact was lost. “We lost the data from the second stage – we think we may have lost the second stage,” said presenter John Insprucker in the SpaceX live stream. It was actually supposed to climb to an altitude of 240 kilometers and land in the Pacific near Hawaii after about an hour and a half.

    A SpaceX spokesman described the test flight as an “incredibly successful day” despite the “rapid unplanned disassembly” of the propulsion stage and rocket.

    First test flight ended after four minutes

    In mid-April, a “Starship” set off on an unmanned test launch for the first time – and four minutes later it began to tumble, exploded and broke apart. Billionaire Elon Musk, whose company SpaceX developed and built the “Starship” rocket system, did not see this as a setback, but rather as an “exciting test launch”.

    The US aviation authority FAA investigated the incident and demanded numerous improvements as a condition for the next test launch. SpaceX announced that it had implemented numerous improvements and changes. The FAA then allowed further tests.

    The second test launch with a “Starship” was originally scheduled for Friday, but was then postponed to this Saturday. Musk stated that a drive unit on a control surface had to be replaced beforehand.

    SpaceX wants to go to Mars with “Starship”

    The “Starship” – consisting of the approximately 70-meter-long “Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the moon and Mars. The system is designed so that the spaceship and rocket can be reused after returning to Earth. The system, which is around 120 meters long in total, is expected to be able to transport well over 100 tons of cargo in the future. NASA wants to use the “Starship” to bring astronauts to the moon on the “Artemis” mission. SpaceX hopes to reach Mars one day.

    “Starship” rocket explodes during first test flight

    The first test flight of “Starship”, the largest rocket system ever built in the history of space travel, ended with an unplanned explosion just minutes after launch. The space company SpaceX nevertheless rated the test as a success.

    The largest and most powerful rocket system ever built,”Starship”, broke apart during its first test flight just minutes after launch.

    Elon Musk’s private space company SpaceX nevertheless rated the test a success. “As if the flight test wasn’t exciting enough, there was a rapid unplanned breakup of Starship before stage separation,” SpaceX said on Twitter. “In a test like this, success depends on what we learn, and today’s test will help us improve Starship’s reliability.”

    The team will now extensively evaluate the test data and work towards the next test launch.

    Test should have lasted another 90 minutes

    The “Starship” initially lifted off from the earth in Brownsville, Texas, with a lot of fire and smoke. A few minutes later – shortly before the two rocket stages separated – the live images showed how the rocket system began to tumble and finally burst into flames.

    The test was actually supposed to last another 90 minutes and end with the landing of both stages. During the test flight, the first rocket stage was supposed to fall into the Gulf of Mexico after separation. The “Starship” space shuttle was supposed to fly to an altitude of more than 150 kilometers, orbit almost the entire Earth and then fall into the Pacific.

    “We wanted to make it to the end, but we’re happy we made it this far,” SpaceX officials said.

    “Starship” is intended to enable manned missions to the Moon and Mars

    The test flight had been eagerly awaited. “Starship” – consisting of the approximately 70-meter-long “Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the Moon and Mars.

    The first test launch had actually been planned for Monday, but was then postponed until Thursday due to a problem with a valve. Approval for the test flight was only granted by the US aviation authority last Friday.

    The “Starship” system is designed so that the spacecraft and rocket can be reused after returning to Earth. The upper stage had already been tested several times on its own, but this was the first joint flight test of the entire rocket system.

    System total length around 120 meters

    On paper, Starship, with a total height of 120 meters, is almost twice as powerful as the “Space Launch System” of the US space agency NASA, which launched the Orion capsule on an unmanned flight around the moon in November.

    The “Starship” system, which is around 120 meters long in total, will be able to transport well over 100 tons of cargo in the future. The rocket will be powered by 33 “Raptor” engines that run on liquid methane and liquid oxygen. The system will be able to be refueled in space.

    NASA has selected “Starship” to fly humans to the moon for the first time in more than 50 years on the Artemis 3 mission at the end of 2025. The rocket should even be able to make flights to Mars and beyond.

    Launch of “Starship” rocket planned

    SpaceX plans to launch the largest rocket ever built today. The “Starship” is set to embark on a test flight after the first attempt failed in April. NASA is relying on the system for future moon missions.

    Will it work this time? This question accompanies the launch of the “Starship” rocket system planned for today. Today, the largest rocket ever built, with a height of 120 meters, is scheduled to take off from the SpaceX spaceport in Boca Chica in southern Texas. It will then almost orbit the earth and finally land in the ocean off Hawaii.

    SpaceX sent a noise warning to residents of the surrounding Cameron County. It is possible that a “loud noise” will be heard from the 33 engines. What one hears on site will also depend on the weather and other conditions. There is a time window of just 20 minutes for the launch. The test flight was actually planned for Friday, but had to be postponed due to technical problems.

    First test flight failed

    A first test flight of the giant rocket in April failed spectacularly. The “Starship” got out of control due to several engine failures and had to be blown up just minutes after takeoff. Ash rained down miles away and a fire broke out in a park.

    Environmentalists filed a lawsuit against the US Federal Aviation Administration (FAA), which had approved the launch. The FAA investigated the incident and demanded numerous improvements as a condition for another test launch. SpaceX announced that it had implemented many improvements and changes.

    SpaceX boss Elon Musk called the failed first test not a setback, but an “exciting test launch.”

    NASA hopes for “Starship”

    The “Starship” – consisting of the approximately 70-meter-long “Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the Moon and Mars and to be able to transport well over 100 tons of cargo. The system is designed so that the spaceship and rocket can be reused after returning to Earth. NASA wants to use the rocket system for its “Artemis” moon mission in 2025. It is considered unlikely that the schedule can be met.

    Maiden flight is postponed

    It is the largest rocket in the world to date: US billionaire Elon Musk’s “Starship” is intended to make transportation into space cheaper. But the first test flight had to be postponed due to technical problems.

    The first test flight of the “Starship” rocket from tech billionaire Elon Musk’s private space company SpaceX had to be canceled at the last minute. Shortly before launch, there was a technical problem, the company announced during the live broadcast scheduled for the launch.

    Musk himself wrote on Twitter that a pressure valve had apparently frozen. According to SpaceX, a new launch attempt will not take place until Wednesday at the earliest.

    The 120-meter-high heavy-lift rocket was supposed to take off in the afternoon Central European Time from the company’s own spaceport, Starbase, in Boca Chica, Texas. The “Starship” is considered the largest and most powerful rocket ever built. It is planned that it will fly for about 90 minutes after launch.

    Both rocket stages in use for the first time

    In the future, the rocket system will be able to transport well over 100 tons of cargo. SpaceX’s new rocket consists of two stages: the “Super Heavy” propulsion stage with 33 engines and the upper stage, the actual “Starship”, with a further six engines. It also serves as a spaceship for cargo and passengers. The “Starship” is designed so that the spaceship and rocket can be reused after landing on Earth.

    During the test flight into Earth orbit, both rocket stages will be used together for the first time. The first rocket stage will separate from the spacecraft just under three minutes after launch and then sink into the sea. The second stage will continue its flight, orbit the Earth once and then land in the sea off Hawaii.

    Plans for flights to the Moon and Mars

    There had been repeated setbacks in tests in the past, with prototypes of the rocket exploding several times. It was only in February that almost all of the engines of the first “Starship” stage were successfully ignited for the first time.

    There are high hopes for the giant rocket. The US space agency NASA wants to use it to bring astronauts back to the moon. It has also commissioned the private company to develop a commercial lunar lander. This is intended to bring astronauts to the Earth’s satellite. The contract is worth 2.89 billion dollars.

    The moon mission is considered a stepping stone for the first manned flight to Mars. The USA is the only country to date to have brought astronauts to the Earth’s satellite – twelve of them, with the Apollo missions.

    Musk wants to reduce the cost of space travel

    But Elon Musk also has a declared economic goal: transport should become cheaper. Unlike the test flight, in future both rocket stages will return to the launch pad. If this succeeds, “Starship” would be SpaceX’s first rocket that could be completely reused. This could make rocket launches even cheaper.

    Musk hopes that an orbital flight of this spacecraft could cost less than ten million dollars in the future. Customers currently pay around 67 million dollars for a launch of SpaceX’s “Falcon 9” rocket. Musk’s private space company is already a leader among commercial providers.

    SpaceX worth an estimated $137 billion

    Musk, who was involved in the founding of the payment service provider PayPal and is the head of the electric car manufacturer Tesla, founded SpaceX in 2002. The space company was created with the goal of making cheaper space flights possible.

    SpaceX is one of the largest private companies in the world. The space company has an estimated value of $137 billion.

  • NASA Decides to Bring Starliner Spacecraft Back to Earth

    The manned “Starliner” space capsule has reached its destination and docked with the ISS. Due to a series of problems, the project is years behind schedule – and this time, too, not everything went smoothly.

    The “Starliner”, carrying two NASA astronauts, has docked at the International Space Station ISS – but on the way there it once again struggled with numerous technical problems. A helium leak had already been discovered before the launch, but according to NASA’s assessment it did not pose a danger to the flight.

    However, two more leaks occurred during the flight. Later, four of the capsule’s 28 engines failed. NASA astronauts Butch Wilmore and Suni Williams managed to restart three of them, so there were enough safety reserves for the rest of the flight.

    The “Starliner” missed the first opportunity to dock and circled next to the station for another hour. It was only on the second attempt that the spacecraft managed to dock with the ISS. All three leaks are now being monitored, valves have been closed and the spacecraft is still stable, NASA said.

    Project is still behind schedule

    The “Starliner” set off on its first manned test flight on Wednesday after years of delays. The test flight had previously been postponed several times due to various technical problems with the spacecraft and rocket. In May 2022, the “Starliner”, developed and built by the US aircraft manufacturer Boeing, completed its first successful unmanned flight to the ISS and spent four days there – an important test for the spacecraft.

    In the future, it will transport astronauts to the ISS as an alternative to SpaceX’s “Crew Dragon” space capsule. However, due to a number of problems, the project is far behind schedule, while the “Crew Dragon” has been regularly bringing astronauts to the ISS for several years.

    Two astronauts on board

    “Starliner” sets off on first manned test flight

    After years of delays, the “Starliner” space capsule from the US company Boeing has launched into space with humans for the first time. The rocket took off from the Cape Canaveral spaceport and is on its way to the ISS.

    After many delays, the crisis-ridden spacecraft “Starliner” has set off on its first manned test flight to the International Space Station ISS. With NASA astronauts Barry Wilmore and Suni Williams on board, the spacecraft launched from the Cape Canaveral spaceport in the US state of Florida using an “Atlas V” rocket, as live images from the US space agency NASA showed.

    “Here we go, Atlas! Here we go, Starliner! Good luck, Butch and Suni!” came the call from the control center shortly before the rocket took off.

    “Two brave NASA astronauts are on their way on this historic first test flight of a brand new spacecraft,” said NASA boss Bill Nelson. Boeing’s “Starliner” marks a new chapter in American research. “Manned spaceflight is a bold task – but that’s why it’s worth it. It’s an exciting time for NASA, our commercial partners and the future of exploration.”

    Test flights previously postponed several times

    The test flight had previously been postponed several times due to various technical problems with the spacecraft and rocket, and had been canceled twice in recent weeks just minutes before a scheduled launch. Among other things, there were problems with a helium leak and a faulty computer system on the ground.

    The “Starliner” – a partially reusable spacecraft consisting of a crew capsule around three meters high and a service module, which, unlike the “Crew Dragon”, lands on the ground rather than in water – is expected to arrive at the ISS on Thursday. Wilmore and Williams are expected to stay there for around a week.

    Williams, 58, and Wilmore, 61, have each been in space twice before. “It almost feels unreal,” Williams said in advance of her return to the ISS. The “Crew Dragon” space capsule currently docked at the ISS was moved to another docking station for the arrival of the “Starliner.”

    Spare part is on board

    At short notice, the “Starliner” also took along an urgently needed spare part for the ISS: a urine pump that helps convert the astronauts’ urine into drinking water broke earlier than expected, NASA said. A quickly obtained replacement has now been packed into the “Starliner.”

    In May 2022, the Starliner, developed and built by US aircraft manufacturer Boeing, completed its first successful unmanned flight to the ISS and spent four days there – an important test for the spacecraft.

    During the first test without a crew in 2019, the “Starliner” did not even arrive at the ISS. In the future, it will transport astronauts to the ISS as an alternative to SpaceX’s “Crew Dragon” space capsule. However, due to a number of problems, the project is far behind schedule, while the Crew Dragon has been regularly bringing astronauts to the ISS for several years.

    Boeing space capsule

    “Starliner” reaches ISS for the first time

    The crisis-ridden Boeing space capsule “Starliner” has made it to the International Space Station ISS for the first time. It docked during the night. Several previous attempts had failed.

    After launching from the Cape Canaveral spaceport, Boeing’s “Starliner” has arrived at the ISS for the first time. The unmanned spacecraft docked at the human outpost around 400 kilometers above Earth during the night after a flight lasting just over a day.

    The critical test for the crisis-ridden “Starliner” has thus been successful for the time being. The project is more than two years behind schedule due to a number of problems.

    Project is well behind schedule

    Boeing’s spacecraft will be used as an alternative to SpaceX’s “Crew Dragon” space capsule to transport astronauts to the ISS. This should have happened a long time ago, but during an initial test in December 2019, the spacecraft did not make it to the ISS – partly because of a problem with the automatic ignition of the engines.

    Last year, the mission was postponed several times – and then the “Starliner” finally had to go back to the workshop due to valve problems.

    NASA speaks of “milestone”

    NASA described the launch as a “milestone.” On board are over 300 kilograms of cargo, including supplies for the ISS crew. The “Starliner” is a partially reusable spacecraft consisting of a capsule for the crew and a service module that can be launched with a rocket. Once approved for the transport of people, it can carry up to four crew members to the ISS.

    Test flight to ISS

    Boeing’s “Starliner” misses orbit

    It was supposed to be good news for the struggling US aircraft manufacturer Boeing – but the test flight of its “Starliner” spacecraft to the ISS failed. The capsule missed the correct orbit.

    The unmanned test flight of Boeing’s new Starliner space capsule to the International Space Station (ISS) has failed after a serious error.

    After the launch of a rocket with the unmanned space capsule on board from the Cape Canaveral spaceport, a problem arose with the automatic ignition of the engines due to an incorrectly running timer. As a result, the spacecraft consumed so much fuel that it could no longer reach the orbit of the ISS.

    Early return on Sunday

    The spacecraft was scheduled to dock with the ISS for the first time on Saturday and return to Earth in eight days. On board is a mannequin called “Rosie” serving as an astronaut dummy, as well as Christmas treats and gifts for the ISS crew. The capsule is now scheduled to be returned to Earth on Sunday and land in the US state of New Mexico.

    Boeing developed the “Starliner” for NASA. It is intended to bring American astronauts to the ISS in the future. NASA is currently dependent on Russian Soyuz rockets for manned missions. In 2011, it discontinued its own shuttle program after three decades. Since then, NASA astronauts have flown from the Russian space agency’s launch site in Kazakhstan. This cost the US space agency up to 86 million dollars (77 million euros) per flight.

    Strategy change in US space policy

    Under former US President Barack Obama, a change in strategy was introduced: instead of developing new rockets themselves, it was decided to hire private companies to do the work. Boeing and its competitor SpaceX subsequently received billions to develop manned space shuttles “Made in the USA”.

    Both US companies are two years behind schedule, but both now appear to be almost finished with development. The first manned flight of the capsule, which has a diameter of 4.50 meters, was previously planned for next year. However, final tests such as this “Starliner” flight are still necessary before the companies’ spacecraft can be approved for manned space travel.

    SpaceX test in March was successful

    SpaceX has already completed a successful test flight to the ISS in March with its “Crew Dragon” spacecraft, which is intended for manned missions. Also on board was a dummy, named “Ripley” after the main character in the “Alien” films. The dummies are equipped with sensors to replicate the flight conditions for real people.

    The US aircraft manufacturer Boeing recently came under massive pressure due to the disaster surrounding its 737 Max jet. After two devastating crashes, the aircraft was banned from taking off. It is uncertain when it will be able to take off again.

    “Starliner” capsule on its way to the ISS

    Boeing’s troubled space program can finally report a success: A “Starliner” capsule launched on an unmanned test flight to the ISS. However, there were problems this time too.

    The spacecraft initially launched at 6:54 pm Eastern Time from the spaceport at Cape Canaveral in the US state of Florida. From then on, the unmanned “Starliner” needed a total of 32 minutes to reach a safe orbit around the Earth. It was carried by an Atlas V rocket, whose stages separated successfully from the capsule one after another.

    Two nozzles defective

    When positioning the “Starliner”, only ten of the twelve correction nozzles worked, Mark Stich of the US space agency NASA said at a press conference after the launch. For the other two, it still has to be determined whether the instruments are preventing the commands from getting through. “At the moment, we can carry out all maneuvers with the remaining correction nozzles.”

    From Earth orbit to the ISS

    From orbit, the “Starliner” will approach the International Space Station (ISS) and dock with it for the first time – a maneuver that is not expected to take place until Saturday night. But for now, the successful launch was a relief for NASA and the manufacturer Boeing – the mission had been marked by failures up to now.

    In December 2019, a software error meant that a flight attempt had to be aborted before the docking maneuver with the ISS. Another launch last August had to be stopped during the countdown because valves did not open. It was only the third attempt that brought the hoped-for breakthrough.

    “There will be a few more sleepless nights.”

    Now the team is relieved. However, Boeing’s Mark Nappi warns against premature euphoria. The mission has four phases. “The first includes launch preparations and launch. And that went really well. Then the orbit and docking. And that will be another exciting day. So there will be a few more sleepless nights.”

    The aim is to use the “Starliner” to bring both astronauts and cargo to the space station in the future. Even though the current mission is unmanned, a test dummy called “Rosie the Rocketeer” sits in one of the chairs. There is also food and other goods on board for the ISS crew.

    US astronauts and cosmonauts reach ISS

    Space is one of the few areas in which the US and Russia have continued to work together since the beginning of the Ukraine war around two years ago. Three US astronauts and one cosmonaut have now arrived on the ISS space station.

    Three US astronauts and a Russian cosmonaut have arrived at the International Space Station ISS after their journey through space. The space capsule with the space station’s new occupants docked at the ISS.

    After the hatch was opened, the new arrivals Matthew Dominick, Jeanette Epps and Michael Barratt from the USA and Alexander Grebenkin from Russia were warmly welcomed by the current ISS crew, as could be seen on the video broadcast by the US space agency NASA.

    New crew to stay for half a year

    The crew members who have now arrived will replace four of the seven current ISS occupants. After a transition phase of a few days, the four crew members from the USA, Denmark, Japan and Russia will return to Earth.

    The new crew will work on the ISS for around six months, and various medical experiments are planned, including those involving stem cells. For Epps, Dominick and Grebenkin, this is their first space flight, and Barratt will be visiting the space station for the third time.

    Success for space company SpaceX

    As in the past, tech billionaire Elon Musk’s private space company SpaceX carried out the mission to replace personnel on the ISS space station. The so-called “Crew-8” took off on Sunday on board a “Crew Dragon” capsule with the help of a Falcon 9 rocket from the Kennedy Space Center in the US state of Florida.

    The flight had been postponed several times. It had originally been planned for February 22nd. This was the second time since the start of Russia’s war of aggression against Ukraine and the resulting tensions between the USA and Russia that astronauts from both countries went into space together. Space is one of the few areas in which both countries still work together.

    Russia and USA continue joint flights to the ISS

    The International Space Station is one of the few projects on which the USA and Russia are still working together. Both sides have now signed an agreement to continue joint flights to the ISS until 2025.

    Russia and the USA have agreed to extend their agreement for joint flights to the International Space Station ISS. In July and December, both sides signed two additional agreements to continue the cross-flights until 2025, the Russian space agency Roscosmos announced.

    The aim is to maintain the reliability of the ISS’s operations, Roscosmos explained. This will also ensure that at least one NASA astronaut and one Russian cosmonaut are on the station at any one time.

    In the so-called cross-flights, a US astronaut flies to the ISS as part of the crew of a Russian spacecraft and vice versa. In September, two Russian cosmonauts and a US astronaut flew to the ISS on board a Russian Soyuz capsule from the Kazakh spaceport Baikonur.

    Russia withdraws from ISS

    The ISS, which has been orbiting the Earth since 1998, is one of the few areas in which Russia and the USA are still cooperating even after the start of the Russian war of aggression in Ukraine in February 2022. Its operation was originally planned until 2024, but the US space agency NASA does not want to decommission it until 2030. However, Roscosmos had already announced in July 2022 that it would withdraw from the project after 2024. Instead, Moscow is planning its own space station.

    President Vladimir Putin announced in October that the first module for the Russian space station would be launched into space in 2027. At the same time, he said that despite the recent setbacks, he would stick to the previous space program goals, including above all the Russian moon program.

    Russia plans to leave the International Space Station (ISS) in 2024. This was announced by the new head of the space agency, Borisov. Cooperation with the USA has been repeatedly called into question in the course of the Ukraine war.

    Russia plans to exit the International Space Station ISS after 2024. This was stated by the new head of the Russian space agency Roscosmos, Yuri Borisov, at a meeting with Russian President Vladimir Putin.

    Tensions between Moscow and Washington

    “Of course, we will fulfill all our obligations to our partners, but the decision to withdraw from this station after 2024 has been made,” said Borisov. The former Deputy Prime Minister Borisov has been the head of the Russian space agency since mid-July, succeeding agency head Dmitry Rogozin.

    Rogozin had recently repeatedly questioned cooperation with the USA in light of political tensions between Moscow and Washington in the wake of the war in Ukraine. After the start of the Russian offensive in Ukraine, the West imposed a series of sanctions against Russia. However, cooperation continued on the ISS.

    Borisov said that construction of a Russian space station would begin before the exit. The aging ISS has repeatedly caused a stir due to breakdowns. Rogozin had previously not ruled out the possibility of detaching the Russian module from the ISS and continuing to operate it independently. He also hinted at the possible use of the station for military earth observation.

    NASA: No official notification yet

    The US space agency, NASA, said that Roscosmos had not yet officially communicated the decision. “We have not received any official notification from our Russian partners,” said Robyn Gatens, the ISS director based at NASA headquarters. She also made it clear that the US had no interest in ending its cooperation with Russia: the Russians, like all other parties involved, have always been good partners, and they want to continue that as agreed until the end of this decade.

    US astronaut Kjell Lindgren spoke from the ISS itself: The crew has not yet received any official notification, he said. However, they will continue to work together successfully on the ISS to achieve scientific results and ensure the safety of the crew and the space station.

    Following the war of aggression against Ukraine ordered by Kremlin leader Putin, Western countries have also imposed sanctions on the Russian space industry. The ISS is a joint project of the USA, Canada, Japan, the European Space Agency (ESA) and Russia. It is divided into a Russian and a US segment.

    Moscow questions cooperation with ISS

    Russia still has plans to work on the International Space Station (ISS) until 2024. What will happen after that is unclear. Russia could detach its module and use it for military earth observation.

    In view of the war in Ukraine, Russia has questioned its cooperation in the operation of the International Space Station (ISS). “The decision on the fate of the ISS will depend largely on the situation in and around our country,” said the head of the Russian space agency, Dmitri Rogozin, to the Tass news agency. In the event of an exit, however, Moscow will inform its partners a year in advance.

    Russia’s participation in the station is provisionally planned until 2024. Previously, Rogozin had not ruled out the possibility of disconnecting the Russian module from the ISS and continuing to operate it independently. He also hinted at a possible use of the station for military earth observation.

    Russian space travel also sanctioned

    The Russian government also wants to decide by mid-May whether it will continue to take Western astronauts to the ISS on its spaceships. Following the war of aggression against Ukraine ordered by Kremlin chief Vladimir Putin, Western countries have also imposed sanctions on the Russian space industry.

    Russia’s war and its consequences in space

    The ISS was initiated as an international peace project – but now Russia is waging war. What does that mean for the future of the space station? And what for the seven astronauts on board?

    Four Americans, two Russians and one German – that is the current crew on the International Space Station ISS. How is the international crew dealing with the war in Ukraine? And how is the German ESA astronaut Matthias Maurer, who flew to the ISS in November, doing? A press conference at which the astronaut was due to answer journalists’ questions today was cancelled by ESA.

    “We get along well.”

    Instead, Josef Aschbacher, the ESA Director General, provides information and explains that Matthias Maurer is doing well. In an interview with tagesschau.de, he talks about a phone call he had with him:

    Mr Maurer says: ‘Don’t worry about me. We get on well.’ But he says that of course they on the space station also see what is happening here on the ground. That is very worrying. One message he wants to convey is that from the perspective of space you can see how fragile our beautiful planet is. It is incomprehensible that war is being waged here.

    ISS planned as a peace project

    The ESA boss has left open what will happen to the International Space Station as a whole. The space station is operated by NASA, the Russian space agency Roscosmos and the European Space Agency (ESA), among others. “The system is very closely intertwined, we are dependent on each other,” Aschbacher explains. “That was exactly the purpose of the ISS when it was built, that it was a peace project between different nations, peoples and cultures and that worked very well for more than 20 years. Now we have a new situation.”

    He does not go into more detail, but cooperation seems to be assured, at least in the short term. An immediate withdrawal by the Russians could be problematic, explains former astronaut Ulrich Walter: “That would be difficult because a central module of the space station, a kind of control module, is from the Russians. So if the Russians were to withdraw immediately, we would have relatively big problems. But we don’t need to worry about that at the moment, because the Russians have not said they are withdrawing.”

    Connected in space

    The example of the ISS shows how interconnected the cooperation between states in space is. SWR science editor Uwe Gradwohl gives examples: “American rockets fly with Russian engines. A company based in the USA markets the launches of the Russian Proton carrier rocket. The first stage of the Antares rocket is built in Ukraine, with Russian propulsion and launches from Florida.”

    Since the end of the Cold War, ESA has always been careful to carry out joint projects with many players, be it the Americans, Russians or the Chinese. ESA astronaut Maurer took part in survival training for astronauts in China and flew to the ISS in a space capsule from the US company SpaceX. His predecessor Alexander Gerst launched into space with the Russians from the Baikonur Cosmodrome in 2014 and 2018.

    Sanctions against Russia also affect Mars mission

    Now states and space agencies like the ESA have to reorganize themselves as the sanctions against Russia take effect. In concrete terms, the European-Russian Mars mission “ExoMars” is currently in jeopardy. This autumn, a European Mars rover was supposed to launch from Baikonur on board a Russian Proton rocket.

    “The Mars rover is an ESA project, but it also contains Russian hardware. And without the Russian landing platform, the rover cannot roll onto the surface of Mars,” explains science editor Gradwohl. Replacing the Russian part of the project with European developments is not possible any time soon. “Even the time until the next launch opportunity to Mars in 2024 is probably too short.”

    “Chinese are now better than the Russians”

    The Crimean crisis in 2014 already had long-term consequences for space travel. At that time, for example, the US Congress decided that US military and government payloads could no longer flow into space on rockets equipped with Russian rocket engines. However, a very long transition period until the end of 2022 was set.

    The question of which alliances will form in space in the future is exciting. For example, are Russia and China getting closer? Former astronaut Walter does not believe that: “The Chinese have been copying Russian space technology since the 1960s and in the 1990s improved Russian technology to such an extent that they are now better than the Russians.” The Russians have always lacked money – unlike the Chinese, who have a lot of money. “They are massively expanding space travel. They are now really superior to the Russians, and in some cases even to the Americans, in space travel. They do not need partners,” says Walter.

    ESA astronaut Matthias Maurer will probably return from the ISS in May – again on board a SpaceX capsule. Before that, his US astronaut colleague Mark Vande Hei was scheduled to make the return flight. He flew into space on board a Soyuz capsule last year, and the plan was for him to fly back with the Russians. It will be interesting to see how this story ends.

    NASA cancels test flight with “Starliner” spacecraft

    The “Starliner” was supposed to transport astronauts long ago, but the project is far behind schedule. The first manned test flight has now had to be postponed again.

    The eagerly awaited launch of the first manned flight of Boeing’s “Starliner” into space has had to be postponed due to technical problems. The test flight, which had already been postponed several times, will now be rescheduled for Friday at the earliest, Boeing announced.

    The flight was aborted less than two hours before the scheduled evening launch (local time) because a problem with a valve in the second rocket stage could not be fixed in time, NASA said in its live stream.

    The CST-100 “Starliner” was to be launched on an “Atlas V” rocket from the United Launch Alliance, a joint venture between Boeing and Lockheed Martin, from the Kennedy Space Center of the US space agency NASA in Florida and bring two astronauts to the International Space Station ISS.

    The two NASA astronauts, Barry Wilmore and Suni Williams, left the “Starliner” capsule and returned to their crew quarters at the Cape Canaveral spaceport.

    New start attempt on Friday

    The new launch date is now set for Friday. The “Crew Dragon” docked at the ISS was moved to another docking station for the arrival of the “Starliner.”

    In May 2022, the “Starliner”, developed and built by the US aircraft manufacturer Boeing, completed its first successful unmanned flight to the ISS and spent four days there – an important test for the spacecraft.

    Prestige project for Boeing

    After a series of crises in commercial aviation, Boeing is already facing intense public scrutiny and a comprehensive review of its quality control. With the “Starliner”, the company wanted to send a signal of success.

    In order to save the program, which is years behind schedule, the planned costs have so far been exceeded by more than $1.5 billion. While Boeing is struggling, Tesla boss Elon Musk’s SpaceX has become a reliable partner for NASA for transport into space.

    Three Americans and one Russian launched to the ISS

    For the second time since the attack on Ukraine, astronauts from Russia and the USA have flown into space together. On board the ISS, they will work together on research projects and grow “organoids.”

    After several postponements, three US astronauts and a Russian cosmonaut have launched on a six-month mission on the International Space Station ISS. A “Falcon 9” rocket from tech billionaire Elon Musk’s private US space company SpaceX lifted off late Sunday evening (local time) from the Kennedy Space Center in Florida.

    A launch attempt planned for the previous day had been cancelled due to strong winds. This was not the first postponement of the flight: the launch had originally been planned for February 22nd.

    With the start of the mission, astronauts from both countries once again flew into space together, despite the Russian war of aggression against Ukraine and the resulting immense tensions between the USA and Russia.

    Space flight for research purposes

    NASA astronauts Matthew Dominick, Michael Barratt, their colleague Jeanette Epps and cosmonaut Alexander Grebenkin are expected to arrive at the ISS on Tuesday morning (CET). For Epps, Grebenkin and Dominick, this will be their first space flight, and Barratt will be visiting the ISS for the third time.

    There are currently seven crew members on board the space station. In a few days, four of the existing crew members, from the USA, Denmark, Japan and Russia, will return to Earth.

    The new crew wants to conduct experiments on the International Space Station with stem cells and grow so-called organoids. Such tiny organs can be used to research organ diseases, among other things.

    First private space mission launched

    The private space company SpaceX has sent paying guests to the International Space Station for the first time. Three businessmen and a former NASA astronaut are scheduled to dock with the ISS early Saturday morning.

    The first private space mission, Axiom AX1, took off for the International Space Station ISS at exactly 11:17 am local time. The launch of the SpaceX “Falcon 9” rocket with the “Dragon” crew capsule at its top from the Kennedy Space Center in Florida went smoothly under a bright blue sky. As planned, the capsule with the four astronauts on board separated from the rocket shortly after takeoff; the booster then successfully returned to Earth.

    20 hours through space

    The four astronauts – three businessmen from the USA, Canada and Israel, as well as a former NASA astronaut – are now on a 20-hour journey through space to the International Space Station. They are scheduled to dock there early Saturday morning and be brought on board the ISS after the two-hour maneuver.

    The four private citizens will then stay on the ISS for eight days and carry out various experiments – including cancer research, the aging process of cells and everything related to human life in space.

    700 hours of training

    It is the first space mission by the private Texas company Axiom in collaboration with SpaceX and NASA. Axiom plans to add its own module to the ISS in two years to enable private space travel and research. Axiom is planning well over 100 such missions in the coming years.

    The three private astronauts trained around 700 hours for their eight-day trip into space and each paid 50 million US dollars.

    US-Americans and Russians fly to the ISS

    Even in times of political tension, we can achieve something together, says US astronaut Rubio before launching to the International Space Station. He did not speak to his two Russian crew colleagues about the war in Ukraine beforehand.

    Despite the Russian war of aggression in Ukraine, an American and two Russians are flying together to the International Space Station (ISS) today. The three are scheduled to take off from the Russian spaceport Baikonur in the Republic of Kazakhstan on board a Soyuz rocket this afternoon. This was announced by the Russian space agency Roscosmos.

    It is the first joint flight since the start of the Russian war of aggression in Ukraine on February 24. In October, Russian cosmonaut Anna Kikina is also scheduled to fly to the ISS from the USA. Both flights are intended to give hope that cooperation on the ISS will continue. Russia recently announced that it wants to withdraw from the project after 2024. No exact date has been announced so far.

    Supplies on board

    In addition to the crew, there are also 120 kilograms of supplies for the ISS on board the Soyuz capsule – for example hygiene and medical items, scientific equipment and the astronauts’ personal belongings. For the Russian cosmonaut Sergei Prokopyev, this is the second flight to the ISS, and for his colleague Dmitri Petelin and US astronaut Frank Rubio, this is the first time.

    “The program is quite full – in addition to the rapid docking, five spacewalks are planned,” Prokopyev said before the start of the 68th ISS mission. 48 experiments are planned – including one with a 3D printer in zero gravity. Various figures are to be printed out of different materials.

    The three do not talk about the war

    “It’s an honor for me to follow in the footsteps of previous astronauts,” said NASA astronaut Rubio. Space travel is an opportunity to achieve something together, even in times of political tension.

    Apparently the crew did not talk about the war beforehand. All three made it clear that they would miss their families most of all during the six months they would be on the ISS.

    The commander of the 67th expedition, Oleg Artemyev, the cosmonauts Denis Matveyev and Sergei Korsakov, the NASA astronauts Bob Hines, Kjell Lindgren, Jessica Watkins and the Italian Samantha Cristoforetti from the European Space Agency ESA are already on the ISS. They are orbiting around 400 kilometers above the Earth.

    NASA and Roscosmos cooperate

    Back to the ISS together

    US astronauts and Russian cosmonauts will fly to the International Space Station together again in the future – sometimes rockets will be launched in Baikonur, sometimes in Florida. This is intended to ensure their presence on the ISS.

    The space agencies of the USA and Russia are working more closely together again. From September onwards, flights to the International Space Station ISS will take place with a mixed crew in Russian and US capsules.

    Roscosmos said the agreement was in the interests of both countries and would deepen cooperation in the area of the ISS. This would ensure that in the event of delayed or cancelled rocket launches, there would always be at least one Roscosmos and one NASA member on the ISS. NASA said the decision was made for safety reasons and to ensure the “US presence in space.”

    Unusual timing?

    The agreement was made public shortly after Russian President Vladimir Putin replaced the head of Roscosmos. The previous head of the agency, Dmitry Rogozin, had taken a confrontational stance towards the West since the beginning of the Ukraine war. He threatened to withdraw from the ISS and described cooperation with the West as “impossible”.

    SpaceX now also flies cosmonauts

    In September, astronaut Frank Rubio is scheduled to launch from the Baikonur Cosmodrome together with Sergei Prokopyev and Dmitri Petelin on board a Russian capsule.

    Later, Russian cosmonaut Anna Kikina will be brought to the ISS from Florida together with two US astronauts and one Japanese astronaut. She will be the first female cosmonaut to launch into space on a SpaceX rocket.

    US astronauts had flown into space on Russian rockets from the Baikonur Cosmodrome in Kazakhstan until 2020. After that, manned ISS flights also took off from US soil again.

    Soyuz capsule launched for retrieval operation

    In mid-December, astronauts on the ISS discovered a leak in their return capsule – they have been stuck ever since. An unmanned Russian Soyuz capsule has now launched to the space station earlier than planned to bring them back to Earth.

    An unmanned Soyuz rocket has launched on a return mission to the International Space Station ISS. The Soyuz MS-23 took off from the spaceport in Baikonur, Kazakhstan, on Friday morning, as live images from the US space agency NASA showed. The spacecraft is scheduled to dock with the ISS on Sunday and bring three astronauts stranded on the space station back to Earth in September.

    The launch and entry into orbit of the spacecraft “went normally,” said the Russian space agency Roscosmos. Docking with the ISS is scheduled for February 26 at 1:01 a.m. GMT (2:01 a.m. CET).

    Soyuz MS-23 was originally scheduled to launch in March

    Originally, US astronaut Frank Rubio and the two Russian cosmonauts Sergei Prokopyev and Dmitri Petelin were supposed to fly back to Earth in the Soyuz MS-22 space capsule at the end of March. However, this capsule was apparently damaged in mid-December by a small meteorite that caused a leak in the spacecraft’s cooling system. The mission of the three astronauts was extended by six months due to the breakdown. The Soyuz MS-23 space capsule was actually only supposed to launch to the ISS in mid-March with three successors for Rubio, Prokopyev and Petelin on board. However, due to the damage to the MS-22 capsule, the launch was brought forward and now took place without a crew.

    The astronaut and the two cosmonauts will now spend almost a year on the space station. The capsules will usually remain docked at the station for the entire duration of the crew’s research mission in order to be able to return astronauts to Earth in the event of an emergency.

    In addition to the three affected crew members, there are four other people on board the ISS: the Russian Anna Kikina, the US astronaut Nicole Mann and her compatriot Josh Cassada, and the Japanese Koichi Wakata. They flew into space in October as part of the so-called Crew-5 mission on board a space capsule belonging to the US space company SpaceX, owned by multi-billionaire Elon Musk.

    Next week, the four participants of the subsequent Crew-6 mission – two US citizens, an Emirati and a Russian – will join them. They are also to arrive in a SpaceX capsule that is scheduled to launch from Florida on Monday. After a “handover” lasting several days, the Crew-5 astronauts will then return to Earth.

    A glimmer of hope for rapprochement?

    German astronaut Reinhold Ewald did not want to talk of a “rescue mission” in the case of the current Soyuz mission. “The crew is not stranded somewhere. Even if many systems fail, the Soyuz has ways and means of steering the capsule home.” In a way, the problems on Earth are greater than in space. “This is a major effort for Russian space travel. The Soyuz, which is being sent up unmanned, was intended for a crew. This is already a significant disruption to the sequence. Russia does not produce them in advance,” said the 66-year-old.

    Ewald, who flew to the Russian space station “Mir” in a Soyuz capsule in 1997 and conducted research for three weeks, pointed to the pragmatic ongoing cooperation between NASA and Roscosmos despite the war in Ukraine: “The situation is bad enough. This is perhaps a glimmer of hope that we are coming closer together again.” He does not believe that Russia will soon withdraw from the ISS program. “Moscow only recently sent up a science module. I think that Russia will use its investment in the station for as long as possible.”

    Since the start of the Russian offensive in Ukraine a year ago, space has been one of the few areas in which Russia and the USA still work together. The ISS has been in operation since 1998.

    ISS astronauts will not return until September

    In mid-December, astronauts discovered a leak in their capsule – and have been stuck on the ISS ever since. A new Russian space capsule is scheduled to launch next Friday to pick up the three men – and will not return until September.

    The Russian space agency Roscosmos plans to bring three astronauts stuck on the International Space Station ISS back to Earth in September instead of next month – one year after their departure.

    Roscosmos announced that the return flight with the Soyuz MS-23 space capsule is currently planned for September. Originally, the US astronaut Frank Rubio and the two Russian cosmonauts Sergei Prokopyev and Dmitri Petelin were supposed to fly back to Earth with the Soyuz MS-22 at the end of March.

    Micrometeorite probably caused the leak

    After a leak was discovered in the spacecraft’s cooling system in mid-December, the astronauts who have been stationed on the ISS since September are now to be picked up by the Soyuz MS-23. According to Roscosmos, the Soyuz MS-23 is scheduled to launch next Friday from the spaceport in Baikonur, Kazakhstan, but will not return to Earth until September.

    Both the US space agency NASA and Roscosmos believe the leak was caused by a micrometeorite, a tiny particle of space rock that hit the capsule at high speed.

    According to Roscosmos, the astronauts have responded “positively” to the extension of their mission on the ISS, and the longer stay in space does not pose any danger to their health.

    No “Starliner” test flight again

    The aviation group Boeing has again postponed the test flight of the “Starliner” space capsule, which was planned for today. The company is also aiming for an important role in space travel – and is therefore under pressure to succeed.

    Boeing’s Starliner space capsule will not launch today either: The aviation company, which is also aiming for space, has cancelled the launch planned for 7:20 pm Central European Time. Boeing Space confirmed the media reports on Twitter. The reason is apparently problems with valves in the space capsule’s propulsion system. In a written statement, it says that the company is considering launching tomorrow.

    “We are disappointed by today’s events and the need to once again postpone the Starliner launch,” said John Vollmer, manager of Boeing’s Commercial Crew Program. Manned spaceflight is a complex, precise undertaking that does not forgive mistakes. “The Boeing and NASA teams are taking the time they need to ensure the safety and integrity of the spacecraft and to achieve the mission’s goals.”

    Keeping pace in a fast-growing, competitive industry

    Today’s launch was actually supposed to be an important step for the US aviation group to keep up with the growing competition in the commercial space travel business. In addition to the aviation group, Tesla CEO Elon Musk’s SpaceX, Amazon founder Jeff Bezos’ Blue Origin and British entrepreneur Richard Branson’s Virgin Galactic are also involved. The industry is growing rapidly.

    First attempt failed

    With the second launch attempt, Boeing wanted to get its space program back on track. The spacecraft was to be launched on an Atlas V rocket from the Cape Canaveral spaceport in Florida. 24 hours later, the “Starliner” was scheduled to dock with the International Space Station ISS. The space capsule was to bring 180 kilograms of cargo to the station, including supplies for the current ISS crew. “Starliner” would then have returned to Earth with around 250 kilograms of cargo, including air tanks.

    A first unmanned test flight of the “Starliner” capsule failed in 2019 due to software problems. The spacecraft was unable to reach the ISS at the time and returned to Earth prematurely.

    Most recently, the date for the second flight to the ISS, originally scheduled for last Friday, had to be postponed until today. The reason was an incident when the new Russian laboratory module “Nauka” docked with the ISS. However, the weather forecast was not ideal for today’s attempt either.

    Space business promises stable sales

    For Boeing, the success of the “Starliner” is of enormous importance: in addition to the aviation division with its aircraft fleet, the defense, space and security division, which houses the “Starliner” project, is expected to ensure stable earnings for the group in the future. The division already generates around a third of the group’s revenues. And in contrast to the aircraft division, which saw dramatic declines last year due to the coronavirus pandemic, this division came through the crisis well with tanker aircraft for the military, commercial satellites and related services.

    The “Starliner” project goes back to a decision made by the US space agency NASA ten years ago. NASA had discontinued its shuttle program due to high costs and after two accidents, was subsequently dependent on Russian rockets to get to the ISS. In order to become more independent from Russia again, the US government under then-President Barack Obama commissioned Elon Musk’s SpaceX company and the aviation giant Boeing to build space shuttles.

    First cargo, then astronauts

    SpaceX has already brought people to the ISS. In May 2020, two astronauts flew to the ISS for the first time on board the “Crew Dragon” capsule. In April 2021, four astronauts were on board for the trip to the station. If the unmanned “Starliner” flight succeeds, the Boeing capsule is also expected to carry people after further tests at the end of the year.

    NASA deliberately wants to be able to rely on several companies for its space flights. A recent dispute over a NASA contract for a lunar lander showed that other competitors are also putting out feelers for billion-dollar contracts in space travel: Blue Origin, with Bezos at the helm, wanted to snatch a contract away from SpaceX and even offered to cover development costs running into billions. A complaint by Blue Origin against NASA’s contract with SpaceX, filed with the US Government Accountability Office (GAO), was rejected a few days ago.

    Ten years after commissioning

    First manned Starliner flight to the ISS

    After years of delays, the Starliner space capsule from the US company Boeing is set to fly into space with humans for the first time. The rocket will launch early Tuesday morning from the Cape Canaveral spaceport.

    Ten years ago, then NASA chief Charles Bolden announced a program with which the USA would again conduct its own flights to the International Space Station. The idea was not to be dependent on Russia when it came to transporting people and goods to the ISS after the end of the Space Shuttle era.

    This time, the US space agency did not want to develop and build the spacecraft itself, but commissioned the aircraft manufacturer Boeing and the company SpaceX owned by tech billionaire Elon Musk. At first, Boeing seemed to have the advantage with its “Starliner” spacecraft. The company received an order worth more than four billion dollars, SpaceX 2.6 billion. But things did not go smoothly at Boeing, and there were crises, problems and delays.

    Difficulties with the tests

    During a first unmanned test in 2019, the Starliner did not even arrive at the ISS due to software problems. A second unmanned test flight was successful in 2022, but then numerous problems arose again, which further delayed a planned manned test flight.

    Now, on Tuesday morning at 4:34 a.m. Central European Summer Time, the first manned Starliner flight is scheduled to take off from the Cape Canaveral spaceport in the US state of Florida, with NASA astronauts Barry Wilmore and Sunita Williams on board. The capsule is to be launched into space by an Atlas V rocket from the United Launch Alliance, a joint venture between Boeing and Lockheed Martin.

    Boeing’s Starliner is a partially reusable spacecraft consisting of a crew capsule approximately three meters high and a service module, designed to carry four crew members and, unlike the Crew Dragon, lands on Earth rather than on water.

    “We are ready, the spaceship is ready.”

    The spacecraft is expected to arrive at the ISS on Wednesday, where Wilmore and Williams are expected to stay for about a week. “It almost feels unreal,” said 58-year-old Williams, who has already been on board the ISS twice, at a press conference beforehand. “We wouldn’t be here if we weren’t ready,” said her 61-year-old astronaut colleague Wilmore, who has also been in space twice. “We are ready. The spacecraft is ready and the team is ready.”

    SpaceX won the race

    Meanwhile, competitor SpaceX has long since overtaken Boeing. In 2020, its Crew Dragon successfully undertook its first manned test to the ISS, and since then astronaut transports have become routine. There are currently eight regular crew members on the space station, having arrived there with the Dragon.

    ISS crew

    Back on Earth

    After almost half a year in space, part of the crew of the International Space Station is back on Earth. The four astronauts, from the USA, Russia and Japan, landed in the sea off the coast of Florida.

    A space capsule has brought two women and two men back to Earth from the International Space Station ISS. On board a “Crew Dragon” of the private space company SpaceX, Nicole Mann and Josh Cassada from the USA, the Russian Anna Kikina and the Japanese Koichi Wakata landed in the sea off the coast of the US state of Florida, as live images from the US space agency NASA showed.

    “That was an amazing ride,” Mann radioed after landing. “We’re glad to be home again,” said the commander of “Crew-5” – the fifth NASA flight of a “Crew Dragon” spacecraft with crew. Around 20 hours earlier, the space capsule had undocked from the ISS after the crew had to wait several days for the return flight due to bad weather.

    First joint mission since attack on Ukraine

    The “Crew-5” was launched in October from the Cape Canaveral spaceport. It was the first joint launch of NASA astronauts and a Russian cosmonaut from American soil since the beginning of the Russian war of aggression against Ukraine.

    Also on board the ISS are the Russian cosmonauts Sergei Prokopyev and Dmitri Petelin as well as NASA astronaut Frank Rubio and the crew of “Crew-6”, which arrived just a few days ago and consists of the Americans Stephen Bowen and Warren Hoburg, the Russian Andrei Fedyaev and the Emirati Sultan al-Nijadi.

    The “Starliner” breakdown list has gotten longer

    Boeing does not only develop airplanes, but also spaceships. So far, however, the “Starliner” has stood out mainly for its mishaps. The launch date for the first manned flight has been postponed again.

    The list of problems with the “Starliner” is long. In December 2019, it was supposed to fly to the International Space Station (ISS) for the first time without a crew. But the spacecraft never arrived at the ISS during this flight. Due to a software problem, the “Starliner” used up too much fuel too early, was unable to reach the ISS’s altitude and had to abort its flight after two days.

    The repeat of the flight in August 2021 ended even before it began. The humid weather in Florida had caused valves in the fuel lines to corrode, so that several of these valves were jammed before takeoff. The “Starliner” rolled from the launch site back to the workshop.

    It was only on the third attempt that the “Starliner” reached the ISS, two and a half years late. However, there was no crew on board for all of these flights. The first flight with people on board was actually planned for this summer – but the American space agency NASA and the Boeing company announced at a press conference that the launch would be postponed until spring 2024.

    Parachutes and faulty adhesive tapes

    Two things are holding up the one remaining, crucial qualification flight of the “Starliner” to the International Space Station: the fastenings for the parachutes on the one hand, and adhesive tape on the other. The fiberglass tape used to bundle electrical cables in the “Starliner” has proven to be flammable under very specific circumstances – this was not clearly noted in the aerospace engineers’ databases. The tape was installed in good faith. Now it has to be removed.

    With the parachutes, it turned out that connections between the parachute lines and the return capsule could be overloaded.

    The design and material of the connection on the parachutes have already been changed. Whether this solves the problem will be determined in a test landing after a drop from high altitude, which cannot take place until November. The unexpectedly flammable tape has already been largely removed from the upper half of the space capsule or covered with non-flammable material. It will take a few more weeks to replace it in the lower half of the “Starliner.”

    “Starliner” still a loss-making business for Boeing

    If the next flight, the first with a crew, is successful, Boeing would be the second company alongside SpaceX to have the license to transport space personnel to the ISS. With an estimated 50 to 60 million dollars per passenger, this would be a lucrative business.

    But it remains to be seen whether Boeing will ever make a profit with the “Starliner”, even at these impressive ticket prices. NASA agreed a fixed price of five billion dollars with Boeing for the development of the “Starliner” – but due to the many breakdowns and delays, Boeing’s costs have already exceeded six billion dollars.

    NASA sticks to project

    Nevertheless, the project will not be abandoned – not only because the next, decisive flight could still make it a success and let all the mishaps be forgotten, but also because NASA started its commercial spacecraft program more than ten years ago precisely so that one day it would no longer be dependent on a single type of spacecraft from a single manufacturer.

    Although SpaceX has been reliably bringing new crews to the ISS every six months, NASA believes that splitting this business between two companies makes sense. Memories are still fresh of the years when the USA, its space shuttles retired, had no spacecraft of its own to transport men and women to the ISS and had to buy flight opportunities from the Russian space agency.

  • Second completely private mission sets off for the ISS

    On the second completely private mission, a four-person crew set off for the International Space Station ISS for around ten days. For the first time, a woman from Saudi Arabia is on her way into space.

    A four-person crew flew to the International Space Station ISS on a rocket from the US company SpaceX. Former NASA astronaut Peggy Whitson, ex-racing driver John Shoffner, Saudi Arabian astronaut Rayyanah Barnawi and her colleague Ali Alqarni flew into space on board a “Dragon” capsule. They are expected to stay on the ISS for around ten days as part of the “Axiom-2” mission and carry out scientific experiments.

    Passengers have to pay themselves

    The trip is organized by the private space company Axiom Space in cooperation with NASA and SpaceX. According to media reports, the passengers will each pay around 50 million euros for the trip. Axiom Space, founded in 2016 in Houston, Texas, by former NASA manager Michael Suffredini and Iranian-American entrepreneur Kam Ghaffarian, sees itself as a future major player in the space market.

    In April 2022, “Axiom-1” was the first completely private mission to the ISS. At that time, the Spanish-American astronaut Michael López-Alegría, the US entrepreneur Larry Connor, the Israeli entrepreneur Eytan Stibbe and the Canadian investor Mark Pathy flew to the ISS. There had been individual space tourists on the ISS several times before, but the “Axiom-1” mission was the first completely private crew.

    First woman from Saudi Arabia in space

    Researcher Barnawi is the first woman from Saudi Arabia to take part in a space flight. She and Air Force pilot Ali Alqarni are also the first representatives of their country to do so since 1985, when a Saudi prince took off on a space flight aboard the space shuttle Discovery as part of NASA’s space program.

    The mission was led by former NASA astronaut Whitson, who now works for Axiom Space. The first female commander of the ISS holds her country’s record for the longest total stay in space, at 665 days. Also on board was US businessman John Shoffner.

    “Starliner” astronaut flight postponed

    Astronauts were supposed to have been launched to the ISS on Boeing’s “Starliner” long ago. Now the spacecraft will not be ready for its first manned flight until March next year at the earliest.

    Contrary to plans, the Starliner spacecraft will not take off for its first astronaut flight this year. The space capsule will not be ready for launch until early next year at the earliest. This was announced by the US space agency NASA and the US aircraft manufacturer Boeing at a press conference.

    Boeing’s program manager Nappi explained: “According to current plans, we expect to be ready with the spacecraft at the beginning of March.”

    However, this does not mean that there is already a launch date for early March. This will have to be set at a later date and will depend on available slots at the ISS and on the capabilities of the United Launch Alliance, which is supplying the rocket for the launch of the spacecraft.

    More glitches in final test phase

    New technical problems arose during the final test phase in the spring: with the parachute system and with an adhesive tape that proved to be flammable. This was actually intended to protect the capsule’s wiring.

    The first manned flight of the spacecraft was supposed to take place in July. This date had also been postponed several times before.

    So far, only unmanned test flights have been successful

    Almost ten years ago, NASA commissioned Boeing to bring astronauts to the International Space Station (ISS) and back to Earth. This was achieved for the first time in 2022 – but only unmanned. At the time, NASA spoke of a “milestone”.

    Spaceship costs Boeing hundreds of millions of dollars

    NASA wants to use Boeing’s space capsule as an alternative to SpaceX’s Dragon space capsule for transporting astronauts to the ISS. The ongoing delays to the first flight of the “Starliner” have already cost Boeing an additional $257 million, as the US aircraft manufacturer announced last month.

    The “Starliner” is a partially reusable spacecraft that consists of a capsule for the crew and a service module that can be launched with a rocket. Once approved for transporting people, it can carry up to four crew members to the ISS.

    “Starliner” back on Earth

Boeing’s “Starliner” was actually supposed to transport astronauts to and from the ISS a long time ago. But the project is plagued by a number of problems and is far behind schedule. Now a test flight has been successful for the first time.

    The “Starliner” spacecraft, developed by Boeing on behalf of the US space agency NASA, has completed its first successful test flight to the International Space Station ISS. After around four days at the ISS, the unmanned spacecraft undocked from the space station on Wednesday as planned and landed a few hours later in the US state of New Mexico, Boeing and NASA announced.

    It was an important test for the crisis-ridden “Starliner”. The spacecraft is intended to transport astronauts to the ISS in the future – as an alternative to SpaceX’s “Crew Dragon” space capsule.

    Project plagued by problems for a long time

However, due to a number of problems, the project is far behind schedule. The “Starliner” was supposed to have been transporting astronauts to and from the ISS long ago, but during an initial test in December 2019, the spacecraft did not make it to the ISS – partly due to a problem with the automatic ignition of its engines.

    Last year, the tests were postponed several times – and then the “Starliner” finally had to go back to the workshop due to valve problems.

    NASA speaks of “milestone”

    NASA described the successful launch from Cape Canaveral Spaceport last week as a “milestone.” The spacecraft was carrying over 300 kilograms of cargo, including supplies for the ISS crew.

    The Starliner is a partially reusable spacecraft consisting of a capsule for the crew and a service module that can be launched by rocket. Once approved for human transport, it can carry up to four crew members to the ISS.

    Boeing’s “Starliner” launch aborted again

    Boeing’s “Starliner” was supposed to transport astronauts to the ISS long ago. But the project is plagued by crises. Now another launch has had to be canceled.

    A planned first manned test flight of the crisis-ridden spacecraft “Starliner” was aborted again shortly before takeoff. Around three minutes before the space capsule was due to take off from the Cape Canaveral spaceport in the US state of Florida, there was initially a temporary stop, then the launch was canceled entirely by the US space agency NASA.

    The reasons for this were initially unclear. There was probably an error in a computer system on the ground, the space agency said.

    “Starliner” as an alternative to the “Dragon” capsule

The test flight had already been aborted shortly before takeoff about a month ago and has been postponed again and again since then. NASA astronauts Barry Wilmore and Suni Williams were actually supposed to set off for the International Space Station ISS on board the “Starliner” and stay there for about a week. According to NASA, there would be further launch opportunities on Sunday and on June 5 and 6. However, it was initially unclear whether any of them could be used.

    NASA plans to regularly use the “Starliner” alongside SpaceX’s “Dragon” capsule to transport astronauts to the space station. The “Dragon” capsule from multi-billionaire Elon Musk’s company has been transporting astronauts to the ISS on behalf of NASA for several years. The “Starliner” program, on the other hand, has suffered a number of setbacks in recent years.

    SpaceX mission

    Launch of giant rocket “Starship” postponed

Things are not going well with the “Starship” of Elon Musk’s space company SpaceX: after the explosion during the first test flight, there are now problems with the second attempt to launch the largest rocket in the world.

    The second test of the largest rocket system ever built in the history of space travel has been postponed from Friday to Saturday. A drive unit on a control surface – a so-called grid fin actuator – still needs to be replaced, wrote Elon Musk on the short message service X, formerly Twitter, which he owns. “That’s why the launch has been postponed to Saturday.” A grid fin is a component that is intended to stabilize the rocket when it returns to Earth and when it lands. It is brought into position by the actuator.

    SpaceX announced shortly afterwards that a 20-minute launch window for the “Starship” would open at 2 pm Central European Time on Saturday. Musk’s space company also sent a noise warning to the population of the surrounding Cameron County in the US state of Texas: There may be a loud noise from the 33 engines. What one hears on site also depends on the weather and other conditions. The second launch attempt was actually scheduled for Friday afternoon Central European Time.

    Requirements of the supervisory authority before second launch

The “Starship” took off for an unmanned test launch for the first time in mid-April – and four minutes later, it began to tumble, exploded and broke apart. Billionaire Musk, whose company SpaceX developed and built the “Starship” rocket system, did not see this as a setback, but rather as an “exciting test launch”. Musk had always stressed that another attempt with another “Starship” should follow as soon as possible.

    However, after an investigation, the US aviation authority FAA called for numerous improvements to be made before the next test launch could take place. SpaceX has now announced that it has implemented a number of changes. The FAA then gave the green light. The company has now “met all the necessary criteria, particularly with regard to safety, the environment and financial responsibility,” the authority explained.

    Environmentalists have nevertheless opposed the second launch attempt. “We fear that the second launch will once again cause significant environmental damage,” said Jared Margolis, a lawyer for the environmental organization Center for Biological Diversity. A lawsuit is pending against the FAA because of the first “Starship” launch attempt. Several environmental organizations accuse the agency of not having properly assessed the environmental risks posed by the new giant rocket.

    “Starship” to bring astronauts to the moon

    The “Starship” – consisting of the approximately 70-meter-long “Super Heavy” booster and the approximately 50-meter-long upper stage, also called “Starship” – is intended to enable manned missions to the Moon and Mars. The system is designed so that the spacecraft and rocket can be reused after returning to Earth.

    The system, which is about 120 meters long in total, will be able to transport well over a hundred tons of cargo in the future. NASA wants to use the “Starship” to bring astronauts to the moon. SpaceX hopes to reach Mars one day.

    The “Starliner” breakdown list has gotten longer

Boeing develops not only airplanes but also spacecraft. So far, however, the “Starliner” has stood out mainly for its mishaps. The launch date for the first manned flight has been postponed yet again.

    The list of problems with the “Starliner” is long. In December 2019, it was supposed to fly to the International Space Station (ISS) for the first time without a crew. But the spacecraft never arrived at the ISS during this flight. Due to a software problem, the “Starliner” used up too much fuel too early, was unable to reach the ISS’s altitude and had to abort its flight after two days.

    The repeat of the flight in August 2021 ended even before it began. The humid weather in Florida had caused valves in the fuel lines to corrode, so that several of these valves were jammed before takeoff. The “Starliner” rolled from the launch site back to the workshop.

    It was only on the third attempt that the “Starliner” reached the ISS, two and a half years late. However, there was no crew on board for all of these flights. The first flight with people on board was actually planned for this summer – but the American space agency NASA and the Boeing company announced at a press conference that the launch would be postponed until spring 2024.

    Parachutes and faulty adhesive tapes

Two things are holding up the one remaining, crucial qualification flight of the “Starliner” to the International Space Station: the fastenings for the parachutes, and adhesive tape. The fiberglass tape used to bundle electrical cables in the “Starliner” has proven to be flammable under very specific circumstances – something that was not clearly noted in the aerospace engineers’ databases. The tape was installed in good faith. Now it has to be removed.

    With the parachutes, it turned out that connections between the parachute lines and the return capsule could be overloaded.

The design and material of the connection on the parachutes have already been changed. Whether this solves the problem will be determined in a test landing after a drop from high altitude, which cannot take place until November. The unexpectedly flammable tape has already been largely removed from the upper half of the space capsule or covered with non-flammable material. It will take a few more weeks to replace it in the lower half of the “Starliner”.

    “Starliner” still a loss-making business for Boeing

    If the next flight, the first with a crew, is successful, Boeing would be the second company alongside SpaceX to have the license to transport space personnel to the ISS. With an estimated 50 to 60 million dollars per passenger, this would be a lucrative business.

But it remains to be seen whether Boeing will ever make a profit with the “Starliner”, even at these impressive ticket prices. NASA agreed on a fixed price of five billion dollars with Boeing for the development of the “Starliner” – but the many breakdowns and delays have driven Boeing’s costs to more than six billion dollars.

    NASA sticks to project

Nevertheless, the project will not be abandoned – not only because the next, decisive flight could turn the project into a success and let all the mishaps be forgotten, but also because NASA started its commercial spacecraft program more than ten years ago precisely so that it would one day no longer be dependent on a single type of spacecraft from a single manufacturer.

While SpaceX has been reliably bringing new crews to the ISS every six months, NASA believes it makes sense to split this business between two companies. Memories are still fresh of the years when the USA had no spacecraft of its own to transport men and women to the ISS because the space shuttles had been taken out of service, and it had to buy seats from the Russian space agency.

    Possible ISS replacement

Airbus joins private space station project

    Airbus is getting involved in the development of a private space station that could replace the ISS. The aircraft manufacturer will work with the US company Voyager Space to advance the Starlab project.

    The European aircraft manufacturer and aerospace company Airbus is participating in a private space station that could replace the aging International Space Station ISS. Airbus wants to participate in the development, construction and operation of the Starlab station of the US company Voyager Space.

    Starlab to be sent into orbit in 2028

    A joint venture is planned, as Voyager Space and Airbus announced. According to current plans, Starlab is to enter orbit in 2028.

    The space station will provide space for various research laboratories and will therefore play an important role in space research. In addition to Voyager Space, the Hilton hotel group is also involved as a partner in the planning of the residential quarters.

    Replacement of the ISS by several space stations

The ISS is to be gradually dismantled and deorbited at the end of the decade. It is becoming apparent that there will be several stations instead of one huge structure.

The Voyager subsidiary Nanoracks received $160 million from the US space agency NASA at the end of 2021 to develop Starlab. Voyager Space had already commissioned Airbus back in January to provide technical support and know-how for the project.

    Longest ISS mission

Astronauts set an involuntary record

    The astronauts Rubio, Prokopyev and Petelin were actually supposed to stay in space for 180 days. But a glitch delayed their return home. US astronaut Rubio is now especially looking forward to hugging his children.

    US astronaut Frank Rubio and the two Russian cosmonauts Sergei Prokopyev and Dmitri Petelin have landed safely back on Earth – after more than a year in space.

    The three astronauts, who had launched to the ISS on September 21 last year, landed in the steppe of the Central Asian Republic of Kazakhstan on Wednesday in the Soyuz MS-23 space capsule. The Russian space agency Roscosmos showed photos and videos of the picture-perfect landing. The astronauts were reportedly doing well.

    With 371 days in space, it was the longest mission on the International Space Station ISS, according to Roscosmos. However, it was not actually planned that way. The astronauts were supposed to return after six months, but damage to their Soyuz capsule prevented them from returning home as planned. A replacement spacecraft had to be found, which meant that the length of their stay was doubled.

    Only two cosmonauts have spent longer periods in space: Valery Polyakov spent a total of 437 days in space from 1994 to 1995, and Sergei Avdeev spent 379 days from 1998 to 1999. Both had worked on the ISS’s predecessor station, Mir.

    US record

    Meanwhile, American astronaut Frank Rubio set a US record. Due to the involuntary extension of the ISS mission, Rubio stayed in space for 371 days instead of the planned 180 days, thus beating the record of his compatriot Mark Vande Hei.

    “It’s good to be home,” said Rubio. If he had known in advance how long the mission would be, he would not have agreed. “Hugging my wife and children will be the most important thing, and that’s probably what I’ll focus on in the first few days,” said the military doctor and helicopter pilot.

    Damage to the Soyuz capsule

    A leak was discovered in the trio’s Soyuz MS-22 capsule in December, which experts believe was caused by the impact of a micrometeorite. This had damaged the outer coating and a cooling unit. Both Roscosmos and NASA assured at the time that they saw no danger to the operation of the ISS. Due to the damage, the return planned for March was postponed. The Soyuz MS-23 was sent to the ISS from the Baikonur Cosmodrome in Kazakhstan as a replacement spacecraft.

    International Space Station

New crew launched to the ISS

    A four-person crew under the command of American Jasmin Moghbeli is on its way to the International Space Station ISS. The astronauts bring different personal backgrounds with them on board.

    This morning, a four-person crew took off from the spaceport of the US space agency NASA toward the International Space Station ISS. Just under 24 hours later, they are expected to dock with the ISS around 400 kilometers above the Earth. The launch was originally planned for Friday, but was then postponed by a day.

Until now, the US space agency NASA has always had two or three of its astronauts on each of the SpaceX flights from the USA to the ISS. This time it is just NASA astronaut Jasmin Moghbeli. In addition to the commander, Andreas Mogensen from the European Space Agency ESA, the Japanese Satoshi Furukawa and the Russian Konstantin Borisov are also on board. They took off on board the SpaceX capsule “Dragon” from the Kennedy Space Center at Cape Canaveral, powered by a Falcon 9 rocket.

    Different experiences

    Moghbeli was born in Germany and grew up in New York. Her parents fled Iran during the 1979 revolution. She flew combat helicopters in Afghanistan for the US military. Now she is embarking on her first space flight. She wanted to show Iranian girls that they could have big goals, she said before the flight. “Believing in yourself is something very powerful.”

Mogensen used to work on oil drilling platforms off the West African coast. He is sure that this experience led to him being selected as Denmark’s first astronaut. Furukawa worked as a surgeon for ten years before becoming an astronaut. He and Mogensen have both been on the ISS before. Borisov is new to space travel. He runs a freediving school in Moscow and is a judge for the sport. Persian stew, Danish chocolate and Japanese fish will be among the dishes the astronauts take to the ISS.

Seventh rotation of the ISS crew since 2020

A few days after docking, another “Dragon” capsule is scheduled to bring the crew that has been on board until now – the Americans Stephen Bowen and Warren Hoburg, the Russian Andrei Fedyaev and the Emirati Sultan al-Neyadi – back to Earth. A total of seven people are currently on the International Space Station.

According to NASA, the current mission is the eighth manned flight of a “Dragon” spacecraft from multi-billionaire Elon Musk’s SpaceX company. The aircraft manufacturer Boeing was also contracted to carry crews, but has not yet transported any astronauts. Due to ongoing problems, the Boeing crew capsule will remain on the ground until 2024.

According to NASA, this is the seventh rotation of the ISS crew since 2020. The new crew members will conduct research in space for six months. Since the start of the Russian offensive in Ukraine a year ago, the International Space Station has been one of the few areas in which Russia and the USA still work together.

    On the far side

China announces successful landing of lunar probe

    The mission is considered a milestone in Chinese space travel: another probe has landed on the far side of the Moon. “Chang’e 6” is intended to collect soil and rock samples there and bring them back to Earth.

    According to state media, the Chinese lunar probe “Chang’e-6” has successfully landed on the far side of the moon. As the state news agency Xinhua reported, citing the Chinese space agency, “Chang’e-6” landed as planned in the so-called South Pole-Aitken Basin, a huge lunar crater.

The probe is intended to collect rock samples from the far side of the Moon and bring them to Earth – the first time in human history this would be achieved.

    China launched its next mission to explore the far side of the moon at the beginning of May. A carrier rocket carrying the probe named after the Chinese moon goddess took off from the Wenchang spaceport on the southern Chinese island of Hainan on May 3. The technically complex mission is expected to last a total of 53 days. The probe is expected to collect a total of 2 kilograms of rock and soil samples in the South Pole-Aitken Basin and carry out other experiments.

    First landing in 2019

China also wants to become a major power in space travel and is investing billions in its space program. And with success: in 2019, the People’s Republic succeeded in landing a probe on the far side of the moon for the first time with “Chang’e-4”. In the follow-up mission “Chang’e-5” in 2020, China became the third country after the USA and the Soviet Union to bring moon rocks to Earth – albeit from the near side of the moon. Now China wants to be the first country to analyze samples from the far side of the moon.

    Scientists have high hopes for research into the far side of the Moon because its surface is not covered in lava. It could therefore be easier to access rocks that could provide information about the formation of the Earth’s satellite.

    China reaches for moon rocks

It will be one of China’s most difficult space flights to date: a spacecraft is to bring moon rocks to Earth for the first time in 40 years. The mission is also seen as a forerunner of China’s ambitions for Mars.

China has launched an unmanned spacecraft to land on the moon. The flight is intended to bring rock samples back to Earth for the first time in more than four decades. The “Long March 5” rocket took off early Tuesday morning (local time) from the spaceport in Wenchang on the southern Chinese island of Hainan. The spacecraft, named after the Chinese moon goddess “Chang’e”, is expected to set a lander down on the moon on Sunday to collect rock and drill samples.

If the return is successful, China would be the third spacefaring nation to succeed in such a venture, after the USA and the Soviet Union in the 1960s and 1970s. The spacecraft is to land in the volcanic region named after the German astronomer Karl Rümker (1788–1862), located in the “Ocean of Storms” in the north-western part of the side of the moon facing the Earth.

Two weeks’ time

    The mission is considered one of the most complicated that China’s space program has ever undertaken: for the first time, a Chinese ascent stage would launch from the Moon, take rock samples with it and perform a docking maneuver in the orbit of the Earth’s satellite before the return capsule flies back to Earth. The landing module can only stay on the lunar surface for one lunar day – about 14 Earth days – because it cannot be heated and cannot withstand the freezing cold nights.

    The 8,200 kilogram spacecraft consists of four modules: the orbiter with the return capsule and the lander with the ascent stage. After touching down on the lunar surface, the lander will use a long arm to collect around two kilograms of lunar rock and samples from boreholes up to two meters deep and store them in a chamber.

    Researchers hope that the samples will provide important new insights into the volcanic activity and history of the Moon. The USA’s Apollo missions brought back around 380 kilograms of moon rock. The Soviet Union collected a total of 300 grams – most recently with the unmanned “Luna 24” mission in 1976, when around 170 grams of moon dust were brought to Earth.

    Reaching for Mars?

    If the mission is successful, it would be a major step forward for China’s space program. Some experts believe it could pave the way for a similar mission to Mars or even a manned mission to the Moon.

    Astronomer Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics pointed out that China has already landed on the moon twice with the “Chang’e 3” and “Chang’e 4” missions. With a “Chang’e 5” test mission in 2014, it proved that it can navigate back to Earth and land a capsule there. Now it just needs to show that it can take samples on the moon and launch from there again. “That’s why I’m pretty optimistic that China can pull it off,” said McDowell.

    China is pursuing an ambitious space program with missions to the Moon and Mars as well as the construction of its own space station. In January 2019, China became the first spacefaring nation to land “Chang’e 4” on the relatively unexplored far side of the Moon. A rover was deployed to continue exploring the surface.

    Numerous moon missions planned

The renaissance of manned space travel

    63 years ago today, the first human flew into space – the first manned mission to the Moon followed a few years later. Now several countries are planning to return to Earth’s satellite, first with probes and rovers, then with humans.

    The USA, China, Russia, India – several countries want to return to our Earth’s satellite. The race to the moon is in full swing. It’s about prestige, science and economic interests. 55 years after NASA flew astronauts to the moon for the first time, the return is imminent. In 2025, the USA wants to orbit the moon with four astronauts on the “Artemis II” mission. The manned moon landing is planned for 2026.

“Astronautical space travel is currently experiencing a renaissance,” says ESA Director General Josef Aschbacher. Humans will be irreplaceable in the exploration of the Moon, together with robotics and artificial intelligence: “You need a good combination of both. A human exploring the terrain on the Moon, for example, can see more than a robot with all imaginable sensors and cameras. They think much further ahead and far more holistically than all the software and robots we can program. Apart from that, an astronaut can cover much greater distances to explore areas than a rover.”

    Robots lack scientific intuition

    ESA astronaut Alexander Gerst refers to the American “Viking” probes that flew to Mars in the 1970s to clarify, among other things, the question of whether there is water on the Red Planet.

He explains: “Traces of water have actually been found, but we now know that water ice is often found directly beneath the surface. If the digging arm had dug just a few centimeters deeper, it would probably have hit bare ice.” Robots simply do not have the scientific intuition that trained astronauts have.

    NASA plans new lunar vehicles

NASA has just announced that it has awarded three space companies contracts to develop a lunar vehicle. It is still unclear whose design will ultimately be chosen, but one thing is clear: the so-called LTV (lunar terrain vehicle) will be able to travel across the lunar surface with and without astronauts and cover long distances. The NASA press release states:

    The LTV will be able to cope with the extreme conditions of the Moon’s south pole and will feature advanced technologies for energy management, autonomous driving, and state-of-the-art communications and navigation systems. Crews will use the LTV to explore, transport scientific equipment, and collect samples from the lunar surface much further than they could go on foot, allowing for greater scientific yield.

    The first man in space

    63 years ago today, Soviet cosmonaut Yuri Gagarin became the first person to orbit the Earth. His flight in the Vostok 1 space capsule lasted one hour and 48 minutes. Since then, generations of astronauts have flown into space. With the construction of the International Space Station (ISS) in 1998, a certain routine set in. Long-term stays of six months are common practice. However, the planned moon flights now present the astronauts with completely different challenges.

    Astronaut Gerst says that the planned moon flights will open a completely new chapter in astronautics: “The planned moon missions are complex, with many logistical steps. You have to land there and are trapped on the moon by its gravity. The environment is hostile to life, with temperatures well below minus 100 degrees on the night side and well over 100 degrees on the day side, plus space radiation and vacuum.”

    In contrast to the “Apollo” missions, the “Artemis” missions are now flying to the moon to conduct sustainable science in international cooperation. The first moon flights between 1969 and 1972 were national solo efforts by NASA and took place during the Cold War. USA against the Soviet Union – back then it was about prestige and the fight for spheres of influence, with astronauts as heroes of the respective nation.

    Reasons for returning to the moon

    Why do so many nations actually want to go back to the moon? After the Apollo missions, NASA stopped the program. Today it is clear: the moon is more than just a collection of dead rock.

    ESA Director General Aschbacher explains: “With lunar exploration, completely new economic sectors will develop in the future. Mineral resources will be mined. There is ice on the moon, which can be converted into rocket fuel or oxygen for astronauts. It’s about setting up a camp in which to conduct research. Research and economic aspects will go hand in hand. We don’t yet know what the moon has to offer.”

    For him, there is no question that Europe must also participate in the exploration of the moon in order not to be left behind technologically and to remain competitive: “What is at stake is how Europe positions itself in the next ten, twenty years, as a continent that develops technology. We have excellent technologies in the automotive sector, in mechanical engineering, in many other areas, and space travel will be a domain of the future.”

    Space travel is expensive

    Aschbacher justifies the immense costs of space travel and explains that the ESA is not only investing money in the training and flights of astronauts:

    “We must not forget that ESA spends about half of its money on Earth observation, navigation and telecommunications. Our focus is on planet Earth. NASA has focused more on exploration, i.e. on astronaut space travel, to the Moon and Mars. This shows the slightly different priorities of the space agencies.” He calculates that the ESA budget in 2024 will be around 7.8 billion euros – money that will be used to provide a lot of technology, but also information about the Earth, the universe and the existence of humanity.

    Astronautic space travel is considered THE symbol of exploration in space. ESA astronaut Gerst is aware of the responsibility that his work is financed with public money. He compares the moon missions with the exploration of Antarctica: “In the beginning, it was about putting up a flag at the South Pole, with great political support. Then there was little research for decades, until there was a big push in the middle of the 20th century because people realized that there were scientific reasons to explore Antarctica. We are now facing this second wave on the moon.”

    Preparing for the moon

    Gerst does not yet know when or if he will be one of the astronauts who will fly to the moon. He says that he is head of the European astronaut corps for the new generation of ESA astronauts, but that he is preparing himself for a possible moon mission:

    “I have regular training sessions planned to prepare myself and my colleagues for such missions. For example, working in a space suit or flight training. We also practice conducting science in difficult environments, such as in the Antarctic, in underground caves or in volcanic areas.”

    The moon in the distance?

    On Monday, the US space probe “Peregrine” set off for the moon, but it will never arrive there. Now the next setback: NASA is postponing a manned flight to the moon. What is the status of the current missions?

    Everything started well. The launch of “Peregrine” on Monday was supposed to usher in a “new era of space travel to the moon and beyond,” the spokeswoman said during the lift-off broadcast. The rocket worked perfectly – but its payload, the probe on top of it, did not.

The probe’s on-board engines are acting up so badly that they make a landing on the lunar surface impossible.

    German lunar experiment delivers data

    But even without the crowning achievement of a successful landing on the moon, the mission is already a success for the German Aerospace Center (DLR). There, in the Department of Radiation Biology, Thomas Berger is pleased with the progress of the mission so far.

The DLR contributed an experiment called M42 – named after the Orion Nebula in the night sky – to the “Peregrine” mission. “It measures the radiation on the way to the Moon and is now sending this measurement data back to Earth,” Berger said. The instruments will probably keep operating until Thursday, when “Peregrine” runs out of fuel. Even without a landing, at least this part of the mission was a success.

But the bad news from the moon is not letting up: yesterday, the US space agency NASA announced that it will not send four astronauts around the moon at the end of this year after all – a flight that would have been the first of its kind since the 1970s. Just over a year ago, everyone saw the test flight of “Artemis I”, said Bill Nelson. “It was so successful that further tests are necessary,” was the strange logic of the NASA boss.

    No need for haste

    Amit Kshatriya from NASA’s Exploration Systems Division explains exactly what went wrong: “During the return to Earth, more material came loose from the heat shield of the Orion crew capsule than the engineers had expected. We are currently investigating the cause and hope to have completed our investigations in the spring.”

“Artemis I” took place in December 2022. Since then, NASA has been studying the heat shield. And studying. And studying. NASA chief Nelson justifies these delays: “In order to give the Artemis teams more time, we will not launch ‘Artemis II’ this year, but towards the end of next year.” The follow-up mission “Artemis III” is not scheduled to bring people to the lunar south pole until the end of 2026.

    Other countries overtake the USA

The delays have dragged on for years, for all sorts of reasons that a private space company like SpaceX might well have resolved within weeks. Elon Musk’s company plans to send its new Starship into space for the third time next month.

After these latest delays, it is all the more questionable whether the next people on the moon will have a Stars and Stripes flag in their luggage. Not for Bill Nelson, though: “I’m not worried that China could land on the moon before us,” says the NASA administrator.

    China has a very ambitious program. And sending people to the moon before the USA would be a huge PR coup. “But I don’t think they’ll manage it.” Neither the Americans nor the Chinese are next anyway: On January 19, the unmanned Japanese probe SLIM will land on the moon – or not.

    Third test flight unsuccessful

    SpaceX reports loss of giant rocket

    The third test flight of the gigantic SpaceX rocket “Starship” has also failed. It was apparently destroyed during its re-entry into the Earth’s atmosphere. One day it will transport people to the moon and Mars.

The largest rocket system ever built was again unable to complete a test flight, its third. The unmanned “Starship” made it into space after launching from SpaceX’s Starbase spaceport in Boca Chica, Texas. However, it was destroyed upon re-entering the Earth’s atmosphere, according to commentators on the live stream from the space company SpaceX.

A test flight lasting around an hour was planned, after which both rocket stages were to land in the sea. The lower rocket stage attempted its landing but failed, as the commentators reported: “We lost the rocket stage, so we have to look at the data again.”

    More goals achieved than in previous tests

The upper stage of the rocket reached space for the first time, where it remained for about half an hour before being lost during re-entry into the Earth’s atmosphere. This means that more goals were achieved in the third test than in the previous two.

    In the first test last April, the entire rocket system exploded after just a few minutes. In the second test in November, the two rocket stages were separated and the upper one continued to fly, but both exploded separately shortly afterwards. SpaceX always emphasizes that the aim of the tests is to collect data.

    Manned missions to the Moon and Mars

    In the future, the system, which will be around 120 meters long, will be able to transport well over 100 tons of cargo. It consists of a booster around 70 meters long and an upper stage around 50 meters long and is intended to enable manned missions to the Moon and Mars. The US space agency NASA also plans to use “Starship” for its Artemis moon landing mission.

    The system is designed so that the spacecraft and rocket can be reused after returning to Earth. This should make space missions cheaper.

    SpaceX to build new lunar module

It would be the first manned flight to the Earth’s satellite since 1972: the US space agency NASA has commissioned Tesla founder Elon Musk’s company SpaceX to build a lunar module.

    NASA continues to rely on Tesla founder Elon Musk for manned space travel. The US space agency commissioned Musk’s company SpaceX to build the next lunar module for flights to the Moon – the first since the Apollo missions 50 years ago.

    SpaceX is already using the Falcon 9 to transport astronauts to the International Space Station ISS for NASA. The special thing about Musk’s lunar module “Starship” is that all parts of the carrier system are reusable, which will significantly reduce the costs of each individual flight in the medium term.

    The price was apparently also an argument for NASA: According to the Washington Post, SpaceX made the cheapest offer at just under three billion dollars – and thus beat the competition, including “Blue Origin”, the space company of Amazon founder Jeff Bezos.

    Test flights not yet successful

    However, there is still a lot to do before the “Starship” can actually fly people to the moon as part of the Artemis mission. The prototype’s previous test flights either ended in a crash landing or the lander exploded in mid-air.

    It remains unclear when the first manned flight to the moon since 1972 will take place. Former US President Donald Trump had set NASA a deadline of 2024. The government of his successor Joe Biden is currently having the schedule reviewed again.

    Editor’s note: A previous version mentioned a moon rocket that Musk is supposed to build for NASA. However, this is the part of the rocket that is supposed to land on the moon’s surface at the end, i.e. the lunar module.

    Life like on the Red Planet

    Humans have reached Mars: The Red Planet is located in Texas and measures 160 square meters. With the help of a one-year simulation, NASA wants to prepare for a real Mars mission – even if it is still a long way off.

An area of red sand that is intended to simulate the Martian landscape. Four bedrooms, a living room, several work units as well as a mini gym, a vertical farm for growing vegetables and a medical area. Video recordings from Reuters and other media provide insights into the so-called Mars Habitat Simulator, in which four volunteers are to simulate life on the Red Planet for a year.

    NASA wants to investigate how people behave in total isolation far away from Earth, says the head of the research project, Grace Douglas, to the news agency Reuters.

    The “Crew Health and Performance Exploration Analog” (CHAPEA) project was developed as a simulation of a one-year stay on Mars. The intention is to observe a crew in isolation with many restrictions. “We want to understand how these restrictions affect their health and performance,” Douglas said.

    Mars on 160 square meters

    The nearly 160 square meter “Mars Dune Alpha” comes from the 3D printer and was built in a converted hangar at the Johnson Space Center in Houston, Texas. The experiment is intended to prepare for a future Mars mission, says NASA scientist Suzanne Bell: “In addition to the isolation and the confined space, we are also studying how the four test subjects get along with each other. When you live in a confined space with three other people, it can be exhausting.”

The four paid volunteers will have to live under realistic Mars conditions: little water, no fresh air, very little luggage and very limited contact with friends and family – just as people might live on a Mars station in the distant future.

    Walking on Mars via Virtual Reality

    The only thing that cannot be simulated in “Mars Dune Alpha” is the gravitational pull of Mars, says Douglas in an interview with the television station Fox 13 News.

    A special zone has been set up for this purpose, adds NASA human performance engineer Alex Baughman: “Here we have created a space where we can take space or Mars walks using virtual reality and carry out simulated Mars experiments. For example, taking geological samples, removing dust from space suits or solar panels, or making repairs to the house.”

    Volunteers must meet astronaut requirements

The four people who have volunteered to be locked in the Mars house for 365 days will be announced in the next few weeks. According to NASA, they must meet the standard requirements for astronauts.

In case of serious conflict in the Mars house, or if one of the four gives up, two replacement astronauts are ready to step in. The simulation is set to begin in June, with two more experiments to follow. Things will get serious in 2040: NASA boss Bill Nelson is already dreaming of sending a manned spacecraft to Mars by then at the latest.

    “Pizza and pastries are forbidden”

    What can ESA astronaut Matthias Maurer expect from his six-month stay on the ISS? Before his launch with SpaceX, he told tagesschau.de what he plans to do on board – and what he will have to do without.

    tagesschau.de: Mr. Maurer, your start date has been postponed several times. How did you spend the waiting time?

    Matthias Maurer: In the past year and a half, I have hardly had the opportunity to go on holiday due to the intensive training in preparation for my mission. So I have used the waiting time to relax a bit. Of course, we are continuing to train: we are doing sports, practicing various tasks again, such as “catch and capture”, where we virtually operate a robot arm and similar things. Of course, we are also going through various procedures in detail. A few days ago, we had an emergency training session on how to behave if something were to happen after the Dragon capsule took off, such as a fire breaking out or a loss of pressure in the Dragon.

As a crew we obviously spend a lot of time together, but we also had the opportunity to see our families a few times. When things got a bit boring, we started collecting plastic waste on the beach. We have a so-called beach house available to us, with a private section of beach that is not open to the public. The beach house sounds great, by the way, but it is really more of a conference center. The mood within the crew couldn’t be better. We are of course looking forward to things getting started soon, but we are not letting frustration or impatience build up. We came to an agreement on that very quickly as a crew.

    tagesschau.de: Do you also feel some excitement before the start?

    Matthias Maurer: Not yet, actually. I think the excitement comes when the hatch is closed and we hear the rocket being filled. That makes a lot of noises, and we’ve been prepared for that. But of course we haven’t really heard or felt it yet. I’ve already been in my capsule once. We had to test whether the spacesuit fits with the connections in the capsule.

    Weightlessness – also a problem for astronauts

    tagesschau.de: Despite the long preparation: Are you afraid that you won’t be able to cope with the weightlessness in space?

Maurer: According to statistics, 85 percent of my colleagues don’t tolerate weightlessness well at first. So I am actually quite sure that I won’t tolerate it well for the first three days either. It will be like being on a ship in a bit more swell than you are used to. But we take medication for motion sickness, which means I will be able to dampen the effects. With a bit of luck, I will be one of the 15 percent who laugh and say: it’s not that bad. But I am prepared to suffer for a few days. That is part of it.

    tagesschau.de: The weather has to be right for your launch. And not just in Florida, but also over the Atlantic west of Ireland. Why is that?

    Maurer: We’ll take off from Cape Canaveral and then fly parallel to the US coast towards Canada. Then we’ll continue towards England and Ireland. And when we reach the airspace over England, we’ll be in orbit. But if we abort the launch, we’ll fall into the sea. That’s why the weather has to be good at the landing sites where we’re planning to abort the launch. Even in the best conditions in Florida, the launch could be delayed due to bad weather in the emergency landing areas.

    Experiments that can only be carried out in space

    tagesschau.de: Let’s talk about your work in space, such as the experiments. Why is it so important to experiment in space?

Maurer: Gravity here on Earth produces physical effects that do not occur in space. If you take water and pour some oil over it, for example, an oil film will float on top of the water here on Earth because the oil is lighter. In space, the oil film doesn’t float but behaves differently because of the weightlessness. We want to look into this and study additional interactions between the water and the oil, such as surface energy or interface effects. I have a lot of experiments with me in my luggage, and we’re also doing combustion experiments, for example, because combustion also works differently in space than on Earth.

    tagesschau.de: On the ISS you also practice for long-term stays on the Moon and Mars. Can you give us an example?

    Maurer: Bringing resources to the ISS is already expensive. You can assume that every liter of water that we fly up costs around 10,000 euros – for transport costs and the entire logistics chain that goes with it. If we had to bring water to the moon, that is even more expensive. That means: every drop of water that we can actively save helps to make the mission possible and sustainable. So we need devices that create closed cycles. The water that we drink on the ISS becomes urine, which is then purified again to make drinking water. At the moment we recycle almost 90 percent of the water – that’s very good, but it’s not 100 percent yet. We have to get there.
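    Maurer’s figures invite a quick back-of-the-envelope calculation. The sketch below is purely illustrative: it uses the 10,000 euros per liter and the roughly 90 percent recycling rate he mentions, plus an assumed 2.5 liters of water per person per day for a seven-person crew (my assumption, not a figure from the interview), to show how strongly the recycling rate drives annual resupply costs.

```python
# Illustrative only: resupply cost for ISS drinking water at different
# recycling rates, using Maurer's ~10,000 euros/liter transport cost.
# The 2.5 L/day per person and the 7-person crew are assumptions made
# for the sake of the example, not figures from the interview.

TRANSPORT_COST_PER_LITER = 10_000  # euros per liter flown to the ISS
DAILY_WATER_PER_PERSON = 2.5       # liters/day (assumed)
CREW_SIZE = 7                      # people aboard (assumed)

def annual_resupply_cost(recycling_rate: float) -> float:
    """Euros per year to replace the water not recovered by recycling."""
    daily_loss = DAILY_WATER_PER_PERSON * CREW_SIZE * (1 - recycling_rate)
    return daily_loss * 365 * TRANSPORT_COST_PER_LITER

for rate in (0.0, 0.90, 0.98):
    print(f"recycling {rate:.0%}: ~{annual_resupply_cost(rate):,.0f} euros/year")
```

    In this toy model, even the step from 90 percent recovery to near-total recovery is worth millions of euros per year, which is the point of Maurer’s “we have to get there”.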

    “People on the ground should become part of my mission.”

    tagesschau.de: You have been asked a lot in recent weeks what you are taking with you to the ISS. So let’s turn the question around: What would you like to take with you but are not allowed to?

Maurer: We have a few holidays up there – Christmas, and my birthday. A glass to toast with, the kind of thing you drink from at a party, would have been nice, but we’re not allowed to take that with us. Even anything crunchy, like pizza or pastries, is forbidden – because of the crumbs. Since they don’t fall to the floor up there, we could inhale them, and then they would be a real danger.

    tagesschau.de: Your German predecessor on the ISS always shared his mission with us through photos he posted from space. Do you plan to do the same?

Maurer: When I applied to become an astronaut in 2008, I had no idea that social media would become so popular. Now it’s part of the astronauts’ job. Luckily, I have a team that supports me. But of course I have to “feed” this team from above: I have to and will create images and video clips and put the emotions I have up there into words. I do this not only because I have to, but also because photography is one of my hobbies and hopefully the people on the ground will also become part of my mission.

  • The hardware and build quality of the Apple Vision Pro are undeniably impressive

    I attempted to rely solely on the Vision Pro for my work for a week, and it surpassed my expectations once I connected it to my laptop.

    The Apple Vision Pro is the most remarkable mixed reality headset I have ever utilized. It is enjoyable for gaming and watching movies, and it has impressive eye and hand tracking capabilities. However, at a price of $3,500, one would expect it to offer more than just entertainment.

    Considering its cost is equivalent to that of a fully equipped MacBook, one would hope to be able to use it for productivity purposes.

    I have been using the Vision Pro for a few months, and for the past week, I have been attempting to use it in lieu of my traditional PC setup to assess its productivity potential. The positive aspect is that the Vision Pro possesses the capability and adaptability to function as a virtual office.

    The downside is that additional equipment, including a MacBook, is required to fully utilize its potential.

    To facilitate productivity, the addition of a MacBook is necessary.

Initially, I attempted to work using the Vision Pro without any additional equipment. This appeared feasible since its M2 chip puts it in the same performance class as the iPad Pro, the 2022 MacBook Pro, and the 2023 MacBook Air, and its visionOS is based on both iPadOS and macOS. However, its design and compatibility lean more towards the iPad.

    The Vision Pro encounters similar challenges as the iPad Pro when it comes to serious work, and these challenges are even more pronounced on the headset. iPadOS presents difficulties with multitasking and managing multiple apps simultaneously.

    Managing window placement, multitasking with multiple apps and desktops, and even simply knowing which apps are open is extremely challenging without a task manager or a macOS-like dock with indicators for running apps.

    The iPad Pro has a dock without indicators, but the Vision Pro lacks a dock altogether; users need to access an iPhone-like app list to browse apps, and it does not indicate which apps are open.

    In summary, I do not recommend relying solely on the Vision Pro for work purposes.

    To simplify the process, I utilized a MacBook Air and tested the Mac Virtual Display feature. Connecting to the Mac via Mac Virtual Display is straightforward, although not as seamless as Apple claims.

    By simply looking up while wearing the Vision Pro, the menu can be accessed, settings can be opened, and the Mac Virtual Display icon can be selected. If both the Mac and Vision Pro are on the same Wi-Fi network and logged into the same Apple account, the Mac can be selected and connected instantly.

The process is fast and simple, and I have no complaints about it. However, it is supposed to be even more streamlined, with the Vision Pro displaying a large “Connect” button floating over the Mac when you look at it.

    I have seen the button appear a few times, but not consistently, and most of the time it does not appear. Nevertheless, manually connecting through the quick menu is almost as smooth.

    Once connected, the Mac Virtual Display presents the Mac’s screen as a floating window that can be repositioned and resized within the headset. Although smart glasses like the Rokid Max and the Viture One, which cost a sixth of the price, offer similar functionality, the Vision Pro has distinct advantages.

Firstly, the Mac Virtual Display window can be moved and resized freely. Whether you want it to float just above your MacBook or cover your wall like a large TV, it is easy to position, and it stays exactly where you put it even if you get up and move around.

    The Vision Pro surpasses other smart glasses by allowing the use of apps while using Mac Virtual Display.

    While multitasking on the Vision Pro alone is challenging, being able to manage all your essential tools in macOS on one large screen while simultaneously having a video window open to the left and a chat window open to the right makes it easy.

Keyboard and mouse control worked well while connected to the MacBook, although I couldn’t use my mouse outside of the Mac Virtual Display window because the Vision Pro itself doesn’t support any form of mouse input.

    However, the Magic Trackpad can be utilized between the MacBook screen and Vision Pro apps by swiping between them.

    Importantly, physical keyboard input from the MacBook was translated to the Vision Pro. I could type in my MacBook apps and then switch to a separate app on the Vision Pro and start typing there with the same keyboard.

    Using your eyes and fingers to type on the Vision Pro’s virtual keyboard is acceptable for a few words, but for longer sentences, a physical keyboard is necessary.

    Coming from a PC setup with an ultrawide monitor and previously using two monitors, I was disappointed to discover a significant limitation in Mac Virtual Display: only one screen is available.

Even with multiple desktops through macOS’ Mission Control, they cannot be distributed across multiple windows on the Vision Pro. You can still place other apps around you and run them alongside the Mac Virtual Display window, but you’re limited to Vision Pro apps.

    On the positive side, you can choose from various resolutions including 4K and 5K (5,120 by 2,880), surpassing the 2,560-by-1,440 screen of my MacBook Air.
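    For perspective, a little pixel arithmetic with the figures quoted above shows why the 5K option matters: it carries exactly four times the pixels of the 2,560-by-1,440 panel. A minimal sketch, treating “4K” as 3,840 by 2,160 (my assumption; the resolution the headset actually uses for that setting isn’t specified here):

```python
# Illustrative pixel-count comparison of the Mac Virtual Display options
# mentioned above. "4K" is assumed to mean 3,840 x 2,160 here.
resolutions = {
    "MacBook Air panel": (2560, 1440),
    "4K (assumed)": (3840, 2160),
    "5K": (5120, 2880),
}
base_w, base_h = resolutions["MacBook Air panel"]
base = base_w * base_h
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x the MacBook Air)")
```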

    Less significant but still somewhat irritating, the Mac Virtual Display connection doesn’t detect the Vision Pro’s Persona feature as a webcam feed. If you take a video call on the MacBook, others will only see your headset-covered face.

    To use Persona for calls, you need a browser window or a videoconferencing app running on the Vision Pro itself.

    It took some experimentation to figure out the best configuration for me, but I ultimately settled on the Mac Virtual Display in front of me, a Safari window behind it for taking video calls with Persona, a few Vision Pro communications apps to my right, and the Television app showing a virtual screen playing music to my left.

    I really enjoyed working in this virtual office. Even with only one screen for my tools on the laptop, being able to make it as big as I wanted and place it anywhere around me was a huge advantage.

    I could still run browsers, communications software, and other apps outside of the Mac Virtual Display window through the Vision Pro itself, and they all worked together very well.

Keyboard controls between apps were generally very smooth, and my clipboard was shared between the Vision Pro and the MacBook, allowing me to copy a URL from a message and drop it on my desktop (which came in handy for iCloud links with large Vision Pro recordings).

    The experience wasn’t perfect, and I encountered some hiccups. Occasionally, the Mac Virtual Display window would indicate that the connection was interrupted.

    Interestingly, this didn’t prevent me from using the MacBook through the Vision Pro, but it did stop my keyboard inputs from registering in Vision Pro apps until the error message disappeared.

    Chrome on the MacBook consistently crashed when I removed the Vision Pro, which didn’t happen when I physically closed the laptop or manually disconnected from it. These are relatively minor inconveniences that can be smoothed out over time.

    One issue you’ll likely face when working on the Vision Pro is the discomfort of long-term use. While the Vision Pro can run indefinitely when plugged in and the MacBook can last a solid 16 hours without power, I could only tolerate wearing the headset for 90 minutes at a time.

    Removing it after that duration left me with a bit of eye strain and a headache for a short period. The 20-20-20 rule of looking away from a screen at something 20 feet away for 20 seconds every 20 minutes is even more important for a view-replacing headset like the Vision Pro.
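    Because it is easy to lose track of time inside a headset, a trivial timer can stand in for the 20-20-20 rule described above. This is a toy sketch, not a recommendation of any particular tool; the interval constants simply mirror the rule:

```python
# Toy reminder loop for the 20-20-20 rule: every 20 minutes, look at
# something about 20 feet away for 20 seconds. Purely illustrative.
import time

WORK_INTERVAL_S = 20 * 60  # 20 minutes of screen time
BREAK_LENGTH_S = 20        # 20 seconds of looking into the distance

def remind(cycles: int = 3) -> None:
    for i in range(1, cycles + 1):
        time.sleep(WORK_INTERVAL_S)
        print(f"Break {i}: look at something ~20 feet away for 20 seconds.")
        time.sleep(BREAK_LENGTH_S)
        print("Back to it.")

if __name__ == "__main__":
    remind()
```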

    Following a demonstration lasting approximately 30 minutes that covered the key features available for testing, I left with the firm belief that Apple has introduced a significant advancement in the capabilities and implementation of XR, or mixed reality, with its new Apple Vision Pro.

To clarify, I am not asserting that it fulfills all its promises, introduces a genuinely new computing paradigm, or achieves any of the other high-reaching goals Apple has set for its release. That would require ample time with the device beyond a guided demonstration.

    However, I have experience with nearly every major VR headset and AR device since the Oculus DK1 in 2013 up to the most recent generations of Quest and Vive headsets. I have explored all the experiences and attempts to popularize XR.

    I have witnessed both successful social, narrative, and gaming experiences such as Gorilla Tag, VRChat, and Cosmonius, as well as emotionally impactful first-person experiences created by Sundance filmmakers that shed light on the human (or animal) condition.

    Nevertheless, none of them possess the advantages that Apple brings to the table with Apple Vision Pro, including 5,000 patents filed over the past few years and access to a vast pool of talent and capital.

    Every aspect of this device reflects Apple-level ambition. Whether it will become the “next computing mode” remains uncertain, but the dedication behind each decision is evident. No corners have been cut, and full-fledged engineering is on display.

    The hardware is impressive — with 24 million pixels spread across the two panels, significantly more than what most consumers have encountered with other headsets. The optics are superior, the headband is comfortable and easily adjustable, and there is a top strap for alleviating weight.

    Apple has stated that it is still deliberating on which light seal (the cloth shroud) options to include when it officially launches, but the default one was comfortable for me. They intend to offer variations in sizes and shapes to accommodate different face shapes.

    The power connector features a clever design as well, using internal pins for power with an external twist lock to secure the connection.

    For individuals with varying vision requirements, there is also a magnetic solution for some (but not all) optical adjustments. The onboarding experience includes automatic eye-relief calibration that aligns the lenses with the center of your eyes, eliminating the need for manual adjustments.

    The main frame and glass piece look satisfactory, although it’s worth noting that they are quite substantial in size. Not necessarily heavy, but certainly noticeable.

    If you have any experience with VR, you are likely aware of the two significant obstacles that most people encounter: nausea caused by latency and the sense of isolation during prolonged sessions wearing a device over your eyes.

    Apple has directly addressed both of these challenges. The R1 chip, working alongside the M2, streams fresh images to the displays within roughly 12ms, and I observed no judder or frame drops. While there was a slight motion-blur effect in passthrough mode, it was not distracting. The windows rendered sharply and moved swiftly.

    Naturally, Apple’s ability to mitigate these issues stems from a plethora of entirely new and original hardware. Every aspect of this device showcases a new idea, a new technology, or a new implementation.

    However, all these innovations come at a cost: at $3,500, the device sits above even high-end expectations and lands firmly in power-user, early-adopter territory.

    Here’s what Apple has accomplished exceptionally well compared to other headsets:

    The eye tracking and gesture control are nearly flawless. The hand gestures are detected from anywhere around the headset, including on your lap or resting low and away on a chair or couch. Many other hand-tracking interfaces require you to keep your hands raised in front of you, which can be tiring.

    Apple has incorporated high-resolution cameras on the bottom of the device dedicated to tracking your hands. Similarly, an eye-tracking array inside ensures that, after calibration, nearly everything you look at is precisely highlighted. A simple, low-effort tap of your fingers and it works.

    Passthrough plays a crucial role. It’s vital to have a real-time 4K view of the surrounding environment, including any people nearby, when using VR or AR for extended periods.

    Most people have a primal instinct that makes them extremely uneasy when they can’t see their surroundings for an extended period.

    Having the ability to see your surroundings through the image should increase the likelihood of longer usage times. Additionally, there’s a clever mechanism that automatically displays a person approaching you through your content, alerting you to their presence.

    The exterior eyes, which change appearance based on your activity, also serve as a helpful cue for those outside.

    The high resolution ensures that text is easily readable. Apple’s positioning of this as a full-fledged computing device only makes sense if the text is legible.

    Previous “virtual desktop” setups relied on panels and lenses that presented a blurry view, making it difficult to read text for an extended period.

    In many cases, it was physically uncomfortable to do so. With the Apple Vision Pro, text is incredibly sharp and readable at all sizes and distances within your space.

    There were several pleasantly surprising moments during my brief time with the headset. Apart from the display’s sharpness and the responsive interface, the entire suite of samples demonstrated meticulous attention to detail.

    The Personas work. I had serious doubts about Apple’s ability to create a functional digital avatar based solely on a scan of your face using the Vision Pro headset. Those doubts were unfounded.

    I would say that the digital version it creates for your avatar in FaceTime calls and other areas successfully bridges the uncanny valley.

    It’s not flawless, but the skin tension and muscle movement are accurate, and machine learning models drive a full range of facial expressions. The brief interactions I had with a live person on a call (and it was live; I verified by asking off-script questions) did not feel unsettling or strange. It worked.

    It’s sharp. I’ll reiterate, it’s extremely sharp. It handles demos like the 3D dinosaur with incredible detail down to the texture level and beyond.

    3D movies look great on it. Jim Cameron probably had a moment when he saw “Avatar: Way of Water” on the Apple Vision Pro.

    This device is perfectly designed to showcase the 3D format — and it can play those movies almost immediately, so there will likely be a substantial library of 3D films that will breathe new life into the format.

    The 3D photos and videos you can capture directly with the Apple Vision Pro also look excellent, but I didn’t have the chance to capture any myself, so I can’t comment on the experience. Awkward? Hard to say.

    The setup process is simple and seamless. A few minutes and you’re ready to go. Very Apple.

    Yes, it’s as impressive as it looks. The output of the interface and the various apps is so remarkable that Apple used them directly from the device in its keynote.

    The interface is vibrant and bold and feels present due to its interaction with other windows, casting shadows on the ground, and reacting to lighting conditions.

    Overall, I’m cautious about making sweeping claims regarding whether the Apple Vision Pro will deliver on Apple’s promises about the advent of spatial computing.

    I’ve had too little time with it, and it’s not even finished — Apple is still refining aspects such as the light shroud and various software elements.

    However, it is undeniably well-executed. It represents the ideal XR headset. Now, we’ll have to wait and see what developers and Apple achieve over the next few months and how the public responds.

    A recent leak suggests that mass production of the Apple Vision Pro 2 is on the way.

    The Apple Vision Pro 2 is scheduled to commence mass production in 2025, despite previous reports indicating otherwise. The original Vision Pro, Apple’s AR headset, did not perform well in the market, with sales struggling to reach 100,000 units by July 2024.

    Apple intends to introduce new features to enhance the popularity of the sequel. One of these features is a new M5 chipset, expected to enhance the headset’s performance.

    Contrary to earlier rumors of production cessation due to low demand for the original Vision Pro, analyst Ming-Chi Kuo from TF International Securities believes that mass production of the new M5 chipset-equipped AR headset will begin in the second half of 2025. Apple aims to make the Vision Pro 2 more cost-effective, potentially appealing to a broader customer base.

    Kuo also anticipates minimal, if any, changes to the design of the AR headset, which would reduce production costs. This strategic move would leverage the fresh and appealing design of the Vision Pro, featuring the innovative EyeSight external display and a modern, futuristic, high-end aesthetic.

    New chip, new enhancements

    According to Kuo, the M5 chipset will enhance the Apple Intelligence experience. The projected launch date of the Apple Vision Pro 2 suggests that the M5 chipset may utilize TSMC’s N3P node, although this is not confirmed.

    In an effort to control production costs, Apple will not utilize its more advanced 2nm chipsets. Those chips were initially expected to power next-generation iPhone silicon like the A19 and A19 Pro, but it appears those products will also stick with TSMC’s N3P (3nm) node.

    While not as cutting-edge as 2nm silicon, the 3nm process is still efficient and powerful.

    The high cost of the Apple Vision Pro, starting at $3,500 (£2,800, AU$5,300), is often cited as a reason for its low sales figures. Other reasons include a perceived lack of content for the device, as well as comfort, wearability, and the intuitiveness of the gesture-based control.

    There is still much unknown about the specifications of the Apple Vision Pro 2, but if Apple can deliver the proposed M5 chipset in a more affordable headset, it could be a success for the company.

    The Vision Pro 2 is reportedly set to be released by the end of next year, featuring an M5 chip and designed for AI ‘from the ground up’ (as Apple might say). This news is promising, and I believe it’s the right move for Apple.

    It has been clear for some time that Apple’s vision for its Vision products is long-term.

    AR and VR are still in the early stages of adoption. However, the challenge many tech companies face is how to develop the technology and platform without having devices in the market.

    So, earlier this year, Apple released the Vision Pro. While it has not been a major success or significantly contributed to the company’s bottom line, it is a tangible product. Developers are creating applications for it, and technologies like visionOS, Immersive Video, and Spatial photos are expanding. Slowly, the Vision Pro is making a ‘spatial computing’ future more feasible.

    The objective: appealing to the masses

    Ultimately, Apple aims for its Vision products to become a major success and the next big thing. It wants spatial computing to become mainstream.

    To achieve this goal, at the very least, a Vision product needs to be:

    • Lighter
    • More versatile
    • Less expensive

    Therefore, reports that Apple’s priority is not the Vision Pro 2, but instead a more affordable Vision device, make a lot of sense.

    While Apple focuses on the non-Pro version of its Vision line, it is crucial to keep the Vision Pro at the forefront of innovation.

    This is where the latest report becomes relevant.

    The Vision Pro 2 is receiving the necessary upgrades, and perhaps more

    Previously, I suggested that while Apple is concentrating on a less expensive Vision device, it should at least equip the current Vision Pro with an M4 and leave it at that.

    It appears that this is precisely what will happen, except it will feature an M5 instead.

    Reportedly, the Vision Pro 2 will include an M5 chip with a strong focus on Apple Intelligence.

    And I say: great!

    Apple’s focus on Apple Intelligence is evident, and the absence of this feature in visionOS for the $3,500 Vision Pro is disappointing, given its otherwise advanced capabilities.

    If Apple were to introduce a new Vision Pro in 2025 with an M5 chip and integrate several Apple Intelligence features into visionOS 3, it would generate the necessary excitement for the platform.

    Meanwhile, the company can continue prioritizing the more affordable Vision product, as it has a better chance of achieving widespread success.

    For now, it’s crucial for the Vision Pro to remain appealing to early adopters and the curious, and the rumored updates should help achieve this.

    According to Apple analyst Ming-Chi Kuo, a new version of the Vision Pro headset is being developed and is expected to begin mass production in the second half of 2025.

    Kuo suggests that the most significant change in the upcoming model will be the inclusion of Apple’s M5 chip, a substantial upgrade from the current Vision Pro’s M2 chip. This enhancement is expected to significantly boost the device’s computing power, particularly in terms of integrated Apple Intelligence features.

    Despite the upgraded internals, Kuo reports that other hardware specifications and the overall design of the Vision Pro will remain largely unchanged. This approach may help Apple manage production costs, although the price point is anticipated to remain close to the current $3,499 starting price.

    Kuo emphasizes that if the new version introduces compelling use cases, it could propel Apple’s spatial computing platform toward mainstream adoption. He also speculated on the potential integration of advanced AI models, such as text-to-video capabilities similar to OpenAI’s Sora, which could greatly enhance the Vision Pro experience.

    According to Bloomberg’s Mark Gurman, Apple is planning to incorporate Apple Intelligence features into the Vision Pro headset in the future. While the device is capable of running on-device AI functions such as writing tools, notification summaries, and an enhanced Siri, these features are not expected to be available in 2024. Instead, Apple may be saving the Apple Intelligence integration for visionOS 3, potentially launching in 2025.

    Apple’s exploration of a new product category includes venturing into robotics. Additionally, the company is preparing new iPads and accompanying accessories for a May release, the Vision Pro is set to receive another Personas upgrade, and there has been a significant management change at Apple.

    Just a year ago, Apple’s future product pipeline seemed abundant. The Vision Pro had not yet been introduced, smart home devices were in development, and the Apple electric car project seemed to be gaining traction.

    Today’s situation is markedly different. While the Vision Pro is now available for purchase, it has not achieved widespread popularity. The Apple vehicle project has been scrapped, along with efforts to develop next-generation smartwatch screens.

    The performance improvements of processors have begun to level off, and the company is lagging behind in the smart home market.

    To compound the situation, Apple’s competitors, such as Microsoft Corp. and Alphabet Inc.’s Google, have made significant progress in generative AI, much to the excitement of consumers and investors. Meanwhile, Apple has remained relatively inactive.

    Apple’s business is heavily reliant on the iPhone, which contributes to more than half of its revenue. Sales in that market have stagnated, underscoring the importance of finding a major new product category.

    Apple has faced similar challenges in the past. The iMac revitalized the company in the late 1990s, the iPod propelled it into consumer electronics in the early 2000s, and the iPhone transformed Apple into the industry giant it is today. The iPad further solidified its position in our lives.

    While Apple is starting to generate more revenue from online services and other offerings, it remains fundamentally a company focused on devices. During the most recent holiday season, the majority of its revenue was derived from products such as the iPhone, Mac, iPad, Apple Watch, and AirPods.

    Ultimately, services like the App Store, TV+, and Apple One bundles depend on the iPhone and other devices to function. This underscores the importance of staying at the forefront of hardware innovation.

    An Apple vehicle was seen as the “ultimate mobile device,” and it’s clear why that possibility was exciting. It’s a low-profit industry, but the vehicles could have been sold for $100,000 each.

    Even if Apple sold only a fraction of Tesla Inc.’s unit volume (say, 500,000 vehicles a year at $100,000 each), that could have been a $50 billion business, roughly equivalent to the iPad and Mac combined.

    The Vision Pro headset introduced Apple to the mixed-reality category, which the company calls spatial computing. However, its greatest potential might be in replacing the Mac and iPad, rather than creating an entirely new source of revenue.

    For the device to gain any significant traction, the company will need to produce a more affordable model and ideally bring it to market within the next two years.

    Then there’s the smart home sector, where Apple still has large aspirations. It has discussed automating household functions and offering an updated Apple TV set-top box with a built-in camera for FaceTime video calls and gesture-based controls. And all the technology will seamlessly integrate with both the iPhone and Vision Pro.

    One aspect of the plan is a lightweight smart display — something similar to a basic iPad. Such a device could be moved from room to room as needed and connected to charging hubs located around the house. Apple has initiated small-scale test production of the screens for this product, but has not made a decision on whether to proceed.

    Establishing a unified smart home strategy remains a goal for Apple, but fulfilling the vision has proven challenging. The need to complete the Vision Pro took priority, diverting resources away from smart home efforts.

    But now that the Vision Pro has been released and the electric car project has been canceled, Apple has more capacity to refocus on the home. And there’s an exciting potential opportunity in that area. As reported recently, Apple is exploring the concept of creating personal robotic devices infused with artificial intelligence.

    The company has internal teams within its hardware engineering and AI divisions exploring robotics. One recent project involved a home robot that could follow a person around the home.

    Some involved in the effort have even suggested that Apple could delve into humanoid technology and develop a machine capable of handling household chores. However, such advancements are likely a decade away, and it doesn’t seem that Apple has committed to moving in that direction.

    A more immediate move into robotics would be a device that Apple has been working on for several years: a tabletop product that utilizes a robotic arm to move around a display.

    The arm could be used to mimic a person on the other side of a FaceTime call, adjusting the screen to replicate a nod or a shake of the head. However, this device also lacks unified support from Apple’s executive team.

    So for now, Apple will likely make more gradual improvements to its current lineup: new device sizes, colors, and configurations, in addition to accessories that could generate more revenue from the iPhone. This has largely been the key to the company’s success during Tim Cook’s tenure as CEO.

    But with robotics and AI advancing every year, there’s still hope that something from the Apple lab could eventually make its way into consumers’ living rooms.

    2024 is shaping up to be the year of the iPad. The new iPads are finally on the horizon. You can mark early May on your calendar if you — like many Power On readers, apparently — have been eagerly anticipating an upgraded tablet.

    On the agenda is the overhauled iPad Pro, an iPad Air, a new Magic Keyboard, and an Apple Pencil. In total, this launch is set to be one of the most extensive updates to the Apple tablet in a single day.

    And it’s been a long time coming, especially for the iPad Pro. That model hasn’t received a substantial update since 2018.

    For those seeking more specific timing, I’m informed that the launch will likely take place the week of May 6. Another indication of this: Apple retail stores are gearing up to receive new product marketing materials later that week.

    This is usually a sign that a new product release is imminent. It’s also worth noting — as I reported at the end of March — that the intricate new iPad screens are the reason behind the roughly one-month delay from the initial March release plan.

    Regardless, the new lineup is expected to increase sales, but I’m uncertain whether it will address the broader challenges faced by the iPad. As a frequent user of a Mac and iPhone, and now a Vision Pro for watching videos, I find the iPad increasingly irrelevant.

    The device isn’t sufficiently capable to fully replace a Mac for everyday tasks, and its software still has significant room for improvement. Hopefully, the introduction of iPadOS 18 will bring about substantial enhancements, making the device a true alternative to a Mac.

    Setting aside software considerations, the hardware upgrades in the new iPads mark some of the most significant changes in the product’s history. For the first time, Apple will be transitioning its tablet screens to OLED, or organic light-emitting diode, a technology already utilized in the iPhone.

    Reportedly, this technology looks stunning on larger displays, taking the experience that iPhone users have had since 2017 to a whole new level. However, one downside to this transition is that the new models will likely come with higher price points, according to the information I’ve received. The current iPad Pro starts at $799.

    Additionally, the company is working on new iterations of the entry-level iPad and iPad mini, but they are not expected to be released before the end of the year at the earliest. The new lower-end iPad will likely be a cost-reduced version of the 10th generation model from 2022, while the update for the iPad mini is expected to mainly involve a processor upgrade.

    Looking further ahead, Apple engineers are exploring the possibility of foldable iPads. However, this initiative is still in its early stages, and the company has yet to find a way to create foldable screens without the crease seen on similar devices from Samsung Electronics Co. and others.

    I’ve been cautioned that if Apple is unable to solve this issue, it might decide to abandon the concept of foldable iPads altogether. Nevertheless, there’s still time.

    Apple has introduced more realistic Personas for the Vision Pro, while visionOS 1.2 is currently undergoing testing. The visionOS 1.1 update was released a few weeks ago, and Apple has just added a new feature: Spatial Personas. These are advanced avatars that create the sensation of being in the same room as other people during FaceTime calls (in contrast to the original Personas, which felt more like being confined in a frosted glass box).

    Ironically, the initial beta version of visionOS 1.2 was released last week and brought almost no new features. (In fact, two of the environments promised when the Vision Pro launched on Feb. 2 are still not available.)

    I have tested the new Spatial Personas, which are still in beta, with two different individuals for several minutes. I am extremely impressed — I would even go so far as to say that Apple’s communications and marketing teams have undersold this feature so far. It’s unlike anything I have experienced before.

    In fact, it’s so impressive that the absence of this feature in the initial Vision Pro launch likely held back the product. If you have a Vision Pro (and somehow know someone else with one), you absolutely have to try it.

    Why did Kevin Lynch, the head of Apple Watch, transition to the company’s AI group? One of the behind-the-scenes stories that was overshadowed by the cancellation of the Apple car is the change in Kevin Lynch’s role, who led the project in recent years.

    For about ten years, Lynch reported to Apple’s Chief Operating Officer, Jeff Williams. In addition to overseeing the car project, he was in charge of software engineering for the Apple Watch under Williams.

    In an unexpected move, Lynch has now started reporting to John Giannandrea, Apple’s AI chief. Lynch and Williams still have oversight of the Apple Watch, leading to the question: Why was this change necessary?

    Those close to the situation believe that Lynch’s move is intended to bring clarity to an area that has posed challenges for Apple: AI. This is something Apple also attempted to address with the car project.

    Lynch initially joined that project in 2021, a few months before the project’s leader, Doug Field, stepped down to lead the electric vehicle efforts at Ford Motor Co. Within the company, Lynch is seen as a highly skilled engineering manager.

    With AI, it’s no secret that Apple has been struggling to develop large language models and other tools that can compete with the best in the industry. If Giannandrea were to eventually leave the company, Lynch — who has been due for a promotion to the senior vice president level — could be well-positioned to step into his role.
