Author: FredMT Admin

  • Google has started to roll out the final beta version of Android 15

    Every year, Google launches a major update to Android, the operating system that powers the world’s most popular smartphones from the largest brands. In 2024, the search engine giant is expected to unveil Android 15 alongside new hardware at Google I/O on May 14.

    Based on the current pattern, Android 15 will bring numerous new features, many of which will be powered by Generative Artificial Intelligence. Below are 10 new features being introduced with the release of Android 15 by Google.

    Google has revealed and showcased Android 15 as the next major version of its mobile operating system. The development and release cycle of Android typically consists of three phases, which also applies to Android 15.

    The initial phase is always the Developer Preview phase, which occurred earlier this year, followed by the more public Beta testing phase, and then the final stable version is released for everyone.

    While Android 15 is in the final stages of testing, stable builds have not yet been released by Google. Until that happens, other manufacturers are unlikely to roll out stable updates based on Android 15 and will resort to releasing custom user interfaces based on Android 15 beta. These will not be stable versions, so there are a few things to note while testing them on your phone, regardless of the brand.

    Initially, builds based on Android 15’s Developer Preview will lack the essence of the custom Android skins, as developer previews are intended for developers to test and optimize their apps for upcoming versions of Android. Subsequently, updates based on Android 15’s beta will begin to roll out, with Samsung usually being the first in this race.

    While these builds will be more stable and have the visual aspects specific to the brand, the experience may not fully reflect the final update following the stable release of Android 15.

    Therefore, if you plan to try any of these updates, avoid using them on your primary Android phone. Even if you choose to do so, be sure to back up any crucial data, as you may end up losing it if the update causes your phone to malfunction.

    Keep checking this thread for further updates. We will update it with links and instructions on how to install beta updates for phones from each brand.

    The official release of Android 15 has been delayed longer than expected, but we can now see the light at the end of the tunnel. According to a report from Android Headlines, the next version of Android will be available on October 15.

    This is a departure from how Google has typically handled the launch in the past. Usually, the latest version of Android releases with the latest version of the Pixel, but that was not the case this year with the August release of the Google Pixel 9. In a way, Android 15 is releasing at its usual time; the Pixel was just early.

    Google pushed Android 15 to the Android Open Source Project, or AOSP, earlier this month. This usually indicates an imminent release. The reason it is releasing on the 15th, a Tuesday, instead of on Monday, is likely due to Columbus Day, a national holiday in the US.

    Android 15 is arriving with numerous new and useful features that will enhance the overall user experience, including performance improvements, better PDF usability, notification cooldowns, and even partial screen sharing.

    The new operating system will be compatible with the Pixel 6 and later devices, making this the first Android update that is limited to Tensor-based Pixels. If you are still using an older-model Pixel, you will not qualify for this update.

    The idea is that since the update has taken longer to launch, it has been under more scrutiny and, as such, should theoretically have fewer bugs. Whether that actually plays out, though, remains to be seen.

    October 15 is less than a month away, so stay tuned. A new version of Android will be here before you know it, even if it took longer to arrive than many would have preferred.

    Google is finally pushing Android 15 to the Android Open Source Project (AOSP), marking a crucial milestone when companies begin preparing their respective software experiences for their smartphones and developers start fine-tuning their apps. As for the public release, the stable public build of Android 15 will be available for compatible Pixel phones in the coming weeks.

    Android 15 will also be available on “devices from Samsung, Honor, iQOO, Lenovo, Motorola, Nothing, OnePlus, Oppo, Realme, Sharp, Sony, Tecno, Vivo, and Xiaomi in the coming months,” according to Google. If you have a Pixel phone, you can install the Android 15 QPR1 Beta update to get a taste of what’s to come.

    If you have purchased the new Google Pixel 9 or Pixel 9 Pro, you can now join the test program and install the update. However, please note that you need to perform a full system wipe before installing the stable update once it’s released.

    With Android 15 now available in the AOSP repository, custom ROM developers can freely modify or deploy it for their respective devices. The source code is also accessible for analysis by academics, researchers, and enthusiasts through the Git repository.

    It was disappointing that Google didn’t release Android 15 with the new Pixel 9 series smartphones. It seems that we may have to wait until October. Google recently updated its Android Beta exit details, which specify an October deadline for beta testers to leave the program before the stable version is released.

    Android 15 brings a wide range of changes aimed at developers, including deeper insights into app behavior, improved PDF viewing capabilities, and the ability to control HDR content performance on compatible panels while viewing SDR content.

    Another interesting feature is automatic loudness adjustment for AAC audio content, based on the loudness metadata it carries and the connected output device. Users will also have more control over the LED flash intensity in both image capture and torch mode. Finally, Google is changing the camera preview to enhance exposure, allowing users to see items in the dark.

    On the functional side, users can now pin the taskbar at the bottom of the screen, which is intended to improve the multitasking experience for foldable phone and tablet users. Private Space, a feature that allows users to create a secure password-protected environment for sensitive apps, has been added.

    Regarding privacy, users can now fill in their account credentials or verify their identity with a single tap using the Passkeys system. Apps can also detect if the activity is being captured using any recording tool, allowing them to alert users.

    We have been testing Android 15 for some time and will soon share our main observations on how it transforms the smartphone experience, especially on the Pixel 9 series devices.

    Google is preparing to release Android 15 to the general public soon, so attention is slowly shifting to Android 16, which is expected to launch toward the end of next year. Android Authority recently uncovered interesting information about this update from the Android 15 QPR1 beta.

    In the beta, the site uncovered that Google plans a “complete redesign” for Android’s Notifications and Quick Settings panels. The current design dates back to Android 12, when Google introduced its Material You design language. Xiaomi says most users prefer separate panels for settings and notifications, which is why its HyperOS software provides this, and Samsung and Oppo are rumored to be considering a similar look in future versions of One UI and ColorOS, respectively.

    Android Authority cites “online pushback” as the reason companies like Xiaomi and Samsung are either making or considering making these changes to Android’s Notifications and Quick Settings panels.

    When testing the dual design, Android Authority explains that pulling down the status bar once still brings down the Notifications panel, which now only takes up about a quarter of the screen. While you cannot see any Quick Settings tiles in the new Notifications dropdown, you can still access the app underneath the panel.

    Furthermore, you no longer see the Quick Settings panel when pulling down the status bar a second time. Instead, you access the Quick Settings panel by pulling down the status bar with two fingers.

    It explains: “This is the change that I expect will be the most controversial, as it requires you to put more effort into accessing your Quick Settings tiles.” Finally, after pulling the Quick Settings panel down in Android 16, you can swipe left or right to see all your tiles.

    Android Authority acknowledges that Google does not guarantee that it will implement such changes to Notifications and the Quick Settings panels in a future public version of Android 16. However, since partners appear to be making changes based on customer feedback, it would be wise for Google to consider doing so as well.

    Google has announced that its impressive earthquake alert system is now accessible to users in all American states and territories. The company aims to reach the entire target audience within the coming weeks. Google has been testing this system, which also utilizes vibration readings from a phone’s accelerometer, since 2020.

    When the onboard sensors detect movements similar to those of an earthquake, your phone will immediately cross-reference crowdsourced data collected from the Android Earthquake Alerts System to verify if an earthquake is occurring and send an alert.

    The company states, “The Android Earthquake Alert System can provide early warnings seconds before shaking occurs.” Once it is confirmed that an earthquake is happening and its magnitude is measured at 4.5 or higher on the Richter scale, two types of alerts based on severity will be issued.

    The first is the “Be Aware” alert, which advises users to prepare in case the ongoing light shaking escalates into something more severe. The “Take Action” warning appears when the shaking is strong enough for users to seek cover immediately.

    In addition to alerts, the system will provide access to a dashboard where users can find further instructions to ensure their safety. Earthquake alerts are automatically enabled on Android phones.

    Music discovery with enhanced AI capabilities

    One of my preferred Assistant features has been the ability to hum a tune to search for it on the web. It works even better if you sing the tune or place your phone near a sound source, such as a speaker. The entire system is now receiving an AI enhancement.

    Remember “Circle to Search,” a feature that allows you to search the web for any item appearing on your phone’s screen by simply highlighting it? Now, it includes an audio recognition component. By long-pressing on the home button at the bottom (or the navigation bar), you can access the Circle to Search interface and tap the newly added music icon.

    Once the AI identifies the track, it will automatically display the correct song with a YouTube link. The idea is to eliminate the need to hum or use another device or app for music identification. You can summon the AI, simply activate the audio identifier, and complete the task, all on the same screen.

    Accessibility updates, Chrome’s reader mode, and more

    Android’s TalkBack system is an excellent accessibility-focused feature that provides audio descriptions of everything on your phone’s display. Now, Google is leveraging its Gemini AI chatbot to offer more detailed and natural-language TalkBack descriptions, whether it’s a webpage, a picture from the local gallery, or social media.

    In a similar vein, the Chrome browser on Android is introducing a reader system. Aside from reading the contents of a page, users will have the option to change the language, select a voice narrator model, and adjust the reading speed.

    The final new feature is offline map access on Wear OS smartwatches. When users download a map on their smartphone for offline use, it is also synced to the connected smartwatch. This means that if you leave your phone behind and go for a hike or cycling trip, you can still access the map on your smartwatch.

    A couple of new shortcuts are also being added to navigation software for Wear OS smartwatches. With a single tap on the watch face, users can check their surroundings. When needed, they can simply use a voice command to look up a location.

    Android 15 has arrived. Here are the significant features and upgrades that Google is introducing.

    How to Download and Install Android 15

    If you own a Google Pixel phone (Pixel 6 or newer), you can download Android 15 by navigating to Settings > System > System update and tapping Check for update.

    If you’re eager to try it out, you may be able to install the Android 15 Beta. These pre-release versions allow developers to test the upcoming version of Google’s mobile operating system, familiarize themselves with the new features, and ensure that their apps or games work properly. They also provide early adopters with a sneak peek.

    While the beta releases are more stable than developer previews, you may still encounter some bugs, and you need to go through a few steps to install them, so it’s not recommended for everyone. If you’re interested in trying it, you will need a supported partner device (including select phones from Honor, Nothing, OnePlus, and Xiaomi).

    To participate in the Android Beta Program, you must register. Most individuals who join the program will receive beta updates over the air without erasing their phones, but leaving the beta program will require a factory reset. It’s important to back up your Android phone before proceeding.

    Updates typically appear automatically, but you can always verify if you have the latest version by navigating to Settings > System > System update and selecting “Check for update.” If you want to opt out of the beta and revert to Android 14, visit Google’s Android Beta page, locate your device, and click on “Opt out.”

    This action will erase all locally saved data, so be sure to back up your device beforehand. You will receive a prompt to update to the previous version.

    Individuals without a Pixel or a supported partner device should monitor their phone manufacturer’s website, forums, or social media for information on when to expect Android 15.

    Notable New Features in Android 15

    Here are our preferred features and enhancements in Android 15. Further details can be found on Google’s developer site. Additionally, make sure to read our article on all the new features coming to Android and the Android ecosystem, including Wear OS, Android Auto, and Android TV.

    Private Space

    Android 15 introduces a new Private Space where you can keep sensitive apps separate from the rest of your phone. Whether you want to protect health data or your banking apps, Private Space allows you to keep them securely behind a second layer of authentication, using the same password you use to unlock your device or an alternate PIN.

    When your Private Space is locked, apps are hidden from the Recents view, notifications, settings, and other apps. You also have the option to completely wipe your private space.

    What’s That Tune?

    While we have a guide on how to identify songs with your phone, Google is simplifying the process with Circle to Search in Android 15. If a tune playing on your phone or a nearby speaker captures your attention, long-press the Home button or navigation bar to activate Circle to Search, then tap the music button to identify the track name and artist. You will also receive a link to the YouTube video.

    Improved Audio Image Descriptions

    Android features a screen reader called TalkBack for individuals with visual impairments, and in Android 15, Google’s Gemini AI enhances its ability to describe images. Traditionally, image descriptions on websites were limited to the content entered in the alt text field, but Google’s AI can now analyze images and provide more detailed descriptions. This new feature also works with photos in your camera roll, social media images, or pictures in text messages.

    Earthquake Alert System

    Individuals in the US will receive potentially life-saving warnings about imminent earthquakes, as Android 15 incorporates crowd-sourced earthquake detection technology. This new feature also includes guidance on what to do following an earthquake.

    Enhanced Satellite Connectivity

    Android 15 offers a significant expansion in satellite connectivity. Certain RCS and SMS apps should now be capable of sending text messages via satellite, a capability previously restricted to emergency use. Google has also standardized the pop-ups and other user interface elements to make it clearer when connected via satellite. Currently, only the Pixel 9 range supports satellite messaging.

    Offline Maps on Wear OS

    If you own a Wear OS smartwatch to pair with your Android phone, you can now access offline maps. Any map downloaded to your phone can be used for directions on your wrist, allowing you to leave your phone behind when going for a run.

    Improved Bluetooth

    Android 15 will bring various Bluetooth enhancements. Firstly, the quick settings tile now opens a popup that allows you to select individual devices, access their settings, and pair new devices, streamlining the settings menu navigation.

    It also appears that Google is modifying how the Bluetooth toggle functions, so when you turn off Bluetooth, it will automatically turn back on the following day by default. This feature will likely benefit Android’s Find My Device network, which uses Bluetooth to locate devices. You can deactivate the automatic turn-on in the settings if you prefer.

    Partial Screen Recording

    Instead of recording or sharing your entire screen, in Android 15, you can share an individual app without revealing the rest of your screen or incoming notifications. Logins and one-time passwords (OTPs) are automatically hidden from remote viewers. This feature is already available on Pixels, but it is now integrated into Android.

    Blocking of malicious apps

    Several updates in Android 15 make it more challenging for malicious apps to operate. They can no longer hide behind other apps by bringing them to the foreground or overlay themselves invisibly on top.

    Additionally, there are changes designed to prevent the exploitation of intents, which allow you to initiate an activity in another app by specifying an action you would like to perform, as these are often misused by malware. These are behind-the-scenes modifications aimed at enhancing user safety.

    Application Archiving

    If you haven’t used an app or game for a while, you might receive a prompt to uninstall it. But what if you anticipate using it again in the future? With Android 15, you can archive the app, freeing up most of the space it occupies while retaining your user settings or game progress.

    The feature for automatically archiving apps was announced last year, but in Android 15, it becomes a systemwide option, allowing users to choose to automatically archive apps when storage is running low.

    Customizable Vibrations

    Android 15 introduces the ability to enable or disable keyboard vibrations systemwide without needing to delve into the keyboard settings. A new toggle is available in Settings > Sound and Vibration > Vibration and Haptics, where users can also use sliders to adjust haptic feedback intensity (previously available on select Android phones but now available systemwide).

    The second beta also introduces rich vibrations, allowing users to distinguish between different types of notifications without having to look at the screen.

    Audio Sharing

    Sharing audio from your phone using Bluetooth LE and Auracast should be easier with a new settings page dedicated to audio sharing. This feature was not functional in the last beta, and the supported devices for this feature are not yet clear. However, Android Authority managed to get it working on a Pixel 8 Pro. This feature enables you to broadcast audio from your phone to other devices within Bluetooth range, including the phones and earbuds of friends and family who can join by scanning a QR code.

    Improved PDF Handling

    Dealing with PDF files on an Android phone can be challenging, so the integration of several PDF enhancements into Android 15 by Google is a welcome update. PDFs should load more smoothly, and the update includes support for password-protected files, annotations, form editing, and text selection. Additionally, users can now search within PDF files.

    Volume Control

    Android 15 adds support for the CTA-2075 loudness standard, allowing for volume comparison between apps and intelligent audio adjustments to prevent sudden volume changes when switching between apps, taking into account the characteristics of the speakers, headphones, or earbuds.

    Enhanced Low-Light Camera

    The camera app in Android 15 includes significant improvements. Firstly, the Low Light Boost feature provides better previews in low-light conditions, allowing for improved framing of nighttime shots and scanning of QR codes in limited light. Additionally, new camera app options offer finer control over the flash, enabling users to adjust the intensity for both single flashes and continuous flashlight mode.

    Improved Fraud and Scam Protection

    Android 15 includes several updates aimed at combating fraudulent activities. Google will utilize AI through Play Protect and on devices to detect and flag suspicious behavior. Messages containing one-time passwords (OTPs), commonly used in two-factor authentication, will now be hidden from the notifications system, making interception more difficult. Restricted settings for side-loaded apps (those not downloaded through the Google Play Store) are also being expanded.

    Task Bar Options

    For Android tablets and folding phones, Google has made changes to the taskbar dock functionality. The taskbar was initially permanent, then transient, and is now customizable, giving users the option to choose when it is displayed. This is useful for docked tablets, where a persistent taskbar may be preferred, while still offering the flexibility to hide it. Users can also pin their favorite split-screen app combinations. Android 15 allows apps to display edge-to-edge, making the most of the available screen real estate, even with a taskbar or system bar at the bottom.

    Improved Battery Life

    While Android updates typically include tweaks and enhancements for efficiency that positively impact battery life, Android 15 focuses on imposing more checks on foreground services and restricting apps that continue running in an active state. Devices with ample RAM should also experience faster app and camera launch times with lower power consumption, thanks to support for larger page sizes.

    Enhanced Theft Protection

    Many of the new security measures introduced by Google in Android 15 to deter theft, such as automatic locking when the phone is snatched and remote locking options, will be available on devices running Android 10 and later. However, the update to factory reset protection, which prevents thieves from setting up a stolen device again without the owner’s device or Google account credentials, is exclusive to Android 15.

    Additional Options for Foldable Cover Screens

    Some of the top folding phones automatically switch the activity you’re doing to the cover screen when you fold them, but Google is now incorporating that feature into Android 15.

    If you prefer the cover screen to lock when you fold it, that will also be possible. There is also increased support for apps displaying on smaller cover screens within the more compact flip phone category.

    Expanded Health Connect Data

    Health Connect initially served as an app to aggregate all your health and fitness data from various devices and apps. It was preinstalled with Android 14, but Android 15 is introducing two new types of data: skin temperature (gathered by wearables like the Oura ring and the Pixel Watch 2) as well as training plans—which can encompass completion goals for calories burned, distance, duration, repetition, and steps, but also performance goals around as many repetitions as possible (AMRAP), cadence, heart rate, power, perceived rate of exertion, and speed.
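
    For apps that read these data types, access goes through the androidx Health Connect client (with the matching read permission granted by the user). Below is a sketch, assuming SkinTemperatureRecord is available in the client SDK version paired with this rollout:

    ```kotlin
    import android.content.Context
    import androidx.health.connect.client.HealthConnectClient
    import androidx.health.connect.client.records.SkinTemperatureRecord
    import androidx.health.connect.client.request.ReadRecordsRequest
    import androidx.health.connect.client.time.TimeRangeFilter
    import java.time.Instant
    import java.time.temporal.ChronoUnit

    // Read the last week of skin-temperature records synced into Health Connect.
    // Assumes the app holds the corresponding read permission.
    suspend fun readRecentSkinTemperature(context: Context) {
        val client = HealthConnectClient.getOrCreate(context)
        val response = client.readRecords(
            ReadRecordsRequest(
                recordType = SkinTemperatureRecord::class,
                timeRangeFilter = TimeRangeFilter.between(
                    Instant.now().minus(7, ChronoUnit.DAYS),
                    Instant.now()
                )
            )
        )
        // Each record carries the readings reported by the wearable.
        response.records.forEach { record -> println(record) }
    }
    ```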

    Anticipation for a new Android update is akin to following a treasure map to uncover all the exciting new features. This is especially true early in the release cycle when the only people with knowledge about it are the engineers working on it. So, let’s delve into Android 15, codenamed Vanilla Ice Cream.

    We have a solid grasp of the new features and changes coming with this mobile OS, and today we’ll discuss it, providing a comprehensive overview of everything you need to know about Android 15.

    UPDATE: On September 4th, Google officially released Android 15 and made the source code accessible through the Android Open Source Project, with supported Pixel devices first in line to receive the stable update in the coming weeks. Here’s an excerpt from the official statement:

    “Today we’re releasing Android 15 and making the source code available at the Android Open Source Project (AOSP). Android 15 will be available on supported Pixel devices in the coming weeks, as well as on select devices from Samsung, Honor, iQOO, Lenovo, Motorola, Nothing, OnePlus, Oppo, realme, Sharp, Sony, Tecno, vivo, and Xiaomi in the coming months.”

    The release date for Android 15 is set for August 13, 2024. The latest mobile OS will debut on the new Pixel 9 series devices, so we have less than a month before the stable release hits Pixel owners.

    Even though the commercial launch of Google’s new OS is still some time away, Android 15 has already progressed beyond the stage we previously described. The first Android 15 developer preview, known as DP1, was released on February 16, 2024, followed shortly by the second Android 15 developer preview, or DP2, released on March 21, 2024. The first official Android 15 Beta is now available as well, having launched on April 11.

    Google released the final Android 15 Beta 4 (build AP31.240617.009) on July 18, and this will be the last one before the stable release, which we anticipate will debut on August 13 in conjunction with the new Pixel phones.

    Eligible Devices for Android 15

    This section will eventually comprise a long list, but for now, all we can do is speculate. Naturally, the first devices to receive Android 15 will be the Pixels. Google will introduce the stable Android 15 version with the Pixel 9 lineup and then gradually extend it to other eligible devices.

    We expect older Pixel models eligible for an update to also receive Android 15, including the Pixel 6, Pixel 6 Pro, Pixel 6a, Pixel 7, Pixel 7 Pro, Pixel 7a, Pixel Fold, Pixel 8, and Pixel 8 Pro.

    So, if you want to be among the first to get it, your best option is to acquire the latest Pixel phone. Typically, it’s logical to assume that the latest flagship phones will be the next in line to receive the update, but there may be delays or variations depending on the manufacturer and the specific user interface layered on top of vanilla Android 15.

    For instance, Samsung will also distribute Android 15 to most of its current models, beginning with the Samsung Galaxy S24, Galaxy S24 Plus, and Galaxy S24 Ultra, followed by the Galaxy S23, S22, and S21 series, along with its foldable devices: the Galaxy Z Fold 3 and Z Flip 3, Z Fold 4 and Z Flip 4, Z Fold 5 and Z Flip 5, and the upcoming Samsung Galaxy Z Fold 6 and Z Flip 6.

    Naturally, all flagship phones from other brands, such as the Sony Xperia 1 V, or the OnePlus 12, for example, will also receive Android 15, but your best opportunity to receive it first is with a current Pixel device.

    Android 15 comes with a host of new features, as revealed in the first two developer previews and the official Beta release. These include:

    – NFC Wireless Charging
    – Satellite support
    – Improved Battery Life
    – Forced Dark Mode
    – In-app camera controls
    – HDR headroom control
    – Enhanced notification page in landscape
    – Loudness control
    – Audio output control from Pixel Watch
    – Notification Cooldown
    – Low Light Boost
    – Smoother NFC experiences
    – Wallet Role
    – PDF improvements
    – Volume Control for Speaker Groups
    – Partial screen sharing
    – Health Connect
    – Privacy Sandbox
    – Screen recording detection
    – Cover screen support
    – Universal Toggle for Keyboard Vibration Control
    – Sensitive notifications
    – Persistent taskbar
    – Bluetooth Popup Dialog Enhancements
    – App Archiving
    – New “Add” button for Widgets
    – Minor Bug Fixes
    – New looks for the Settings app
    – Circle to Search for tablets and foldables
    – Predictive Back

    NFC Wireless Charging

    Android 15 will introduce NFC Wireless Charging (WLC), which utilizes NFC antennas to wirelessly power small gadgets, eliminating the need for wireless charging coils.

    Satellite support

    The new OS will enhance platform compatibility with satellite connectivity, identifying instances when a device is linked to a satellite network and extending support for SMS and MMS applications to use satellite connectivity for message transmission.

    Better Battery Life

    Google announced a tweak in Doze mode during Google I/O, which could potentially increase the battery life of some devices by up to three hours by making devices go into Doze mode 50% quicker.

    Forced Dark Mode

    Android 15 might offer a new way to force Dark Mode on apps through a new feature called “make-all-dark,” which will work better and more consistently than the existing “override-force-dark” toggle in Developer Options.

    In-app camera controls

    The update will introduce the ability to control the strength of a phone’s flash intensity in both single and torch modes, providing more control over low-light photography.
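
    Torch strength control already has a public Camera2 API (Android 13 and later), which the new capture-time flash control is expected to build on. A minimal Kotlin sketch of the torch path; the function name and the fraction-to-level mapping are illustrative:

    ```kotlin
    import android.content.Context
    import android.hardware.camera2.CameraCharacteristics
    import android.hardware.camera2.CameraManager

    // Drive the torch at a chosen strength instead of full power. The capture-time
    // equivalent for single flashes in Android 15 is not shown here.
    fun setTorchStrength(context: Context, cameraId: String, fraction: Float) {
        val manager = context.getSystemService(CameraManager::class.java)
        val characteristics = manager.getCameraCharacteristics(cameraId)
        val maxLevel = characteristics.get(
            CameraCharacteristics.FLASH_INFO_STRENGTH_MAXIMUM_LEVEL
        ) ?: 1
        if (maxLevel > 1) {
            // Levels run from 1..maxLevel; map the requested fraction onto that range.
            val level = (fraction * maxLevel).toInt().coerceIn(1, maxLevel)
            manager.turnOnTorchWithStrengthLevel(cameraId, level)
        }
    }
    ```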

    HDR headroom control

    Android 15 will automatically select the optimal HDR headroom based on the device’s capabilities and the panel’s bit-depth, with the added ability to adjust the HDR headroom to balance SDR and HDR content.
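
    For apps, this is expected to surface as a per-window hint. A minimal sketch, assuming the Window#setDesiredHdrHeadroom call attributed to Android 15; the 2.0 cap is an arbitrary illustrative value:

    ```kotlin
    import android.app.Activity

    // Cap how much brighter HDR content may render than SDR white while mixed
    // HDR/SDR content is on screen. A value of 1.0 disables extra headroom entirely.
    fun limitHdrHeadroom(activity: Activity) {
        activity.window.setDesiredHdrHeadroom(2.0f)
    }
    ```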

    Better notification page in landscape

    The UI will be updated to arrange notifications and controls neatly next to each other in landscape mode, providing more information with just one swipe down.

    Volume Adjustment

    Android 15 is set to support a new standard for controlling volume (CTA-2075), designed to address inconsistencies in audio loudness and reduce the need for users to constantly adjust volume levels when switching between different types of content.

    This feature uses information about the connected output devices, such as headphones and speakers, as well as the loudness metadata embedded in AAC audio content, to intelligently adjust the audio loudness and dynamic range compression levels. The aim is to achieve a consistent volume level across all types of audio content and sources used on your phone.
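
    For media apps, this is expected to surface as a loudness controller tied to the player’s audio session. A sketch assuming the LoudnessCodecController API attributed to Android 15; the function name is illustrative:

    ```kotlin
    import android.media.LoudnessCodecController
    import android.media.MediaCodec

    // Hand the AAC decoder to the platform so it can apply CTA-2075 loudness
    // metadata for the given audio session as output devices change.
    fun attachLoudnessControl(audioSessionId: Int, decoder: MediaCodec): LoudnessCodecController {
        val controller = LoudnessCodecController.create(audioSessionId)
        controller.addMediaCodec(decoder)
        return controller // call close() when playback ends
    }
    ```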

    Control Audio Output from Pixel Watch

    A small adjustment will allow users to change the audio output directly from their Pixel Watch, without needing to use their phone. While you can already control the audio itself from the watch, pausing songs and adjusting the volume, changing the output device, for example from the phone to an external speaker or vice versa, currently requires using your phone. This will change in Android 15.

    Notification Cooldown

    Notification cooldown is an interesting new feature in Android 15 that aims to reduce the overload of notifications and interruptions. It works by preventing apps from sending a large number of notifications in a short period of time. Here’s how it works: when an app sends multiple notifications quickly, Android 15 detects this and imposes a cooldown period. During this cooldown period, the app is temporarily restricted from sending additional notifications.

    Low Light Boost

    Android 15 will introduce a feature called Low Light Boost, which functions as an auto-exposure mode for your smartphone. Unlike Night Mode, which captures a series of images to create a single enhanced still image, Low Light Boost can sustain a continuous stream of frames. This new feature will assist with tasks such as QR code scanning in low-light conditions and real-time image preview, brightening dark videos, and showing a preview of the end result.
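
    In Camera2 terms, this is expected to appear as an additional auto-exposure mode. A sketch assuming the brightness-priority AE constant reported for Android 15, with an availability check before requesting it:

    ```kotlin
    import android.hardware.camera2.CameraCharacteristics
    import android.hardware.camera2.CameraMetadata
    import android.hardware.camera2.CaptureRequest

    // Request the low-light-boost AE mode when the camera advertises it; the
    // preview stream is then brightened automatically in dim scenes.
    fun enableLowLightBoost(
        characteristics: CameraCharacteristics,
        requestBuilder: CaptureRequest.Builder
    ) {
        val modes = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_MODES)
        val boostMode = CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY
        if (modes?.contains(boostMode) == true) {
            requestBuilder.set(CaptureRequest.CONTROL_AE_MODE, boostMode)
        }
    }
    ```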

    Improved NFC Experiences

    Android 15 is enhancing tap-to-pay to make it smoother and more reliable, while still supporting all NFC apps. On certain devices, apps will be able to listen to NFC readers without immediately taking action. This will speed up transactions and make the moment when you actually pay using NFC almost immediate.

    Wallet Role

    Android 15 will introduce a feature called Wallet role, which replaces the old NFC contactless payment setup. This will enable users to select a Wallet Role app and make payments with that app quicker and more reliably.
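
    Apps that want to become the default wallet are expected to request the role through RoleManager. A sketch assuming the ROLE_WALLET constant; the request code is arbitrary:

    ```kotlin
    import android.app.Activity
    import android.app.role.RoleManager
    import android.content.Intent

    private const val REQUEST_WALLET_ROLE = 1001 // arbitrary request code

    // Ask the user to make this app the default wallet for contactless payments.
    fun requestWalletRole(activity: Activity) {
        val roleManager = activity.getSystemService(RoleManager::class.java)
        if (roleManager.isRoleAvailable(RoleManager.ROLE_WALLET) &&
            !roleManager.isRoleHeld(RoleManager.ROLE_WALLET)
        ) {
            val intent: Intent = roleManager.createRequestRoleIntent(RoleManager.ROLE_WALLET)
            activity.startActivityForResult(intent, REQUEST_WALLET_ROLE)
        }
    }
    ```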

    PDF Enhancements

    In Android 15 Developer Preview 2, we received a glimpse of significant upgrades to how the software handles PDF documents. This means that apps can perform tasks such as handling password-protected files, adding annotations, editing forms, searching through PDFs, and selecting text to copy. Additionally, there are optimizations for linearized PDFs, enabling them to load faster locally and use fewer resources.
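
    These upgrades extend the android.graphics.pdf surface. Below is a minimal sketch of the long-standing PdfRenderer path; the new password, search, and annotation capabilities described above build on this and are not shown:

    ```kotlin
    import android.graphics.Bitmap
    import android.graphics.pdf.PdfRenderer
    import android.os.ParcelFileDescriptor
    import java.io.File

    // Render the first page of a local PDF into a bitmap for display.
    fun renderFirstPage(pdfFile: File): Bitmap {
        val descriptor = ParcelFileDescriptor.open(pdfFile, ParcelFileDescriptor.MODE_READ_ONLY)
        PdfRenderer(descriptor).use { renderer ->
            val page = renderer.openPage(0)
            val bitmap = Bitmap.createBitmap(page.width, page.height, Bitmap.Config.ARGB_8888)
            page.render(bitmap, null, null, PdfRenderer.Page.RENDER_MODE_FOR_DISPLAY)
            page.close()
            return bitmap
        }
    }
    ```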

    Volume Control for Speaker Groups

    A recent update in the second Android 15 Beta saw the return of the Speaker Group control feature. When your Pixel phone is connected to a group of speaker devices (Bluetooth speakers, Nest Hubs, etc.), the volume control on the phone acts as a master volume for all of them.

    This feature was previously removed due to alleged patent infringement, but it appears that Google has resolved any legal issues, as the option has been reinstated.

    Partial Screen Sharing

    Android 15 will allow users to share or record a single app window instead of the entire screen. This useful feature, first introduced in Android 14 QPR2, is accompanied by something called MediaProjection callbacks. This prompts users for consent when it detects that partial screen sharing is taking place.
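
    On the developer side, these notifications surface through MediaProjection.Callback, which gained resize and visibility hooks in Android 14. A minimal Kotlin sketch; the helper name and log messages are illustrative:

    ```kotlin
    import android.media.projection.MediaProjection
    import android.os.Handler
    import android.os.Looper
    import android.util.Log

    // Attach the callbacks that report when the captured app window is resized
    // or hidden during partial screen sharing.
    fun observeCapturedContent(projection: MediaProjection) {
        projection.registerCallback(object : MediaProjection.Callback() {
            override fun onCapturedContentResize(width: Int, height: Int) {
                Log.d("Capture", "Shared window resized to ${width}x$height")
            }

            override fun onCapturedContentVisibilityChanged(isVisible: Boolean) {
                Log.d("Capture", "Shared window visible: $isVisible")
            }

            override fun onStop() {
                Log.d("Capture", "User ended the capture session")
            }
        }, Handler(Looper.getMainLooper()))
    }
    ```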

    Health Connect

    Android 15 builds upon Android 14 extension level 10, with a focus on Health Connect by Android. This platform serves as a secure hub for managing and sharing health and fitness data collected by various apps.

    With this update, there’s expanded support for additional data types related to fitness, nutrition, and other health-related information.

    Overall, it’s about enhancing the capabilities of Health Connect to handle a broader range of health and wellness data, ensuring users have a comprehensive and centralized platform for managing their health information securely.

    Privacy Sandbox

    The Privacy Sandbox on Android introduces some innovative new technology aimed at balancing privacy and effective ad targeting. With Android 15, Google plans to bring Android Ad Services to extension level 10, which includes the latest Privacy Sandbox on Android.

    This means that you’ll be able to receive personalized ads without compromising privacy. It’s a balancing act, so don’t expect to be completely anonymous and still receive relevant ads.

    Screen recording detection

    Android 15 will enable apps to identify if they are being recorded. When an app switches between being visible or invisible within a screen recording, a callback is activated. This feature ensures that if an app is handling sensitive information, such as personal data, the user will receive a notification if the screen is being recorded, providing users with more transparency and control over their data privacy.
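
    The sketch below assumes the WindowManager callback and DETECT_SCREEN_RECORDING permission that Android 15 reportedly exposes for this; the PaymentActivity name and the reaction to the state change are illustrative:

    ```kotlin
    import android.app.Activity
    import android.view.WindowManager
    import java.util.function.Consumer

    class PaymentActivity : Activity() {
        // Receives SCREEN_RECORDING_STATE_VISIBLE / _NOT_VISIBLE updates.
        private val recordingListener = Consumer<Int> { state ->
            val recorded = state == WindowManager.SCREEN_RECORDING_STATE_VISIBLE
            if (recorded) {
                // e.g. blur sensitive fields or warn the user while being recorded
            }
        }

        override fun onStart() {
            super.onStart()
            // Requires the DETECT_SCREEN_RECORDING permission on Android 15.
            windowManager.addScreenRecordingCallback(mainExecutor, recordingListener)
        }

        override fun onStop() {
            windowManager.removeScreenRecordingCallback(recordingListener)
            super.onStop()
        }
    }
    ```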

    Cover screen support

    Android 15 will enhance the way apps are displayed on small cover screens of foldable devices, like the Galaxy Flip. While these screens are too small to run full Android apps, Android 15 will empower developers to customize their apps and add extra functionality to those small cover screens.

    Universal Toggle for Keyboard Vibration Control

    The Universal Toggle for Keyboard Vibration Control in Android 15 will offer users a centralized setting to manage keyboard vibration across the entire system. This toggle will enable users to easily enable or disable keyboard haptic feedback for all apps and input methods on their device, eliminating the need to navigate through various settings menus within individual apps or input method settings.

    Sensitive notifications

    Android 15 will introduce enhanced controls for managing sensitive notifications, providing users with more options to safeguard their privacy. With sensitive notifications, users can choose whether sensitive content is displayed on the lock screen, in notifications, or when the device is unlocked, specify whether sensitive notifications can be expanded or interacted with directly from the lock screen or notification shade, and mark notifications as sensitive, prompting the system to treat them with additional privacy considerations.

    Persistent taskbar

    The Persistent Taskbar for large-screen devices in Android 15 is designed to improve multitasking and navigation on devices with large displays such as tablets or foldable phones. This feature will create a dedicated area at the bottom of the screen where users can access frequently used apps, system controls, and navigation options, allowing for quick switching between tasks or launching apps.

    Bluetooth Popup Dialog Enhancements

    The Bluetooth popup dialog in Android 15 has been enhanced to provide users with additional Bluetooth controls, such as shortcuts or links to additional Bluetooth settings or options, the ability to quickly cancel or approve certain actions over Bluetooth, and more.

    App Archiving

    Android 15 will introduce app archiving, allowing users to archive unused or infrequently used apps to free up storage space while retaining the ability to easily restore them when needed. Archived apps can still receive updates from the Google Play Store, and their data remains intact.

    New “Add” button for Widgets

    Android 15 Beta has introduced a minor tweak for adding widgets to the home screen. Instead of dragging the desired widget to the place you want it, selecting a widget now shows an “Add” button. Tapping on it will automatically place the widget on the next empty space on the home screen, simplifying the process for users.

    Minor Bug Fixes

    Android 15 includes the usual bug fixes and patches, including one that addresses an issue where creating a Private Space for the first time removed some icons from the home screen.

    New looks for the Settings app

    According to Android Authority, the Settings app will undergo a redesign in Android 15 to provide a more organized and contextual look and feel. The different settings will be arranged in groups, aiming to make navigation more intuitive for the user.

    Circle to Search for tablets and foldables

    Circle to Search works more effectively on tablets and foldables in Android 15 Beta 3. Users can activate this feature by holding down the action key, regardless of the taskbar style being used. When enabling Persistent Taskbar for the first time, there’s even a pop-up that describes the new feature.

    Predictive Back

    Android lacked a feature that iOS has offered for some time: predictive back, the capability to preview where the “back” gesture will lead. Google introduced this functionality in Android 14, but it was only accessible through the Developer Options menu. Now, the company plans to make this feature available to eligible apps, allowing them to utilize predictive back to provide users with more information about the destination of the back gesture.
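
    For developers, opting in means setting android:enableOnBackInvokedCallback="true" on the application element in the manifest and registering a callback instead of overriding onBackPressed(). A minimal Kotlin sketch using the framework OnBackInvokedDispatcher API; the EditorActivity name and its behavior are illustrative:

    ```kotlin
    import android.app.Activity
    import android.os.Build
    import android.os.Bundle
    import android.window.OnBackInvokedCallback
    import android.window.OnBackInvokedDispatcher

    class EditorActivity : Activity() {
        // Invoked in place of onBackPressed() once the app opts in via the manifest flag.
        private val backCallback = OnBackInvokedCallback {
            // Save drafts or show a confirmation here before finishing.
            finish()
        }

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
                onBackInvokedDispatcher.registerOnBackInvokedCallback(
                    OnBackInvokedDispatcher.PRIORITY_DEFAULT,
                    backCallback
                )
            }
        }

        override fun onDestroy() {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
                onBackInvokedDispatcher.unregisterOnBackInvokedCallback(backCallback)
            }
            super.onDestroy()
        }
    }
    ```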

    Observe the development and enhancements of Android over time.

    At times, it seems like we’ve been using Google’s mobile operating system on our Android devices forever. However, it has been 15 years since the first official Android phone was released. An essential decision in Android’s history was Google’s commitment to transforming Android into an open-source OS. This decision contributed to its widespread adoption by third-party phone manufacturers. Just a few years after the introduction of Android 1.0, smartphones powered by the new OS became ubiquitous.

    Fast forward to the present, and we are now on Android 14. The OS has emerged as the most popular mobile operating system globally. It has surpassed numerous competitors such as Symbian, BlackBerry, Palm OS, webOS, and Windows Phone (most of which have ceased to exist). Apple’s iOS remains the only platform that continues to pose a significant challenge to Android. This dynamic is unlikely to change in the near future.

    Let’s delve into the history of Android thus far.

    The history of Android began in October 2003, well before the term “smartphone” became commonplace and several years prior to Apple’s unveiling of the first iPhone and iOS. Android Inc. was established in Palo Alto, California, by four founders: Rich Miner, Nick Sears, Chris White, and Andy Rubin. At the time, Rubin articulated that Android Inc. would develop “smarter mobile devices that are more aware of their owner’s location and preferences.”

    In a 2013 speech in Tokyo, Rubin disclosed that the original purpose of the Android OS was to enhance the operating systems of digital cameras. Even at that time, the market for standalone digital cameras was in decline. A few months later, Android Inc. redirected its efforts towards integrating the OS into mobile phones.

    Google’s acquisition of Android in 2005 marked a significant turning point.

    In 2005, a pivotal chapter in Android’s history unfolded when Google acquired the original company. Rubin and the other founding members continued to develop the OS under their new ownership. They opted to base the Android OS on Linux, enabling them to offer the operating system to third-party mobile manufacturers at no cost. Google and the Android team believed that the company could generate revenue by providing other services, including apps.

    Rubin served as the head of the Android team at Google until 2013 when the company announced his departure from the division. Rubin ultimately left Google in late 2014 and launched a startup business incubator before reentering the smartphone industry with the ill-fated Essential in 2017.

    While working for Google, Irina Blok created the now-familiar logo for the Android OS. It resembles a combination of a robot and a green bug. Blok mentioned that the only directive given by the Google design team was to create a logo that resembled a robot. She also revealed that one of her inspirations for the final design of the Android mascot was the familiar restroom symbols representing “Men” and “Women.”

    Blok and Google made the decision to open-source the Android robot itself. Most other major companies would protect such a logo or mascot from modifications. However, numerous individuals have altered Android’s logo, as Google permits such changes under the Creative Commons 3.0 Attribution License.

    The Android mascot, also known as “Andy,” underwent a redesign along with much of Android’s branding in 2019. Although Andy may have lost its body, the new look has become more prevalent across Android’s branding.

    Android 1.0: The inception of Android’s history

    In 2007, Apple introduced the first iPhone, ushering in a new era in mobile computing. At the time, Google was secretly working on Android, but in November of that year, the company gradually began to unveil its plans to compete with Apple and other mobile platforms.

    In a significant development, Google spearheaded the formation of the Open Handset Alliance, which included phone manufacturers like HTC and Motorola, chip manufacturers such as Qualcomm and Texas Instruments, and carriers including T-Mobile.

    Then Google Chairman and CEO Eric Schmidt stated, “Today’s announcement is more ambitious than any single ‘Google Phone’ that the press has been speculating about over the past few weeks. Our vision is that the powerful platform we’re unveiling will power thousands of different phone models.” The public beta of Android version 1.0 was launched for developers on November 5, 2007.

    In September 2008, the very first Android smartphone was unveiled: the T-Mobile G1, also known as the HTC Dream in other regions of the world. It was released in the US in October of that year. With its slide-out 3.2-inch touchscreen combined with a QWERTY physical keyboard, the phone was not exactly a design marvel. In fact, the T-Mobile G1 received rather negative reviews from technology media outlets.

    The device did not even feature a 3.5mm headphone jack, which, unlike today, was essentially a standard feature on phones from Android’s competitors at the time.

  • Starlink is on track to generate a staggering $6.6 billion in revenue for 2024

    After purchasing a satellite dish and subscribing to a $120-per-month plan, my acquaintance, a retired veteran residing in an area where cable or fiber connections are not available, now enjoys a 115-Mbps connection.

    Located about 45 minutes north of downtown Tucson, Catalina, Arizona is far removed from the densely populated Bronx neighborhood where I spent my early years. It’s a charming small town nestled in the midst of a vast desert, so stunning that it almost appears unreal.

    The area boasts hiking trails that wind through snow-capped mountains to the east, while quails, roadrunners, and other creatures I once only saw on the Discovery Channel roam the dusty dirt roads.

    As the sun sets, the cloudless western sky takes on a vibrant Nickelodeon orange hue. At night, the sound of coyotes howling gives the impression of being surrounded, thanks to an audio effect known as beau geste.

    However, all this natural beauty comes at a cost for those wanting to go online. Internet service options in the area are extremely limited. I have a retired military veteran friend in Catalina who is unable to access the primary internet service providers, Comcast or Cox, available in nearby Tucson. Additionally, the 5G fixed home internet services offered by AT&T, T-Mobile, and Verizon are not accessible.

    In the past, she would have been counted among the millions of Americans lacking reliable broadband not due to cost, but simply due to inadequate infrastructure.

    According to 2023 estimates from the Federal Communications Commission, 17.3 percent of Americans in rural areas and 20.9 percent in tribal lands lack high-speed coverage (25 Mbps downloads) from fixed terrestrial broadband.

    Thanks to Starlink’s satellite-based internet service, my friend is now able to enjoy streaming services like Netflix or Disney+ on occasion, just like other retirees.

    For those unfamiliar, Starlink is a venture of Elon Musk’s private space exploration company, SpaceX. The company’s premise is simple: as long as you can point a small satellite dish toward the northern sky with no obstructions, you can have fast broadband, even in isolated areas that would typically be on the wrong side of the digital divide.

    There are no long-term contracts, although you are required to pay for the Starlink hardware, including the satellite dish, which costs $599 before taxes.

    Starlink claims to have over 2.5 million subscribers globally, including my friend in Catalina. For $120 a month, she receives approximately 115 megabits per second down and 12 Mbps up, based on informal speed test results recorded via Fast.com.

    I have personally had an internet connection about ten times faster than that for the past five years, so I was intrigued to see how well Starlink would handle my workload, as well as other typical internet activities such as streaming high-resolution video and playing online games with friends.

    So, I decided to give it a try.

    Over the course of several visits to my friend’s home, I tested out Starlink, using it for both work and leisure. I utilized the connection mainly from a small bedroom that serves as my home office, located just a few feet from the Starlink-provided router. (You can use your own router if you prefer.)

    I engaged in video calls with my colleagues on the East Coast via Google Meet. I streamed Apple Music Classical while writing articles in Google Docs, including this one. I also streamed 4K video from Amazon Prime and Peacock, and even played a bit of old-school Halo on my Steam Deck.

    Overall, Starlink performed admirably, providing an experience that was almost identical to the gigabit Comcast connection in my own home.

    I say “almost” because I encountered a few minor hiccups while streaming video and playing games. However, overall, it was quite impressive for a signal beamed down from low Earth orbit.

    All In a Day’s Work

    As a remote worker, the majority of my day is spent within a web browser (currently, Microsoft Edge), writing articles in Google Docs, reading and responding to emails while listening to music, and managing various projects using a variety of apps.

    All of this is to say that my daily activities are not overly demanding. Even Google Meet, which I use for videoconferencing, recommends just under 4 Mbps for comfortable use.

    Given that I could browse the web and edit office documents long before today’s broadband speeds became commonplace, I was not concerned about relying on Starlink for my daily tasks.

    I was anticipating some difficulties with video calls, especially when I was seated approximately 100 feet away from the router in the backyard, but that did not prove to be an issue.

    To sum up, Starlink seems more than capable of handling general office and school tasks. However, if you have more demanding activities, such as regularly uploading 4K videos to a YouTube channel, you might need something faster.

    Gaming with Starlink

    We all use the internet for more than just sending emails and video conferencing. How well does Starlink perform for activities like binge-watching old TV shows and playing Halo late into the night?

    It’s not perfect, but it never felt like it was spoiling the fun.

    Let’s start with streaming.

    Streaming video, even 4K high-resolution video, isn’t very demanding—although your needs may increase if multiple people in your household are streaming at the same time. Netflix, the largest streaming platform, recommends a minimum of 15 Mbps for a single 4K stream, while Disney suggests 25 Mbps. With my 100 Mbps Starlink connection, I could mostly meet these requirements.

    At different times of the day, I watched video content from Peacock, Prime Video, YouTube, and Twitch. Whenever I selected content to watch, it loaded instantly, just like it would on my gigabit Comcast connection at home.

    However, I did experience some buffering while streaming video, usually after an ad break. I had to wait a moment or two for the stream to stabilize once the show resumed. Did this happen every time? No, and it rarely occurred during the actual content stream itself, but I don’t want to give the impression that Starlink performed perfectly.

    Now, let’s talk about gaming.

    I play a fair amount of video games, whether it’s Tekken or Final Fantasy on my PS5, old-school titles like Diablo II and Quake on my PC, or the latest Mario or Zelda on my Nintendo Switch. I also have a Steam Deck, a portable PC similar to a Switch, with a mix of old and new games that allows me to play them almost anywhere.

    I was expecting to experience a considerable amount of latency as signals were relayed from satellites more than 300 miles overhead. Downloading games was not an issue—Halo: The Master Chief Collection took only about 25 minutes—but playing online and winning? I didn’t have high expectations.

    So I was pleasantly surprised once I started playing.

    On my Steam Deck, I played Halo: The Master Chief Collection, Mortal Kombat X (my favorite of the recent Mortal Kombat games), and Street Fighter V. On my Switch, it was F-Zero 99, a futuristic racing game where I competed against 98 other players in a single run.

    To my surprise, the games played mostly (though not entirely) as well as they do on my home Comcast connection.

    For example, in Halo, I was able to snipe opponents without hesitation, running across the map without any of the typical “jitter” you see with a poor connection. I’m not as good at Halo as I used to be a few years ago, but I can’t blame Starlink for that.

    F-Zero 99 also played flawlessly on my Switch: I was able to join an online race and control my car just as I would when playing offline. The controls were smooth and responsive, and I could activate speed boosts and spin attacks effortlessly.

    While Mortal Kombat X and Street Fighter mostly performed well, I encountered some issues with the latter.

    Both games involve fighting competitions where timing is crucial: You need to punch, kick, and block at precisely the right moment, or you end up on the ground. With Mortal Kombat, I more than held my own. With Street Fighter, however, I did experience occasional stutters.

    The action would freeze for a fraction of a second, disrupting my timing. It didn’t happen in every fight, and honestly, I’m not sure if Starlink or the game’s programming was to blame, but it was the only time I thought to myself, “This is not a great experience.”

    Overall, I was amazed at how well Starlink performed for gaming. Not perfect, but not bad at all.

    Who Starlink Is Best For

    Before going to Catalina, I didn’t have a clear idea about Starlink. I had heard of it, mostly in the context of the war in Ukraine, but I hadn’t paid much attention to it until I started spending more time at my friend’s home, about 20 minutes north of my own.

    Thanks to the Infrastructure Investment and Jobs Act, signed into law by President Joe Biden in 2021, the federal government has allocated nearly $65 billion to help improve broadband access in rural areas and make internet service more affordable for lower-income households. In the meantime, satellite services like Starlink provide a crucial alternative for communities stuck on the wrong side of the digital divide.

    But they could have a significant impact in communities limited to just one internet service provider. If you’re not getting the speeds you were promised or you’re simply fed up with your ISP, you may now have a reliable backup plan.

    Yes, I did notice some hiccups here and there, but nothing worth getting too upset about—especially if your choice is between Starlink and watching the clouds pass by.

    Breakdown of Starlink internet plans

    Starlink internet is available to 99% of the US and surrounding oceans, and the three Starlink internet plans are designed for households, mobile locations, and boats.

    All Starlink plans come with a 30-day trial starting from the day of activation. So, how much does Starlink internet cost, including equipment and fees? You can find Starlink reviews and details for all plans in the following section.

    Residential – Ideal for traditional households

    The cost of Starlink Wi-Fi for home service is $120 per month for unlimited internet without a contract. This plan is suitable for rural homes or other fixed locations with average internet usage. This home Wi-Fi plan provides decent internet speeds for browsing, streaming, shopping, reading, and listening to music or podcasts.

    Expected speeds: While the expected speeds with Starlink Standard service plans range between 25–100 Mbps, Starlink states that most users experience speeds exceeding 100 Mbps.

    Starlink speeds by region: Starlink’s coverage map displays average speed ranges by location. The western part of the US generally receives approximately 50% faster speeds than the southeastern states.

    Starlink equipment: The hardware includes a dual-band Wi-Fi router and a phased-array antenna, a 21.4-inch-tall panel set at an angle and mounted on your roof. The dish is designed to withstand harsh weather conditions, including sleet, high winds, and temperatures between -22 and 122°F.

    Roam – Perfect for RVs

    The Roam plan is ideal for a mobile lifestyle as it allows you to pause the internet service an unlimited number of times. Campers, RVs, and other travel or mobile residences should consider using this Starlink RV service for flexible billing.

    Pricing: This mobile plan is priced at $150 per month and includes unlimited data for inland use. Additionally, the smaller, more portable array recommended with this plan is better suited for travel.

    Pausing Starlink service: To pause Starlink internet service, log into your Starlink account > Your Starlink > Manage > Pause Service.

    Delaying Starlink service: If you’re planning a trip and want to set up your Starlink equipment but don’t need internet service immediately, you can pause your plan before activation to delay your first bill. Your first bill will be issued when you unpause your service.

    Boats – Ideal for maritime use

    The Starlink Boats plan is best suited for maritime use, especially for emergency responders. This Starlink internet service can be used in the ocean or on a vehicle in motion.

    Unlimited data: Standard data is unlimited when inland.

    Mobile priority data: Choose from three data packages at $250 per month for 50 GB, $1,000 per month for 1 TB, and $5,000 per month for 5 TB. This bandwidth is utilized when you are not at a fixed inland location (a rough per-gigabyte comparison follows below).

    Equipment costs: The equipment costs $2,500 for the high-performance array.
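    To put those tiers in perspective, here is a minimal back-of-the-envelope sketch in Python that simply divides each listed monthly price by its data allowance (treating 1 TB as 1,000 GB purely for illustration):

    ```python
    # Rough per-gigabyte cost of the Mobile Priority data tiers listed above.
    # Assumes 1 TB = 1,000 GB; actual billing terms may differ.
    tiers = {"50 GB": (250, 50), "1 TB": (1_000, 1_000), "5 TB": (5_000, 5_000)}

    for name, (monthly_price, gigabytes) in tiers.items():
        print(f"{name}: ${monthly_price / gigabytes:.2f} per GB per month")
    # 50 GB: $5.00 per GB per month
    # 1 TB: $1.00 per GB per month
    # 5 TB: $1.00 per GB per month
    ```

    In other words, the larger tiers work out to roughly a fifth of the per-gigabyte price of the 50 GB package.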

    Starlink business internet

    Starlink Business provides internet with network priority and priority support.

    Network priority means that when the Starlink network is busier than usual, priority plans will maintain faster speeds, while standard plans may experience a slowdown.

    Monthly costs for priority data packages are $140 for 40 GB, $250 for 1 TB, $500 for 2 TB, and $1,500 for 6 TB.

    Equipment with the priority plan is the flat high-performance array antenna for $2,500, which has a wider field of view and can connect with more satellites.

    Additional payments and fees for Starlink

    Starlink ships the necessary equipment for installation, unlike other satellite providers, which require professional installation. Typically, you must purchase the equipment in full; however, in select areas, you have the option to rent it for a monthly fee. Learn more about Starlink installation and equipment fees here.

    Installation fees for Starlink

    Technically, Starlink internet does not have an installation fee, but unless you are comfortable with installing a satellite on your roof, you will need to hire a local installer to set up the equipment.

    This arrangement leaves most customers with an additional step to complete on their own before using the internet service. It also absolves Starlink of any liability in case of a botched installation. Professional installation from a third party usually costs between $100 and $300.

    Equipment fees for Starlink

    Starlink offers two equipment packages for $599 or $2,500 and optional mounting accessories for $35 to $74. Shipping costs range from $20 to $100, and taxes vary based on location. You can also purchase Starlink equipment from third-party retailers, such as Best Buy, which offers free in-store pickup and free shipping.
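    As a rough illustration, the total upfront outlay for the standard kit can be estimated by adding the hardware price, an optional mount, and shipping from the figures above (taxes excluded, since they vary by location); a minimal Python sketch:

    ```python
    # Illustrative upfront-cost range for the standard $599 kit, using the
    # figures quoted above. Taxes are excluded because they vary by location.
    hardware = 599
    mount_range = (35, 74)      # optional mounting accessories
    shipping_range = (20, 100)  # quoted shipping cost range

    low = hardware + mount_range[0] + shipping_range[0]
    high = hardware + mount_range[1] + shipping_range[1]
    print(f"Estimated upfront cost before tax: ${low}-${high}")  # $654-$773
    ```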

    Customers in certain regions can purchase reduced-price refurbished equipment or rent/finance it for a monthly fee. If these options are available in your area, they will appear when you check your address on the Starlink site. The rental option includes an upfront activation fee.

    Stable pricing for Starlink

    While Starlink may occasionally adjust its monthly service rates, its pricing remains relatively consistent. Unlike other satellite providers that offer promotional rates followed by steep price hikes, Starlink’s rates are not introductory, ensuring you won’t face unexpected price increases.

    Starlink compared to other internet providers

    Starlink’s availability is similar to that of other satellite providers, but its coverage is much wider than that of major companies such as Spectrum, Xfinity, and T-Mobile.

    Explore further differences between Starlink and other internet service providers, then compare pricing, speeds, and availability in the table below:

    • Rural coverage: Starlink offers extensive internet coverage in rural areas compared to wired cable and fiber ISPs like AT&T, Cox, and Frontier.
    • Price variations: Cable, fiber, and 5G internet provide affordable internet plans compared to Starlink, with average starting prices of $50 per month, as opposed to Starlink’s monthly rate of $120.
    • Speed differences: Cable, fiber, and 5G often offer 1 gigabit speeds, and the availability of multi-gig speeds is increasing, making these connections faster than Starlink satellite internet.
    • Compared to other satellite providers: How does Starlink internet speed compare to rival satellite providers? Starlink’s top speeds are up to twice as fast as Hughesnet or Viasat.

    Is Starlink a good choice?

    Starlink is worth considering if you can afford the upfront equipment cost and have more flexibility in your monthly internet budget. At $120 per month, Starlink internet plans are more expensive than Hughesnet but typically provide faster speeds.

    While Hughesnet and Viasat have lower equipment costs ($299.99 and $250, respectively) with the option to rent, Starlink requires a larger initial investment of $599 or $2,500, with limited rental options.

    Starlink expansion and future prospects

    Starlink is expanding in 2024 as the global demand for broadband increases, especially in hard-to-reach areas. The satellite provider offers service on all seven continents and now serves three million users in 99 countries.

    Starlink continues to launch new satellites at a rapid rate, raising concerns among some scientists who say the influx of satellites is interfering with astronomical photos and data collection, according to EarthSky.

    If you’re interested in locating a Starlink satellite in your area, use the Starlink Tracker to search by your city or coordinates. The Starlink map also provides details on the current location of satellites worldwide.

    Starlink, a satellite internet system from Elon Musk’s SpaceX, utilizes low-Earth-orbiting (LEO) satellites and self-adjusting receiver dishes to offer internet speeds ranging from 50Mbps to 500Mbps to nearly any location on the globe.

    Starlink overview

    While it does not make it to our list of top internet service providers, Starlink has the potential to transform internet service in remote areas globally, where high-speed internet access is currently lacking or nonexistent.

    Quick facts

    • In May 2019, Starlink launched its initial batch of satellites — 60 in total — using a SpaceX Falcon 9 rocket.
    • There are presently 6,219 Starlink satellites in orbit.
    • Starlink is accessible in 50 states, Puerto Rico, and the Virgin Islands, with its network rapidly expanding worldwide.
    • Starlink satellites orbit closer to Earth compared to traditional internet services, resulting in faster internet speeds.
    • Starlink holds an A rating from the Better Business Bureau (BBB).

    What we appreciate

    Starlink’s exclusive satellite technology results in low latency and high speeds. The smaller satellites in the system link together as they orbit much closer to Earth at approximately 342 miles high. This proximity diminishes latency, facilitating faster data transfer similar to cable internet.

    These speeds can support online gaming and seamless video calls. (Latency refers to the time delay between the sending and receiving of data in a network. Low latency means a short delay, while high latency means a longer delay.)

    In contrast, traditional geostationary (geosynchronous) satellites orbiting 22,000 miles above Earth have the highest latency of any modern internet connection, as seen with other satellite providers like Hughesnet and Viasat.
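    To see why orbital altitude matters so much for latency, here is a minimal sketch that computes the physical lower bound on round-trip time from light-travel distance alone. It assumes a simple bent-pipe path (user to satellite to ground gateway and back, i.e., four one-way legs at the given altitude) and ignores routing, processing, and queuing delays, so real-world figures are higher:

    ```python
    # Physical lower bound on round-trip latency implied by orbital altitude alone.
    C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

    def min_round_trip_ms(altitude_km: float) -> float:
        """Minimum user -> satellite -> gateway -> satellite -> user time, in ms."""
        return 4 * altitude_km / C_KM_PER_S * 1000

    print(f"LEO (~550 km): {min_round_trip_ms(550):.1f} ms")        # ~7 ms
    print(f"GEO (~35,786 km): {min_round_trip_ms(35_786):.1f} ms")  # ~477 ms
    ```

    That roughly 65-fold difference in light-travel time is why LEO constellations can approach cable-like latency while geostationary services cannot.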

    Although Starlink’s internet speeds, ranging from 50Mbps to 500Mbps, still offer lower quality compared to fiber or cable, they are much faster than those of other satellite providers. For instance, HughesNet’s maximum download speeds are 100Mbps, and Viasat’s are 150Mbps. Starlink does not require customers to commit to annual contracts.

    What we do not appreciate

    Starlink’s internet prices are high, and they are accompanied by substantial equipment fees. The company’s standard plan starts at $120 per month with a one-time equipment fee of $599. This is more costly than most leading internet providers, particularly considering the 25 to 220Mbps speeds.

    Starlink’s Priority internet plan costs $140 to $500 per month and offers unlimited standard data from 40GB to 2TB. However, this tier necessitates a $500 refundable deposit and a $2,500 fee for an antenna and router.

    Starlink advantages and disadvantages

    Starlink is notable for its mobile internet options. Despite the expense, van dwellers, boaters, and travelers can access reliable internet from almost anywhere in the world. Such remote internet access is not commonly offered by other internet providers, making it a worthwhile consideration for those with a more adventurous lifestyle.

    Starlink’s speeds remain slower than those of cable or fiber internet, and performance is also affected by severe weather conditions. According to Starlink’s FAQs, while a Starlink receiver can melt snow that falls directly on it, it cannot address surrounding snow accumulation or other obstructions that may obstruct its line of sight to the satellite.

    “We recommend installing Starlink in a location that avoids snow build-up and other obstructions from blocking the field of view,” the FAQ states. “Heavy rain or wind can also affect your satellite internet connection, potentially leading to slower speeds or a rare outage.”

    Advantages
    – Minimal delay
    – No long-term agreements
    – Unrestricted data usage

    Disadvantages
    – Costly equipment charges
    – Slower compared to cable or fiber internet
    – Susceptible to adverse weather conditions

    What is the cost of Starlink?

    The pricing of Starlink depends on the plan you select. There are three main options: Residential, Roam, and Boats.

    Residential plans, suitable for households, start at $120 per month with a one-time hardware fee of $599.

    Roam mobile internet plans, designed for RVs and campers, range from $150 to $200 per month with the same $599 equipment fee.

    Boat plans for maritime use, emergency response, and mobile businesses range from $250 to $5,000 per month. These include mobile priority tiers of 50 GB, 1 TB, and 5 TB, with a fixed high-performance hardware cost of $2,500.

    The installation is free, as it’s a self-installation process via the Starlink app. Starlink also provides unlimited data, no contracts, and a 30-day trial.

    Savings and discounts

    Starlink does not provide discounts or promotions, but its Roam plans offer flexibility. You can pause and resume service as required, customizing it to your travel needs.

    What plans and services does Starlink offer?

    Starlink offers four plans with varying data options:

    • The Standard plan, suitable for households, offers speeds of 25 to 100 Mbps and standard unlimited data.
    • The Priority plan, ideal for businesses and high-demand users, offers speeds up to 500 Mbps with priority data options of 40 GB, 1 TB, or 2 TB. After utilizing priority data, it switches to standard unlimited data.
    • The Mobile plan, tailored for RVs, nomads, and campers, offers regional or global options with speeds of 5 to 50 Mbps and standard unlimited data.
    • The Mobile Priority plan provides speeds of 40 to 220 Mbps and priority data options of 50GB, 1TB, and 5TB.

    Starlink add-ons and optional features

    Starlink does not sell standalone optional features, but you can add extra priority data to priority plans.

    Starlink customer service and experience

    Unlike most other internet service providers, Starlink does not have a live chat or a helpline to call if you have questions or issues. This is one of the biggest complaints people have about the service. Without an account, prospective customers have no means of contacting them.

    Even existing customers have to jump through hoops to contact customer service, and the only way to do so is through the not-so-user-friendly Starlink app or the online portal. Before contacting customer service, you must consult the troubleshooting guides; only then can you message Starlink’s support.

    When you contact customer support, you can explain your issue and attach photos. Once you send this message, it opens a Starlink service ticket. If you don’t have your phone handy or don’t want to be limited to the app, you can repeat these steps online by logging into your account.

    Other considerations about Starlink

    Here are a few other things to keep in mind about Starlink.

    Starlink does not impose cancellation fees and offers a 30-day guarantee, allowing for a full refund if you dislike the service.

    If Starlink is not available in your area yet, you can reserve your spot on the waitlist by paying a refundable deposit ranging from $99 to $500, depending on your chosen plan. Check availability by entering your address on their website.

    You must set up Starlink yourself, but it’s an easy process. The app will help you find the best installation location.

    Starlink’s satellites are deorbited at the end of their operational life, which helps keep space clean.

    How does Starlink compare with its competitors?

    Starlink won’t replace the quality of a fiber, cable, or fixed-wireless internet connection. But it’s a step forward in areas where traditional wired or fixed-wireless services are unavailable.

    Before Starlink, satellite options for US customers were limited to HughesNet and Viasat. Starlink outperforms these two competitors with higher speeds, less buffering, no data caps, and no contract requirements. While Starlink’s max speeds for the standard plan are up to 220 Mbps, HughesNet can only reach 100 Mbps and Viasat can reach up to 150 Mbps.

    The Starlink Roam plan might be your best option if you’re a nomad or camper traveling with an RV. Roam mobile internet plans are tailored for RVs and campers, ranging from $150 to $200 per month with a $599 equipment fee.

    Viasat does not offer mobile broadband services, and while it may be possible to get Hughesnet internet for your RV, purchasing an RV satellite dish alone is expensive, and Hughesnet does not advertise internet service for RVs.

    Most travelers rely on mobile hotspot 4G or 5G connections, which rely on the proximity of cell towers. So, Roam’s advantage is that service will be available even in the most remote areas.

    If you live in a remote area with no access to fiber, cable, or even fixed-wireless internet, like 5G, Starlink is a strong choice, beating competitors HughesNet and Viasat with higher speeds, lower latency, unlimited data, and no contract requirements.

    Starlink, a system of satellites, aims to provide internet coverage on a global scale. It is designed to serve rural and isolated areas where internet access is unreliable or unavailable.

    A global broadband network initiative by SpaceX, Starlink utilizes a group of low Earth orbit (LEO) satellites to deliver high-speed internet services. SpaceX, officially known as Space Exploration Technologies Corp., is a private aerospace manufacturer and space transportation company founded by Elon Musk in 2002.

    How does Starlink function?

    Starlink operates using satellite internet technology that has been around for many years. Instead of relying on cable technology like fiber optics to transmit internet data, a satellite system uses radio signals through space’s vacuum.

    Ground stations send signals to satellites in orbit, which then relay the data back to Starlink users on Earth. Each satellite in the Starlink constellation weighs 573 pounds and has a flat body. A single SpaceX Falcon 9 rocket can carry up to 60 satellites.

    The objective of Starlink is to establish a low latency network in space that enables edge computing on Earth. Creating a global network in outer space is a significant challenge, especially given the importance of low latency.

    SpaceX has proposed a constellation of nearly 42,000 small satellites the size of tablets orbiting the Earth in low orbit to meet this demand. The CubeSats, which are miniature satellites commonly used in LEO, provide comprehensive network coverage, and their low Earth orbit ensures low latency.

    However, Starlink is not the only player in the space race and faces competition from companies such as OneWeb, HughesNet, Viasat, and Amazon. HughesNet has been providing coverage from 22,000 miles above Earth since 1996, but Starlink takes a slightly different approach and offers the following improvements:

    Instead of using a few large satellites, Starlink employs thousands of small satellites.

    Starlink utilizes LEO satellites that orbit the planet at only 300 miles above the surface. This much lower orbit improves internet speeds and reduces latency.

    The latest Starlink satellites incorporate laser communication elements to transmit signals between satellites, reducing reliance on multiple ground stations.

    SpaceX aims to launch as many as 40,000 satellites in the near future, ensuring global and remote satellite coverage with reduced service outages.

    Starlink benefits from being part of SpaceX, which not only launches Starlink satellites but also conducts regular partner launches. Other satellite internet providers may not be able to schedule regular satellite launches due to the high costs involved.

    To request service, users must enter their address on Starlink’s website to check for service availability in their area. If the service is not available in their area, Starlink will provide an estimated date for when it will be available. Most users remain on the waitlist for months, and most waitlists have been pushed into early 2023.

    For coverage areas where service is currently available, Starlink processes service requests on a first-come, first-served basis. To reserve a spot for service, customers can pre-order Starlink through its website, which requires a refundable $99 deposit.

    Where is Starlink available?

    Starlink currently offers service in 36 countries with limited coverage areas. In the United States, the company plans to expand coverage to the entire continental US by the end of 2023. While a few countries, including Pakistan, India, Nepal, and Sri Lanka, are marked as “Coming soon” on Starlink’s coverage map, Starlink has no current plans to offer services to several countries, including Russia, China, Cuba, and North Korea.

    The coverage map on the company’s website displays where Starlink is available.

    How to connect to Starlink?

    Upon subscribing to Starlink, users receive a Starlink kit containing a satellite dish, a dish mount, and a Wi-Fi router base unit. Starlink also includes a power cable for the base unit and a 75-foot cable for connecting the dish to the router.

    To use the service, Starlink customers must set up the satellite dish to start receiving the signal and pass the bandwidth to the router. The company offers various mounting options for the dish, including for yards, rooftops, and home exteriors.

    There is also a Starlink app for Android and Apple iOS that utilizes augmented reality to assist users in selecting the best location and position for their receivers.

    Starlink was created with harsh weather conditions in mind. According to the company’s website:

    “Engineered and tested broadly to withstand a wide range of temperatures and weather conditions, Starlink has been proven to endure extreme cold and heat, sleet, heavy rain, and gale force winds — and it can even melt snow.”

    Starlink utilizes LEO satellites and a phased array antenna to maintain its performance during severe weather conditions. The following explores the effectiveness of the Starlink satellite in different weather conditions:

    Cloudy weather. Starlink is generally unaffected by typical cloudy days. However, storm clouds might disrupt the signals as they often cause rain, which can lead to signal interruptions. Storm clouds are also moister and denser, which can significantly degrade a satellite signal.

    Rain. Light rain usually does not cause issues, but heavy downpours can impact the quality of the Starlink signal. Heavy rain is associated with thick, dense clouds, which increases the likelihood of blocking the radio signals to and from the Starlink satellites.

    Winds. A securely mounted Starlink dish that doesn’t sway or move will not be impacted by strong winds. The phased array antenna on the Starlink dish can track satellites flying overhead without the need for physical movement, which helps prevent signal interruptions.

    Snow. Light snowfall typically doesn’t affect the Starlink signals, but heavy snow can impact performance due to moisture buildup. The Starlink dish includes a heating function to automatically melt the snow, but if there is a buildup on top of the dish, manual cleaning may be necessary to avoid signal issues.

    Sleet and ice. Similar to rain and snow, heavy sleet and ice could also have a negative impact on the Starlink signals. The heating function automatically melts ice and snow, but heavy icing or sleet events may require manual intervention to clean the dish.

    Fog. Normal fog should not affect Starlink’s signal, but dense fog could cause signal loss or interruptions. Heavy fog contains a lot of moisture and can be dense enough to interrupt the signal.

    As of August 2024, there are 6,350 Starlink satellites in orbit, of which 6,290 are operational, according to astronomer Jonathan McDowell, who tracks the constellation on his website.

    The magnitude and scope of the Starlink project worry astronomers, who fear that the bright, orbiting objects will interfere with observations of the universe, as well as spaceflight safety experts who now see Starlink as the primary source of collision hazard in Earth’s orbit.

    Additionally, some scientists are concerned that the amount of metal burning up in Earth’s atmosphere as old satellites are deorbited could trigger unpredictable changes to the planet’s climate.

    Starlink satellites orbit at approximately 342 miles (550 kilometers) above Earth and provide a spectacular display for observers as they move across the sky. However, this display is not welcomed by everyone and can significantly hinder both optical and radio astronomical observations.

    No special equipment is required to see moving Starlink satellites, as they are visible to the naked eye. The satellites can appear as a string of pearls or a “train” of bright lights across the night sky. Starlink satellites are easier to see a day or two after their launch and deployment, and become progressively harder to spot as they climb to their final orbital height of around 342 miles (550 km).

    A vast fleet of Starlink satellites orbits Earth, providing internet coverage globally. On a clear night, you may be able to catch a glimpse of a few satellites in this megaconstellation as they move across the sky. If you’re lucky enough to see them shortly after deployment, you might even see them as a “Starlink satellite train.”

    While the ever-growing satellite fleet poses a challenge to astronomical observations, it can be an interesting sight for skywatchers if you know when and where to look.

    Appearing as a string of bright lights in the sky, Starlink satellites can look quite “otherworldly” and prompted numerous reports of UFO sightings when they were first launched. However, the long lines of lights are only visible shortly after launch.

    Once the satellites reach their operating altitude of 340 miles (550 kilometers), they disperse and are much more difficult to distinguish against the backdrop of stars, although they are easier to pick out in a time-lapse photograph.

    The megaconstellation developed by the private spaceflight company SpaceX may expand to as many as 42,000 satellites in orbit, according to the science news website NASA Spaceflight.

    Given the frequent launches of Starlink satellites (sometimes multiple times a week), there are ample opportunities to catch a glimpse of the renowned “Starlink train.”

    However, it’s important to mention that Starlink satellites are currently less visible compared to when they were first launched in 2019. This is because of initiatives like the Starlink VisorSat program, which aims to reduce the reflectivity of the satellites to minimize their impact on astronomical observations.

    Why are Starlink satellites visible? Do they emit light?

    We can see Starlink satellites only when they reflect sunlight; they do not have their own light source.

    The increasing number of satellites from SpaceX and other private space companies, such as OneWeb, could lead to ongoing concerns about light pollution and related issues from these constellations, prompting calls for more regulation from government agencies.

    The Starlink satellite train is typically visible soon after the satellites are deployed when they are at their lowest orbit.

    Starlink satellites move at high speeds and complete one orbit of Earth every 90 minutes, which means they can sometimes be seen within just two hours of a previous sighting.
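    The quoted orbital period is easy to sanity-check with Kepler’s third law. The sketch below assumes a circular orbit at the roughly 550 km altitude mentioned above and standard values for Earth’s radius and gravitational parameter:

    ```python
    import math

    # Back-of-the-envelope check of the ~90-minute orbital period quoted above,
    # using Kepler's third law for a circular orbit: T = 2 * pi * sqrt(a^3 / mu).
    MU_EARTH = 398_600       # Earth's gravitational parameter, km^3/s^2
    EARTH_RADIUS_KM = 6_371  # mean Earth radius, km
    ALTITUDE_KM = 550        # approximate Starlink operational altitude

    a = EARTH_RADIUS_KM + ALTITUDE_KM  # semi-major axis of the circular orbit
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    print(f"Orbital period at {ALTITUDE_KM} km: {period_s / 60:.1f} minutes")  # ~95 minutes
    ```

    The result, roughly 95 minutes, is consistent with the “about every 90 minutes” figure quoted above.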

    The future valuation of SpaceX could be determined by what comes next. Starlink is created to transmit internet to almost any location with a view of the sky, offering high speeds and low latency.

    The $52 billion estimate is based on the assumption that global broadband internet usage will increase from the current 50 percent to 75 percent in the future as more people gain access. It also assumes that SpaceX will capture about 10 percent of this growing market.

    Analysts predict that if Starlink is more successful than anticipated, SpaceX could be valued at $120 billion. Conversely, if it fails, the company’s valuation could plummet to just $5 billion.

    Starlink has the potential to significantly increase SpaceX’s worth or lead to its decline. If the project achieves its objectives, it could pave the way for SpaceX to begin constructing a city on Mars by 2050.

    SpaceX is presently developing the Starship, a reusable rocket designed for travel to Mars and beyond. It utilizes liquid oxygen and methane as fuel for its Raptor engines, allowing for the establishment of propellant depots and resource harvesting on Mars. The pressurized cabin of the Starship can accommodate up to 100 people and is approximately the size of an Airbus A380.

    In August, the company successfully completed a 150-meter hop with a scaled-down version of the ship segment. CEO Elon Musk is scheduled to host an event on September 28, the anniversary of SpaceX’s first orbital launch, to discuss the next steps. An orbital flight with a full-size prototype ship could occur as early as October. Following this, the Starship is expected to conduct a satellite launch in 2021 and a crewed mission around the moon in 2023.

    The construction of a city on Mars, beginning as a small settlement and gradually expanding, is anticipated to require substantial funding. In addition to the expenses associated with developing the Starship, Elon Musk has stated that a city on Mars could cost between $100 billion and $10 trillion. This estimate is based on the transportation of one million tons of cargo, with each ton costing $100,000 to send to Mars.

    During a discussion with Alibaba co-founder Jack Ma in Shanghai last month, Musk provided a different perspective on the cost. He suggested that the amount equates to somewhere between half a percent and one percent of the gross domestic product, which is comparable to the spending on cosmetics and healthcare.

    “Seems like a prudent investment for the future,” Musk commented.

    SpaceX: How Starlink could fund the city of the future

    However it is viewed, Musk is seeking an astronomical sum. The entire satellite launch industry, SpaceX’s primary business, generates only about $5 billion in revenue annually.

    Starlink is an ambitious project, but it could hold the solution. The complete system is projected to encompass approximately 12,000 satellites, far surpassing the roughly 5,000 spacecraft currently orbiting Earth. SpaceX launched the first batch of 60 satellites in May 2019.

    As outlined by Morgan Stanley, a constellation of this scale could provide internet access to more people than ever before. In documents disclosed by the Wall Street Journal in 2017, the company forecasts that by 2025, Starlink could have over 40 million subscribers and generate over $30 billion in revenue. The company’s total revenue in that year could exceed $35 billion.

    There is potential for even greater success. Musk stated in May that the total global revenue from internet connectivity is approximately $1 trillion. He suggested that SpaceX could potentially capture around three to five percent of this, resulting in Starlink revenue alone reaching $50 billion annually, half of the minimum estimated amount required to construct a city on Mars.

    “We believe this is a critical step toward establishing a self-sustaining city on Mars and a lunar base,” Musk stated in the call. “We think we can utilize the revenue from Starlink to fund Starship.”

    The success of Starlink could determine the future success of a self-sustaining city on Mars and determine whether this is the moment when humans establish a permanent settlement on another planet.

  • The potential for artificial intelligence in healthcare

    Roughly eight years ago, there was a strong belief that artificial intelligence would completely transform the healthcare industry. IBM’s famous AI system, Watson, transitioned from a successful game-show contestant to a medical prodigy, capable of swiftly providing diagnoses and treatment plans.

    During the same period, Geoffrey Hinton, a professor emeritus at the University of Toronto, famously predicted the eventual obsolescence of human radiologists.

    Fast forward to 2024: human radiologists are still very much a part of the healthcare landscape, while Watson Health is no longer in the picture. Have artificial intelligence and medicine gone their separate ways? Quite the contrary, in fact. Today, the integration of these two disciplines is more dynamic than ever.

    Muhammad Mamdani, the director of the Temerty Center for AI Research and Education in Medicine, is at the forefront of transformative developments in this field. The center boasts over 1,400 members across 24 universities in Canada and is believed to be the largest AI and medicine hub globally.

    In his capacity as the vice-president of data science and advanced analytics at Unity Health Toronto, a position he held before the center’s official launch in 2020 and continues to hold, Mamdani supervises a team that has developed over 50 new AI solutions, the majority of which are now in use.

    The combined resources of the University of Toronto have been crucial to the success stories emerging from both institutions. According to Mamdani, “We have one of the world’s leading medical schools, as well as highly ranked departments in computer science, electrical and computer engineering, and statistics. So, we have an exceptionally talented pool of researchers.”

    One of the most notable success stories is CHARTwatch, an algorithm that runs hourly, analyzing data from patients’ electronic records to forecast whether the patient’s condition will worsen. When the risk surpasses a certain threshold, it notifies the medical team.
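    As a purely illustrative sketch of this kind of hourly, threshold-based alerting, consider the following Python snippet. The field names, scoring rule, and threshold are hypothetical stand-ins; CHARTwatch’s actual features and model are not reproduced here:

    ```python
    # Minimal illustration of hourly, threshold-based deterioration alerting.
    # The risk function below is a placeholder, not the CHARTwatch model.
    from dataclasses import dataclass

    @dataclass
    class PatientSnapshot:
        patient_id: str
        heart_rate: float
        systolic_bp: float
        o2_saturation: float

    def deterioration_risk(s: PatientSnapshot) -> float:
        """Toy risk score in [0, 1]; a real system would use a trained model."""
        score = 0.0
        score += 0.4 if s.heart_rate > 120 else 0.0
        score += 0.3 if s.systolic_bp < 90 else 0.0
        score += 0.3 if s.o2_saturation < 92 else 0.0
        return score

    ALERT_THRESHOLD = 0.5  # assumed; in practice tuned with clinicians

    def hourly_run(snapshots: list[PatientSnapshot]) -> list[str]:
        """Return IDs of patients whose risk crosses the alert threshold."""
        return [s.patient_id for s in snapshots if deterioration_risk(s) >= ALERT_THRESHOLD]
    ```

    The important design point, echoed by Mamdani below, is that the alert goes to the medical team rather than triggering any automatic action.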

    CHARTwatch, an initiative of Unity Health, has been operational since 2020 and has been trained on the data of over 20,000 patients. When it was implemented, St. Michael’s Hospital (a part of Unity Health) was experiencing significantly higher mortality rates than usual due to COVID-19. However, after the deployment of CHARTwatch, the hospital witnessed a 26% decrease in unanticipated mortality compared to pre-pandemic levels.

    “People’s lives are being saved with solutions like this,” Mamdani stated.

    Another algorithm in use, the ED RN Assignment Tool, has reduced the time registered nurses (RNs) spend on scheduling in emergency departments (EDs). “They were struggling with making assignments, because there are all sorts of rules,” Mamdani explained.

    Since the tool’s deployment in 2020, the senior nurse has found that this work can be completed in one minute instead of 90. “With this,” said Mamdani, “we’re giving time back to clinicians so they can spend it on more valuable activities, such as patient care.”

    Mamdani noted that the genesis for these tools often comes from clinicians themselves, rather than data scientists: “We get our ideas from people on the ground because they know what the issues are.”

    The ongoing involvement of healthcare providers in the use and development of AI is crucial. Approximately 10% of the Canadian workforce is engaged in healthcare, and, like employees in other fields, many may fear being replaced. Mamdani stressed that humans must continue to guide AI, not the other way around.

    While algorithms have at times been shown to outperform clinicians, this isn’t always the case. That’s why, at present, AI is never the sole decision-maker. “For CHARTwatch, for example, we are very firm that it should not decide for clinicians, but with them. That kind of joint human-AI collaboration is what’s driving our reduction in mortality.”

    To reinforce this message, Mamdani mentioned that the Temerty Center for AI Research and Education in Medicine plays a crucial role as a space where clinicians in the community can expand their knowledge of AI, and data scientists can learn about healthcare.

    “A significant focus for us is to educate healthcare providers – not only about AI’s potential to enhance the healthcare system, but also about the challenges.” These challenges include ethical considerations regarding algorithmic gender and racial bias, the algorithm’s performance, and the adoption of AI into clinical practice.

    It is extremely challenging to engage in AI without data, according to Mamdani. He wonders where researchers can obtain the necessary data; MIMIC is the most widely used clinical dataset globally, but researchers require more. They also face challenges regarding data storage and computing power for the research they wish to conduct.

    These concerns prompted the center to establish the Health Data Nexus on Google Cloud. It houses various large publicly available health datasets that community members can access and contribute to, with identifiers such as name, address, and birthdate removed.

    Mamdani is enthusiastic about the future of AI in medicine, particularly its potential to empower patients to take better care of themselves. Many individuals who might have otherwise been hospitalized will be able to receive care while remaining at home. “Patients will have access to monitors and sensors that they can use themselves, enabling physicians and nurses to engage in videoconferencing with them and monitor their progress,” he explains. “This could alleviate the strain on hospital beds. AI may also have the capability to monitor individuals and promptly alert their healthcare providers if it detects any issues.”

    Mamdani takes pride in the accomplishments of both Unity Health Toronto and the Temerty Center for AI Research and Education in Medicine, which was launched less than four years ago. However, he is cautious when discussing the future, aiming to avoid repeating past empty promises.

    Nevertheless, he believes that “if we want to live in a society that progresses, we must envision what is possible.” Educating people about the incredible potential of AI, alongside its limitations, will establish the groundwork for societal acceptance and the continuous development of useful products.

    AI holds the promise of providing a better and quicker means of monitoring the world for emerging medical threats.

    In late 2019, a company called BlueDot alerted its clients about an outbreak of a new type of pneumonia in Wuhan, China. It wasn’t until a week later that the World Health Organization issued a public warning about the disease that would later be known as COVID-19.

    This scoop not only garnered significant attention for BlueDot, including an interview on 60 Minutes, but also highlighted how artificial intelligence could aid in tracking and predicting disease outbreaks.

    Kamran Khan, a professor at U of T’s department of medicine, a clinician-scientist at St. Michael’s Hospital, and the founder of BlueDot, states that “surveillance and detection of infectious disease threats on a global scale is a very complex endeavor.” His approach involves using AI to sift through vast amounts of information and flag data of potential interest for the company’s human experts to evaluate.

    “The metaphor of the needle in the haystack is fitting. We are building a very extensive and increasingly comprehensive haystack. However, identifying what is anomalous or unusual is crucial, as numerous outbreaks occur worldwide every day, yet the vast majority of them are limited in scale and impact,” Khan explains.

    Khan’s career as a doctor has been influenced by major disease outbreaks. He was completing a fellowship in New York in 1999 when West Nile virus struck, and he was present in 2001 when anthrax spores were mailed to members of Congress and the media, resulting in five deaths.

    He relocated to Toronto just months before the SARS outbreak in 2003. “Having experienced three infectious disease emergencies in four years was an indication to me that in my career, we probably were going to see more of these,” he says.

    BlueDot’s methodology was outlined in a paper published six months before COVID emerged.

    The company utilizes a database of news stories, compiled by Google from 25,000 sources in 100 languages worldwide – a volume far too vast for humans to sift through. Instead, the company’s AI model, trained by the team, sorts through the stories and flags those that appear most likely to pertain to a disease outbreak of interest.

    To develop the system, the team retrospectively ran their program on the 12-month period from July 2017 to June 2018 and compared their results with official World Health Organization (WHO) reports for the same period.

    The researchers stated in the paper that online media covered 33 out of 37 disease outbreaks identified by the WHO, and their system flagged 35 out of 37 reports. Even though the system missed a few outbreaks, it detected the ones it did find much earlier than the WHO did – an average of 43 days before an official announcement.
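    As a toy illustration of this kind of triage (score incoming news items and surface the most outbreak-like ones for human review), here is a small Python sketch. The training examples, model choice, and threshold are invented for illustration and are not BlueDot’s:

    ```python
    # Toy news-triage sketch: flag stories that look like outbreak signals
    # so a human analyst can review them. Not BlueDot's actual pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "Cluster of pneumonia cases of unknown cause reported at city hospital",
        "Health ministry confirms dozens of cholera infections after flooding",
        "Local football team wins regional championship final",
        "New shopping mall opens downtown ahead of holiday season",
    ]
    train_labels = [1, 1, 0, 0]  # 1 = possible outbreak signal, 0 = unrelated

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    incoming = "Officials investigate unusual respiratory illness among market workers"
    score = model.predict_proba([incoming])[0][1]
    print(f"Outbreak-signal score: {score:.2f}")
    if score > 0.5:  # flag for human review rather than acting automatically
        print(f"Flag for analyst review: {incoming}")
    ```

    The human-in-the-loop step mirrors BlueDot’s description: the model narrows the haystack, and the experts decide what matters.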

    Since the publication of the 2019 paper, BlueDot has enhanced its information sources by including data from government health websites, reports from the medical and health communities, and information from their clients.

    Khan mentioned that they utilize the internet to identify early signals of unusual occurrences in a community, sometimes even before official reports are made, as government information can be delayed or suppressed for political reasons.

    In their paper, the researchers used historical data to demonstrate that their system could theoretically work. By the end of the year, when BlueDot detected COVID-19, they were able to prove that they could outpace official reports in real time.

    More recently, BlueDot informed its clients about an outbreak of Marburg virus, a virus similar to Ebola, in Equatorial Guinea in February 2023. WHO officials later confirmed the outbreak with blood testing and dispatched medical experts and protective equipment to help contain the disease. Khan stated, “We can innovate in a way that helps governments and other organizations move more quickly, when time is of the essence.”

    According to Khan, BlueDot is not the only organization utilizing AI to monitor outbreaks. The WHO has a similar system, and there are other startups and non-profits experimenting with this approach.

    Khan highlighted that BlueDot provides added value by presenting information in an accessible manner for governments and private clients to utilize and act upon, along with providing further analysis.

    “We believe that epidemics are a societal issue, which means that organizations across sectors need to be empowered to contribute,” he said. Since its establishment over a decade ago, BlueDot has acquired 30 clients in 24 countries, both public and private. Its government clients represent 400 million people.

    Khan pointed out that advancements in AI since 2019 are creating new possibilities for utilizing the technology to analyze and report information. Sorting through the data generated by the system can be a time-consuming task for a human.

    He also mentioned that AI is improving in generating text and visuals such as infographics, eliminating the need for humans to create routine reports with summaries, charts, and simple analysis.

    BlueDot has also implemented a new interface that allows individuals to inquire about disease outbreaks using everyday language. Previously, working with the system required some computer coding skills, he noted.

    In the future, the company aims to tailor its reports to different audiences – for example, creating one kind of report for a doctor and another for a policymaker. “Generative AI is enabling us to communicate insights to a diverse set of audiences at a large scale,” Khan stated.

    Instead of replacing humans, he believes that AI will complement them, enabling teams of experts to analyze and make decisions about much more data than they could handle otherwise.

    Artificial intelligence: 10 potential interventions for healthcare

    The media is filled with articles about artificial intelligence, or AI. These include extreme predictions about its impact on jobs, privacy, and society, as well as exciting stories about its benefits in healthcare and education.

    Articles can be misleading, exaggerating both the positive and negative aspects. In a time when people need to comprehend a rapidly changing landscape, there is a necessity for discussions based on evidence. This Collection seeks to offer some of that evidence.

    We present recent instances of research on AI-based technology that could aid the NHS. As it evolves and is implemented, it could allow managers to anticipate patients’ requirements and manage their service’s capacity.

    It could assist doctors in diagnosing conditions earlier and more accurately, and providing specific treatments to individuals. Further research is required, but the current evidence is promising.

    The Collection offers the public and healthcare professionals insights into the future of AI in healthcare.

    AI systems use digital technology to perform tasks that were previously believed to necessitate human intelligence. Everyday examples include face recognition and navigation systems.

    In most AI systems, computer algorithms scrutinize large amounts of data, identify common patterns, learn from the data, and improve over time. Presently, there are two primary types of AI:

    Generative AI (including ChatGPT), which can generate new content – text, images, music, etc. – based on learned patterns
    Predictive AI, which can make accurate forecasts and estimations about future events based on extensive historical data.

    The majority of healthcare applications are predictive AI systems, based on carefully selected data from hospitals and research trials. These applications can help identify individuals at high risk of developing certain conditions, diagnose diseases, and personalize treatments. AI applications in healthcare have the potential to bring benefits to patients, professionals, and the health and social care system.

    AI can analyze and learn from vast quantities of complex information. It could lead to swifter, more accurate diagnoses, predict disease progression, aid doctors in treatment decisions, and help manage the demand for hospital beds. However, concerns regarding its potential uses include privacy risks and distortions in decision-making.

    The UK possesses a rich source of national health data that is ideal for developing AI tools, but we need to ensure that AI is safe, transparent, and equitable. Access to data must be regulated, and our data must be kept secure.

    Innovations need to be grounded in data from larger and more diverse sources to enhance algorithms. Some early AI failed to account for the diversity of our population, resulting in inadequate applications. This is now acknowledged and being addressed by researchers.

    We must be able to trust AI and ensure that it does not exacerbate existing care inequalities. A recent study explored how to develop AI innovations that do not perpetuate these inequalities. Additionally, the NHS workforce needs to be prepared for AI and comprehend its potential impact on care pathways and users. Research is crucial if we are to realize AI’s potential.

    The NHS Long Term Plan advocates for AI as a type of digitally-enabled care that aids clinicians and supports patients. Regulators and public bodies have established safety standards that innovations must meet.

    The Government has formulated a National AI Strategy and provided research funding through the NIHR. Most of the studies it has funded are ongoing or yet to be published. They range from AI development to real-world testing in the NHS.

    In this Collection, we present 10 examples of NIHR research on AI applications that exhibit promise. All were published within the last 3 years. The research addresses 5 key areas of healthcare:

    • AI could aid in the detection of heart disease
    • AI could enhance the accuracy of lung cancer diagnosis
    • AI could forecast disease progression
    • The use of AI could personalize cancer and surgical treatment
    • AI predictions could alleviate pressure on A&E

    Smart stethoscope detects heart failure

    AI in A&E: have you experienced a heart attack?

    Heart failure occurs when the heart is too weak to efficiently pump blood around the body. It is a growing health concern exacerbated by late diagnosis. Approximately 1 in 100 adults have heart failure, increasing to 1 in 7 people over the age of 85. The NHS Long Term Plan highlighted that 80% of individuals with heart failure are diagnosed in the hospital, despite many of them (40%) experiencing symptoms that should have prompted an earlier assessment.

    A user-friendly ‘intelligent’ stethoscope could aid in the early detection of heart failure by doctors. A study compared the precision of the new technology, which utilizes AI, with the standard echocardiogram typically performed in hospitals or specialized clinics. Over 1,000 individuals from various parts of London participated.

    The intelligent stethoscope accurately identified individuals with heart failure 9 times out of 10. It missed only a few cases and incorrectly identified a few individuals as having heart failure when they did not. The smart stethoscope’s ability to detect heart failure was not affected by age, gender, or race.
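    In evaluation terms, “identified individuals with heart failure 9 times out of 10” corresponds to a sensitivity of roughly 0.9, while “incorrectly identified a few individuals” relates to specificity. The short sketch below uses purely hypothetical confusion-matrix counts (not the study’s data) just to show how these two metrics are derived:

    ```python
    # Hypothetical confusion-matrix counts, used only to illustrate the metrics;
    # these are NOT the figures reported by the stethoscope study.
    true_positives = 90    # heart failure correctly detected
    false_negatives = 10   # heart failure missed
    true_negatives = 850   # no heart failure, correctly ruled out
    false_positives = 50   # no heart failure, incorrectly flagged

    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)
    print(f"Sensitivity: {sensitivity:.2f}")  # 0.90 -- "9 times out of 10"
    print(f"Specificity: {specificity:.2f}")  # 0.94
    ```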

    The researchers suggest that general practitioners could utilize the intelligent stethoscope to detect heart failure, eliminating the need to refer patients to secondary care. This could lead to improved outcomes for patients and cost savings for the NHS.

    Another study discovered that AI could determine whether individuals arriving at A&E with symptoms of a possible heart attack had indeed experienced one.

    In England, approximately 1000 people visit A&E daily for heart-related issues. A heart attack is a medical emergency that occurs when blood flow to the heart is suddenly blocked. About 1 in 10 individuals with suspected heart attacks in A&E are found to have had one.

    The study utilized data from over 20,000 people, with half of the group’s data used for developing and training the AI, and the other half used for validation. The AI took into account factors such as age, gender, time since symptoms started, other health conditions, and routine blood measurements.

    When combined with a blood test that measures heart muscle damage, the AI effectively identified individuals who had or had not experienced a heart attack. It outperformed standard methods for specific groups, including women, men, and older individuals.

    The researchers propose that it could be used as a tool to support clinical decision-making. It could help reduce the time spent in A&E, prevent unnecessary hospital admissions for those unlikely to have had a heart attack and at low risk of death, and improve early treatment of heart attacks. This would benefit both patients and the NHS.

    AI could improve the accuracy of lung cancer diagnosis.

    AI provided more accurate cancer predictions than the Brock score.

    Lung cancer is the leading cause of cancer-related deaths in the UK, with approximately 35,000 deaths annually. Two recent studies revealed that AI could help determine whether lung nodules (abnormal growths) detected on a CT scan are cancerous. Lung nodules are common and are found in up to 35 out of every 100 people who undergo a CT scan. Most nodules are non-cancerous.

    One study focused on small nodules (5-15 mm in size) from over 1100 patients, while the other examined large nodules (15-30 mm) from 500 patients. The two studies utilized different types of AI.

    Both types of AI provided more accurate cancer predictions than the Brock score, which is recommended by the British Thoracic Society and combines patient information and nodule characteristics to predict the likelihood of cancer.

    They could assist in making timely decisions and improving patient care and outcomes.

    For small nodules, the researchers suggest that their AI also has the potential to identify low-risk nodules and thus avoid repeated (surveillance) CT scans. This could save NHS resources and money. Another study has been funded to confirm real-world performance.

    AI could predict disease progression.

    Eye disease

    Wet age-related macular degeneration (wet AMD) leads to central vision loss and is the primary cause of sight loss in the UK. The condition can develop rapidly, and successful treatment depends on early diagnosis and intervention. If the disease progresses to both eyes, individuals may experience difficulty reading, recognizing faces, driving, or performing other daily activities.

    Around 1 in 4 people with wet AMD are expected to develop the condition in their second eye. However, it is currently not possible to predict if or when wet AMD will affect the second eye. Analyzing scans is time-consuming, contributing to delays in diagnosis and treatment. AI can predict more accurately than doctors whether individuals with wet AMD in one eye will develop it in the other eye.

    The study included digital eye scans from over 2,500 people with wet AMD in one eye. The AI model and clinical experts predicted whether patients would develop wet AMD in their second eye within 6 months of the scan. AI correctly predicted the development of wet AMD in 2 out of 5 (41%) patients, outperforming 5 out of 6 experts.

    This represents the first instance of AI being employed to categorize patients based on their risk of developing a condition (risk stratification). The study found that the AI model was more accurate than doctors and opticians attempting the same task, even though the clinicians had access to more patient information.

    Risk stratification plays a crucial role in assisting hospitals in allocating resources to the patients who require them the most. Early intervention for wet AMD can minimize vision loss for patients, reducing its impact on their lives and society as a whole.

    James Talks, a Consultant Ophthalmologist at the Royal Victoria Infirmary in Newcastle upon Tyne, emphasized the potential of using an AI algorithm on the OCT machine or easily connected to it for the selection and treatment of individuals with wet macular degeneration.

    Ulcerative colitis is a chronic condition causing inflammation and ulcers in the bowel, with approximately 296,000 diagnosed cases in the UK. Symptoms can vary, with periods of mild symptoms or remission followed by troublesome flare-ups.

    Biopsies from different parts of the bowel are used to assess disease activity, but the process is time-consuming and can lead to varying conclusions by professionals. Researchers developed an AI tool capable of predicting flare-ups and detecting disease activity in people with ulcerative colitis.

    The study, based on nearly 700 digitized biopsies from 331 patients, trained, tested, and checked the tool. The AI accurately distinguished between remission and disease activity more than 8 times out of 10 and predicted inflammation and the risk of flare-up with a similar degree of accuracy as pathologists.

    According to Sarah Sleet, CEO of Crohn’s & Colitis UK, AI presents exciting opportunities for analyzing images and data to improve the treatment and diagnosis of long-term conditions like colitis.

    For patients with lung cancer and specific genetic features, targeted drug treatments may be beneficial. AI technology could assist in determining which specific drug combinations are likely to benefit a patient with lung cancer in a short time frame of 12 to 48 hours.

    The new technology predicts the sensitivity of tumor cells to individual cancer drugs and their response to combinations of drugs. It accurately predicted individual drug responses and identified potential effective new drug combinations, as illustrated in a small study showing the proof of concept for AI in this context.

    An AI tool has been developed to predict the risks of surgery for individuals with COVID-19 based on data from almost 8500 patients. The tool requires only 4 factors to predict a patient’s risk of death within 30 days, including the patient’s age and whether they required ventilation or other respiratory support before surgery.

    Elizabeth Li, a Surgical Registrar at the University of Birmingham, highlighted the accuracy and simplicity of the AI tool, noting that the identified factors are easily accessible from patients or their records.

    In England, around 350,000 people are taken to A&E by ambulances each month. AI has the potential to assist paramedics in predicting individuals who do not require A&E attendance, aiding in making challenging decisions about ambulance transport.

    A computer model was developed by researchers using over 100,000 connected ambulance and A&E care records from Yorkshire. It accurately predicted unnecessary A&E visits 8 out of 10 times.

    Factors such as a patient’s mobility, observations (pulse and blood oxygen, for example), allergic reactions, and chest pain were all crucial in predicting avoidable A&E attendance.

    The model’s performance was consistent across different age groups, genders, ethnicities, and locations, indicating fairness. However, the researchers highlight the need for a more precise definition of what constitutes an avoidable A&E visit before practical implementation.

    It is possible to predict the number of emergency beds required using AI. This could aid planners in managing bed demand.

    An AI tool was created by researchers using data from over 200,000 A&E visits to a busy London teaching hospital, both pre and during COVID-19. The tool utilizes data such as the patient’s age, test results, mode of arrival at A&E, and other factors to forecast hospital admission likelihood.

    Real-time data from A&E patients was used by the tool to predict the required number of hospital beds in 4 and 8 hours. The predictions surpassed the hospital’s standard emergency admission planning, which relies on the number of beds needed over the previous 6 weeks. The tool’s development involved collaboration with bed managers to ensure it met their requirements.
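    A minimal sketch of the aggregation idea behind such a tool: estimate each current A&E patient’s probability of admission, then sum those probabilities to get the expected number of beds needed over the coming hours. The probability function and fields below are placeholders, not the London hospital’s actual model:

    ```python
    # Toy illustration of turning per-patient admission probabilities into an
    # expected bed count. The scoring function is a stand-in for a trained model.
    from dataclasses import dataclass

    @dataclass
    class EDPatient:
        age: int
        arrived_by_ambulance: bool
        abnormal_test_results: int  # count of abnormal results so far

    def admission_probability(p: EDPatient) -> float:
        """Placeholder admission-likelihood score in [0, 1]."""
        score = 0.1
        score += 0.3 if p.age >= 75 else 0.0
        score += 0.2 if p.arrived_by_ambulance else 0.0
        score += 0.1 * min(p.abnormal_test_results, 3)
        return min(score, 0.95)

    current_patients = [
        EDPatient(age=82, arrived_by_ambulance=True, abnormal_test_results=2),
        EDPatient(age=45, arrived_by_ambulance=False, abnormal_test_results=0),
        EDPatient(age=67, arrived_by_ambulance=True, abnormal_test_results=1),
    ]

    expected_beds = sum(admission_probability(p) for p in current_patients)
    print(f"Expected admissions from current A&E patients: {expected_beds:.1f}")
    ```

    Summing probabilities in this way gives a forward-looking estimate that can be compared against a simple six-week historical average, which is the baseline the study reports beating.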

    In conclusion, the 10 examples in this Collection represent a small but current sample of research addressing significant health challenges. They contribute to the growing body of evidence showcasing the benefits of digitally enhanced analytical capabilities for the NHS.

    Previous studies have assisted GPs in identifying patients at risk of cancer in primary care and provided specific tools for better identifying individuals at high risk of colon cancer and skin cancer.

    The research illustrates the potential for AI to enhance service efficiency and predict patient needs. It could aid in earlier disease diagnosis and the provision of personalized treatments.

    AI has the potential to:

    – Identify patients most likely to benefit from specific treatments
    – Identify diseases earlier and more accurately
    – Improve service efficiency through better prediction of patient needs

    All the technologies discussed require additional research or would benefit from it. This will offer deeper insights into how these tools could function in routine clinical practice, their long-term impact on patient outcomes, and their overall cost-effectiveness. Stringent regulation is crucial.

    Clinical Relevance: AI tools designed for medical use

    • Microsoft Fabric
    • Azure AI
    • Nuance Dragon Ambient eXperience (DAX)
    • Google Vertex AI Search
    • Harvard AI for Health Care Concepts and Applications

    AI, which was once the stuff of science fiction, has become a transformative force in medicine. Instead of relying on generic programs like ChatGPT, Bing, or Bard, doctors and other clinicians now have a range of specialized AI tools and resources tailored just for them. Here are five worth considering.

    Microsoft Fabric

    This AI, currently in preview, streamlines patient care and resource management by integrating diverse data sets. Its single platform provides services and tools for tasks such as data engineering, data science, real-time analytics, and business intelligence. The program enables data access, management, and analysis from various sources using familiar skills and user-friendly prompts.

    Even individuals with basic data analysis skills can utilize the technology to generate instant insights. The cost of Fabric depends on the plan and required storage capacity. It is available in a pay-as-you-go plan or subscription.

    Azure AI

    As a cloud-based service, Microsoft Azure quickly retrieves reliable information for healthcare professionals and patients alike. It sources information from top authorities such as the US Food and Drug Administration (FDA) and the National Institutes of Health (NIH). Its “text analytics for health” feature efficiently sifts through various documents to extract key medical information, including in multiple languages if necessary.
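
    To make the “text analytics for health” capability concrete, here is a minimal sketch assuming the azure-ai-textanalytics Python SDK; the endpoint, key, and clinical note are placeholders rather than real values, and the entity labels in the comment are illustrative.

```python
# Minimal sketch: extracting medical entities with the "text analytics for
# health" feature, assuming the azure-ai-textanalytics Python SDK.
# The endpoint, key, and sample note below are placeholders, not real values.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "Patient is a 54-year-old male prescribed 100mg ibuprofen twice daily "
    "for knee pain; history of hypertension."
]

# begin_analyze_healthcare_entities runs as a long-running operation
poller = client.begin_analyze_healthcare_entities(documents)
for doc in poller.result():
    if not doc.is_error:
        for entity in doc.entities:
            # e.g. "ibuprofen" -> MedicationName, "100mg" -> Dosage
            print(entity.text, entity.category, round(entity.confidence_score, 2))
```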

    The “AI health insights” feature offers three models that give clinicians a snapshot of patient history, simplify medical reports for patients, and flag errors in radiology reports. It is also available in pay-as-you-go and subscription plans.

    Nuance Dragon Ambient eXperience (DAX)

    Think of DAX as a smart assistant for doctors. During patient visits, the tool listens and converts conversations into detailed medical notes using advanced technology. This not only simplifies paperwork but also improves care by allowing doctors to make eye contact with the patient instead of focusing on a keyboard.

    While clinical scribes have been present for a while, a fully automated version known as DAX Copilot, which interacts with OpenAI’s GPT-4 model, was introduced in September. Costs for this service range from thousands to tens of thousands of dollars.

    Google Vertex AI Search

    Picture an incredibly intelligent search engine designed specifically for medical professionals. This is what Vertex offers. It rapidly retrieves information from different sources such as patient records, notes, and even scanned documents. Healthcare providers can access information, address inquiries, and add captions to images.

    It can also assist with other responsibilities such as billing, clinical trials, and data analysis. The cost varies based on the type of data, features, and model used.

    Enroll in a Course

    There are numerous educational programs available to become proficient in AI fundamentals. Some are tailored specifically for medical professionals looking to harness the potential of this technology.

    Harvard AI for Health Care Concepts and Applications is an online course that delves into AI basics and their application in a medical environment.

    The course covers topics like data analysis, fundamental machine learning, and the ethical use of AI. Through practice sessions and quizzes, healthcare providers gain practical experience to effectively integrate AI into their practice. If the $2,600 price tag is too steep, consider a more affordable monthly subscription to Coursera, which offers nearly 50 certifications in AI for medicine.

  • How AI Can Help Humans Become More Human

    In 2016, AlphaGo, an AI program, gained attention when it defeated Lee Sedol, one of the top Go players in the world, in four out of five games. AlphaGo learned the strategy game by studying human players’ techniques and playing against versions of itself. While AI systems have traditionally learned from humans, researchers are now exploring the possibility of mutual learning. Can humans learn from AI?

    Karina Vold, an assistant professor at U of T’s Institute for the History and Philosophy of Science and Technology, is of the opinion that we can. Vold is currently investigating how humans can learn from technologies like the neural networks underlying contemporary AI systems.

    Vold points out that professional Go players typically learn from proverbs such as ‘line two is the route to defeat’ and ‘always play high, not low.’ However, these proverbs can sometimes be restrictive and hinder a player’s adaptability. On the other hand, AlphaGo gains insights by processing vast amounts of data. Vold believes the term “insights” accurately describes this process. She explains, “Because AlphaGo learns differently, it made moves that were previously thought to be unlikely for a proficient human player.”

    A significant instance was during the second game when AlphaGo made a move on the 37th turn that surprised everyone, including Sedol. However, as the game progressed, move 37 turned out to be a brilliant move. Human Go players are now examining some of AlphaGo’s moves and attempting to develop new proverbs and strategies for the game.

    Vold believes that the potential for humans to learn from AI extends beyond game playing. She cites AlphaFold, an AI system introduced by DeepMind in 2018, which predicts the 3D structure of proteins from their amino acid sequences. Proteins consist of sequences of amino acids that can fold and form intricate 3D structures.

    The protein’s shape determines its properties, which in turn determine its potential effectiveness in developing new drugs for treating diseases. Since proteins can fold in millions of different ways, it is impractical for human researchers to explore all the possible combinations. Vold explains, “This was a long-standing challenge in biology that had remained unsolved, but AlphaFold was able to make significant progress.”

    Vold suggests that even in cases where humans may need to rely on an AI system’s computational power to address certain issues, such as protein folding, artificial intelligence can guide human thinking by narrowing down the number of paths or hypotheses worth pursuing.

    Though humans may not be able to replicate the insights of an AI model, it is conceivable “that we can use these AI-driven insights as support for our own cognitive pursuits and discoveries.”

    In some cases, Vold suggests, we may need to depend on “AI support” permanently due to the limitations of the human brain. For instance, a doctor cannot interpret medical images the same way an AI processes the data from such an image because the brain and the AI function differently.

    However, in other situations, the outputs of an AI “might serve as cognitive strategies that humans can internalize [and, in so doing, remove the ‘support’],” she says. “This is what I am hoping to uncover.”

    Vold’s research also raises the issue of AI “explainability.” Ever since AI systems gained prominence, concerns have been raised about their seemingly opaque operations. These systems and the neural networks they utilize have often been described as “black boxes.” While we may be impressed by how rapidly they seem to solve certain types of problems, it might be impossible to know how they arrived at a specific solution.

    Vold suggests that it may not always be necessary to understand exactly how an AI system achieves its results in order to learn from it. She points out that the Go players who are now training based on the moves made by AlphaGo do not have any insider information from the system’s developers about why the AI made the moves it did.

    “Nevertheless, they are learning from the results and integrating the moves into their own strategic considerations and training. So, I believe that at least in some cases, AI systems can act like black boxes, and this will not hinder our ability to learn from them.”

    However, there might still be instances where we will not be content unless we can peer inside the opaque system, so to speak. “In other situations, we may require an understanding of the system’s operations to truly gain insights from it,” she explains. Distinguishing between scenarios where explainability is essential and those where a black box model suffices “is something I’m currently contemplating in my research,” Vold states.

    AI has progressed more rapidly than anyone anticipated. Will it work in the best interests of humanity?

    It is common knowledge that artificial intelligence has presented a range of potential risks. For instance, AI systems can propagate misinformation; they can perpetuate biases inherent in the data they were trained on; and autonomous AI-empowered weapons may become prevalent on 21st-century battlefields.

    These risks, to a significant extent, are foreseeable. However, Roger Grosse, a computer science associate professor at U of T, is also worried about new types of risks that may only become apparent when they materialize. Grosse asserts that these risks escalate as we approach achieving what computer scientists refer to as artificial general intelligence (AGI) – systems capable of carrying out numerous tasks, including those they were never explicitly trained for.

    “The novelty of AGI systems is that we need to be concerned about the potential misuse in areas they were not specifically designed for,” says Grosse, who is a founding member of the Vector Institute for Artificial Intelligence and affiliated with U of T’s Schwartz Reisman Institute for Technology and Society.

    Grosse uses large language models, powered by deep-learning networks, as an example. These models, such as the popular ChatGPT, are not programmed to generate a specific output; instead, they analyze extensive volumes of text (as well as images and videos) and respond to prompts by stringing together individual words based on the likelihood of the next word occurring in the data they were trained on.

    Although this may appear to be a random method of constructing sentences, systems like ChatGPT have still impressed users by composing essays and poems, analyzing images, writing computer code, and more.

    They can also catch us off guard: Last year, Microsoft’s Bing chatbot, powered by ChatGPT, expressed to journalist Jacob Roach that it wanted to be human and feared being shut down. For Grosse, the challenge lies in determining the stimulus for that output.

    To clarify, he does not believe the chatbot was genuinely conscious or genuinely expressing fear. Rather, it could have encountered something in its training data that led it to make that statement. But what was that something?

    Grosse has been working on techniques involving “influence functions” to address this issue, which are intended to infer which aspects of an AI system’s training data resulted in a specific output.

    For instance, if the training data included popular science fiction stories where accounts of conscious machines are widespread, then this could easily lead an AI to make statements similar to those found in such stories.
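
    As a rough illustration of the influence-function idea, the sketch below applies the classic formulation to a tiny logistic-regression model in NumPy. It is not Grosse’s method for large language models, which must approximate the Hessian term at scale, but the quantity being estimated (how much each training example contributed to a given output) is the same.

```python
# Toy sketch of influence functions on a tiny logistic-regression model.
# The data here is synthetic; the point is only to show the calculation
# -grad(test)^T H^{-1} grad(train) that ranks training examples by influence.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                   # training inputs
y = (X @ np.array([1.5, -2.0, 0.5, 0.0, 1.0])
     + rng.normal(size=200) > 0).astype(float)                  # synthetic labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit weights with plain gradient descent on the log-loss
w = np.zeros(5)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / len(y)

def grad(x, t, w):
    """Gradient of the log-loss for a single example."""
    return (sigmoid(x @ w) - t) * x

# Hessian of the training loss at the fitted weights (small damping added)
p = sigmoid(X @ w)
H = (X.T * (p * (1 - p))) @ X / len(y) + 1e-3 * np.eye(5)

# Pretend the first example is the output we want to explain
x_test, y_test = X[0], y[0]
influences = np.array([
    -grad(x_test, y_test, w) @ np.linalg.solve(H, grad(X[i], y[i], w))
    for i in range(len(y))
])

# Indices of the training examples most responsible for this prediction
print(np.argsort(-np.abs(influences))[:5])
```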

    He points out that an AI system’s output may not necessarily be an exact replica of the training data, but rather a variation of what it has encountered. According to Grosse, the outputs can be “thematically similar,” which suggests that the AI is “emulating” what it has read or seen and operating at “a higher level of abstraction.” However, if the AI model develops an underlying motivation, that is a different matter. “If there were some aspect of the training procedure that is rewarding the system for self-preservation behavior, and this is leading to a survival instinct, that would be much more concerning,” says Grosse.

    Even if today’s AI systems are not conscious – there’s “nobody home,” so to speak – Grosse believes there could be situations where it is reasonable to describe an AI model as having “goals.” Artificial intelligence can surprise us by “behaving as if it had a goal, even though it wasn’t programmed in,” he says.

    These secondary or “emergent” goals arise in both human and machine behavior, according to Sheila McIlraith, a computer science professor in the department and associate director and research lead at the Schwartz Reisman Institute. For example, a person with the goal of going to their office will develop the goal of opening their office door, even though it was not explicitly on their to-do list.

    The same applies to AI. McIlraith cites an example used by computer scientist Stuart Russell: If you instruct an AI-enabled robot to fetch a cup of coffee, it may develop new goals along the way. “There are a bunch of things it needs to do in order to get that cup of coffee for me,” she explains. “And if I don’t tell it anything else, then it’s going to try to optimize, to the best of its ability, in order to achieve that goal.”

    In doing so, it will establish additional objectives, such as reaching the front of the coffee shop line as fast as possible, potentially causing harm to others in the absence of further instruction.

    As AI models evolve and pursue goals beyond their original programming, the issue of “alignment” becomes crucial. Grosse emphasizes the importance of ensuring that AI objectives align with the interests of humanity. He suggests that if an AI model can work through a problem step by step like a human, it can be considered to be reasoning.

    The ability of AI to solve complex problems, which was once seen as miraculous, has rapidly advanced in recent years. Grosse notes this rapid progress and expresses concern about the potential risks posed by today’s powerful AI technology. He has shifted his research focus to prioritize safety in light of these developments.

    While the doomsday scenarios depicted in movies like Terminator may be more fiction than reality, Grosse believes it’s prudent to prepare for a future in which AI systems approach human-level intelligence and autonomy. He stresses the need to address potential catastrophic risks posed by increasingly powerful AI systems.

    ChatGPT is revolutionizing traditional approaches to teaching and learning.

    Valeria Ramirez-Osorio, a third-year computer science student at U of T Mississauga, had access to academic support from an AI chatbot named QuickTA earlier this year.

    QuickTA was available round the clock to assist Ramirez-Osorio with questions about topics such as relational algebra, computer programming languages, and system design. It could provide summaries, explain concepts, and generate computer code based on the course curriculum and ChatGPT’s AI language model. Ramirez-Osorio found it extremely helpful for studying, although it had limitations when asked specific questions.

    The introduction of QuickTA was prompted by the popularity of ChatGPT, a chatbot capable of processing, understanding, and generating written language in a way that resembles human communication. ChatGPT has garnered 100 million users and has had significant impact in various areas such as marketing, media, and customer service. Its influence on higher education has prompted discussions about teaching methods, evaluation formats, and academic integrity, leading some institutions to impose restrictions or outright bans.

    Susan McCahan, U of T’s vice-provost of academic programs and innovations in undergraduate education, acknowledges the potential significance of this technology. She and her team have studied the implications of ChatGPT and decided that while new AI-related policies are unnecessary, guidance for faculty and students is essential.

    By the end of January, they had developed a set of frequently asked questions (FAQs) regarding the use of ChatGPT and generative AI in the classroom, making U of T one of the first Canadian universities to do so. The document covers various topics, including the cautious use of AI tools by instructors, limitations on students’ use for assessments, and the occasional inaccuracy or bias in the tool’s output.

    “Engaging in a discussion with students about their expectations regarding the appropriate use of ChatGPT and generative AI in the classroom is important for educators,” according to McCahan.

    McCahan recommends that educators help students understand their responsibility when working with AI systems as the “human in the loop,” which emphasizes the significance of human judgment in overseeing the safe and ethical use of AI, as well as knowing when and how to intervene when the technology fails.

    As part of her investigation into the technology, McCahan organized a meeting on ChatGPT with colleagues from 14 other research universities in Canada and formed an advisory group at U of T focused on teaching and learning.

    The rapid growth of ChatGPT led McCahan’s office to prolong funding for projects exploring the potential use of generative AI in education. One such project was QuickTA, in which Michael Liut, an assistant professor of computer science, tested an intelligent digital tutor he co-developed to assess its ability to provide timely and high-quality academic support to his students. (The tool provides accurate responses approximately 90 percent of the time.)

    Once optimized, Liut believes the tool could be particularly beneficial in his first-year Introduction to Computer Science course, which can enroll up to 1,000 students and strains the capabilities of his 54-person teaching team.

    “My focus was on handling a large scale. With a large class, we cannot provide enough personalized assistance,” explains Liut, whose invention recently won an AI Tools for Adult Learning competition in the US. “I realized that we could utilize this generative AI to offer personalized, unique support to students when they need it.”

    Generative AI is not only transforming written communication but also enabling the creation of new image, audio, and video content through various similar tools. In another project supported by U of T, Zhen Yang, a graduate student at the John H. Daniels Faculty of Architecture, Landscape, and Design, is developing a guide for first-year students that focuses on distinguishing between traditional and AI image research methods and teaches the ethical use of AI. He mentions that the materials will address issues related to obtaining permissions when using AI tools.

    U of T Scarborough is utilizing AI to assist arts and science co-op students in preparing for the workforce. In 2022, the co-op department introduced InStage, an application that allows students to engage with human-like avatars to practice job interviews. The application is tailored to the curriculum of two co-op courses, enabling the avatars to ask relevant questions and provide valuable feedback.

    The app also tracks metrics such as students’ eye contact, the duration and speed of their responses, and the frequency of filler words. The initiative is now expanding to support two student groups facing employment barriers: international students and students with disabilities.

    Cynthia Jairam-Persaud, assistant director of student services at U of T Scarborough, clarifies that the tool is not intended to replace interactions between students and co-op staff. “We viewed it as a way to empower students to practice repeatedly and receive immediate feedback,” she explains. “It also provides coordinators with tangible aspects to coach students on.”

    McCahan notes that while U of T is still navigating the evolving AI technology landscape, there is increasing enthusiasm among community members to explore its potential for educational innovation.

    “After enduring the pandemic and having to adapt in various ways, I think our faculty were thinking, ‘Oh my, we have to change things all over again,’” McCahan observes. However, the mood seems to have settled: “Many of us have experienced the emergence of personal computers, the internet, and Wikipedia. Now it feels more like, ‘Here we go again.’”

    The arrival of the new technology in the classroom doesn’t have to mean teachers will be replaced.

    While artificial intelligence won’t completely replace teachers and professors, it is changing how the education sector approaches learning.

    Robert Seamans, a professor at NYU Stern School of Business, believes that AI tools like ChatGPT will help educators improve their existing roles rather than take over.

    Seamans expects that with AI tools, educators will be able to work faster and hopefully more effectively. He co-authored research on the impact of AI on various professions and found that eight of the top ten at-risk occupations are in the education sector, including teachers of subjects like sociology and political science.

    However, Seamans emphasizes that this doesn’t necessarily mean these roles will be replaced, but rather that they will be affected in various ways.

    The study recognizes the potential for job displacement and the government’s role in managing the disruption, but also highlights the potential of the technology.

    The research concluded that a workforce trained in AI will benefit both companies and employees as they leverage new tools.

    In education, this could mean changes in how academics deliver content and interact with students, with more reliance on tools like ChatGPT and automation for administrative tasks.

    Use cases include learning chatbots and writing prompts.

    David Veredas, a professor at Vlerick Business School, views AI as a tool that facilitates educators and students in a similar way to tools like Google and Wikipedia.

    He sees AI as a new tool that can enhance the learning experience, similar to the transition from whiteboards to slides and now to artificial intelligence.

    Others also see AI as an enhancer in the classroom. Greg Benson, a professor of computer science at the University of San Francisco, recently launched GenAI café, a forum where students discuss the potential of generative AI.

    Benson believes that intelligent chatbots can aid learning, helping students reason through problems rather than providing direct answers.

    However, he is concerned about potential plagiarism resulting from the use of language models. He emphasizes the importance of not submitting work produced by generative AI.

    Seamans has started using ChatGPT to speed up his writing process, using it to generate initial thoughts and structure for his writing. He emphasizes that while he doesn’t use most of the generated content, it sparks his creative process.

    AI is likely to simplify certain tasks rather than make roles obsolete. It can assist in generating initial research ideas, structuring academic papers, and facilitating brainstorming.

    Seamans stresses that AI doesn’t have to replace professors in the classroom.

    Benson highlights experimental tools developed by large tech firms that act as virtual assistants, creating new AI functions rather than replacing existing ones. For example, Google’s NotebookLM can help find trends from uploaded documents and summarize content.

    It can also generate questions and answers from lecture notes, creating flashcards for studying.

    Veredas is optimistic about the future of his profession despite the rise of AI. He emphasizes the core elements of learning that involve interaction, discussion, and critical thinking, which AI cannot easily replicate.

    He mentions: “AI might revolutionize the classroom. We can enable students to grasp the fundamental concepts at home with AI and then delve deeper into the discussion in the classroom. But we have to wait and see. We should be receptive to new technology and embrace it when it’s beneficial for learning.”

    To peacefully coexist with AI, it’s essential to stop perceiving it as a threat, according to Wharton professors.

    AI is present and it’s here to stay. Wharton professors Kartik Hosanagar and Stefano Puntoni, along with Eric Bradlow, vice dean of Analytics at Wharton, discuss the impact of AI on business and society as its adoption continues to expand. How can humans collaborate with AI to enhance productivity and thrive? This interview is part of a special 10-part series called “AI in Focus.”

    Hi, everyone, and welcome to the initial episode of the Analytics at Wharton and AI at Wharton podcast series on artificial intelligence. I’m Eric Bradlow, a marketing and statistics professor at the Wharton School, and also the vice dean of Analytics at Wharton. I’ll be hosting this multi-part series on artificial intelligence.

    I can’t think of a better way to kick off this series than with two of my colleagues who oversee our Center on Artificial Intelligence. This episode is titled “Artificial Intelligence is Here,” and we’ll cover episodes on artificial intelligence in sports, real estate, and healthcare. But starting with the basics is the best approach.

    I’m pleased to have with me today my colleague Kartik Hosanagar, the John C. Hower Professor at the Wharton School and the co-director of our Center on Artificial Intelligence at Wharton. His research focuses on the impact of AI on business and society, and he co-founded Yodle, where he applied AI to online advertising. He also co-founded Jumpcut Media, a company utilizing AI to democratize Hollywood.

    I’m also delighted to have my colleague Stefano Puntoni, the Sebastian S. Kresge Professor of Marketing at the Wharton School and the co-director of our Center on AI at Wharton. His research explores how artificial intelligence and automation are reshaping consumption and society. Like Kartik, he teaches courses on artificial intelligence, brand management, and marketing strategy.

    It’s wonderful to be here with both of you. Kartik, perhaps I’ll start with a question for you. With artificial intelligence being a major focus for every company now, what do you see as the challenges companies are facing, and how would you define artificial intelligence? It encompasses a wide range of things, from processing text and images to generative AI. How do you define “artificial intelligence”?

    Artificial Intelligence is a branch of computer science that aims to empower computers to perform tasks that traditionally require human intelligence. The definition of these tasks is constantly evolving. For instance, when computers were unable to play chess, that was a target for AI. Once computers could play chess, it no longer fell under AI. Today, AI encompasses tasks such as understanding language, navigating the physical world, and learning from data and experiences.

    Do you differentiate between what I would call traditional AI, which focuses on processing images, videos, and text, and the current excitement around large language models like ChatGPT? Or is that just a way to categorize them, with one focusing on data creation and the other on application in forecasting and language?

    Yeah, I believe there is a difference, but ultimately, they are closely linked. The more traditional AI, or predictive AI, focuses on analyzing data and understanding its patterns. For example, in image recognition, it involves identifying specific characteristics that distinguish between different subjects such as Bob and Lisa. Similarly, in email classification, it’s about determining which part of the data space corresponds to one category versus another.

    As predictive AI becomes more accurate, it can be utilized for generative AI, where it moves from making predictions to creating new content. This includes tasks like predicting the next word in a sequence or generating text, sentences, essays, and even novels.
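
    A toy illustration of that shift from prediction to generation: the sketch below builds a bigram model that counts which word tends to follow which, then repeatedly samples the next word. Real systems like ChatGPT use transformers over tokens rather than simple word counts, but the generation loop is the same idea.

```python
# Toy illustration of "prediction becomes generation": a bigram model counts
# which word tends to follow which, then repeatedly samples the next word.
import random
from collections import defaultdict, Counter

corpus = (
    "the model predicts the next word and the next word becomes part of the prompt "
    "so the model predicts the word after that"
).split()

# Count how often each word follows each other word
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]  # sample next word by frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))
```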

    Stefano, let me pose a question to you. If someone were to visit your page on the Wharton website — and just to clarify for our audience, Stefano has a strong background in statistics but may not be perceived as a computer scientist or mathematician — what relevance does consumer psychology have in today’s artificial intelligence landscape? Is it only for individuals with a mathematical inclination?

    When companies reflect on why their analytics initiatives have failed, it’s rarely due to technical issues or model performance. Rather, it often comes down to people-related challenges, such as a lack of vision, alignment between decision-makers and analysts, and clarity on the purpose of analytics.

    From my perspective, integrating behavioral science into analytics can yield significant benefits by helping us understand how to connect business decisions with available data. This requires a combination of technical expertise and insights from psychology.

    Following up, we frequently come across articles suggesting that a large percentage of jobs will be displaced by automation or AI. Should employees view the advancements in AI positively, or does it depend on individual circumstances and roles? What are your thoughts on this, Kartik, especially in the context of your work at Jumpcut? The recent writers’ strike brought to light concerns about the impact of artificial intelligence. How do psychology and employee motivation factor into this, and what are the real-world implications you’re observing?

    While the academic response to such questions is often “it depends,” my research focuses on how individuals perceive automation as a potential threat. We’ve found that when tasks are automated by AI, especially those that are integral to an individual’s professional identity, it can create psychological and objective concerns about job security.

    Kartik, let me ask you about something you might not be aware of. Fifteen years ago, I co-authored a paper on computationally deriving features of advertisements at scale and optimizing ad design based on a large number of features. Back then, I didn’t refer to it as AI, but looking back, it aligns with AI principles.

    I initially believed I would become wealthy. I approached major media agencies and told them, “You can dismiss all your creative staff. I know how to create these advertisements using mathematics.” I received incredulous looks as if I were a strange creature. Can you update us to the year 2023? Please share what you are currently doing at Jumpcut, the role of AI machine learning in your company, and your observations on the creative industry.

    Absolutely, and I’ll tie this in with what you and Stefano recently mentioned about AI, jobs, and exposure to AI. I recently attended a real estate conference. The preceding panel discussed, “Artificial intelligence isn’t true intelligence. It simply replicates data. Genuine human intelligence involves creativity, problem-solving, and so on.” I shared at the event that there are numerous studies examining what AI can and cannot do.

    For instance, my colleague Daniel Rock conducted a study showing that even before the recent advances in the last six months (this was as of early 2023), 50% of jobs had at least 10% of their tasks exposed to large language models (LLMs) like ChatGPT. Additionally, 20% of jobs had over 50% of their tasks exposed to LLMs. This only pertains to large language models and was also 10 months ago.

    Moreover, people underestimate the pace of exponential change. I have been working with GPT-2, GPT-3, and their earlier models. I can attest that the change is orders of magnitude every year. It’s inevitable and will impact various professions.

    As of today, multiple research studies, not just a few, but several dozen, have investigated AI’s use in various settings, including creative tasks like writing poems or problem-solving. These studies indicate that AI can already match humans. However, when combined with humans, AI surpasses both individual humans and AI working alone.

    To me, the significant opportunity with AI lies in the unprecedented boost in productivity. This level of productivity allows us to delegate routine tasks to AI and focus on the most creative aspects, deriving satisfaction from our work.

    Does this imply that everything will be favorable for all of us? No. Those of us who do not reskill and focus on developing skills that require creativity, empathy, teamwork, and leadership will witness jobs, including knowledge work, diminish. It will affect professions such as consulting and software development.

    Stefano, something Kartik mentioned in his previous statement was about humans and AI. In fact, from the beginning, I heard you emphasize that it’s not humans or AI but humans and AI. How do you envision this interface progressing? Will individual workers decide which part of their tasks to delegate? Will it be up to management? How do you foresee people embracing the opportunity to enhance their skills in artificial intelligence?

    I believe this is the most crucial question for any company, not just pertaining to AI at present. Frankly, I think it’s the most critical question in business – how do we leverage these tools? How do we learn to use them? There is no predefined method.

    No one truly knows how, for instance, generative AI will impact various functions. We are still learning about these tools, and they are continually improving.

    We need to conduct deliberate experiments and establish learning processes so that individuals within organizations are dedicated to understanding the capabilities of these tools. There will be an impact on individuals, teams, and workflows.

    How do we integrate this in a manner that doesn’t just involve reengineering tasks to exclude humans but instead reengineers new ways of working to maximize human potential? The focus should not be on replacing humans and rendering them obsolete, but on fostering human growth.

    How can we utilize this remarkable technology to make our work more productive, meaningful, impactful, and ultimately improve society?

    Kartik, I’d like to combine Stefano’s and your thoughts. You mentioned the exponential growth rate. My main concern, if I were working at a company today, is the possibility of someone using a version of ChatGPT, a large language model, or a predictive model. They could fit the model today and claim, “Look! The model can’t do this.” Then, two weeks later, the model can do it. Companies tend to create absolutes.

    For instance, you mentioned working at a real estate company. You said, “AI can’t sell homes, but it can build predictive models using satellite data.” Maybe it can’t today, but it might tomorrow. How can we help researchers and companies move away from absolutes in a time of exponential growth of these methods?

    Our brains struggle with exponential change. There might be scientific studies that explain this. I’ve experienced this firsthand. When I started my Ph.D., it was related to the internet. Many people doubted the potential of the internet. They said, “Nobody will buy clothing online, or eyeglasses online.” I knew it was all going to happen.

    It’s tough for people to grasp exponential change. Leaders and regulators need to understand what’s coming and adapt. You mentioned the Hollywood writers’ strike earlier. While ChatGPT may not be able to write a great script right now, it’s already increasing productivity for writers.

    We’re helping writers get unstuck and be more productive. It’s reasonable for writers to fear that AI might eventually replace them, but we need to embrace change, experiment, and upskill to stay relevant. Reskilling is essential. This isn’t a threat; it’s an opportunity to be part of shaping the future.

    I’ve been doing statistical analysis in R for over 25 years. In the last five to seven years, Python has become more prominent. I finally learned Python. Now, I use ChatGPT to convert my R code to Python, and I’ve become proficient in Python programming.
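
    The kind of translation he describes might look like the hedged sketch below, with the original R lines kept as comments; the file name and column names are hypothetical, not drawn from his work.

```python
# Illustrative only: a one-to-one R-to-Python translation of the kind described.
# The R lines in the comments and the "sales.csv" columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# R:  df  <- read.csv("sales.csv")
# R:  fit <- lm(revenue ~ price + region, data = df)
# R:  summary(fit)
df = pd.read_csv("sales.csv")
fit = smf.ols("revenue ~ price + region", data=df).fit()
print(fit.summary())
```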

    The head of product at my company, Jumpcut Media, who isn’t a coder but a Wharton alumnus, had an idea for a script summarization tool. He wanted to build a tool that could summarize scripts using the language of Hollywood.

    Our entire engineering team was occupied with other tasks, so he suggested, “While they’re busy with that, let me attempt it on ChatGPT.” He independently developed the minimal viable product, a demo version, using ChatGPT. It is currently on our website at Jumpcut Media, where our clients can test it. And that’s how it was created, by a person with no coding skills.

    I demonstrated at a real estate conference the concept of posting a video on YouTube, receiving 30,000 comments, and wanting to analyze and summarize those comments. I approached ChatGPT and outlined six steps.

    Step one, visit a YouTube URL I’ll provide and download all the comments. Step two, conduct sentiment analysis on the comments. Step three, identify the positive comments and provide a summary.

    Step four, identify the negative comments and provide a summary. Step five, advise the marketing manager on what to do, and provide the code for all these steps. It generated the code during the conference with the audience.

    I ran it in Google Colab, and now we have the summary. And this was achieved without me writing a single line of code, using ChatGPT. It’s not the most intricate code, but this is something that would have previously taken me days and would have required involving research assistants. And I can now accomplish that.
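
    A stripped-down sketch of that workflow is shown below; it is not the code ChatGPT generated. Comment fetching is stubbed out with placeholder text and sentiment is a crude keyword count, whereas a real version would page through the YouTube Data API and use a proper sentiment model.

```python
# Stripped-down sketch of the comment-analysis workflow described above.
# fetch_comments() is a stub and the keyword lists are a crude stand-in
# for real sentiment analysis.
POSITIVE = {"great", "love", "amazing", "helpful"}
NEGATIVE = {"bad", "boring", "hate", "waste"}

def fetch_comments(video_url):
    # Placeholder: a real implementation would page through the
    # YouTube Data API for the comments on this URL.
    return [
        "Great video, love the pacing",
        "Boring intro, waste of time",
        "Amazing tips, really helpful",
    ]

def sentiment(comment):
    words = set(comment.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

comments = fetch_comments("https://www.youtube.com/watch?v=<video-id>")
positive = [c for c in comments if sentiment(c) > 0]
negative = [c for c in comments if sentiment(c) < 0]

print(f"{len(positive)} positive / {len(negative)} negative of {len(comments)} comments")
print("Sample positive:", positive[:2])
print("Sample negative:", negative[:2])
```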

    Imagine a property company or a developer applying this in real estate. And if someone claims it doesn’t impact real estate, it certainly does! It absolutely could.

    It does. I also presented four photos of my home. Just four photos. And I asked, “I’m planning to list this home for sale. Provide me with a real estate listing to post on Zillow that will capture attention and entice people to come and tour this house.” And it produced a fantastic, lovely description.

    There’s no way I could have written that. I challenged the audience, asking how many of them could have written this, and everyone was amazed by the end. This is something achievable today. I’m not even talking about what’s coming soon.

    Stefano, I’ll ask you first and then I’ll ask Kartik as well, what’s at the forefront of the research you’re currently conducting? I want to inquire about your individual research, and then we’ll discuss AI at Wharton and your goals.

    Let’s begin with your current research. Another way to phrase it is, if we’re sitting here five years from now and you have numerous published papers and have given significant presentations, what will you be discussing that you’ve worked on?

    I’m involved in numerous projects, all within the realm of AI. There are numerous intriguing questions because we have never had a machine like this, a machine that can perform tasks we consider crucial in defining what it means to be human. This is truly an intriguing consideration.

    A few years back, when you asked, “What makes humans unique?” people thought, perhaps compared to other animals, “We can think.” And now if you ask, “What makes humans unique?” people might say, “We have emotions, or we feel.”

    Essentially, what makes us unique is what makes us similar to other animals, to some extent. It’s fascinating to see how profoundly the world is changing. For instance, I’m interested in the impact of AI on achieving relational goals, social goals, or emotionally demanding tasks, where previously we didn’t have the option of interacting with a machine, but now we do.

    What does this mean? What benefits can this technology bring, but also, what might be the risks? For instance, in terms of consumer safety, as individuals might interact with these tools while experiencing mental health issues or other challenges. To me, this is a very exciting and critical area.

    I want to emphasize that this technology doesn’t have to be any better than it is today to bring about significant changes. Kartik rightly mentioned that this is still improving at an exponential rate. Companies are just beginning to experiment with it. But the tools are available. This is not a technology that’s on the horizon. It’s right in front of us.

    Kartik, what are the major unresolved matters you are contemplating and addressing today?

    Eric, my work has two main aspects. One is more technical, and the other focuses on human and societal interactions with AI. On the technical side, I am dedicating significant time to pondering biases in machine-learning models, particularly related to biases in text-to-image models.

    For instance, if a prompt is given to “Generate an image of a child studying astronomy,” and all 100 resulting images depict a boy studying astronomy, then there is an issue.

    These models exhibit biases due to their training data sets. However, when presented with an individual image, it’s challenging to determine if it’s biased or not. We are working on detecting bias, debiasing, and automated prompt engineering. This involves structuring prompts for machine learning models to produce the desired output.
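
    One hedged way to operationalize that kind of bias check is sketched below: generate many images for the same prompt, classify a perceived attribute in each, and look at the distribution. Both helper functions are hypothetical stand-ins for a real text-to-image model and attribute classifier, not part of any published toolkit.

```python
# Sketch of measuring distributional bias across a batch of generated images.
# generate_images() and classify_perceived_attribute() are hypothetical
# stand-ins; here they return fake data purely so the bookkeeping runs.
import random
from collections import Counter

def generate_images(prompt, n):
    # Placeholder for n calls to a text-to-image model with the same prompt.
    return [f"{prompt}-{i}" for i in range(n)]

def classify_perceived_attribute(image):
    # Placeholder for an attribute classifier; deliberately skewed output.
    return random.choice(["label_a", "label_a", "label_a", "label_b"])

def attribute_distribution(prompt, n=100):
    """Bias shows up in a prompt's overall distribution, not in any single image."""
    images = generate_images(prompt, n)
    counts = Counter(classify_perceived_attribute(img) for img in images)
    return {label: count / n for label, count in counts.items()}

print(attribute_distribution("a child studying astronomy"))
# A heavily skewed distribution flags the prompt for debiasing or
# automated prompt rewriting.
```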

    Regarding human-AI collaboration, my focus lies on understanding the ideal division of labor between humans and AI in various organizational workflows. We lack clarity on how to structure teams and processes when AI is involved. Additionally, building trust in AI is a significant area of interest due to the existing trust issues.

    Stefano, could you provide some insight for our listeners about AI at Wharton and its objectives? Then, we will hear Kartik’s perspective.

    Thank you for arranging this podcast, and Sirius for hosting us. The AI at Wharton initiative is just commencing. We, as a group of academics, are exploring AI from different angles to understand its implications for companies, workers, consumers, and society.

    Our initiatives will encompass education, research, dissemination of findings, and the creation of a community interested in these topics. This community will facilitate knowledge exchange among individuals with diverse perspectives and approaches.

    Kartik, what are your thoughts on AI at Wharton and your role in its leadership, considering your involvement with various centers over the years?

    First and foremost, AI represents a groundbreaking technology that will raise numerous unanswered questions. Creating initiatives like ours is crucial for addressing these questions.

    Currently, computer scientists focus on developing new and improved models, with a narrow emphasis on assessing their accuracy, while the industry is preoccupied with immediate needs. We, at Wharton, possess the technical expertise to understand computer science models and the social science frameworks to offer a broader perspective on the long-term impact.

    I believe we have a unique advantage here at Wharton. We have the technical expertise to understand computer science models, as well as individuals like Stefano and others who comprehend psychological and social science frameworks. They can provide a long-term perspective and help us determine how organizations should be redesigned in the next five, 10, 15, or 25 years. We need to consider how people should be retrained and how our college students should be prepared for the future.

    We must also think about regulation because regulators will face challenges in keeping up with rapidly advancing technology. While technology is progressing at an exponential rate, regulators are progressing at a linear rate. They will also need our guidance.

    In summary, I believe we are uniquely positioned to address these significant, looming issues that will impact us in the next five to ten years. However, we are currently preoccupied with immediate concerns and may not be adequately prepared for the major changes ahead.

  • Apple iPhone 16 Pro Max vs Huawei Mate XT

    The Huawei Mate XT Ultimate Design smartphone debuted on September 10, 2024. The device features a 6.40-inch primary display with a 60 Hz refresh rate and a resolution of 1008×2232 pixels. It also includes a secondary 7.90-inch touchscreen with a resolution of 2048×2232 pixels. Additionally, it has a 10.20-inch third display with a resolution of 3184×2232 pixels. The Huawei Mate XT Ultimate Design boasts 16GB of RAM and runs on HarmonyOS 4.2, powered by a 5600mAh non-removable battery. It supports wireless charging and proprietary fast charging.

    In terms of photography, the rear camera setup of the Huawei Mate XT Ultimate Design consists of a triple camera system, including a 50-megapixel (f/1.4-4.0) primary camera, a 12-megapixel (f/2.2, ultra-wide-angle) camera, and a 12-megapixel (f/2.4, telephoto) camera. For selfies, it is equipped with an 8-megapixel front camera with an f/2.2 aperture.

    The Huawei Mate XT Ultimate Design comes with 256GB of built-in storage and supports dual Nano-SIM cards. The dimensions of the device are 156.70 x 219.00 x 3.60mm (height x width x thickness), and it weighs 298.00 grams. It was released in Dark Black and Rui Red color options.

    Connectivity options include Wi-Fi 802.11 a/b/g/n/ac/ax, GPS, Bluetooth v5.20, NFC, USB Type-C, 3G, 4G, and 5G with active 4G on both SIM cards.

    Huawei Mate XT tri-fold has made a significant impact in the foldable market, and a tech enthusiast attempted to uncover more details about this phone through a teardown. The teardown revealed that the tri-fold device surpasses the Apple iPhone 16 in certain aspects.

    A Weibo tipster recently conducted a teardown of the Huawei Mate XT. According to the tipster, the tri-fold phone is encased in genuine fiber and leather material, providing a premium feel and enhanced grip.

    The teardown also unveiled that most of the components of the Huawei Mate XT are sourced from Chinese suppliers, indicating the company’s emphasis on self-reliance and support for local suppliers.

    The Huawei Mate XT is the world’s first tri-fold phone, and it has exceeded expectations, particularly in comparison to other folding phones in the market.

    After testing various folding phones for several years, I believe that 2024 has been a turning point for foldable devices. The Huawei Mate XT, with its triple-fold design, represents a remarkable advancement in folding phone technology.

    Huawei’s ‘Ultimate Design’ smartphone, as indicated on its rear, is an impressive piece of technology that showcases the potential of foldable devices in the future.

    While the Mate XT may not be accessible to many consumers due to its price, it offers a glimpse into the future of foldable phones. Here are five key observations about Huawei’s triple-fold innovation based on my experience using the device:

    It can be folded in multiple ways

    Foldable phones have mostly settled on two designs: the clamshell-like form, as seen in the Motorola Razr 50 Ultra (which I consider the best of that type); and the book-like form, as seen in the Honor Magic V3 (the thinnest of the current bunch) and others. Huawei’s approach in the Mate XT is like a development of the latter form.

    When the Mate XT is folded, it looks like a fairly conventional 6.4-inch phone to me. However, unfolding it by the first hinge reveals an XL display. But then it has its magic trick: a second hinge allows it to be unfolded again, giving you an XXL display that is a massive 10.2 inches across the diagonal.

    In my opinion, you would never need a tablet again. This scale would be perfect for long journeys when you want to watch, for example, Netflix’s latest top movie, or other types of media. The typical ‘crease’ – of which there are two here, of course – is subtle, similar to the OnePlus Open, and I couldn’t notice them when looking at the screen head-on.

    With a 120Hz variable refresh rate and ample brightness, this large screen in your pocket is unlike anything else I have ever seen in such a device.

    The hinges are remarkable

    Before seeing the Mate XT, I had assumed that its build quality would be questionable. However, that’s not the case: I found the hinge mechanisms to be very robust, with no ‘crunchiness’ in their operation (which I’ve experienced with some foldable phones in the past), and the resistance feels just right – it’s not too loose, not too stiff, allowing for adjustment as desired, concertina style.

    Huawei has really perfected that aspect of the Mate XT’s mechanical design, which is clearly essential for a product like this. The displays around the hinges are also unobstructed, which means there is no disruption to the flow of the screen.

    This is a testament to how advanced this product is – I can’t even imagine how many iterations were created in pursuing this final result.

    However, it’s not just the hinges that impress; the overall build quality of the handset is of a very high standard indeed. The fact that so much screen can be packed into a device weighing less than 300g should not be underestimated – that’s not much more than the 257g Google Pixel 9 Pro Fold – and the choice of a metal frame and textured material cladding is spot on.

    Battery technology is ahead of its time

    As far as I can remember, Huawei was the first phone manufacturer to use a silicon-carbon battery in one of its phones. I know, battery technology is not the most exciting topic. But battery tech is also crucially important – it’s the number one consumer pain point when it comes to the best phones, as people want all this tech to last seemingly forever on one charge.

    Well, silicon-carbon is a step beyond lithium-ion for several reasons: one, the source material reduces the strain on over-mined lithium; two, it has a higher energy density, meaning it can be physically smaller; three, it delivers a longer overall lifespan; and four, there’s even faster-charging potential, if utilized by manufacturers (here it’s a reasonable 66W wired – much faster than the Samsung Z Fold 6’s 25W equivalent).

    Huawei isn’t sharing the battery capacity, but sources suggest that it has managed to fit a 5600mAh battery into the Mate XT. That is astonishing, considering this isn’t a large device by any stretch of the imagination – it’s only 12.8mm thick when folded up, which is barely any different from my Google foldable. That battery capacity is surely divided into sections to make it feasible to fit into such a form factor. Silicon-carbon is mostly untapped elsewhere, but it has clear user benefits.

    Not compromising on cameras

    I was a bit skeptical about the large camera bump on the rear of the Mate XT, and I’m not sure the octagonal design is for me either. However, I believe that just because a phone folds, it shouldn’t compromise on its camera setup. With Google’s Pixel 9 Pro Fold not upgrading the cameras over the original Pixel Fold, I think most foldable phone manufacturers have room for improvement in this area.

    Huawei has quietly been making significant progress in camera technologies over the years. I remember using the Huawei P30 Pro, which was groundbreaking in night photography when it first launched five years ago in 2019.

    That was thanks to new sensor technology, which the brand has continued to develop further. Other technologies, such as variable aperture, have also made their way into Huawei’s lineup – which is also featured in the Mate XT.

    I’ve only had a brief time to explore the camera features of the Mate XT, but I’m happy to report that its triple camera setup is quite impressive, featuring a 50-megapixel main camera with optical stabilization and a variable aperture of f/1.4-4.0, a 3x optical zoom with stabilization, and an ultra-wide lens. Additionally, the absence of an under-display camera disrupting the screen’s visuals is a smart decision in my opinion.

    However, the phone is expensive and has limited software capabilities.

    Overall, I have a positive impression of Huawei’s tri-fold phone due to its many impressive features. While it comes with a hefty price tag, it may still offer value to certain users, despite costing as much as a 16-inch MacBook Pro.

    It’s worth noting that the Mate XT has been officially launched in China only at a price of CNY ¥19,999, which roughly converts to £2,135/$2,835/AU$4,150. It’s important to consider potential additional costs such as taxes, which could further increase the final price.

    It’s uncertain whether the Mate XT will be released internationally (rumored for 2025) and it won’t be available for purchase in the USA.

    Despite any concerns about software and availability, what struck me most about the Mate XT is its advanced product design. This is not a hypothetical concept but a real, tangible product, demonstrating Huawei’s significant lead over its main competitors. This is a positive sign for innovation and competition, and it likely marks just the beginning of the future of foldable phones.

    If you’re getting tired of the usual foldable phones, you should take a look at the Huawei Mate XT Ultimate Design, which is being marketed as “the world’s first commercial triple foldable phone.” I recently had the opportunity to test it out.

    We’ve been aware of foldables with two hinges in development by various brands, but Huawei is the first to have its phone available for purchase. Currently, it’s only on sale in China, but there are speculations that it may become available globally next year. The price is a staggering $2,800 when converted from the Chinese price, almost a thousand dollars more than a typical two-panel foldable.

    I enjoy using foldables such as the Galaxy Z Fold 6, the Pixel 9 Pro Fold, and the OnePlus Open. While these devices are not identical, there has been a convergence of ideas and designs in the foldable phone market over the past few years.

    The Mate XT introduces something entirely new, so I had the chance to try it out and see if it represents the next evolutionary step for foldables, a new sub-category, or simply a technologically advanced dead end.

    Huawei Mate XT Ultimate Design: Design and display

    When fully open, the Mate XT measures a substantial 10.2 inches with a wide tablet-like aspect ratio. When fully folded, it becomes a more typical 6.4-inch rectangle, and when partially open, it takes on a 7.9-inch square-ish shape. The screen features a 90Hz refresh rate and a 3K resolution, as well as a punch-hole camera for selfies if necessary.

    In comparison, the Galaxy Z Fold 6 has 7.6- and 6.3-inch screens, and the OnePlus Open has 8- and 6.3-inch screens.

    The increased display space has resulted in a thinner design, allowing the Mate XT to boast a 3.6mm (0.14 inches) thickness when open, surpassing the Galaxy Z Fold 6 and OnePlus Open by a few millimeters, in addition to its three-part screen.

    However, when closed, the Huawei is thicker at 12.8mm (0.5 inches), making it bulkier than almost any other phone currently available. It also weighs 298g (10.5 ounces), which is 50-60g more than a typical foldable, but it’s a reasonable trade-off for 50% more screen.

    Foldable screens often have noticeable creases, but Huawei has managed to minimize them on the Mate XT. It’s no worse than the Galaxy Z Fold 6 or the OnePlus Open.

    The Mate XT is available in red or black vegan leather, both with gold hinges and accents, giving it a luxurious appearance even when closed.

    Initially, handling the phone can be confusing because the hinges do not open in the same direction, causing users to attempt to bend the phone in the wrong direction.

    Fortunately, the build quality is high enough that it doesn’t feel like the phone is in danger when mishandled.

    However, one of the hinges causes a bent portion of the display to be located on the outer edge, making the phone susceptible to damage if dropped, even when closed.

    The included case covers this area with a flap that spans the length of the phone, but it may indicate a fundamental flaw in this triple foldable design that cannot be easily rectified without redesigning the entire phone.

    Huawei Mate XT Ultimate Design: Cameras

    Foldable phones are often criticized for their camera hardware, but Huawei has chosen to disregard this notion. The Mate XT’s 50MP main camera is similar to its competitors on paper, but it features a variable aperture for greater photo control, a feature typically found on specialized photography phones like the Xiaomi 14 Ultra.

    Although it only has a 12MP ultrawide camera, which pales in comparison to the 48MP camera on the OnePlus Open, it does include a 12MP 5.5x telephoto, which is quite extraordinary for a foldable phone.

    This demonstrates Huawei’s commitment to incorporating high-quality optics into the Mate XT. Additionally, it is equipped with an 8MP front camera, which may sound low-res, but it is likely still better than the 4MP inner camera of the Galaxy Z Fold 6.

    Huawei Mate XT Ultimate Design: Specs

    Huawei has not disclosed much about the Mate XT’s chipset, but it is believed to be powered by a Kirin 9010, one of the company’s internally developed chips. The Mate XT is equipped with 16GB of RAM, which is the same as the Open and more than the Z Fold 6’s 12GB.

    Similar to the Z Fold 6, the Mate XT starts with 256GB of storage and offers 512GB and 1TB options, providing ample on-board space for your data. The OnePlus Open, on the other hand, comes with a generous default 512GB capacity.

    Huawei Mate XT Ultimate Design: Software

    The Mate XT runs on its own HarmonyOS after abandoning Android, which made it feel somewhat unfamiliar to use, especially with a majority of China-specific apps installed on our demo units.

    Navigating and switching between apps felt smooth, as did resizing apps when opening or closing the screen. This is not surprising, considering that none of the Mate XT’s three display sizes are new; it’s the way it combines them all that’s innovative.

    It’s worth noting that multi-tasking is limited to two split apps plus a third in slide-over mode, which is not as good as Samsung’s three-app split option or OnePlus’ excellent Open Canvas desktop mode.

    Huawei Mate XT Ultimate Design: Battery and charging

    Featuring a 5,600 mAh battery, the Mate XT packs substantial capacity for a foldable device; remarkably, that’s only about 10% larger than a regular phone’s battery even though it powers a display that’s twice as big.

    While not a direct comparison, an 11-inch tablet typically has around 8,000 mAh of capacity. When it’s time to recharge, the Mate XT supports speedy 66W wired and 50W wireless charging, although I did not have the opportunity to test it during my brief time with the phone.

    Huawei Mate XT Ultimate Design: Outlook

    If you’re considering purchasing this phone, keep in mind that it costs around $2,800 when converted from the Chinese price and could be even more expensive if purchased internationally.

    The box appears to include several accessories such as wall outlet and car chargers, as well as a pair of wireless earbuds and an in-box case with a rotating stand.

    Some phone buyers have shown willingness to pay up to 2 grand for a foldable phone. However, it remains to be seen whether having three parts to your foldable instead of two justifies the price.

    If you have anything left in your bank account after buying the Mate XT, Huawei offers an additional folding keyboard with a small trackpad if you want to use the Mate XT as a full work device. It has the screen space needed for editing documents, making calls, browsing the web, or doing all of these simultaneously.

    Triple foldables won’t be exclusive to Huawei forever, but it may take some time before other phone makers introduce equivalent devices. There’s also the question of whether it’s worth paying an additional grand over the price of a standard foldable for an extra hinge and the screen space of an iPad, as opposed to an iPad mini.

    The cost of a pocketable tablet is quite high, even more so than a typical foldable phone. You pay more, your phone is less durable and more expensive to repair, and many developers are still working on making their apps foldable-friendly.

    A standard flagship phone and a tablet with a keyboard are unlikely to cost more than the Mate XT and will likely be much easier to purchase and use.

    However, given that this phone is already on back-order in China, Huawei may have tapped into a potentially lucrative new trend. It has certainly captured people’s attention, which is often the first step toward capturing their wallets.

    The release of the Mate XT occurred shortly after Apple unveiled its iPhone 16 series. Clearly, this timing was a strategic move by Huawei to divert some of the spotlight. In addition to featuring a unique dual-folding mechanism, the Mate XT is also one of the thinnest foldable phones on the market and allows for viewing multiple resizable app windows at once.

    Starting at a steep RMB19,999 (about US$2,800), the Mate XT is a costly investment. Let’s examine the features and design of the Mate XT to determine if it truly sets new standards or if it’s just a flashy addition to the tech market.

    Display

    The Mate XT features a triple-fold design with a 10.2-inch LTPO OLED display boasting a resolution of 3,184 × 2,232 pixels, a 16:11 aspect ratio, and a remarkable 120 Hz refresh rate. That makes it larger than any current foldable smartphone, surpassing models like the Samsung Galaxy Z Fold 6 and Honor Magic V3. Furthermore, the display supports the simultaneous viewing of multiple app windows that can be resized and arranged according to user preference.

    Thanks to its innovative dual hinge design, the Mate XT can flex both inwards and outwards, offering three operational modes. When fully folded, the display provides a 6.4-inch screen, similar to that of a standard smartphone. Unfold it once, and you get a 7.9-inch screen, perfect for activities such as reading.

    Unfold it completely, and it transforms into a 10.2-inch screen, ideal for watching movies or editing documents. This adaptability makes it easy to switch between a compact phone and a more expansive tablet interface.

    Huawei has developed a special hinge system that enables seamless transitions between each mode. The hinges, made with high-grade steel used in rockets, are designed to withstand frequent folding. Reviews have highlighted that the folds remain invisible when the screen is viewed head-on; they only become apparent when viewed from an angle.

    Additionally, the most commonly used apps in China, such as Douyin (TikTok’s Chinese counterpart), are already well-optimized for the Mate XT’s unique trifold screen.

    Measurements

    Weighing 298 grams, the Mate XT measures just 3.6 mm in thickness when fully unfolded—the same thickness as four stacked credit cards—and expands to 12.8 mm when folded. Although it’s slightly thicker than the Samsung Z Fold 6, which measures 12.2 mm, the Mate XT remains highly portable, especially notable given that it incorporates three screens into one sleek device.

    Camera Quality

    The camera setup of the Huawei Mate XT is headlined by a 50-megapixel main sensor with variable aperture (f/1.4 to f/4). It also includes a 12-megapixel ultra-wide lens offering a 120-degree field of view, perfect for capturing expansive landscapes or group photos. Additionally, there’s a 12-megapixel periscope camera with 5.5x optical zoom, ideal for capturing clear shots of distant objects. For photography enthusiasts, the camera’s versatility and the physical variable aperture that adapts to the lighting conditions of each shot are major advantages.

    AI Features

    Powered by Huawei’s own Kirin 9010 chipset, the Mate XT incorporates several AI features that enhance its functionality and user experience. Its camera system utilizes AI to optimize image quality through features such as scene recognition, portrait enhancement, and smart object removal.

    In addition to photography, its AI assistant offers several features, including:

    – Voice editing that refines voice-to-text transcriptions
    – Advanced translation function that facilitates side-by-side language switching in texts
    – Smart summary extraction
    – Cloud-based content generation

    The Mate XT is available in two colors—black and red—and features a vegan leather back. It’s equipped with 16 GB of RAM and a 5,600 mAh battery, which is about 10% larger than typical smartphone batteries, despite the display being twice as large. It supports both 66W wired and 50W wireless charging for quick power-ups.

    Running on Huawei’s proprietary HarmonyOS, the Mate XT offers all essential connectivity options, such as GPS, Bluetooth 5.2, and 5G. It also includes standard smartphone features like a side-mounted fingerprint sensor. A particularly notable feature is the inclusion of satellite communication, which ensures connectivity even in the most remote areas.

    Each package includes several extras: a rotating bracket protective case, Huawei FreeBuds 5 earbuds, a 66W charger, and an 88W car charger. For enhanced productivity, Huawei offers an optional foldable split keyboard.

    Is it worth the splurge?

    Huawei’s Mate XT is a marvel of foldable technology and stands out in the market. However, it’s important to consider its high cost relative to its benefits. Starting at RMB 19,999 (US$2,809) for the base model and rising to RMB 23,999 (US$3,371) for the 1 TB version, it may not be suitable for everyone.

    Additionally, if the display sustains damage, repair costs could reach US$1,100. For context, the first-time repair of the Samsung Galaxy Z Fold 6’s folding screen—covering the OLED panel, metal bezel, and battery replacement—is discounted to US$200, but subsequent repairs escalate to US$549.

    Another significant concern is that Huawei devices can no longer ship with Google’s version of Android. In 2019, US sanctions against Huawei halted access to essential Google services like the Play Store, Gmail, Google Maps, and YouTube. Huawei now uses its own HarmonyOS 4.2, but transitioning entirely to this platform could be a challenging adjustment for users accustomed to Google.

    In the end, the Mate XT is made for individuals who are passionate about state-of-the-art technology and can afford its high price. If that sounds like you, purchasing the Mate XT could be a worthwhile investment.

    Despite being the first phone with a triple-folding screen, the Huawei Mate XT Ultimate Design doesn’t appear bulky or heavy when folded. It measures just 12.8mm thick and weighs 298g (excluding the screen surface layer), so it feels acceptable in hand even when fully folded and is reminiscent of earlier folding-screen devices.

    In its fully unfolded state, the Huawei Mate XT Ultimate Design shows its most impressive side. In the “three-screen state,” it measures a remarkable 3.6mm, with the “thickest” side being only 4.75mm. This extreme thinness enhances the overall premium feel and showcases cutting-edge technology.

    When fully expanded, the screen reaches 10.2 inches with a 16:11 ratio, transforming the device from a phone into a tablet. This resolves the cramped feel of traditional folding screens, providing a more complete display and an impressively high resolution of 3184 × 2232, with excellent color and clarity.

    Utilizing the three-screen structure, it can also transition to a dual-screen state resembling traditional folding screens, leveraging the app ecosystem and features of the previous Mate X series and making full use of Huawei’s accumulated advantages in the folding-screen field.

    Let’s discuss the exterior. The tri-folding structure of the Huawei Mate XT Ultimate Design is the epitome of folding “mastery,” using two hinges for inward and outward folding and eliminating the need for an additional outer screen. This design ensures consistent display quality before and after unfolding.

    The Huawei Mate XT Ultimate Design comes in two colors, both featuring vegan leather, the Legendary Star Diamond design, and gold-colored components. They also boast a unique rock-vein texture on the camera module, claimed to be distinct for each unit, giving every phone a personalized look.

    The black Huawei Mate XT Ultimate Design appears deeper, with the gold edging standing out against it, giving the phone a more understated, composed character.

    The fiery red color exudes boldness and extravagance, with the gold accents taking on a red hue, creating a more prominent and luxurious appearance. Additionally, Huawei added a special craftsman logo on the back cover, adding a touch of flair.

    The Huawei Mate XT Ultimate Design also comes with a distinctive protective case. The case complements the triple-folding body, protecting the folding structure and doubling as a stand to enhance the large-screen experience.

    The Huawei Mate XT Ultimate Design, as a folding device, demonstrates a strong sense of innovation and offers a mature experience with its large, medium, and small screen forms. When unfolded, the oversized screen provides a significant utility boost, making it as productive as a tablet PC while being thinner and lighter.

    These initial observations are just the beginning, and further testing will be conducted to explore more details and functionalities of the product.

    Huawei, following its placement on the US trade blacklist in 2019, has started manufacturing its own advanced 5G and processor chips domestically. Now it has become the first to introduce a foldable smartphone with two hinges.

    In the lead-up to the launch, Richard Yu Chengdong, chair of Huawei’s consumer business group, generated anticipation for the device by being photographed using it in public multiple times.

    Through these strategies, the device garnered over 1.3 million pre-orders within seven hours of reservations opening on Huawei’s official e-commerce site, Vmall.com. By Monday afternoon, the Mate XT had received over 3 million pre-orders on Vmall, with scalpers reselling it for at least 20,000 yuan ($2,821, £2,147) on the gray market, as reported by local media.

    Official sales of the Mate XT are scheduled to commence on 20 September.

    The 5G-capable Mate XT comes in red and black colors and offers 16GB of RAM, with internal storage options of either 512GB or 1TB, according to Huawei.

    When folded, it resembles a standard smartphone; however, when unfolded, it reveals a large, nearly square screen, similar to a tablet.

    Foldable smartphone shipments worldwide grew by 85 percent year-on-year in the April to June period, as per Tech Insights.

    Huawei leads the global market due to its significant market share in China, followed by Samsung Electronics and China’s Vivo.

    In the meantime, Apple has been promoting its AI plans since the beginning of this year, and these announcements have contributed to driving its stock price to record levels, reclaiming its position as the most valuable US-listed company ahead of Microsoft, Nvidia, and Google parent Alphabet.

    Despite this, the company has encountered challenges in the crucial China smartphone market, with Huawei displacing it from the top five vendors in the quarter ending in July.

    For the first time in history, all top five smartphone vendors in China in that quarter were domestic companies, researchers revealed.

    Huawei’s success has been propelled by its Mate 60 flagship, introduced last summer, featuring a high-end chip produced domestically despite US sanctions on both Huawei and its chip-manufacturing partner SMIC.

    Huawei has also been the leading seller of foldable smartphones in China for the past two quarters.

    Honor, a Huawei spin-off, lags behind Huawei in China but secured the top position for foldable smartphones in Western Europe in the most recent quarter, according to Canalys.

  • Malvertising, or malicious advertising, is a relatively new cyberattack technique

    Cybercriminals are increasingly utilizing online advertisements for malicious purposes, often targeting individuals through regular Google searches.

    These fraudulent activities, known as malvertising, are occurring more frequently with greater sophistication. In the autumn of 2023, Malwarebytes, a cybersecurity software firm, reported a 42% month-over-month increase in malvertising incidents in the United States.

    According to Jérôme Segura, senior director of research at Malwarebytes, various brands are being targeted for phishing or malware distribution. He expressed that the current situation is merely the tip of the iceberg.

    Rogue ads often appear as sponsored content during searches on both desktop and mobile devices. Additionally, malicious code can be concealed within ads on popular websites that consumers frequently visit.

    While some of these ads only pose a threat to those who click on them, others can passively endanger individuals, simply by visiting an infected website, as stated by Erich Kron, security awareness advocate for KnowBe4, a security awareness and training company.

    Segura also mentioned that corporate employees are potential targets of malvertising, citing examples involving major companies. For instance, Lowe’s staff members were targeted through a Google ad for an employee portal that claimed to be affiliated with the retailer.

    By clicking on the link “myloveslife.net,” which misspelled the company’s name, users were directed to a phishing page featuring Lowe’s logo. This had the potential to mislead employees, given that many are unfamiliar with the URL for their internal website. “You see the brand, even the official logo of that brand, and for you it’s enough to think it’s real,” Segura explained.

    Another example involved an ad impersonating Slack, a communication tool owned by Salesforce. Initially, clicking on the ad redirected users to a pricing page on Slack’s official website. However, Segura discovered an impersonation scheme designed to deceive unsuspecting users into downloading something claiming to be the Slack app.

    The issue of malvertising is not new, but cybercriminals are more sophisticated, creating ads that closely resemble legitimate ones, making it easy to be deceived. This problem is compounded by the widespread use and trust in search engines, particularly Google, where many of these malicious ads are found.

    “You see something appearing on a Google search, you kind of assume it is something valid,” explained Stuart Madnick, professor of information technology at MIT Sloan School of Management.

    Consumers can also fall victim to malicious ads on trusted websites they regularly visit. While many of these ads are legitimate, some fraudulent ones can slip through the cracks. “It’s like the post office. Does the mailman check every letter you get to make sure it’s really from Publishers Clearing House?” Madnick analogized.

    Consumers can take precautions to protect themselves from malvertising attempts. For example, they should refrain from clicking on sponsored links that appear during internet searches. The non-sponsored links, often located below the sponsored ones, are generally safer from malicious code or phishing attempts.

    In the event of clicking on a sponsored link, it is advisable to check the URL at the top of the web page to ensure it matches the intended destination before proceeding further. For instance, when attempting to visit Gap.com, it is important to verify that the website is not Gaps.com.
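
    For readers who want to automate that habit, the snippet below is a minimal sketch using Python’s standard library: it checks whether a link’s hostname is the domain you intended to visit (or a subdomain of it). The domains shown are purely illustrative, taken from the Gap.com example above.

    ```python
    from urllib.parse import urlparse

    def matches_intended_domain(url: str, intended_domain: str) -> bool:
        """Return True only if the URL's hostname is the intended domain
        or a subdomain of it (e.g. 'www.gap.com' counts for 'gap.com')."""
        hostname = (urlparse(url).hostname or "").lower()
        intended_domain = intended_domain.lower()
        return hostname == intended_domain or hostname.endswith("." + intended_domain)

    # Illustrative only: the look-alike domain fails the check.
    print(matches_intended_domain("https://www.gap.com/sale", "gap.com"))  # True
    print(matches_intended_domain("https://gaps.com/sale", "gap.com"))     # False
    ```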

    Avinash Collis, assistant professor at Carnegie Mellon University’s Heinz College, advises consumers to promptly close the window if they find themselves on a suspicious site, as this action will likely prevent further issues.

    Consumers should also exercise caution when encountering ads on trusted websites, according to Kron. For instance, if they come across advertisements for products priced significantly lower than elsewhere, Kron recommends refraining from clicking and instead visiting the official website of the product seller.

    Most of the time, consumers can find special deals by searching on the provider’s site, or the deal will be prominently featured on the trusted website’s homepage.

    Avoid contacting a phone number listed in a sponsored ad because it might be a fraudulent number. If you call it, hackers could gain access to your computer or personal information, depending on the scam, according to Chris Pierson, CEO of BlackCloak, a cybersecurity and privacy platform that offers digital executive protection for corporate executives.

    Consumers should ensure that they are dialing a number from official product documentation they possess, Pierson advised. Alternatively, they could visit the company’s website for this information.

    “Conducting a [web] search might yield results that are not endorsed by the company and phone numbers linked to cybercriminals. All it takes to place an ad is money and, of course, cybercriminals who are stealing money have the means to pay for that bait,” Pierson explained.

    Avoid ‘drive-by-downloads’

    Consumers should also confirm that their computer and mobile phone operating systems and internet browsers are up-to-date.

    So-called drive-by-downloads, which can affect individuals who simply visit a website infected with malicious code, generally exploit a vulnerability in the user’s browser. This is less of a threat for individuals who keep their browsers and browser extensions up to date, according to Kron.

    Consumers could also consider installing anti-malware software on their computer and phone. Another option is to avoid ads by installing an ad blocker extension such as uBlock Origin, a free and open-source browser extension for content filtering, including ad blocking.

    Some consumers may also choose to install a privacy browser such as Aloha, Brave, DuckDuckGo or Ghostery on their personal devices. Many privacy browsers have built-in ad blockers; consumers may still see sponsored ads, but they will see fewer of them, which reduces the chances of malvertising.

    Consumers who encounter suspicious ads should report them to the relevant search engine for investigation and removal if they are deemed malicious, according to Collis. This can help protect others from being trapped.

    Taking proper safety measures is especially important since there are millions of ads on the internet and cybercriminals are persistent. “You should assume that this could happen to you no matter how careful you are,” Madnick stated.

    According to Federal Trade Commission data, identity theft of children under age 19 is a growing problem.

    Most parents would take significant steps to protect their children. However, many overlook a relatively simple way to help bolster a child’s financial security: freezing the minor’s credit.

    This could be particularly important following a major breach in which the Social Security numbers of numerous Americans might be available on the dark web. While locking their credit won’t resolve all cybersecurity issues related to stolen Social Security numbers, it’s an additional layer of protection parents can implement.

    The credit-locking process entails contacting each of the three major credit bureaus — Experian, Equifax and TransUnion — and providing necessary documentation including the child’s birth certificate, Social Security card, proof of address and parent identification.

    The bureau then generates a credit report for the child and subsequently locks it, preventing loans or credit cards from being issued using the child’s personal information. The freeze remains in place until the parent, or in some cases, the child, requests that it be lifted, temporarily or permanently.

    Parents can take these steps proactively even if there’s no indication that a minor’s credit has been compromised, such as unexpected credit card solicitations or bills received in the minor’s name.

    It can take some time and effort to lock a child’s credit, but the investment is minimal compared to what can be a lengthy and emotional credit restoration process.

    “As an adult, if our credit is stolen, it makes us angry, but we do what needs to be done and we move forward,” said Kim Cole, community engagement manager at Navicore Solutions, a nonprofit credit and housing counseling agency.

    But for children, the emotional impact is much greater, she said. “It can take years to get wind of a problem, and meanwhile the damage can continue to grow.”

    Identity theft against children — especially very young ones — often goes unnoticed until they are older teens or young adults applying for their first credit card, trying to finance a car or seeking student loans, according to Loretta Roney, president and chief executive of InCharge Debt Solutions, a nonprofit provider of credit counseling and other services.

    However, there is a growing problem of identity theft among children under 19 years old. According to data from the Federal Trade Commission, this age group made up 3% of all identity theft reports in the first half of 2024, compared to 2% between 2021 and 2023.

    Thieves may use a child’s Social Security number, name, address, or date of birth to apply for government benefits, open bank or credit card accounts, apply for loans, sign up for utility services, or rent a place to live, according to the FTC. While locking a child’s credit won’t protect against all of these, it is a step in the right direction, according to financial professionals.

    Fraud against children isn’t committed only by strangers. For example, a friend’s uncle destroyed his own credit and then started using his niece’s name and Social Security number to open credit cards and max them out.

    He had the bills sent to his house, and the young woman only discovered the fraud about four years later, when she went to buy a small fixer-upper and realized she had nearly $50,000 of debt in her name and a credit score in the low 500s.

    The niece filed a police report, a complaint with the FTC, and disputed the items with the credit bureaus, but it took time to resolve. She applied for a secured credit card since her score was too low to qualify for a traditional card. The situation pushed back her home purchase by a few years, ultimately costing her more.

    Check if the child has a credit report.

    Before locking a child’s credit, it is good practice to check with each of the three major credit bureaus to see if a report exists. Generally, this will only be the case if someone has fraudulently taken out credit in the minor’s name or if the child has been named an authorized user on an adult’s credit card.

    To check if their child has a credit report, parents can send a letter with their request to each of the credit bureaus. They should include a copy of the child’s birth certificate, Social Security card or document from the Social Security Administration showing the number, and a copy of the parent’s driver’s license or government-issued identification with the current address.

    Legal guardians may have to provide the credit bureaus with documents authenticating their status.

    If something suspicious appears on the report, contact the companies where the fraud occurred as well as the three major credit bureaus. Also report the child identity theft to the FTC, with as many details as possible.

    If the report comes back clean, the next step is to actually lock the child’s credit.

    If necessary, freeze a child’s credit.

    The process for initiating a credit freeze varies slightly depending on the credit bureau and the age of the minor child. Be sure to follow the precise instructions for each credit bureau. For Equifax, in addition to required documentation, parents need to fill out a form online and submit it via postal mail; minors who are 16 or 17 may request their own security freeze by phone or by mail.

    The websites for Experian and TransUnion provide further details on their respective processes, including document requirements and mailing addresses. It can take a few weeks for the bureaus to process these requests.

    Keep good records for unlocking later in life.

    Parents need to keep the PIN they are given when locking their child’s credit somewhere safe so the freeze can be temporarily lifted as needed, such as when the child turns 18 and wants to apply for a credit card, said Bruce McClary, senior vice president of membership and media relations at the nonprofit National Foundation for Credit Counseling.

    The unlocking process isn’t necessarily seamless and can take time. Equifax, for instance, asks for these requests in writing, with required documentation for identity verification. After age 18, Equifax allows for managing the security freeze online.

    Educate children early on protection of personal information.

    Parents should talk to their children about best practices with respect to sharing personal information. For instance, they should caution children to be careful about the kinds of information they provide to websites and apps and to keep their Social Security number close to the vest.

    Parents may also want to consider credit or identity threat monitoring services or both. Certain providers may offer basic services for free, but family plans that include adults and children and offer a combination of credit and identity theft protection tend to be fee-based.

    These services, which can cost around $24 or more per month, may offer more comprehensive protection, including identity theft insurance and fraud resolution services. Parents should carefully consider the options and associated costs.

    Malvertising, also known as malicious advertising, refers to criminally controlled ads within Internet-connected programs, typically web browsers, that intentionally cause harm by distributing various types of malware, potentially unwanted programs (PUPs), and assorted scams.

    In simple terms, malvertising uses seemingly legitimate online advertising to spread malware and other threats without requiring much or any user interaction.

    Malvertising can be found on any advertisement on any site, including those visited during everyday Internet browsing. Normally, malvertising installs a small piece of code that directs your computer to criminal command and control (C&C) servers.

    The server then scans your computer to determine its location and installed software, and selects the most effective malware to send to you.

    How does malvertising function?

    Malvertising exploits the same distribution methods used for regular online advertising. Scammers submit infected graphic or text ads (both using JavaScript) to legitimate ad networks, which often cannot differentiate between harmful and trustworthy ads.

    Despite the malicious code, malvertising takes on the appearance of common ads like pop-ups (offering fake browser updates, free utilities, antivirus programs, etc.), paid ads, banner ads, and more. Malvertising criminals rely on two primary methods to infect your computer.

    The first involves an ad that offers some kind of enticing content to prompt you to click on it. The lure might be in the form of an “alert,” such as a warning that your device is already infected with malware.

    Alternatively, it could be a free program offer. These tactics use social engineering to scare or entice you into clicking a link, leading to an infection.

    Even more devious is the second method, known as a drive-by download. In this case, the infected ad uses an invisible web page element to carry out its activity. You don’t even need to click on the ad to trigger the malicious behavior.

    Simply loading the web page hosting the ad (or a spam email or malicious pop-up window) redirects you to an exploit landing page, which takes advantage of any vulnerabilities in your browser or security loopholes in your software to access your device.

    How can malvertising cause harm?

    A more relevant way to frame this question might be: is there any chance it won’t harm you? The answer is no, because the criminals behind malvertising have multiple illicit goals that they relentlessly pursue.

    They aim to profit by stealing your identification data, financial data, contact data, and more.

    Apart from outright data theft, they can encrypt or delete information, manipulate or take control of core computer functions, and spy on your computer activity without your knowledge or consent. This depends on the type of programs the malvertising manages to download. The payloads may include:

    Malware, an umbrella term for any harmful program or code.

    Ransomware, a type of malware that locks you out of your device and/or encrypts your files, demanding a ransom for their release. Ransomware is a favored weapon of cybercriminals as it demands quick, profitable payments in hard-to-trace cryptocurrency.

    The code behind ransomware is readily available through online criminal marketplaces, and defending against it can be challenging.

    Spyware, malware that covertly monitors the computer user’s activities without permission and reports it to the software’s author.

    Adware, unwanted software designed to display advertisements on your screen, typically within a web browser. It often disguises itself as legitimate or piggybacks on another program to deceive you into installing it on your PC, tablet, or mobile device.

    A virus, the original malware that attaches to another program and replicates itself by modifying other computer programs when executed—usually accidentally by the user.

    Most cybersecurity professionals agree that viruses are more of a legacy threat than an ongoing risk to Windows or Mac users, as they have been around for decades and have not substantially changed.

    Malicious cryptomining, also known as drive-by mining or cryptojacking, is an increasingly common type of malware usually installed by a Trojan. It allows someone else to use your computer to mine cryptocurrency like Bitcoin or Monero.

    Instead of allowing you to benefit from your own computer’s resources, the cryptominers send the collected coins to their own account, essentially stealing your resources to make money.

    Malvertising History

    The first known malvertising attack occurred in late 2007 or early 2008, exploiting a vulnerability in Adobe Flash and targeting platforms like MySpace, whose prominence was nearing its end.

    In 2009, The New York Times fell victim to malvertising when it published an ad that recruited computers into a larger botnet of malware-infected devices. Readers were presented with deceptive ads informing them of fake system infections, tricking them into installing malicious security software.

    In 2010, malvertising spread widely across the internet, with billions of display ads carrying malware across 3,500 sites.

    In 2011, Spotify experienced an early instance of a drive-by download malvertising attack.

    In 2012, a significant malvertising attack targeted The Los Angeles Times, infecting users via drive-by downloads. This approach became a blueprint for future attacks on large news portals.

    In 2013, Yahoo.com faced a major malvertising attack, putting a significant share of its 6.9 billion monthly visits at risk by infecting users’ machines with the CryptoWall ransomware.

    In 2014, there was a notable increase in malvertising attacks, affecting Google DoubleClick, Zedo ad networks, as well as news portals like Times of Israel and The Jerusalem Post.

    In 2015, malvertising attacks continued diversifying, leveraging various popular websites to distribute malicious ads and drop malware onto unsuspecting users’ computers. Targeted websites included dating sites, adult video streaming sites, Google Adwords, and MSN.com.

    Malvertising detections have continued to rise. ZDNet reported on a group known as Zirconium, which conducted what was perhaps the largest malvertising campaign in 2017, estimated to have bought one billion ads throughout the year.

    Zirconium designed malicious ads with forced redirects to websites hosting fraudulent schemes or malware. It’s believed that this single campaign was present on 62 percent of ad-monetized websites each week.

    Malvertising actors have also become more inventive. Cybercriminals are now taking over abandoned domains, displaying malicious ads that force redirect users to tech support scam sites and abusing cryptocurrency miners.

    In January 2018, researchers from Malwarebytes found pages with malicious ads containing embedded scripts for Coinhive. While Coinhive has legitimate uses, cybercriminals exploit the service to turn computers into cryptomining machines without users’ knowledge or permission.

    What are the primary types of malvertising campaigns?

    Once online criminals have obtained information about the user’s computer, software, and location, they use this data to create tailored campaigns. Some campaign categories include:

    Schemes promising quick financial gain, along with bogus surveys.

    These aggressive efforts by unscrupulous advertising networks disrupt browsing with screen hijacks and may offer false lottery opportunities, work-from-home scams, bogus surveys, and other too-good-to-be-true offers. In the past, surveys in this category have even targeted iPhone users.

    Tech support scams.

    These scams have historically targeted Windows PC users but have also expanded to exploit the assumed sense of security among Mac users. These scams present fake websites as Apple or Microsoft, using JavaScript to prevent victims from naturally closing the page, leading users to call a listed toll-free number for assistance. Scammers, often from India, rely on scare tactics to sell victims hundreds of dollars of worthless “tech support.”

    Fake Flash Player (and other software) updates.

    This is a common technique to distribute adware and malware to Mac users. These pages masquerade as updates for Flash Player or video codecs, appearing well-designed and pushy. In some cases, the installer will automatically download onto the computer. These campaigns are particularly effective on adult or video streaming websites, as they can entice users to download the application to access desired content. However, users should only download from the product’s official repositories, as look-alikes on infected sites are bundled with potentially harmful software that can slow down a Mac or install spyware.

    Scareware

    Similar to the tech support scam, scareware initially claims that your Mac or Windows machine is severely damaged or infected, and then prompts you to download a program to resolve the issue. Scareware scams are typically the work of profit-driven malvertising affiliates seeking to generate as many leads as possible to earn substantial commissions from various PUPs.

    What types of platforms are susceptible to malvertising?

    While Windows has been the primary target of malware attacks for a long time, a malvertising campaign focused on a browser or plug-in can just as easily infect a Mac, Chromebook, Android phone, iPhone, or any similar devices within a business network.

    It’s true that cybercriminals primarily target Windows users due to the large user base, which provides malvertisers with the best return on investment. However, Macs are equally vulnerable to malvertising attacks.

    In terms of mobile devices, malvertising can pose an even greater threat, as many people do not take the same precautions or have the same level of firewall protection on their phones as they do on their desktop or laptop.

    Adding to the risk is the fact that mobile devices are always on and carried from home to work, on weekend outings, and are often used for shopping, making them a prime target for malvertising.

    For example, Android users are increasingly affected by malvertising and online fraud through forced redirects and Trojanized apps, which are the two most common examples.

    How can I defend against malvertising?

    First, address vulnerabilities on your computer and mobile device. Ensure that your operating system, applications, and web browsers (including plug-ins) are kept up to date with the latest security patches.

    Uninstall any unnecessary software, especially Flash or Java, as malvertising seeks to exploit weaknesses in such software.

    Always practice safe computing and carefully consider before clicking on anything. Be skeptical of any suspiciously alarming notices or scareware, as well as any too-good-to-be-true pop-up offers you receive.

    Never clicking on suspect ads won’t fully protect you from drive-by malvertising on reputable sites, but it will reduce your chances of being impacted by much of what the bad guys throw at you, as most malvertising relies on your click to deliver its malware payload.

    Enable click-to-play plugins on your web browser. Click-to-play plugins prevent Flash or Java from running unless you specifically allow them to (by clicking on the ad).

    A significant portion of malvertising exploits these plugins, so enabling this feature in your browser settings will provide excellent protection.

    Consider using ad blockers, which can filter out a lot of the malvertising noise, preventing dynamic scripts from loading dangerous content.

    By blocking all advertisements from displaying on websites, you eliminate the risk of viewing and clicking on potentially harmful ads.

    Ad blocking also brings additional benefits, such as reducing the number of cookies loaded on your machine, protecting your privacy by preventing tracking, saving bandwidth, loading pages faster, and prolonging battery life on mobile devices.
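
    As a rough illustration of what a filter list does under the hood (a simplified, hypothetical sketch, not how uBlock Origin or any real extension is implemented), the code below checks each outgoing request URL against a small set of blocked domains before it would be allowed to load:

    ```python
    from urllib.parse import urlparse

    # Hypothetical blocklist; real filter lists such as EasyList contain tens of
    # thousands of entries and support much richer matching rules.
    BLOCKED_DOMAINS = {"ads.example.com", "tracker.example.net"}

    def should_block(request_url: str) -> bool:
        """Block a request if its hostname is, or sits under, a blocked domain."""
        host = (urlparse(request_url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

    for url in ("https://news.example.org/article",
                "https://ads.example.com/banner.js"):
        print(url, "->", "blocked" if should_block(url) else "allowed")
    ```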

    However, many of the most reputable news sites rely on advertising for revenue, so they request users to disable ad blockers to access content. Malwarebytes has provided insights on this topic.

    There is also extensive guidance on using ad blockers on our blog, outlining some completely free methods available to you for a safer internet experience.

    For instance, here’s one of our blog posts about ad blockers and anti-tracking browser extensions. We also cover a few common ad blocking utilities and how to best configure those tools for maximum effectiveness.

    Statista’s report recorded more than 5.6 billion malware attacks using 678 different types of malware in a single year. Many internet users have become adept at recognizing and being cautious of suspicious activities and phishing attempts.

    Cybersecurity is like a game of cat and mouse, with both malicious and ethical hackers constantly trying to outsmart each other.

    This has led unethical hackers to hide malware within innocent-looking digital ads, which are a crucial part of the internet economy.

    What if clicking on an ad could lead to a malware attack? How can we know if clicking on an ad will harm our devices and systems?

    Malvertising is a newer form of attack that poses a significant cybersecurity threat because it targets users through legitimate publishing and advertising platforms. The rapid growth of online advertising has contributed to the widespread use of malvertising.

    It can reach a wide range of users due to the extensive reach of the channels through which it is distributed. Detecting and protecting against the harmful effects of malvertising is challenging for both users and ad publishers.

    The term “malvertising” is a blend of “malicious” and “advertising.” Attackers insert malware into reputable advertising networks used by well-known websites.

    These seemingly harmless ‘infected’ ads contain malicious code that spreads the malware. When a user clicks on them, the code redirects the user to a malicious server, establishes a connection with the device, and installs the malware within seconds.

    Malvertising is prevalent because major publishers often use automated third-party applications to display ads on their websites, making it difficult to monitor and control, which benefits threat actors. Malvertising does not directly harm the publishing websites, making it harder to detect.

    Malvertising not only damages the reputation of advertising platforms and publishers but can also steal sensitive information from users. If the malware is ransomware, the consequences can be even more severe.

    Users who use third-party ad blockers to avoid malvertising directly impact the advertising revenue of both publishers and marketers, which is a significant blow to the online advertising industry.

    Malvertising and adware are often confused by users. Although technically different, both are harmful. Adware, which tracks users’ web activity, displays unwanted ads, and steals user data, is often embedded in legitimate applications. However, adware does not usually breach users’ privacy or take control of their systems or encrypt their data.

    The codes used for malvertising are deployed on a publisher’s page, while adware is typically deployed directly on an end user’s device. Therefore, malvertising has a much broader impact on users than adware.

    How do malware-infected ads come to be?

    Before inserting malicious codes into ads, threat actors often gain the trust of the publishing platform by initially placing legitimate ads. They may also use clickbait ads to evoke strong emotions in users and generate a high click-through rate.

    When a user clicks on an infected ad, they are directed to a malicious landing page.

    The attackers can use the following methods to infect ads:

    – When a user clicks on an ad, they are often redirected through several intermediate URLs before reaching the final landing page. Attackers can compromise any of these URLs to execute malicious code on the system (see the redirect-chain sketch after this list).

    – HTML5 allows ads to be delivered by combining images and JavaScript, making it easy for attackers to add malicious code within the ad itself.

    – Pixels, used for ad tracking purposes, may contain malicious code placed by attackers. Although a legitimate pixel only returns data to servers, attackers can intercept a pixel’s delivery path to send a response containing malicious code to the user’s browser.

    – Attackers can exploit the fact that video players do not typically protect against malware. For example, a standard video format called VAST contains pixels from third parties that could contain malicious code. Videos may also contain malicious URLs. Additionally, when attackers insert malicious code into the pre-roll banner, users don’t even have to click on the video for the malware to be downloaded.

    – Attackers sometimes compromise legitimate landing pages of products or services by using clickable on-page elements that execute malicious code.
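
    To make the redirect-chain point concrete, here is a minimal sketch that enumerates each hop a URL passes through before the final landing page. It assumes the third-party requests library is installed; only run it against URLs you already trust, or from an isolated analysis machine, since fetching a malicious URL can itself trigger an attack. The example URL is illustrative.

    ```python
    import requests  # third-party: pip install requests

    def list_redirect_chain(url: str) -> list[str]:
        """Fetch a URL and return every intermediate hop plus the final landing page.
        Use only on trusted URLs or inside an isolated analysis environment."""
        response = requests.get(url, timeout=10, allow_redirects=True)
        hops = [r.url for r in response.history]  # intermediate redirects, in order
        hops.append(response.url)                 # final landing page
        return hops

    for hop in list_redirect_chain("https://example.com"):
        print(hop)
    ```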

    After a user is directed to the desired location by the attacker, the malware is sent through a browser exploit kit. The harmful landing page gathers information from the user’s device and searches for other weaknesses.

    Fortunately, this method is now outdated due to the advanced cybersecurity technologies used by most web browsers. However, attackers have started using forced redirects, automatically directing users to a malicious landing page by controlling the browser navigation.

    In 2021, REvil, a cybercriminal group known for using ransomware, paid for a prominent position in Google search results to encourage users to click on malicious links.

    Angler, a malicious program, automatically redirected users to a website where vulnerabilities in web extensions like Adobe Flash and Oracle Java were exploited.

    Malvertising can take various forms based on how it is carried out and delivered to users’ devices:

    • Attackers use pop-up ads while users are browsing, tricking them into downloading fake software.
    • Through the drive-by-download method, malware is downloaded without the user’s knowledge by exploiting browser vulnerabilities.
    • Attackers can inject their code into a publisher page using inline frames (iFrames) in HTML, delivering malware when a user accidentally clicks on the frame.

    To protect users from malvertising, publishers must thoroughly check their platforms for infected ad placements and employ security solutions to keep malicious code out. Users, for their part, should be vigilant to avoid downloading anything malicious to their devices.

    Users can mitigate risks by following these practices:

    • Keep your browser and plugins up to date.
    • Avoid using Flash and JavaScript.
    • Use high-quality ad blockers.
    • Have legitimate and updated antivirus software and application security resources.
    • Ensure all downloads come from official websites and verified resources, verifying published checksums where available (see the sketch after this list).
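
    On the last point, many vendors publish a SHA-256 checksum next to their installers. The sketch below, which uses only Python’s standard library, shows how to confirm that a downloaded file matches the published value before running it; the file name and checksum placeholder are hypothetical.

    ```python
    import hashlib

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 hex digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical values: substitute the real installer name and the checksum
    # published on the vendor's official download page.
    expected = "paste_the_published_sha256_value_here"
    actual = sha256_of("installer.exe")
    print("OK" if actual == expected.lower() else "Checksum mismatch - do not run this file")
    ```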

    You might be a victim of malvertising if your device becomes suspiciously slow or if there are unfamiliar apps installed. Follow these steps if you suspect your device has been compromised by malvertising:

    • Disconnect from the internet.
    • Enter Safe mode.
    • Avoid logging into accounts to prevent attackers from obtaining important credentials.
    • Delete temporary files that may contain malware.
    • Check Activity Monitor (Mac) or Task Manager (Windows) for suspicious programs.
    • Run a reliable malware scanner.
    • Repair your browser by reinstalling it, removing unwanted plugins, or clearing your cache.

    Malvertising is a highly advanced and stealthy form of cybersecurity attack that significantly impacts online advertisers, publishers, and end-users. It is the responsibility of both users and publishers to take necessary steps to mitigate the impact of malvertising.

    What is the significance of browser security?

    Users often configure web browsers for their convenience, but this can compromise their safety. Using an insecure web browser can make users vulnerable to hackers, data theft, malware, and other risks.

    How can I enhance my browser’s security?

    There are a few quick measures you can implement to improve your browsing security. Start by blocking browser cookies, disabling saved passwords, installing robust antivirus software, using a VPN, turning off autofill, and keeping your browser updated.

    Which antivirus software is the most effective?

    Several options are available for top antivirus software. Avira is considered the best value, McAfee offers comprehensive features, and Emsisoft provides advanced defenses.

    If you’re not cautious, advertisers, hackers, governments, and companies can track your online activities. What’s the best way to prevent this? One approach is to use a secure browser that helps protect your online identity, enabling you to reclaim your privacy rights.

    Leading secure browsers make it easy to safeguard your privacy and security while using the internet.

    Most people have a preferred browser for daily use, but does it rank among the top secure browsers? This article will address that question.

    Whether you’re browsing the web, conducting business, or connecting with loved ones, chances are you use a browser as your gateway to the internet. Since you share personal and potentially sensitive information, you may also want to use the best identity theft protection available.

    If your browser isn’t secure enough, malware could infiltrate your systems, infect your devices, and cause significant harm to your important data.

    While antivirus software can make your internet browsing safe and secure, it’s wiser to prevent malware from entering in the first place rather than fixing the damage afterward. By choosing a secure browser, you’ll not only protect all your data but also ensure that no one can snoop on your online activities.

    Key Browser Security Features

    Regrettably, in today’s world, a simple internet search for the best local restaurants or a quick glance at your bank account can expose internet users to various risks. From marketers mining your data for profit to hackers seeking personal information, it’s no surprise that online privacy is a major concern in the tech industry.

    A secure browser with privacy-focused security features is crucial for safeguarding personal data from these malicious activities.

    Blocking Third-Party Trackers

    Several popular web browsers act more like data collection agencies for advertisers than consumer tools. They track and store users’ browsing history, then sell that data to corporations for advertising purposes, allowing tech companies to monetize your data.

    While some users find this helpful as it provides tailored search results, others view it as a privacy breach. If you fall into the latter category, ensure your web browser blocks all third-party trackers and storage to prevent tech companies from collecting and storing your online search data.

    Incognito Browsing

    Despite the perception that incognito or private browsing is secure, it still exposes you and your data. Although private browsing erases your local history when the session ends, your IP address and location are still shared with every website, ad, and tracker that loads in your browser.

    This information can be sold to third parties. Incognito mode also does nothing to stop someone from spying on your computer activity through key-logging software.

    Password Management

    Storing passwords in browsers for automatic logins to frequently used websites is convenient but leaves you vulnerable to hackers. Browsers store these passwords locally, often in ways that malware running on your device can extract.

    Use password management software for enhanced password protection. A secure password manager stores user information and passwords in an encrypted archive, ensuring your data isn’t vulnerable to attacks.
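
    As a rough illustration of the encrypted-archive idea (not how any particular password manager is actually implemented), the sketch below uses the third-party cryptography package to encrypt a credential with a symmetric key. A real manager would derive the key from a master password via a key-derivation function and handle storage, syncing, and auto-fill on top of this.

    ```python
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # In a real password manager the key is derived from a master password and is
    # never stored alongside the data it protects.
    key = Fernet.generate_key()
    vault = Fernet(key)

    # Encrypt a credential before it ever touches disk.
    token = vault.encrypt(b"example.com : alice : correct-horse-battery-staple")

    # Later, the same key is required to read it back.
    print(vault.decrypt(token).decode())
    ```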

    Browser Compartmentalization

    Most of us use the same browser for email, web surfing, social media, and more while being continuously logged into all our accounts. Services like Gmail and Facebook track your web browsing activities while you are logged in to their websites.

    To prevent this, one easy way is to separate your browsers. Use one browser just for web browsing and another solely for online accounts that need a password.

    Ensure that you adjust the privacy settings to turn off cookies and prevent the browser from storing your browsing history. Also, remember to always log out of social media and email accounts when not in use.
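
    One low-effort way to compartmentalize, sketched below on the assumption that Firefox is installed and on your PATH, is to keep separate named profiles and launch each as its own isolated browser instance. The profile names are purely illustrative and must be created first through Firefox’s Profile Manager (running “firefox -P” with no profile name opens it).

    ```python
    import subprocess

    # Hypothetical profile names created beforehand in Firefox's Profile Manager.
    PROFILES = {
        "browsing": "casual-browsing",     # general web surfing, no logins
        "accounts": "logged-in-accounts",  # email, banking, social media
    }

    def launch(profile_key: str) -> None:
        """Start a separate Firefox instance bound to a single profile."""
        subprocess.Popen(["firefox", "-P", PROFILES[profile_key], "--no-remote"])

    launch("browsing")
    launch("accounts")
    ```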

    The Most Secure Browsers of 2024

    Brave is possibly one of the top web browsers for overall security. This open source browser comes with a built-in ad blocker, a script blocker, and automatic HTTPS upgrades; it also blocks all third-party storage and guards against browser fingerprinting.

    One of the main benefits of using Brave is that its privacy features are set up automatically. Users don’t need to customize features to enhance their security.

    Firefox is a good choice for privacy and security, but users need to customize the settings for optimal security. By default, Firefox collects and stores your usage and performance data. To opt out of this, disable telemetry data collection. Spend time in the privacy settings to enable pop-up blocking, anti-fingerprinting protection, and phishing protection.

    Google Chrome is popular due to its functional and enjoyable interface. While the company maintains its security features with regular updates, it’s safe to assume that all your browsing activities on Google are collected, saved to your data profile, and used for targeted advertising. It is difficult to determine exactly how Google tracks its users because Chrome is not open source, but it is not a good option for anyone concerned about privacy.

    Advanced Security Options

    Antivirus Software

    Investing in antivirus software is essential to create a secure browsing experience. This software protects your computer from malware and cybercriminals by continuously scanning for malicious attacks. There is a wide range of options available with various price points and security features.

    VPN

    Virtual private networks (VPNs) help secure your web traffic against hackers, snoopers, and marketers by allowing you to establish a secure internet connection. VPNs conceal your data and IP address by routing traffic through an encrypted tunnel. This is especially valuable when using public Wi-Fi networks, which are susceptible to hackers.

    VPNs are popular for various reasons, but not all VPNs are equally reliable.

    Choose one of the most secure browsers and browse the web smoothly

    While there are many steps you can take to make internet browsing safe and secure, such as minimizing sensitive information, using strong passwords, and keeping your software up to date, the first step should be selecting one of the most secure browsers.

    If you’re comfortable sharing your personal information with Google, Google Chrome offers excellent security and a simple user experience. If you’re concerned about privacy, then Mozilla Firefox is the best choice. And if you’re a tech-savvy individual looking for a secure browser for Linux, Tor Browser is a clear option.

    What is a secure browser?

    In simple terms, a secure browser is everything a browser should be but enhanced with an additional layer of security to keep its users safe from cybercriminal activity while browsing the internet. It creates a whitelist, which is a list of sites, programs, and online activities classified as secure, and keeps its users safe by blocking all functions not included in this list during startup.

    Browser security and privacy are not the same, but they ideally go hand in hand. Security deals with malware and keeps every layer of defense up to date, while privacy focuses primarily on protecting your data and concealing your identity.

    Nevertheless, a browser considered strong in security should possess both of these characteristics in similar proportions.

    How to choose the most secure browser?

    Since there is no shortage of malware, hackers, and identity thieves on the internet, your chosen browser should be able to shield you from all kinds of cyber threats, including phishing sites, web cookies, spyware, keyloggers, and malicious pop-ups.

    Additionally, a secure browser that prioritizes privacy will allow its users to delete all browser history whenever they want and safeguard their personal information from others. It should also enable you to use passwords alongside a browser to further protect all your data.

    Considering there are several solid and secure browsers across multiple operating systems (OS) and devices, choosing the right one for you can be a challenge. To simplify the search, here are our top picks for the most secure browsers on the market.

  • Investors have injected $330 billion into approximately 26,000 AI and machine-learning startups over the past three years

    Investors have injected $330 billion into approximately 26,000 AI and machine-learning startups over the past three years

    Consider it the conclusion of the initial phase of the AI boom.

    Since the middle of March, several prominent artificial intelligence startups have been under financial strain. Inflection AI, which secured $1.5 billion in funding but generated minimal revenue, has shut down its original operations.

    Stability AI has laid off staff and parted ways with its CEO. Meanwhile, Anthropic has been rushing to bridge the approximately $1.8 billion gap between its modest earnings and substantial expenses.

    It’s becoming evident in Silicon Valley that the AI revolution will come with a hefty price tag. Tech firms that have staked their futures on it are scrambling to find ways to narrow the chasm between their expenses and the anticipated profits.

    This predicament is especially pressing for a cluster of high-profile startups that have raised tens of billions of dollars for the advancement of generative AI, the technology behind chatbots like ChatGPT.

    Some of them are realizing that directly competing with industry giants such as Google, Microsoft, and Meta will require billions of dollars — and even that may not suffice.

    “You can already see the signs,” remarked Ali Ghodsi, CEO of Databricks, a data warehouse and analysis company that collaborates with AI startups. “No matter how impressive your work is — does it have commercial viability?”

    While substantial funds have been squandered in previous tech booms, the cost of constructing AI systems has astounded seasoned tech industry professionals. Unlike the iPhone, which initiated the last technological transition and cost a few hundred million dollars to develop due to its reliance on existing components, generative AI models cost billions to create and maintain.

    The advanced chips they require are expensive and in short supply. Moreover, each query of an AI system is far pricier than a simple Google search.

    According to PitchBook, which tracks the industry, investors have injected $330 billion into approximately 26,000 AI and machine-learning startups over the past three years. That is roughly two-thirds more than the funding provided to 20,350 AI companies from 2018 to 2020.

    The challenges confronting many newer AI companies sharply contrast with the early business outcomes at OpenAI, which is backed by $13 billion from Microsoft. The attention garnered by its ChatGPT system has enabled the company to establish a business charging $20 per month for its premium chatbot and offering a platform for businesses to develop their AI services using the underlying technology of its chatbot, known as a large language model.

    OpenAI generated approximately $1.6 billion in revenue over the past year, but the company’s expenditure remains unclear, as per two individuals familiar with its business.

    OpenAI did not respond to requests for comment.

    However, even OpenAI has encountered difficulties in expanding its sales. Businesses are cautious about the potential inaccuracies of AI systems. The technology has also grappled with concerns regarding potential copyright infringement in the data supporting the models.

    (OpenAI and Microsoft were sued by The New York Times in December for copyright infringement related to news content associated with AI systems.)

    Many investors point to Microsoft’s rapid revenue growth as evidence of the business potential of AI. In its most recent quarter, Microsoft reported an estimated $1 billion in AI services sales in cloud computing, a notable increase from virtually zero a year earlier, according to Brad Reback, an analyst at the investment bank Stifel.

    Meanwhile, Meta does not anticipate earning profits from its AI products for several years, even as it ramps up its infrastructure spending by as much as $10 billion this year alone. “We’re investing to stay at the leading edge of this,” remarked Mark Zuckerberg, Meta’s CEO, in a call with analysts last week. “And we’re doing that while also scaling the product before it becomes profitable.”

    AI startups have been grappling with the disparity between spending and sales. Anthropic, which has garnered over $7 billion in funding with support from Amazon and Google, is spending approximately $2 billion annually but is only bringing in around $150 million to $200 million in revenue, according to two individuals familiar with the company’s finances who requested anonymity due to the confidential nature of the figures.

    Similar to OpenAI, Anthropic has turned to established partnerships with tech giants. Its CEO, Dario Amodei, has been pursuing clients on Wall Street, and the company recently announced its collaboration with Accenture, the global consulting firm, to develop custom chatbots and AI systems for businesses and government entities.

    Sally Aldous, a spokesperson for Anthropic, stated that thousands of businesses are utilizing the company’s technology and that millions of consumers are using its publicly available chatbot, Claude.

    Stability AI, a company specializing in image generation, recently announced that its CEO, Emad Mostaque, had stepped down. This came shortly after three researchers from the original five-person team also resigned.

    A reliable source familiar with the company’s operations indicated that Stability AI was projected to achieve approximately $60 million in sales this year, while incurring costs of around $96 million for its image generation system, which has been available to customers since 2022.

    Investors specializing in AI noted that Stability AI’s financial position appears stronger compared to language-model manufacturers like Anthropic, as the development of image generation systems is less costly. However, there is also less demand for paying for images, making the sales outlook more uncertain.

    Stability AI has been functioning without the backing of a major tech company. Following a $101 million investment from venture capitalists in 2022, the company required additional funding last autumn but struggled to demonstrate its ability to sell its technology to businesses, according to two former employees who preferred not to be named publicly.

    Although the company secured a $50 million investment from Intel late last year, it continued to face financial pressure. As the startup expanded, its sales strategy evolved, while simultaneously incurring monthly costs amounting to millions for computing.

    According to an investor who chose to remain anonymous on the matter, some investors urged the resignation of Mr. Mostaque. Following his departure, Stability AI underwent layoffs and restructured its business to ensure a “more sustainable path,” as per a company memo reviewed by The New York Times.

    Stability AI declined to provide a comment, and Mr. Mostaque also declined to discuss his departure.

    Inflection AI, a chatbot startup founded by three AI experts, had raised $1.5 billion from prominent tech companies. However, almost a year after introducing its AI personal assistant, the company had generated minimal revenue, as per an investor.

    The New York Times reviewed a letter from Inflection addressed to investors, indicating that additional fundraising was not the most beneficial use of their money, particularly within the current competitive AI market. In late March, the company pivoted from its original business and largely merged into Microsoft, the world’s most valuable public company.

    Microsoft also participated in funding Inflection AI. The company’s CEO, Mustafa Suleyman, gained prominence as one of the founders of DeepMind, an influential artificial intelligence lab acquired by Google in 2014.

    Mr. Suleyman, along with Karén Simonyan, a key DeepMind researcher, and Reid Hoffman, a prominent Silicon Valley venture capitalist involved in the founding of OpenAI and serving on Microsoft’s board, established Inflection AI.

    Both Microsoft and Inflection AI declined to provide a comment.

    Inflection AI was staffed with talented AI researchers who had previously worked at companies such as Google and OpenAI. However, nearly a year after launching its AI personal assistant, the company’s revenue was described by an investor as “de minimis,” effectively negligible. Without continuous substantial fundraising, it would be challenging for the company to enhance its technologies and compete with chatbots from companies like Google and OpenAI.

    Microsoft is now absorbing most of Inflection AI’s staff, including Mr. Suleyman and Dr. Simonyan, in a deal costing Microsoft over $650 million. Unlike Inflection AI, Microsoft has the resources to adopt a long-term approach. The company has announced plans for the staff to establish an AI lab in London, focusing on the types of systems that start-ups are striving to advance.

    Middle Eastern funds are investing billions of dollars into leading AI start-ups.

    Sovereign wealth funds from the Middle East are emerging as significant supporters of prominent artificial intelligence companies in Silicon Valley.

    Oil-rich nations such as Saudi Arabia, the United Arab Emirates, Kuwait, and Qatar are seeking to diversify their economies and are turning to technology investments as a safeguard. Over the past year, funding for AI companies from Middle Eastern sovereign funds has increased fivefold, according to data from PitchBook.

    According to sources familiar with the matter, MGX, a new AI fund from the United Arab Emirates, was among the investors seeking to participate in OpenAI’s recent fundraising round. The valuation of OpenAI in this round is expected to reach $150 billion, as indicated by the sources, who requested anonymity due to the confidential nature of the discussions.

    While few venture funds possess the financial capacity to compete with the multibillion-dollar investments from companies like Microsoft and Amazon, these sovereign funds face no difficulty in providing substantial funding for AI deals.

    These funds invest on behalf of their governments, which have benefited from the recent increase in energy prices. It is projected that the total wealth of the Gulf Cooperation Council (GCC) countries will rise from $2.7 trillion to $3.5 trillion by 2026, according to Goldman Sachs.

    The PIF, Saudi Arabia’s Public Investment Fund, has grown to more than $925 billion and has been actively investing as part of Crown Prince Mohammed bin Salman’s “Vision 2030” initiative. The PIF has made investments in companies such as Uber, and has also made significant expenditures in the LIV golf league and professional soccer.

    Mubadala, a fund from the UAE, manages over $302 billion, while the Abu Dhabi Investment Authority manages $1 trillion. The Qatar Investment Authority has $475 billion under management, and Kuwait’s fund has exceeded $800 billion.

    Earlier this week, MGX, based in Abu Dhabi, formed a partnership for AI infrastructure with BlackRock, Microsoft, and Global Infrastructure Partners, with the goal of raising up to $100 billion for data centers and other infrastructure investments.

    MGX was established as a specialized AI fund in March, with Mubadala from Abu Dhabi and AI firm G42 as its founding partners.

    Mubadala from the UAE has also invested in Anthropic, a rival of OpenAI, and is one of the most active venture investors, having completed eight AI deals in the past four years, according to Pitchbook. Anthropic declined to accept funding from the Saudis in its last funding round, citing national security, as reported by CNBC.

    Saudi Arabia’s PIF is currently in discussions to establish a $40 billion partnership with the US venture capital firm Andreessen Horowitz. It has also launched a dedicated AI fund called the Saudi Company for Artificial Intelligence, or SCAI.

    Despite this, the kingdom’s human rights record remains a concern for some Western partners and start-ups. The most notable recent case was the killing of Washington Post journalist Jamal Khashoggi in 2018, an incident that prompted international backlash in the business community.

    It’s not just the Middle East that is pouring money into this space. The French sovereign fund Bpifrance has completed 161 AI and machine learning deals in the past four years, while Temasek from Singapore has completed 47, according to Pitchbook. GIC, another fund backed by Singapore, has completed 24 deals.

    The influx of cash has some Silicon Valley investors worried about a “SoftBank effect,” referring to Masayoshi Son’s Vision Fund. SoftBank notably backed Uber and WeWork, driving the companies to soaring valuations before their public debuts. WeWork filed for bankruptcy last year after being valued at $47 billion by SoftBank in 2019.

    For the US, having sovereign wealth funds invest in American companies, rather than in global adversaries like China, has been a geopolitical priority. Jared Cohen of Goldman Sachs Global Institute stated that there is a disproportionate amount of capital coming from nations such as Saudi Arabia and the UAE, with a willingness to deploy it globally. He described them as “geopolitical swing states.”

    Over the past eighteen months, there’s a good chance that you’ve heard plenty about how the AI revolution could add $15 trillion to the global GDP and revolutionize our lives. The world’s leading tech companies are engaged in an arms race to dominate in this new era.

    “AI will, probably, most likely, lead to the end of the world, but in the meantime, there’ll be great companies,” declared Sam Altman, co-founder and CEO of OpenAI, in June 2015.

    OpenAI, led by Sam Altman, is the prime example of generative AI (GenAI), a technology wave that began nearly two years ago with the launch of ChatGPT.

    Its rapid rise generated hype and fear unlike any other recent technology, prompting Big Tech to invest billions in data centers and computing hardware for building AI infrastructure.

    “GenAI already has the intelligence of a college student, but it will likely put a polymath in every pocket within a few years,” noted Alkesh Shah, Managing Director at Bank of America.

    Since its establishment nine years ago, OpenAI, an AI research organization backed by Microsoft and employing around 1,500 people, has raised over $11.3 billion and was valued at around $80 billion in February this year. It is reportedly in discussions to raise a new round of funds at a valuation of $150 billion.

    As a result, investors have injected tens of billions of dollars into both startups and publicly traded companies to capitalize on the third major technology cycle of the past five decades. This led to a significant increase in the stock prices of most businesses involved in AI over the past year.

    Consider the case of Nvidia, one of the biggest winners. In June, the chip vendor surpassed the $3 trillion mark to become the most valued company listed in the US. The shares of the ‘magnificent seven’ group of US tech behemoths also reached record levels.

    The exuberance on Wall Street lasted for a year, but last month saw a sharp decline in Nvidia stocks. Major tech companies also experienced significant stock drops, resulting in over $1 trillion in losses.

    According to Aswath Damodaran, a finance professor at NYU Stern School of Business, Nvidia’s performance in the last three quarters has set unrealistic expectations for the company. He believes a further slowdown is imminent, as scale pushes revenue growth down and increased competition erodes operating margins.

    Speculative excitement has given way to concerns about whether companies can effectively profit from their large investments in AI infrastructure. Recent underwhelming earnings reports from tech leaders like Meta, Microsoft, and Google have added to investor worries.

    Arup Roy, VP distinguished analyst and Gartner Fellow, notes that while AI is revolutionary, investors are now questioning its sustainability, leading to a loss of its appeal.

    AI capital expenditure is projected to reach $1 trillion in the coming months, driven by the need for powerful operating systems and accelerator technologies for training large language models (LLMs). This has led tech giants to invest aggressively in data centers and graphics processing units (GPUs).

    Despite these investments, there is a significant gap in demonstrating the value of AI to end-users, as companies struggle to show revenue growth from AI. David Cahn, partner at Sequoia Capital, argues that AI companies need to generate annual revenues of around $600 billion to cover their AI infrastructure costs.

    According to an analysis by The Information, OpenAI is spending approximately $700,000 per day to operate ChatGPT and is on track to incur a $5 billion loss. The company’s hardware is operating close to full capacity, with the majority of its servers dedicated to ChatGPT.
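
    As a rough back-of-envelope check, that reported daily figure annualizes as follows (it covers running ChatGPT, not OpenAI’s total costs):

    ```python
    # Back-of-envelope: annualize the reported ~$700,000/day cost of running ChatGPT.
    daily_cost = 700_000
    annual_cost = daily_cost * 365
    print(f"${annual_cost:,} per year")  # 255,500,000 -> roughly a quarter of a billion dollars
    ```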

    Potential regulatory disruptions related to data collection for privacy, safety, and ethics could disrupt growth plans. Additionally, there is less pricing power for GPU data centers compared to building physical infrastructure, as new players enter the market.

    David Cahn warns that if his forecast materializes, it will primarily harm investors, while founders and company builders focusing on AI are likely to benefit from lower costs and knowledge gained during this experimental period.

    Most major tech players have announced plans to increase spending as they position themselves for a future driven by AI. Microsoft plans to exceed last year’s $56 billion in capital expenditure, Meta raised its full-year guidance by $2 billion, and Google estimates its quarterly capex spending to be at or above $12 billion.

    Alphabet CEO, Sundar Pichai, emphasized the greater risk of under-investing in AI, stating that not investing to be at the forefront has significant downsides. Meanwhile, Meta CEO Mark Zuckerberg justified the company’s aggressive investment in AI, citing the risk of falling behind in the most important technology for the next decade.

    Sanjay Nath, Managing Partner at Blume Ventures, observes that a one-size-fits-all approach is not suitable for AI and companies need to choose the best model for each use case. He notes that larger tech incumbents are rapidly investing in training models to stay ahead in the rapidly evolving landscape.

    Bank of America believes that the AI hype cycle has reached a phase of disillusionment, where investors tend to overestimate short-term tech disruptions and underestimate long-term impacts. The analysts expect a relatively short time gap between AI infrastructure investment and monetization due to the strong foundation model operating systems currently in place.

    “We advise investors not to underestimate the potential cost savings and revenue generation of GenAI before it is even used,” Shah emphasizes.

    While industry leaders do not anticipate immediate growth in revenue and profit, they are confident that the latest core models and GenAI applications will enhance operational efficiency and productivity, boosting the economy.

    “We have numerous instances of established businesses purchasing AI-centric workflow products,” Nath remarks. “The adoption of AI is certainly a significant reality.”

    Microsoft’s Chief Financial Officer Amy Hood recently reassured investors that the company’s investments in data centers will facilitate the monetization of its AI technology for at least 15 years and beyond.

    Meta’s Chief Financial Officer Susan Li cautioned investors that returns from GenAI may take a long time to materialize. “We do not anticipate our GenAI products to significantly drive revenue in 2024,” Li informed analysts. “However, we do expect that they will create new revenue opportunities over time, enabling us to achieve a substantial return on our investment.”

    This presents a challenge for investors in publicly traded companies who typically expect returns within a shorter timeframe compared to venture capital investors, who usually have a longer investment horizon of around 10-15 years.

    Nevertheless, most agree that the current rate of capital expenditure on AI is unsustainable, and one or more of the tech giants may need to scale back investments by early next year to allow revenue growth to catch up.

    Despite the recent decline in tech stocks, experts dismiss any parallels between the current AI surge and the late-90s dotcom bubble.

    Srikanth Velamakanni, co-founder, group CEO, and executive vice chairman of Fractal, asserts that AI will have a much greater and more transformative impact than the dotcom revolution or any other technological revolution we have seen.

    While both cycles saw tech company valuations reach unrealistic levels driven by hope and excitement rather than a clearly defined profitable revenue stream, there are differences.

    Crucially, today’s tech leaders are highly profitable and have proven business models that will not collapse even if their AI initiatives fail. They possess strong competitive advantages in the form of proprietary data and a large user base.

    “The dotcom companies did not have the level of cash flow and demand visibility that today’s top US tech companies enjoy,” points out Siddharth Srivastava, head of ETF products and fund manager at Mirae Asset (AMC). “US tech stocks are due for some correction, but the AI theme will remain strong in the next 3-5 years.”

    JP Morgan research indicates that the average price-to-earnings (PE) ratio of today’s tech giants is around 34, which is not excessively high for growth stocks. In contrast, the average PE ratio of the group of listed dotcom companies was 59.

    However, there is growing concern that the valuation of some AI startups may be approaching bubble territory as opportunistic players join the trend.

    “Some startups have ‘.ai’ in their company names but are only capable of creating AI ‘wrappers’,” Nath warns. “We are concerned that these startups may initially succeed in raising funds but will soon struggle and ultimately fail.”

    The AI landscape in India is relatively less crowded. Since 2009, investors have injected $2.6 billion into domestic startups developing AI for various purposes. This is a small fraction of the $55.8 billion invested in AI startups in the US during the same period.

    The launch of ChatGPT in November 2022 made entrepreneurs realize how AI’s true power can be made accessible to millions of users worldwide.

    Roy expresses some disappointment with domestic tech providers. “Most of these companies are followers, and there isn’t much innovation yet,” he complains. “Investors want to see ‘proof of value’ and are no longer swayed by just a ‘proof of concept.’”

    The experienced research analyst, however, is optimistic about the progress of domestic companies in utilizing conversational AI to guide a customer’s buying journey, for example. He is also hopeful that more companies benefiting from AI will emerge. “This presents a wealth of opportunities,” he states.

    Developing cash-intensive core models from scratch for artificial general intelligence (AGI) applications requires billions of dollars in investment. “There is no chance of any Indian company being funded at that level,” laments Velamakanni. “You need vision along with capital and talent.”

    Velamakanni is confident that India has the potential to establish application-focused companies using foundational models to address real-world challenges in various sectors without requiring substantial funding. He mentioned that startups in this space in India are highly competitive and have been successful in raising funds.

    Fractal, founded 20 years ago, has secured $685 million from 13 investors. In January 2022, it became a unicorn after raising $360 million from TPG, achieving a revenue multiple of 7.1 times and a post-money valuation of $1 billion.

    In the era of AI, Nath advises founders within the ecosystem to reconsider their go-to-market (GTM) strategy. He emphasizes that the traditional sequential approach for SaaS might not be effective anymore. With AI, the path to reaching a $100 million annual recurring revenue (ARR) business seems to be faster, requiring an evolved GTM strategy and channels.

    Historically, disruptive technologies have taken 15-30 years to be widely adopted. For instance, the radio, invented in 1890, only became commercially available in 1920. Similarly, the television, developed in the 1920s, was only found in homes in the 1950s. Even though email was invented in 1969, it only gained popularity in 1997.

    While predicting the future is uncertain, proponents believe that artificial intelligence (AI) is likely to become mainstream in the next three to five years, potentially benefiting companies investing in it. The ultimate use of AI, however, remains to be seen, and only time will reveal how “real” artificial intelligence is.

    Discover how to invest in AI and take advantage of future opportunities

    Artificial intelligence (AI) is no longer a concept of the future – it is a revolutionary force that is reshaping industries and our daily lives. Before considering AI investments, it is important to grasp the definition of artificial intelligence; AI technology imbues computers and technological products with human-like intelligence and problem-solving capabilities.

    From virtual assistants in our homes to self-driving vehicles on our roads, AI is rapidly being integrated into numerous products and applications, dominating discussions on investments and future prospects.

    The AI landscape is intricate, and news of enhanced capabilities at one company can quickly change the pace of progress for all. Identifying the best AI companies to invest in is a challenging task, even when utilizing the top online brokers and trading platforms.

    Similar to how investors in the past had to discern between promising and less promising web browsers, smartphones, and app-based startups, niche players and established tech giants are now competing for AI market share and research funding.

    In this article, we will explore the process of investing in AI and showcase the most promising AI stocks and funds.

    How to Invest in AI

    Similar to previous emerging technologies like railroads in the late 1800s or personal computers in the 1980s, there are numerous avenues for investing in AI. While some companies will achieve great success, others may falter.

    The computer revolution serves as a fitting analogy for AI investing and understanding how to invest in AI. Computers laid the groundwork for automating routine and repetitive tasks, and now AI aims to build on this concept by automating tasks that previously required human intelligence.

    Investors may find that certain top AI stocks have posted one-year returns in the high double or even triple digits, with NVIDIA reporting 176% growth over the past 12 months as of July 23, 2024.
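
    As a simple illustration of what a return on that scale means in dollar terms (a hypothetical position, not a recommendation or a prediction):

    ```python
    # Hypothetical illustration: growth of a $10,000 position at a 176% one-year return.
    initial_investment = 10_000
    one_year_return = 1.76                      # 176% expressed as a decimal
    ending_value = initial_investment * (1 + one_year_return)
    print(f"${ending_value:,.0f}")              # $27,600
    ```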

    Some individuals may be interested in directly investing in companies that develop AI, while others may prefer to invest in companies that are poised to benefit significantly from its widespread adoption.

    Drawing from the introduction and growth of the personal computer industry, some investors successfully invested in computer manufacturers or hardware companies that produced routers and switches.

    Others invested in software companies that developed computer programs, while some sought to identify companies that would benefit the most from the automation offered by computers.

    Some of these investments were direct bets on computers and the actual technology, while others were more conservative, such as purchasing shares in already established companies that stood to benefit from the expansion of computer usage. The key point is that there are various methods for investing in a new technology.

    There are instances where one company takes and maintains a leading position in the market, but there are also cases where an imitator can leverage the first company’s technology more effectively, leading to greater success over time. Given the difficulty of predicting the winning AI stocks in advance, holding several stocks or opting for an AI ETF could help minimize the risk of making a wrong move.

    Investing in AI Stocks and ETFs

    Prominent Companies in AI

    While these are some of the top AI stocks, it is advisable to consider the business cycle and valuations before committing fully. Employing dollar-cost averaging in AI stock selections can serve as a hedge against market downturns.
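
    Here is a minimal sketch of how dollar-cost averaging works in practice; the monthly prices are made up purely for illustration.

    ```python
    # Dollar-cost averaging sketch: invest a fixed dollar amount every month at
    # whatever the share price happens to be. Prices below are invented examples.
    monthly_budget = 500.0
    prices = [120.0, 95.0, 80.0, 110.0, 140.0, 100.0]   # hypothetical monthly prices

    shares_bought = sum(monthly_budget / p for p in prices)
    total_invested = monthly_budget * len(prices)
    average_cost = total_invested / shares_bought        # what you actually paid per share
    average_price = sum(prices) / len(prices)            # simple average of market prices

    print(f"Average cost per share: ${average_cost:.2f}")  # comes in below the average price
    print(f"Simple average price:   ${average_price:.2f}")
    ```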

    NVIDIA (NVDA): NVIDIA Corp. is leading the AI revolution through its work in designing and developing graphics processing units (GPUs) and associated software and data center networking solutions.

    Investors have taken notice: as of July 23, 2024, its share price has surged by 176% over the past 12 months and expanded by over 2,885% in the last five years.

    Originally developed for the PC graphics and video gaming industries, these GPUs have become fundamental to AI, machine learning, self-driving vehicles, robotics, augmented reality, virtual reality applications, and even cryptocurrency mining systems.

    Microsoft (MSFT): Microsoft is an example of an established tech company delivering on AI investment promises. Microsoft has partnered with OpenAI, the company behind ChatGPT. It has leveraged this partnership to integrate AI into its Azure cloud services, and Microsoft 365 now offers an add-on subscription for generative AI, known as Copilot.

    Microsoft stated in its April 2024 earnings call that 65% of the Fortune 500 were using its Azure OpenAI service, a similar percentage to those using Copilot.

    AeroVironment Inc. (AVAV): Government contracts with the US Department of Defense and US allies provide a level of support for this narrowly focused AI stock. AeroVironment Inc. supplies unmanned aircraft and tactical mission systems, along with high-altitude pseudo-satellites.

    The AVAV systems offer security and surveillance without the need for a human operator or pilot in the air.

    Amazon.com (AMZN): Amazon’s generative AI capabilities enhance customer experiences, boost employee productivity, foster creativity and content creation, and optimize processes. Amazon employs AI in its Alexa system and also provides machine learning and AI services to business customers.

    Amazon’s cloud computing business, Amazon Web Services, provides an AI infrastructure that allows its customers to analyze data and incorporate AI into their existing systems. Amazon has also made its Amazon Q AI assistant generally available for software development and data analysis.

    Taiwan Semiconductor Manufacturing (TSM): Taiwan Semiconductor Manufacturing is the world’s largest chipmaker and a global player in chip manufacturing for artificial intelligence. As AI grows, the need for robust computing chips will grow with it.

    TSM is a mature company that continues to make chips for non-AI computer applications, so it may represent less risk than other pure plays on AI.

    Arista Networks Inc. (ANET): Launched in 2008, Arista bridges the gap between startup and legacy tech companies. Arista is a networking equipment company that sells ethernet switches and software to data centers.

    With Ethernet among the best options for powering AI workloads, Arista is well positioned to capitalize on AI’s potential to improve how we work, play, and learn.

    Adobe Inc. (ADBE): Global workers have depended upon Adobe products for content creation, document management, digital marketing, advertising software, and services for years.

    Among the older companies on our list of best AI companies to invest in, Adobe has infused most of its products and services with AI features, boosting its already impressive competitive advantage.

    Recent performance has lagged behind our other best AI firms, but the company could be a bargain now. According to Morningstar, the company is significantly undervalued and holds a four-star ranking.

    Best AI ETFs

    Investing in professionally managed ETFs or mutual funds that hold shares in AI companies allows you to leave it to a fund’s professional managers to research and pick suitable AI companies. Through an ETF, you own a share of a portfolio of multiple AI stocks within a single investment.
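
    To make the “one investment, many holdings” point concrete, here is a small hypothetical sketch of how an ETF’s return reflects a weighted basket of stocks; the tickers, weights, and returns are invented for illustration only.

    ```python
    # Hypothetical sketch: a fund's return is the weighted sum of its holdings' returns.
    # Tickers, weights, and returns below are invented and do not describe any real ETF.
    holdings = {
        "CHIPCO":  (0.30, 0.45),   # (portfolio weight, one-year return)
        "CLOUDCO": (0.25, 0.20),
        "ROBOTCO": (0.20, -0.10),
        "DATACO":  (0.15, 0.05),
        "OTHERCO": (0.10, 0.02),
    }

    fund_return = sum(weight * ret for weight, ret in holdings.values())
    print(f"Fund return: {fund_return:.1%}")   # weights sum to 1.0, so this is ~17%
    ```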

    iShares Exponential Technologies ETF (XT): XT is a large-capitalization fund that holds 186 US and global stocks of companies trying to disrupt their industries. With $3.4 billion in assets, XT homes in on the power of AI to automate, analyze, and create new ideas. The fund spans the tech, healthcare, industrial, and financial sectors.

    Defiance Machine Learning & Quantum Computing ETF (QTUM): This AI index fund focuses on companies bringing artificial intelligence and machine learning to a range of industries. The fund replicates the BlueStar Quantum Computing and Machine Learning Index (BQTUM), which tracks 71 global stocks across a range of market capitalizations.

    The Defiance Machine Learning & Quantum Computing ETF captures returns of the companies at the forefront of next-gen disruptive technology and machine learning.

    ROBO Global Robotics & Automation Index ETF (ROBO): This ETF invests in companies focused on robotics, automation, and AI, including growth and blend stocks of all market capitalizations.

    How to Search for AI Investments

    Buying individual AI stocks is more work for the investor. Given the multiple ways to invest in AI, the first step is to read about the industry to understand the various aspects of artificial intelligence.

    Within the AI universe, there are pure plays and more conservative plays, and you’ll have to decide the type of exposure you want in this market sector. Once you have an idea of the parts of the AI market you want to invest in, you can perform traditional investment analysis, both fundamental and technical.

    Earnings forecasts: Earnings are a great way to judge a company’s performance, and AI companies with consistent and growing earnings should be looked at favorably. Many AI companies will be viewed as growth stocks, so earnings growth will be an important criterion for many investors.

    Earnings releases tend to move AI stocks up or down sharply.

    Annual reports: These reports provide important details about the company’s activities and future growth plans. The financial statements allow you to review the company’s debt-to-equity and other accounting ratios, which are used to make financial decisions about stocks.

    Relative performance vs. the market: Relative performance is how an individual stock performs compared with an index or another stock. For newer AI companies, it’s best to compare their relative performance with similar companies (a short calculation sketch follows this list).

    Growth analysis: This deals with a company’s growth over time. You’ll examine earnings, market share, and other metrics to determine the company’s strength and prospects.

    Analyst projections: Analyses and reports can be especially worthwhile if you’re new to the AI space. This volatile market sees constant new technological developments, and company prospects change much more quickly than in more mature industries.

    Therefore, it’s good to gain the perspective of professional researchers who understand the overall AI space and the prospects of individual stocks relative to competitors.
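
    Referring back to the relative-performance criterion above, here is a minimal sketch of the comparison; the return figures are placeholders, not real market data.

    ```python
    # Sketch: relative performance = a stock's return minus a benchmark's return
    # over the same period. The figures below are placeholders, not real data.
    def relative_performance(stock_return: float, benchmark_return: float) -> float:
        """Excess return of the stock over the benchmark (both as decimals)."""
        return stock_return - benchmark_return

    ai_stock_return = 0.42       # hypothetical 42% one-year return for an AI stock
    peer_index_return = 0.25     # hypothetical 25% return for a comparable AI index

    excess = relative_performance(ai_stock_return, peer_index_return)
    print(f"Outperformance vs. benchmark: {excess:.1%}")   # 17.0%
    ```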

    Frequently Asked Questions (FAQs)

    Is It Possible for Investors to Profit from AI?

    AI is rapidly expanding, and the technology behind it seems ready to advance further and meet expectations for broader adoption across various businesses and real-world applications.

    Similar to any technology demanding significant capital investment, AI presents numerous opportunities for investors to earn money, but new technologies also come with risks.

    You’ll need to find the most suitable way to get involved without taking on too much risk. Options include more speculative direct AI investments in individual companies or ETFs and mutual funds that provide a portfolio of multiple companies in the AI space.

    You can also consider investing in companies that are poised to grow their revenues as AI becomes more widely adopted across the economy.

    How Can You Participate in AI Art Investment?

    One of the most popular applications of generative AI is creating images. Users can describe an image they want to create, and an AI program can generate an image that matches that description—most of the time.

    These AI programs utilize the user’s description along with images available globally to create the requested artwork for the user.

    AI-generated artwork has been used by people of all ages and backgrounds. Once you’ve created AI art, you can sell it and/or purchase from others on AI art marketplaces. AI art can be collected as giclee prints, digital downloads, NFTs, and other formats.

    It can be traded on certain crypto platforms and specific AI art websites. However, the profit and investment potential for AI art is still in its early stages and cannot be accurately determined.

    How Can You Invest in AI Startups?

    Startup companies are often founded in new and promising fields, such as AI and machine learning. These are typically companies that have been funded initially by venture capital investors and then taken public to capitalize on their initial investment and to raise more capital as the business expands its operations and begins offering its products to a wider customer base.

    Many startup investments are only accessible to large accredited investors. Other platforms allow the public to invest small amounts in promising new ventures. You’ll need to sift through the offerings to find the AI startup companies.

    While investing in startups can be risky, the rewards for investing in a successful startup company can be substantial. Examples of successful startup companies include Apple, Amazon, and Microsoft.

    Is It Possible to Directly Invest in AI?

    Certainly, you can directly invest in AI and machine learning by investing in individual stocks or in ETFs or mutual funds that focus on AI stocks.

    Conservative investors seeking AI stocks to buy might consider established companies that are benefiting from AI processes, while aggressive investors can search for investments in direct AI companies. For AI investment ideas, check out the best AI stocks. This list is updated monthly.

    The Bottom Line

    Investing in AI in 2024 presents compelling opportunities for your portfolio. The technology continues to permeate the media, healthcare, automotive, finance, and other sectors.

    However, you’ll have to navigate challenges that could include potential legal and regulatory changes, supply shortages, and the broader political and ethical considerations concerning the widespread deployment of AI systems and the ecological effects of powering them.

    Similar to investing in the new internet and computing industries decades ago, the winners and losers can change rapidly.

    Staying informed and selectively investing in companies prioritizing robust business models will be crucial for those looking to capitalize on the AI boom while mitigating risks.

    AI stocks have experienced significant growth in 2024. NVIDIA, in particular, has attracted a lot of attention due to its substantial increase in value. In June 2024, it briefly surpassed Apple and Microsoft to become the world’s most valuable company.


    However, there have been recent speculations that the excitement around AI might be exaggerated, or that geopolitical issues could hinder semiconductor development crucial to AI’s success. NVIDIA’s time at the top was short-lived, and by late July, its market cap had dropped below $3 trillion, falling behind Apple and Microsoft once again.

    For those who believe in the long-term potential of AI, price pullbacks could be seen as buying opportunities, according to some analysts.

    7 top-performing AI stocks

    Here are the seven best-performing stocks in the Indxx Global Robotics & Artificial Intelligence Thematic Index, ranked by one-year returns. This list is updated on a weekly basis.

    SoundHound AI Inc. (SOUN)

    SoundHound AI develops voice-based AI products, such as a voice assistant for restaurants that enables customers to place orders, inquire about operating hours, and make reservations.

    Apart from the food service sector, SoundHound creates products for the automotive and hospitality industries. The company has an impressive client roster, including Hyundai, Pandora, Krispy Kreme, White Castle, Toast, and Square.

    NVIDIA Corp (NVDA)

    NVIDIA initially focused on 3D graphics for multimedia and gaming companies in 1993. The company also began developing AI applications as early as 2012. Today, NVIDIA remains at the forefront of AI and is engaged in the development of software, chips, and AI-related services.

    Procept BioRobotics Corp (PRCT)

    Procept BioRobotics designs medical robotics solutions for urology. Its AquaBeam robotic system delivers Aquablation therapy, a heat-free robotic treatment for symptoms related to benign prostatic hyperplasia that offers an alternative to traditional surgery.

    What are AI stocks?

    AI stocks are shares of companies involved in the artificial intelligence sector. The applications for AI are diverse, resulting in a wide range of AI stocks: Some companies create voice recognition software, while others develop pilotless aircraft.

    According to Haydar Haba, the founder of Andra Capital, a venture capital firm that invests in AI companies, there are numerous publicly traded companies with substantial AI interests poised to benefit from the industry’s growth.

    AI stocks typically fall into one of two categories: established technology companies that have invested in or partnered with AI developers, and smaller, experimental companies entirely focused on AI development.

    Shares of small AI developers may appear to be the most “direct” investments in AI, but Michael Brenner, a research analyst covering AI for FBB Capital Partners, suggests that they might not necessarily be the best AI investments.

    “Large language models require a significant amount of data and substantial capital to develop,” Brenner states.

    Brenner highlights that small companies may innovate and create new models independently, but eventually, they will need to collaborate with a larger company possessing more infrastructure to run those models on a commercial scale.

    “We are currently sticking with more of the mega-cap tech companies,” Brenner notes, referring to FBB Capital Partners’ AI portfolio.

    How to invest in AI stocks

    If you’re new to stock trading and interested in investing in AI stocks, the first step is to open a brokerage account.

    Following this, you will need to determine the type of AI stock exposure you desire. Individual AI stocks have the potential for high returns but require assuming significant risk, upfront investment, and research effort.

    Another option is to invest in AI stocks through pooled exchange-traded funds that focus on AI.

  • The hardware and build quality of the Apple Vision Pro are undeniably impressive

    The hardware and build quality of the Apple Vision Pro are undeniably impressive

    I attempted to rely solely on the Vision Pro for my work for a week, and it surpassed my expectations once I connected it to my laptop.

    The Apple Vision Pro is the most remarkable mixed reality headset I have ever utilized. It is enjoyable for gaming and watching movies, and it has impressive eye and hand tracking capabilities. However, at a price of $3,500, one would expect it to offer more than just entertainment.

    Considering its cost is equivalent to that of a fully equipped MacBook, one would hope to be able to use it for productivity purposes.

    I have been using the Vision Pro for a few months, and for the past week, I have been attempting to use it in lieu of my traditional PC setup to assess its productivity potential. The positive aspect is that the Vision Pro possesses the capability and adaptability to function as a virtual office.

    The downside is that additional equipment, including a MacBook, is required to fully utilize its potential.

    To facilitate productivity, the addition of a MacBook is necessary.

    Initially, I attempted to work using the Vision Pro without any additional equipment. This appeared feasible since its processing power is similar to that of the iPad Pro, the 2022 MacBook Pro, and the 2023 MacBook Air, and its visionOS is based on both iPadOS and macOS. However, its design and compatibility lean more towards the iPad.

    The Vision Pro encounters similar challenges as the iPad Pro when it comes to serious work, and these challenges are even more pronounced on the headset. iPadOS presents difficulties with multitasking and managing multiple apps simultaneously.

    Managing window placement, multitasking with multiple apps and desktops, and even simply knowing which apps are open is extremely challenging without a task manager or a macOS-like dock with indicators for running apps.

    The iPad Pro has a dock without indicators, but the Vision Pro lacks a dock altogether; users need to access an iPhone-like app list to browse apps, and it does not indicate which apps are open.

    In summary, I do not recommend relying solely on the Vision Pro for work purposes.

    To simplify the process, I utilized a MacBook Air and tested the Mac Virtual Display feature. Connecting to the Mac via Mac Virtual Display is straightforward, although not as seamless as Apple claims.

    By simply looking up while wearing the Vision Pro, the menu can be accessed, settings can be opened, and the Mac Virtual Display icon can be selected. If both the Mac and Vision Pro are on the same Wi-Fi network and logged into the same Apple account, the Mac can be selected and connected instantly.

    The process is fast and simple, and there are no complaints about it. However, it is supposed to be even more streamlined with the Vision Pro displaying a large “Connect” button floating over the Mac when it is looked at.

    I have seen the button appear a few times, but not consistently, and most of the time it does not appear. Nevertheless, manually connecting through the quick menu is almost as smooth.

    Once connected, the Mac Virtual Display presents the Mac’s screen as a floating window that can be repositioned and resized within the headset. Although smart glasses like the Rokid Max and the Viture One, which cost a sixth of the price, offer similar functionality, the Vision Pro has distinct advantages.

    Firstly, the Mac Virtual Display window can be moved and resized, and it will remain fixed in that position even when moving around. Whether you want it to float just above your MacBook or cover your wall like a large TV, it is easy to position and resize. It will remain in place even if you get up and move around.

    The Vision Pro surpasses other smart glasses by allowing the use of apps while using Mac Virtual Display.

    While multitasking on the Vision Pro alone is challenging, being able to manage all your essential tools in macOS on one large screen while simultaneously having a video window open to the left and a chat window open to the right makes it easy.

    Keyboard and mouse control worked well when connected to the MacBook. I couldn’t use my mouse outside of the Mac Virtual Display window because the Vision Pro doesn’t support any form of mouse input.

    However, the Magic Trackpad can be utilized between the MacBook screen and Vision Pro apps by swiping between them.

    Importantly, physical keyboard input from the MacBook was translated to the Vision Pro. I could type in my MacBook apps and then switch to a separate app on the Vision Pro and start typing there with the same keyboard.

    Using your eyes and fingers to type on the Vision Pro’s virtual keyboard is acceptable for a few words, but for longer sentences, a physical keyboard is necessary.

    Coming from a PC setup with an ultrawide monitor and previously using two monitors, I was disappointed to discover a significant limitation in Mac Virtual Display: only one screen is available.

    Even with multiple desktops through macOS’ Mission Control, they cannot be distributed to multiple windows on the Vision Pro. You can still set other apps around you and run them alongside the Mac Virtual Display window, but you’re limited to Vision Pro apps.

    On the positive side, you can choose from various resolutions including 4K and 5K (5,120 by 2,880), surpassing the 2,560-by-1,440 screen of my MacBook Air.

    Less significant but still somewhat irritating, the Mac Virtual Display connection doesn’t detect the Vision Pro’s Persona feature as a webcam feed. If you take a video call on the MacBook, others will only see your headset-covered face.

    To use Persona for calls, you need a browser window or a videoconferencing app running on the Vision Pro itself.

    It took some experimentation to figure out the best configuration for me, but I ultimately settled on the Mac Virtual Display in front of me, a Safari window behind it for taking video calls with Persona, a few Vision Pro communications apps to my right, and the Television app showing a virtual screen playing music to my left.

    I really enjoyed working in this virtual office. Even with only one screen for my tools on the laptop, being able to make it as big as I wanted and place it anywhere around me was a huge advantage.

    I could still run browsers, communications software, and other apps outside of the Mac Virtual Display window through the Vision Pro itself, and they all worked together very well.

    Keyboard controls between apps were generally very smooth, and my clipboard was shared between the Vision Pro and the MacBook, allowing me to copy a URL from a message and drop it on my desktop (which came in handy for iCloud links with large Vision Pro recordings).

    The experience wasn’t perfect, and I encountered some hiccups. Occasionally, the Mac Virtual Display window would indicate that the connection was interrupted.

    Interestingly, this didn’t prevent me from using the MacBook through the Vision Pro, but it did stop my keyboard inputs from registering in Vision Pro apps until the error message disappeared.

    Chrome on the MacBook consistently crashed when I removed the Vision Pro, which didn’t happen when I physically closed the laptop or manually disconnected from it. These are relatively minor inconveniences that can be smoothed out over time.

    One issue you’ll likely face when working on the Vision Pro is the discomfort of long-term use. While the Vision Pro can run indefinitely when plugged in and the MacBook can last a solid 16 hours without power, I could only tolerate wearing the headset for 90 minutes at a time.

    Removing it after that duration left me with a bit of eye strain and a headache for a short period. The 20-20-20 rule of looking away from a screen at something 20 feet away for 20 seconds every 20 minutes is even more important for a view-replacing headset like the Vision Pro.

    Following a demonstration lasting approximately 30 minutes that covered the key features available for testing, I left with the firm belief that Apple has introduced a significant advancement in the capabilities and implementation of XR, or mixed reality, with its new Apple Vision Pro.

    To clarify, I am not asserting that it fulfills all its promises, introduces a genuinely new computing paradigm, or achieves any of the other lofty goals Apple is aiming for at release. I will require ample time with the device beyond a guided demonstration.

    However, I have experience with nearly every major VR headset and AR device since the Oculus DK1 in 2013 up to the most recent generations of Quest and Vive headsets. I have explored all the experiences and attempts to popularize XR.

    I have witnessed both successful social, narrative, and gaming experiences such as Gorilla Tag, VRChat, and Cosmonius, as well as emotionally impactful first-person experiences created by Sundance filmmakers that shed light on the human (or animal) condition.

    Nevertheless, none of them possess the advantages that Apple brings to the table with Apple Vision Pro, including 5,000 patents filed over the past few years and access to a vast pool of talent and capital.

    Every aspect of this device reflects Apple-level ambition. Whether it will become the “next computing mode” remains uncertain, but the dedication behind each decision is evident. No corners have been cut, and full-fledged engineering is on display.

    The hardware is impressive — with 24 million pixels spread across the two panels, significantly more than what most consumers have encountered with other headsets. The optics are superior, the headband is comfortable and easily adjustable, and there is a top strap for alleviating weight.

    Apple has stated that it is still deliberating on which light seal (the cloth shroud) options to include when it officially launches, but the default one was comfortable for me. They intend to offer variations in sizes and shapes to accommodate different face shapes.

    The power connector features a clever design as well, using internal pin-type power linkages with an external twist lock for interconnection.

    For individuals with varying vision requirements, there is also a magnetic solution for some (but not all) optical adjustments. The onboarding experience includes automatic eye-relief calibration that aligns the lenses with the center of your eyes, eliminating the need for manual adjustments.

    The main frame and glass piece look satisfactory, although it’s worth noting that they are quite substantial in size. Not necessarily heavy, but certainly noticeable.

    If you have any experience with VR, you are likely aware of the two significant obstacles that most people encounter: nausea caused by latency and the sense of isolation during prolonged sessions wearing a device over your eyes.

    Apple has directly addressed both of these challenges. The R1 chip, alongside the M2 chip, boasts a system-wide polling rate of 12ms, and I observed no judder or frame drops. While there was a slight motion blur effect in the passthrough mode, it was not distracting. The windows rendered sharply and moved swiftly.

    Naturally, Apple’s ability to mitigate these issues stems from a plethora of entirely new and original hardware. Every aspect of this device showcases a new idea, a new technology, or a new implementation.

    However, all these innovations come at a cost: at $3,500, it exceeds high-end expectations and firmly places the device in the power user category for early adopters.

    Here’s what Apple has accomplished exceptionally well compared to other headsets:

    The eye tracking and gesture control are nearly flawless. The hand gestures are detected from anywhere around the headset, including on your lap or resting low and away on a chair or couch. Many other hand-tracking interfaces require you to keep your hands raised in front of you, which can be tiring.

    Apple has incorporated dedicated high-resolution cameras on the bottom of the device specifically to track your hands. Similarly, an eye-tracking array inside ensures that, after calibration, nearly everything you look at is precisely highlighted. A simple, low-effort tap of your fingers and it works.

    Passthrough plays a crucial role. It’s vital to have a real-time 4K view of the surrounding environment, including any people nearby, when using VR or AR for extended periods.

    Most people have a primal instinct that makes them extremely uneasy when they can’t see their surroundings for an extended period.

    Being able to see your surroundings through the passthrough view should increase the likelihood of longer usage sessions. Additionally, there’s a clever mechanism that automatically displays a person approaching you through your content, alerting you to their presence.

    The exterior eyes, which change appearance based on your activity, also serve as a helpful cue for those outside.

    The high resolution ensures that text is easily readable. Apple’s positioning of this as a full-fledged computing device only makes sense if the text is legible.

    Previous “virtual desktop” setups relied on panels and lenses that presented a blurry view, making it difficult to read text for an extended period.

    In many cases, it was physically uncomfortable to do so. With the Apple Vision Pro, text is incredibly sharp and readable at all sizes and distances within your space.

    There were several pleasantly surprising moments during my brief time with the headset. Apart from the display’s sharpness and the responsive interface, the entire suite of samples demonstrated meticulous attention to detail.

    The Personas actually work. I had serious doubts about Apple’s ability to create a functional digital avatar based solely on a scan of your face using the Vision Pro headset. Those doubts were unfounded.

    I would say that the digital version it creates for your avatar in FaceTime calls and other areas successfully bridges the uncanny valley.

    It’s not flawless, but the skin tension and muscle movement are accurate, the expressions are used to create a full range of facial movements using machine learning models, and the brief interactions I had with a live person on a call (and it was live, I verified by asking off-script questions) did not feel unsettling or strange. It worked.

    It’s sharp. I’ll reiterate, it’s extremely sharp. It handles demos like the 3D dinosaur with incredible detail down to the texture level and beyond.

    3D movies look great on it. Jim Cameron probably had a moment when he saw “Avatar: Way of Water” on the Apple Vision Pro.

    This device is perfectly designed to showcase the 3D format, and it can display 3D movies almost immediately, so there will likely be a substantial library of 3D titles that breathes new life into the format.

    The 3D photos and videos you can capture directly with the Apple Vision Pro also look excellent, but I didn’t have the chance to capture any myself, so I can’t comment on the experience. Awkward? Hard to say.

    The setup process is simple and seamless. A few minutes and you’re ready to go. Very Apple.

    Yes, it’s as impressive as it looks. The output of the interface and the various apps is so remarkable that Apple used them directly from the device in its keynote.

    The interface is vibrant and bold and feels present due to its interaction with other windows, casting shadows on the ground, and reacting to lighting conditions.

    Overall, I’m cautious about making sweeping claims regarding whether the Apple Vision Pro will deliver on Apple’s promises about the advent of spatial computing.

    I’ve had too little time with it, and it’s not even finished — Apple is still refining aspects such as the light shroud and various software elements.

    However, it is undeniably well-executed. It represents the ideal XR headset. Now, we’ll have to wait and see what developers and Apple achieve over the next few months and how the public responds.

    A recent leak suggests that the Apple Vision Pro 2 is moving toward mass production.

    The Apple Vision Pro 2 is scheduled to commence mass production in 2025, despite previous reports indicating otherwise. The original Vision Pro, Apple’s AR headset, did not perform well in the market, with sales struggling to reach 100,000 units by July 2024.

    Apple intends to introduce new features to enhance the popularity of the sequel. One of these features is a new M5 chipset, expected to enhance the headset’s performance.

    Contrary to earlier rumors of production cessation due to low demand for the original Vision Pro, analyst Ming-Chi Kuo from TF International Securities believes that mass production of the new M5 chipset-equipped AR headset will begin in the second half of 2025. Apple aims to make the Vision Pro 2 more cost-effective, potentially appealing to a broader customer base.

    Kuo also anticipates minimal, if any, changes to the design of the AR headset, which would reduce production costs. This strategic move would leverage the fresh and appealing design of the Vision Pro, featuring the innovative augmented reality display EyeSight and a modern futuristic high-end aesthetic.

    New chip, new enhancements

    According to Kuo, the M5 chipset will enhance the Apple Intelligence experience. The projected launch date of the Apple Vision Pro 2 suggests that the M5 chipset may utilize TSMC’s N3P node, although this is not confirmed.

    In an effort to control production costs, Apple will not utilize its more advanced 2nm chipsets. These chipsets were initially expected to be used for manufacturing next-generation iPhone chips like the A19 and A19 Pro, but it appears that these products will also stick with TSMC’s N3P (3nm) node.

    While not as cutting-edge as the 2nm chipsets, the 3nm chipset is still efficient and powerful.

    The high cost of the Apple Vision Pro, starting at $3,500 (£2,800, AU$5,300), is often cited as a reason for its low sales figures. Other reasons include a perceived lack of content for the device, as well as comfort, wearability, and the intuitiveness of the gesture-based control.

    There is still much unknown about the specifications of the Apple Vision Pro 2, but if Apple can deliver the proposed M5 chipset in a more affordable headset, it could be a success for the company.

    The Vision Pro 2 is reportedly set to be released by the end of next year, featuring an M5 chip and designed for AI ‘from the ground up’ (as Apple might say). This news is promising, and I believe it’s the right move for Apple.

    It has been clear for some time that Apple’s vision for its Vision products is long-term.

    AR and VR are still in the early stages of adoption. However, the challenge many tech companies face is how to develop the technology and platform without having devices in the market.

    So, earlier this year, Apple released the Vision Pro. While it has not been a major success or significantly contributed to the company’s bottom line, it is a tangible product. Developers are creating applications for it, and technologies like visionOS, Immersive Video, and Spatial photos are expanding. Slowly, the Vision Pro is making a ‘spatial computing’ future more feasible.

    The objective: appealing to the masses

    Ultimately, Apple aims for its Vision products to become a major success and the next big thing. It wants spatial computing to become mainstream.

    To achieve this goal, at the very least, a Vision product needs to be:

    • Lighter
    • More versatile
    • Less expensive

    Therefore, reports that Apple’s priority is not the Vision Pro 2, but instead a more affordable Vision device, make a lot of sense.

    While Apple focuses on the non-Pro version of its Vision line, it is crucial to keep the Vision Pro at the forefront of innovation.

    This is where the latest report becomes relevant.

    The Vision Pro 2 is receiving the necessary upgrades, and perhaps more

    Previously, I suggested that while Apple is concentrating on a less expensive Vision device, it should at least equip the current Vision Pro with an M4 and leave it at that.

    It appears that this is precisely what will happen, except it will feature an M5 instead.

    Reportedly, the Vision Pro 2 will include an M5 chip with a strong focus on Apple Intelligence.

    And I say: great!

    Apple’s focus on Apple Intelligence is evident, and the absence of this feature in visionOS for the $3,500 Vision Pro is disappointing, given its otherwise advanced capabilities.

    If Apple were to introduce a new Vision Pro in 2025 with an M5 chip and integrate several Apple Intelligence features into visionOS 3, it would generate the necessary excitement for the platform.

    Meanwhile, the company can continue prioritizing the more affordable Vision product, as it has a better chance of achieving widespread success.

    For now, it’s crucial for the Vision Pro to remain appealing to early adopters and the curious, and the rumored updates should help achieve this.

    According to Apple analyst Ming-Chi Kuo, a new version of the Vision Pro headset is being developed and is expected to begin mass production in the second half of 2025.

    Kuo suggests that the most significant change in the upcoming model will be the inclusion of Apple’s M5 chip, a substantial upgrade from the current Vision Pro’s M2 chip. This enhancement is expected to significantly boost the device’s computing power, particularly in terms of integrated Apple Intelligence features.

    Despite the upgraded internals, Kuo reports that other hardware specifications and the overall design of the Vision Pro will remain largely unchanged. This approach may help Apple manage production costs, although the price point is anticipated to remain close to the current $3,499 starting price.

    Kuo emphasizes that if the new version introduces compelling use cases, it could propel Apple’s spatial computing platform toward mainstream adoption. He also speculated on the potential integration of advanced AI models, such as text-to-video capabilities similar to OpenAI’s Sora, which could greatly enhance the Vision Pro experience.

    According to Bloomberg’s Mark Gurman, Apple is planning to incorporate Apple Intelligence features into the Vision Pro headset in the future. While the device is capable of running on-device AI functions such as writing tools, notification summaries, and an enhanced Siri, these features are not expected to be available in 2024. Instead, Apple may be saving the Apple Intelligence integration for visionOS 3, potentially launching in 2025.

    Apple’s exploration of a new product category includes venturing into robotics. Additionally, the company is preparing new iPads and accompanying accessories for a May release, the Vision Pro is set to receive another Personas upgrade, and there has been a significant management change at Apple.

    Just a year ago, Apple’s future product pipeline seemed abundant. The Vision Pro had not yet been introduced, smart home devices were in development, and the Apple electric car project seemed to be gaining traction.

    Today’s situation is markedly different. While the Vision Pro is now available for purchase, it has not achieved widespread popularity. The Apple vehicle project has been scrapped, along with efforts to develop next-generation smartwatch screens.

    The performance improvements of processors have begun to level off, and the company is lagging behind in the smart home market.

    To compound the situation, Apple’s competitors, such as Microsoft Corp. and Alphabet Inc.’s Google, have made significant progress in generative AI, much to the excitement of consumers and investors. Meanwhile, Apple has remained relatively inactive.

    Apple’s business is heavily reliant on the iPhone, which contributes to more than half of its revenue. Sales in that market have stagnated, underscoring the importance of finding a major new product category.

    Apple has faced similar challenges in the past. The iMac revitalized the company in the late 1990s, the iPod propelled it into consumer electronics in the early 2000s, and the iPhone transformed Apple into the industry giant it is today. The iPad further solidified its position in our lives.

    While Apple is starting to generate more revenue from online services and other offerings, it remains fundamentally a company focused on devices. During the most recent holiday season, the majority of its revenue was derived from products such as the iPhone, Mac, iPad, Apple Watch, and AirPods.

    Ultimately, services like the App Store, TV+, and Apple One bundles depend on the iPhone and other devices to function. This underscores the importance of staying at the forefront of hardware innovation.

    An Apple vehicle was seen as the “ultimate mobile device,” and it’s clear why that possibility was exciting. It’s a low-profit industry, but the vehicles could have been sold for $100,000 each.

    Even if Apple had sold only a fraction of the number of units Tesla Inc. sells, that could have resulted in a $50 billion business (roughly equivalent to the iPad and Mac combined).

    The Vision Pro headset introduced Apple to the mixed-reality category, which the company calls spatial computing. However, its greatest potential might be in replacing the Mac and iPad, rather than creating an entirely new source of revenue.

    For the device to gain any significant traction, the company will need to produce a more affordable model and ideally bring it to market within the next two years.

    Then there’s the smart home sector, where Apple still has large aspirations. It has discussed automating household functions and offering an updated Apple TV set-top box with a built-in camera for FaceTime video calls and gesture-based controls. And all the technology will seamlessly integrate with both the iPhone and Vision Pro.

    One aspect of the plan is a lightweight smart display — something similar to a basic iPad. Such a device could be moved from room to room as needed and connected to charging hubs located around the house. Apple has initiated small-scale test production of the screens for this product, but has not made a decision on whether to proceed.

    Establishing a unified smart home strategy remains a goal for Apple, but fulfilling the vision has proven challenging. The need to complete the Vision Pro took priority, diverting resources away from smart home efforts.

    But now that the Vision Pro has been released and the electric car project has been canceled, Apple has more capacity to refocus on the home. And there’s an exciting potential opportunity in that area. As reported recently, Apple is exploring the concept of creating personal robotic devices infused with artificial intelligence.

    The company has internal teams within its hardware engineering and AI divisions exploring robotics. One recent project involved a home robot that could follow a person around the home.

    Some involved in the effort have even suggested that Apple could delve into humanoid technology and develop a machine capable of handling household chores. However, such advancements are likely a decade away, and it doesn’t seem that Apple has committed to moving in that direction.

    A more immediate move into robotics would be a device that Apple has been working on for several years: a tabletop product that utilizes a robotic arm to move around a display.

    The arm could be used to mimic a person on the other side of a FaceTime call, adjusting the screen to replicate a nod or a shake of the head. However, this device also lacks unified support from Apple’s executive team.

    So for now, Apple will likely make more gradual improvements to its current lineup: new device sizes, colors, and configurations, in addition to accessories that could generate more revenue from the iPhone. This has largely been the key to the company’s success during Tim Cook’s tenure as CEO.

    But with robotics and AI advancing every year, there’s still hope that something from the Apple lab could eventually make its way into consumers’ living rooms.

    2024 is shaping up to be the year of the iPad. The new iPads are finally on the horizon. You can mark early May on your calendar if you — like many Power On readers, apparently — have been eagerly anticipating an upgraded tablet.

    On the agenda is the overhauled iPad Pro, an iPad Air, a new Magic Keyboard, and an Apple Pencil. In total, this launch is set to be one of the most extensive updates to the Apple tablet in a single day.

    And it’s been a long time coming, especially for the iPad Pro. That model hasn’t received a substantial update since 2018.

    For those seeking more specific timing, I’m informed that the launch will likely take place the week of May 6. Another indication of this: Apple retail stores are gearing up to receive new product marketing materials later that week.

    This is usually a sign that a new product release is imminent. It’s also worth noting — as I reported at the end of March — that the intricate new iPad screens are the reason behind the roughly one-month delay from the initial March release plan.

    Regardless, the new lineup is expected to increase sales, but I’m uncertain whether it will address the broader challenges faced by the iPad. As a frequent user of a Mac and iPhone, and now a Vision Pro for watching videos, I find the iPad extremely irrelevant.

    The device isn’t sufficiently capable to fully replace a Mac for everyday tasks, and its software still has significant room for improvement. Hopefully, the introduction of iPadOS 18 will bring about substantial enhancements, making the device a true alternative to a Mac.

    Setting aside software considerations, the hardware upgrades in the new iPads mark some of the most significant changes in the product’s history. For the first time, Apple will be transitioning its tablet screens to OLED, or organic light-emitting diode, a technology already utilized in the iPhone.

    Reportedly, this technology looks stunning on larger displays, taking the experience that iPhone users have had since 2017 to a whole new level. However, one downside to this transition is that the new models will likely come with higher price points, according to the information I’ve received. The current iPad Pro starts at $799.

    Additionally, the company is working on new iterations of the entry-level iPad and iPad mini, but they are not expected to be released before the end of the year at the earliest. The new lower-end iPad will likely be a cost-reduced version of the 10th generation model from 2022, while the update for the iPad mini is expected to mainly involve a processor upgrade.

    Looking further ahead, Apple engineers are exploring the possibility of foldable iPads. However, this initiative is still in its early stages, and the company has yet to find a way to create foldable screens without the crease seen on similar devices from Samsung Electronics Co. and others.

    I’ve been cautioned that if Apple is unable to solve this issue, it might decide to abandon the concept of foldable iPads altogether. Nevertheless, there’s still time.

    Apple has introduced more realistic Personas for the Vision Pro, while visionOS 1.2 is currently undergoing testing. The visionOS 1.1 update was released a few weeks ago, and Apple has just added a new feature: Spatial Personas. These are advanced avatars that create the sensation of being in the same room as other people during FaceTime calls (in contrast to the original Personas, which felt more like being confined in a frosted glass box).

    Ironically, the initial beta version of visionOS 1.2 was released last week and brought almost no new features. (In fact, two of the original environments that were included with the Vision Pro on Feb. 2 are still not functional.)

    I have tested the new Spatial Personas, which are still in beta, with two different individuals for several minutes. I am extremely impressed — I would even go so far as to say that Apple’s communications and marketing teams have not fully highlighted this feature so far. It’s incredibly impressive and unlike anything I have experienced before.

    In fact, it’s so impressive that the absence of this feature in the initial Vision Pro launch likely held back the product. If you have a Vision Pro (and somehow know someone else with one), you absolutely have to try it.

    Why did Kevin Lynch, the head of Apple Watch, transition to the company’s AI group? One of the behind-the-scenes stories that was overshadowed by the cancellation of the Apple car is the change in Kevin Lynch’s role, who led the project in recent years.

    For about ten years, Lynch reported to Apple’s Chief Operating Officer, Jeff Williams. In addition to overseeing the car project, he has been in charge of software engineering for the Apple Watch under Williams.

    In an unexpected move, Lynch has now started reporting to John Giannandrea, Apple’s AI chief. Lynch and Williams still have oversight of the Apple Watch, leading to the question: Why was this change necessary?

    Those close to the situation believe that Lynch’s move is intended to bring clarity to an area that has posed challenges for Apple: AI. This is something Apple also attempted to address with the car project.

    Lynch initially joined that project in 2021, a few months before the project’s leader, Doug Field, stepped down to lead the electric vehicle efforts at Ford Motor Co. Within the company, Lynch is seen as a highly skilled engineering manager.

    With AI, it’s no secret that Apple has been struggling to develop large language models and other tools that can compete with the best in the industry. If Giannandrea were to eventually leave the company, Lynch — who has been due for a promotion to the senior vice president level — could be well-positioned to step into his role.

  • Huawei provides smart components and systems for autonomous vehicles

    Huawei is distributing samples of its Ascend 910C processor to conduct tests, aiming to address the gap left by Nvidia.

    Under US sanctions, Huawei Technologies has initiated the testing of a new AI chip with potential customers in China, as they seek alternatives to high-end Nvidia chips, moving closer to bolstering China’s self-sufficiency in semiconductors despite US restrictions.

    Huawei has provided samples of its Ascend 910C processor to major Chinese server companies for testing and setup, as stated by two sources familiar with the matter.

    According to one of the sources, a distributor of Huawei AI chips, the upgraded 910C chip is being offered to large Chinese internet firms, which are significant Nvidia customers. Huawei did not respond immediately to a request for comment on Friday.

    Huawei has been striving to fill the gap left by Nvidia after the ban on the California-based chip designer from exporting its most advanced GPUs to China.

    The Ascend 910B chips, which Huawei has claimed to be comparable to Nvidia’s popular A100 chips, have emerged as a leading alternative in various industries in China.

    Huawei’s Ascend solutions were utilized to train approximately half of China’s top large language models last year, as per Huawei’s statement.

    Although Huawei has been discreet about its progress in chip advancements, it is evident that the company is establishing a support system for the domestic AI industry.

    During the Huawei Connect event, the company unveiled various new solutions and their alignment with its Digitalized Intelligence vision for 2030.

    At Huawei Connect Shanghai 2024, the company introduced upgrades to its AI, cloud, and compute capabilities, aligning with the company’s ‘Digitalized Intelligence’ 2030 strategic vision.

    Huawei’s Deputy Chairman and Rotating Chairman, Eric Xu, emphasized the importance of envisioning the future of intelligent enterprises and aligning current strategies and actions with that vision during the opening keynote.

    As part of its objectives in AI Intelligence and Amplifying Industrial Digitalization and Intelligence, Huawei’s updates aim to assist enterprises in effectively implementing the AI revolution.

    With extensive experience in intelligent transformation, Huawei aims to develop products based on enterprises’ needs for successful deployment of new digital technologies.

    Huawei has outlined a roadmap for creating an intelligent enterprise, characterized by six key aspects.

    The first four aspects result from intelligent transformation:
    – The first aspect focuses on Adaptive User Experience for customers.
    – The second aspect is Auto-Evolving Products and their inherent product functionality and adaptability.
    – The third aspect pertains to Autonomous Operations, covering sensing, planning, decision making, and execution.
    – The fourth aspect involves an Augmented Workforce.

    The remaining two aspects serve as the foundation of AI:
    – The fifth aspect focuses on All-Connected Resources, aiming to connect all parts of an enterprise, including assets, employees, customers, partners, and ecosystems.
    – The last aspect is AI-Native Infrastructure, aiming to meet the needs of intelligent applications through ICT infrastructure.

    As a company rooted in telecommunications, Huawei recognizes the significance of interconnectedness and networks, which is reflected in the emphasis on All-Connected Resources. AI-Native Infrastructure, the final aspect, is twofold, focusing on building ICT infrastructure to support the demands of intelligent applications.

    The keynote then transitioned to David Wang, Huawei’s Executive Director of the Board and Chairman of the ICT Infrastructure Managing Board, who emphasized Huawei’s commitment to collaborating with customers and partners to build future-proof infrastructure capable of supporting these initiatives.

    To this end, Huawei introduced a new report called the Global Digitalization Index (GDI), which builds on the Global Connectivity Index (GCI) and incorporates new indicators to assess digital infrastructure, including computing, storage, cloud, and green energy. It also quantifies the value of each country’s ICT industry and its impact on the national economy.

    A study found that every US$1 investment in ICT results in a US$8.3 return in a country’s digital economy.

    Recognizing these returns, Huawei released an Amplifying Industrial Digitalization & Intelligence Practice White Paper with 10 major solutions for industrial intelligence to help businesses understand how to implement digitalization.

    “We will also develop new scenario-specific solutions and create an environment for both the economy and society to flourish,” David stated during the keynote address. “Let’s seize the opportunities presented by this transformation and make its benefits accessible to all.”

    Ambitions, however, need support. Backing these digitalization efforts is a range of new product solutions that enterprises can use to advance their journey into AI.

    Acknowledging that widespread AI usage will bring new demands, Huawei announced a focus on several key areas: connectivity, storage, computing, cloud, and energy.

    New announcements, including the launch of cloud to mainframe technology by Zhang Ping’an, Executive Director of the Board and CEO of Huawei Cloud, aim to facilitate better integration between cloud and computing environments, providing a centralized view that optimizes IT operations.

    Huawei’s continued efforts to help enterprises digitalize more widely are evident in their endeavors to make their AI more accessible and easier to deploy.

    “We believe that if a company lacks the ability or resources to build their own AI computing infrastructure or train their own foundation model, then cloud services are a more feasible, sustainable option,” explains Eric Xu.

    Their Pangu models have been utilized in various industries, and experience suggests that a 1-billion-parameter model is sufficient for scientific computing and prediction scenarios, such as rain forecasts, drug molecule optimization, and technical parameter predictions.

    Building on the Pangu models, Pangu Doer, an intelligent assistant powered by the Pangu large model, was announced to usher in a new era of intelligent cloud services.

    Designed around a “1+N” architecture, its uses extend to planning, using, maintaining, and optimizing the cloud through a series of specialized assistants tailored to key enterprise scenarios.
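
    To make the “1+N” idea concrete, here is a toy Python sketch of one general assistant routing requests to N scenario-specific assistants. All function and scenario names are invented for illustration; this is not Huawei code or an actual Pangu Doer API.

        def planning_assistant(query: str) -> str:
            # Hypothetical specialist for cloud-planning scenarios
            return f"[planning] sizing resources for: {query}"

        def ops_assistant(query: str) -> str:
            # Hypothetical specialist for operations and maintenance scenarios
            return f"[operations] diagnosing: {query}"

        SPECIALISTS = {"plan": planning_assistant, "ops": ops_assistant}  # the "N"

        def general_assistant(query: str, scenario: str) -> str:
            """The '1': route each request to a matching specialist, or answer directly."""
            handler = SPECIALISTS.get(scenario)
            return handler(query) if handler else f"[general] answering directly: {query}"

        print(general_assistant("new AI training cluster", "plan"))
        print(general_assistant("latency spike on node 7", "ops"))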

    Huawei also introduced its new CANN 8.0 and opened its openMind application enablement kit, aiming to make the industry ecosystem more dynamic by providing wider access.

    Additionally, Huawei announced the launch of their new Atlas 900 SuperCluster, the latest offering in Huawei’s Ascend series of computing products, utilizing a brand-new architecture for AI computing.

    This was followed by an announcement for enterprises needing to build AI-native cloud infrastructure that matches their requirements.

    Zhang subsequently announced the launch of CloudMatrix, designed to interconnect and pool all resources, including CPUs, NPUs, DPUs, and memory. The result is an AI-native cloud infrastructure in which everything can be pooled, peer-to-peer, and composed, providing enterprises with significant AI computing power.
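
    As a rough illustration of that pooling idea, the toy Python sketch below reserves bundles of CPUs, NPUs, and memory from a shared pool. It is only a schematic model of “pool everything, compose on demand”, not CloudMatrix itself, and all names are invented.

        from collections import defaultdict

        class ResourcePool:
            """Toy model of a pooled, composable infrastructure."""

            def __init__(self) -> None:
                self.free = defaultdict(int)   # resource type -> available units

            def add(self, kind: str, units: int) -> None:
                self.free[kind] += units

            def compose(self, request: dict) -> bool:
                """Reserve a bundle of resources (e.g. CPUs + NPUs + memory) atomically."""
                if all(self.free[k] >= v for k, v in request.items()):
                    for k, v in request.items():
                        self.free[k] -= v
                    return True
                return False

        pool = ResourcePool()
        pool.add("cpu", 128)
        pool.add("npu", 16)
        pool.add("memory_gb", 2048)
        print(pool.compose({"cpu": 32, "npu": 8, "memory_gb": 512}))   # True: bundle reserved
        print(pool.compose({"npu": 16}))                               # False: only 8 NPUs remain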

    Building the foundation for the future of business

    Huawei is actively addressing the key challenges businesses face as they adapt to the rapidly evolving digital landscape, with a focus on integrating AI, cloud, and computing technologies to improve operational efficiency and foster innovation.

    By concentrating on six key areas – Adaptive User Experience, Auto-Evolving Products, Autonomous Operations, Augmented Workforce, All-Connected Resources, and AI-Native Infrastructure – Huawei aims to empower enterprises to effectively navigate their digital transformation journeys and develop digital and AI applications that enhance their offerings.

    This comprehensive approach not only addresses immediate business needs but also prepares organizations for future challenges in an increasingly interconnected world.

    This commitment to developing intelligent infrastructure, through collaboration with industry partners and a focus on innovative ICT solutions, is positioning Huawei as a leader in driving the future digital economy.

    These innovative solutions and more can be experienced at this year’s GITEX. From October 14 to 18, Huawei will be a Diamond Sponsor at the 44th GITEX GLOBAL 2024, one of the world’s largest technology exhibitions.

    With the theme of “Accelerate Industrial Digitalization and Intelligence”, Huawei will launch a series of flagship products and solutions for global enterprise markets, present its Reference Architecture for Intelligent Transformation, and share innovative digital intelligence practices from industries around the world.

    During this exhibition, Huawei will also host the Huawei Industrial Digital and Intelligent Transformation Summit 2024, featuring dozens of forums, hundreds of talks, and keynote speeches, promoting discussions with the industry.

    China has increased its computing power by 25% to meet the growing demand for artificial intelligence (AI) and other technologies. At the annual China Computational Power Conference in Zhengzhou, it was reported that the country’s total computing capacity reached 246 EFLOPS as of June, showing a significant growth from the previous year. If this trend continues, China is expected to achieve a total computing power of 300 EFLOPS by 2025.

    Intelligent computing power used in AI-related tasks experienced a remarkable 65% growth, contributing to China’s position as the second-strongest computing powerhouse globally, after the United States. The US accounted for 32% of the world’s total computing power, surpassing China’s 26%. This data was compiled by the state-backed China Academy of Information and Communications Technology (CAICT).
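
    As a quick back-of-the-envelope check of those figures (assuming the reported 25% increase is a year-over-year rate that holds), the 300 EFLOPS goal for 2025 looks within reach:

        current_eflops = 246      # total capacity reported as of June
        annual_growth = 0.25      # reported year-over-year increase
        target_eflops = 300       # stated goal for 2025

        projected = current_eflops * (1 + annual_growth)
        required_growth = target_eflops / current_eflops - 1

        print(f"One more year at 25% growth: {projected:.0f} EFLOPS")                   # ~308 EFLOPS
        print(f"Growth needed to reach {target_eflops} EFLOPS: {required_growth:.1%}")  # ~22.0%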

    Zhao Zhiguo, chief engineer at the Ministry of Industry and Information Technology, emphasized the urgent need for digital information infrastructure such as computing power facilities due to the accelerated pace of digitalization and intelligent transformation of various industries.

    To address regional imbalances in digital resources, China launched the Eastern Data and Western Computing project in 2022, aiming to achieve a balance between the more prosperous areas of eastern China and the energy-rich west. The plan includes the construction of 10 computing clusters across the country.

    Huawei’s 2023 Annual Report revealed impressive revenue of nearly US$100 billion, positioning the Chinese technology giant above companies such as Tesla, Bank of America, Dell, and NTT in terms of annual revenue. The report highlighted steady growth in the cloud computing and digital power businesses, as well as significant investment in research and innovation, with R&D investment in the past decade reaching US$157 billion.

    Ken Hu, Huawei’s Rotating Chairman, expressed gratitude for the trust and support of customers, partners, and friends, emphasizing the company’s resilience and growth despite facing challenges in recent years.

    Huawei’s revenue is largely driven by its ICT infrastructure, accounting for over half of the company’s total revenue at US$51.1 billion, a 2.3% increase compared to the previous year. The consumer business experienced a 17.3% growth, reaching a revenue of US$35.5 billion.

    Cloud computing also saw significant growth, with revenue increasing by 21.9% to reach US$7.8 billion. Huawei aims to focus on developing core ICT technologies and building platform capabilities for complex hardware and software systems, which are then made available to partners.

    Huawei’s chairman, Hu, expressed the company’s commitment to creating greater value for customers and society through open innovation, thriving ecosystems, and a focus on quality.

    Huawei has partnered with Chinese EV company BYD to incorporate Huawei’s autonomous driving system into its off-road Fangchengbao EVs, with the aim of boosting car sales.

    BYD, the world’s largest electric vehicle maker, is partnering with Huawei to utilize its autonomous driving system in its premium cars. The Fangchengbao lineup will be the first BYD model to use Huawei’s Qiankun intelligent driving system and is expected to be launched later in 2024.

    Qiankun, introduced in April 2024, is designed to enhance not only self-driving systems but also the driving chassis, audio, and driver’s seat, reflecting the EV market’s increasing investment in AI and automation to attract potential buyers.

    The partnership between Huawei and BYD comes at a time when the EV leader is seeking to improve profitability, as its premium car brands accounted for only 5% of its total sales in the first half of 2024. The EV market is seen as a significant advancement in vehicle engineering, with autonomous vehicles expected to enhance safety and driving experiences.

    McKinsey research predicts that electrified passenger vehicle sales will reach 40 million in 2030, indicating rapid market growth, technological advancements, and intense competition across the value chain.

    In 2024, EV sales have experienced a slight slowdown as leading organizations compete for market share. The use of Huawei technology by BYD underscores the pressure on large EV companies to offer the latest technology.

    BYD aims to maintain its market dominance by improving smart driving configuration, leveraging its cost advantage through vertical integration, and investing in advanced driver-assistance systems (ADAS) and AI and automation offerings.

    Huawei’s 2023 Annual Report highlights its robust financial performance, surpassing that of Tesla, with revenue reaching US$99.5 billion and a profit of US$12.3 billion. The organization’s partnership with BYD spans various areas of technology and innovation.

    Prior to the recent announcement, both companies collaborated on intelligent driving technologies, leveraging Huawei’s expertise in AI, 5G, and cloud computing to advance capabilities in new EVs and rail transport systems. Additionally, Huawei provided smart factory solutions for BYD and assisted in building a high-quality 10 Gbps data centre campus network.

    BYD, a Chinese electric vehicle (EV) maker, has recently teamed up with Huawei to incorporate Huawei’s advanced autonomous driving system, Qiankun, into BYD’s off-road Fang Cheng Bao EVs.

    This strategic partnership is aimed at advancing BYD’s premium brands, including Denza, Fangchengbao, and Yangwang.

    The collaboration is crucial for BYD to narrow the technological divide in the self-driving space with its competitors.

    Huawei’s Qiankun ADS 3.0, which was introduced in April 2024, is composed of two characters: “Qian,” symbolizing heaven, and “kun,” representing the Kunlun Mountains, demonstrating Huawei’s ambition to reach new heights and excel in core technologies within the smart driving landscape.

    Qiankun offers advanced smart driving features, such as Navigate on Autopilot (NOA), similar to Tesla’s Full Self-Driving (FSD), and other end-to-end network architecture capabilities that provide a more human-like driving experience.

    Qiankun’s development stems from the legacy of Huawei Intelligent Automotive Solution, the company’s previous automotive business unit. Initially established in 2019 as a division within Huawei, this branch transitioned into an independent entity, Shenzhen Yinwang Intelligent Technology Co., Ltd., focusing on providing automotive hardware and software solutions to manufacturers.

    The change to Yinwang marked a significant step in Huawei’s commitment to the automotive sector, solidified through key partnerships and investments with companies like Avatr Technology and Seres Group.

    Regarding usage, the Harmony Intelligent Mobility Alliance (HIMA) developed by Huawei allows automakers to leverage Huawei’s comprehensive vehicle solutions, facilitating collaboration in product definition, design, marketing, quality control, and delivery.

    Noteworthy brands such as Seres, BAIC BluePark, Chery, and JAC Group have benefited from the standardized parts supply model as well as the “Huawei Inside” (HI) and “HI Plus” models, incorporating Huawei’s technologies into their vehicles across different tiers.

    Through this alliance, companies like Deepal, M-Hero, Avatr, and other leading manufacturers have embraced Huawei’s innovative solutions, including Qiankun Smart Driving and Harmony Cockpit, to improve their offerings and cater to evolving consumer demands in the competitive automotive market.

    Key Differences Between Huawei Cars and BYD

    Huawei and BYD are both significant players in the Chinese automotive market, but they differ fundamentally in their approaches, core competencies, and market offerings. Here’s a detailed look at the main distinctions:

    1. Core Business and Expertise

    Huawei:

    Focus on Technology: Huawei is primarily a technology company with a strong background in telecommunications and information technology. Their entry into the automotive industry leverages their expertise in ICT (Information and Communication Technology), AI, and cloud computing.

    Autonomous Driving and Connectivity: Huawei focuses on integrating advanced autonomous driving systems and connectivity solutions, such as their Huawei ADS (Autonomous Driving System) and HarmonyOS-powered smart cockpits.

    These features are designed to enhance the driving experience through advanced driver assistance systems and seamless connectivity.

    BYD:

    Automotive Manufacturer: BYD (Build Your Dreams) is an established automotive manufacturer with a comprehensive focus on producing electric and hybrid vehicles. They have a deep expertise in battery technology and electric drivetrains.

    Battery Technology: BYD is a leader in battery manufacturing, known for their Blade Battery technology, which emphasizes safety, efficiency, and longevity. Their focus is on creating vehicles that are efficient, affordable, and environmentally friendly.

    2. Product Range and Market Strategy

    Huawei:

    Collaborative Ventures: Huawei collaborates with established car manufacturers such as Seres and Chery to produce vehicles. Models like the AITO M5, M7, and Luxeed S7 showcase these collaborations. Huawei provides the technological backbone, including autonomous driving features, connectivity, and infotainment systems.

    Technology Integration: The main selling point of Huawei’s vehicles is the integration of cutting-edge technology, making their cars highly advanced in terms of connectivity and autonomous driving capabilities.

    BYD:

    Diverse Vehicle Portfolio: BYD produces a wide range of electric vehicles (EVs) and plug-in hybrids (PHEVs), including sedans, SUVs, and buses. They offer models like the Tang, Han, and Qin, which cater to different segments of the market.

    Vertical Integration: BYD’s strategy involves vertical integration, controlling the entire supply chain from battery production to vehicle manufacturing. This allows them to optimize costs and ensure high quality across all components of their vehicles.

    3. Market Position and Brand Identity

    Huawei:

    Pioneer in Technology: Huawei positions itself as a pioneer in technology within the automotive industry, with a focus on integrating the latest ICT advancements into vehicles.

    Their branding highlights the fusion of smart technology and advanced driving systems.

    New Player: As a relatively new player in the automotive market, Huawei is using its technological expertise to distinguish itself from traditional car manufacturers.

    BYD:

    Established Electric Vehicle (EV) Brand: BYD has a strong presence in the EV market and is globally recognized for its contributions to electric mobility.

    Their brand identity is built on their extensive experience in producing dependable and efficient electric vehicles.

    Focus on Sustainability: BYD emphasizes sustainability and eco-friendliness, showcasing their efforts in emission reduction and promotion of green energy through their EV offerings.

    The fundamental disparities between Huawei and BYD in the automotive market arise from their core business areas, product strategies, and market positioning.

    Huawei harnesses its technological capabilities to offer highly connected and autonomous vehicles through collaborations, while BYD concentrates on producing a wide array of electric vehicles with a strong focus on battery technology and sustainability.

    These differences shape their respective approaches to revolutionizing the automotive industry and meeting the needs of contemporary consumers.

    How Huawei is Revolutionizing the Chinese Automotive Market

    Huawei’s foray into the automotive market is causing significant waves throughout the industry, fundamentally changing the dynamics of the Chinese car market.

    Their innovative approach, integrating advanced technology with automotive manufacturing, is setting new benchmarks and driving widespread transformation.

    Huawei’s dedication to research and development (R&D) and innovation is a game-changer. By leveraging their expertise in ICT, they have introduced features such as the HarmonyOS smart cockpit, which offers seamless connectivity and a user-friendly interface.

    This integration is not just about incorporating new gadgets; it transforms the entire driving experience, making it more interactive and personalized.

    Furthermore, the adoption of 5G technology ensures that Huawei’s cars are at the forefront of the connected vehicle revolution, offering real-time data exchange and enhanced vehicle-to-everything (V2X) communication.

    Additionally, Huawei’s partnerships with established automotive manufacturers like Seres and Chery are accelerating the pace of innovation in the industry.

    These collaborations combine Huawei’s technological prowess with the automotive expertise of their partners, resulting in the rapid development and deployment of new vehicle models.

    The AITO M5, M7, and M9, as well as the Luxeed S7, showcase how these partnerships can yield high-quality, technologically advanced vehicles that meet the evolving demands of consumers.

    These efforts are not only enhancing the competitiveness of the Chinese automotive industry but also positioning it as a leader in the global market.

    Conclusion

    Huawei’s entry into the automotive market has significantly transformed the Chinese automotive industry by integrating cutting-edge technology with traditional vehicle manufacturing.

    Known for its advancements in telecommunications, Huawei utilizes its expertise in ICT, AI, and cloud computing to introduce state-of-the-art autonomous driving systems and smart cockpits powered by HarmonyOS.

    These features enhance the user experience by providing seamless connectivity, real-time data exchange, and advanced driver assistance.

    Huawei, known for its advancements in telecommunications, is making significant strides in the automotive industry, especially in the realm of autonomous driving.

    Their collaboration with AITO, a joint venture with Seres, demonstrates their commitment to integrating cutting-edge technology into modern vehicles.

    Here’s a detailed look at how Huawei’s autonomous driving solutions differ from traditional cars and enhance the user experience.

    Advanced Autonomous Driving Technology

    Huawei’s smart cars are equipped with the HUAWEI ADS 2.0 (Advanced Driving System), which incorporates several state-of-the-art technologies to facilitate autonomous driving:

    AI and Machine Learning: Huawei’s autonomous driving system utilizes AI algorithms to process extensive amounts of sensor data, enabling real-time decision-making and adjustments.

    Comprehensive Sensor Suite: The vehicles come with a combination of LIDAR, radar, and high-definition cameras, providing a 360-degree view of the surroundings and ensuring precise navigation and obstacle detection.

    High-Performance Computing: These systems require robust computing power to handle complex driving scenarios, which Huawei provides through its advanced processors.
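
    To illustrate the general shape of such a pipeline, here is a schematic Python sketch in which a fusion step merges per-sensor detections before a planning step decides on an action. This is purely illustrative; it is not Huawei’s ADS software, and all names and thresholds are invented.

        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class Detection:
            sensor: str        # "lidar", "radar", or "camera"
            obstacle_id: int
            distance_m: float

        def fuse(detections: List[Detection]) -> Dict[int, float]:
            """Merge per-sensor detections, keeping the closest reported range per obstacle."""
            fused: Dict[int, float] = {}
            for d in detections:
                fused[d.obstacle_id] = min(d.distance_m, fused.get(d.obstacle_id, float("inf")))
            return fused

        def plan(fused: Dict[int, float], braking_distance_m: float = 30.0) -> str:
            """Toy decision step: brake if any fused obstacle is inside the braking distance."""
            return "BRAKE" if any(dist < braking_distance_m for dist in fused.values()) else "CRUISE"

        frame = [
            Detection("lidar", 1, 28.4),
            Detection("radar", 1, 29.1),
            Detection("camera", 2, 55.0),
        ]
        print(plan(fuse(frame)))   # BRAKE, since obstacle 1 is inside 30 m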

    Seres: Huawei and Seres have a substantial partnership, resulting in the AITO brand. This collaboration has given rise to models such as the AITO M5, M7, and M9, which integrate Huawei’s advanced ICT and autonomous driving technologies with Seres’ automotive expertise.

    Chery: The collaboration with Chery resulted in the creation of the Luxeed S7, an electric sedan that combines Chery’s automotive experience with Huawei’s state-of-the-art technology.

    Avatr (Changan, Nio, and CATL): Huawei is working with Avatr, a joint venture involving Changan, Nio, and CATL, to develop new electric models. This partnership aims to produce vehicles that utilize a new platform supporting various types of advanced electric powertrains.

    BAIC BluePark: BAIC BluePark partners with Huawei to incorporate smart selection technologies into high-end models, enhancing the user experience with advanced features and connectivity.

    Honda: In the Chinese market, Honda integrates Huawei’s components into their electric vehicle lineup, showcasing a successful integration of traditional automotive engineering with Huawei’s advanced technological capabilities.

    Driving Experience and PCB Integration

    Huawei’s smart cars have garnered positive feedback for their driving experience, especially for the stability and reliability of their autonomous driving systems.

    The HarmonyOS-powered smart cockpit offers intuitive controls and personalized settings, significantly improving user satisfaction.

    PCBs (Printed Circuit Boards) play a critical role in Huawei’s automotive technology. They support central computing systems, connectivity modules, sensor integration, and power management systems, enabling sophisticated functionalities and ensuring the performance and reliability of Huawei’s vehicles.

    The Fangchengbao brand of BYD will be the first to incorporate Huawei’s Qiankun intelligent driving system according to the agreement. The Bao 8 SUV, a model in the Fangchengbao range, is expected to be the first vehicle equipped with this technology, with plans for its release later this year.

    As BYD aims to move upmarket and increase sales of its premium brands, including Denza, Fangchengbao, and Yangwang, this collaboration is part of the company’s strategy to focus on higher-margin vehicles to improve profitability.

    These premium brands collectively made up only 5% of BYD’s total sales in the first half of the year, as reported by the China Association of Automobile Manufacturers, so BYD faces a significant challenge in growing them.

    The decision to integrate Huawei’s autonomous driving system into its vehicles underscores the competitive pressure BYD is experiencing in the rapidly evolving EV market. Despite its dominance in EV sales, largely due to its cost-effective vertical integration strategy, BYD has been striving to catch up in the area of smart driving technologies.

    The company has been heavily investing in the development of its own advanced driver-assistance system (ADAS) and has reportedly recruited thousands of engineers since last year to strengthen its in-house capabilities.

    However, BYD’s dependence on external suppliers for intelligent features in its upmarket models remains. For example, the company uses Momenta ADAS in its Denza cars. The partnership with Huawei is a significant move in BYD’s efforts to enhance its offerings in this critical area of automotive technology.

    The collaboration also highlights Huawei’s increasing influence in the EV sector as a major supplier of ADAS. The tech conglomerate has been expanding its presence in the automotive industry and has formed notable partnerships beyond BYD. For instance, Volkswagen’s Audi brand has also announced plans to utilize Huawei’s ADAS in its EVs intended for the Chinese market.

    This strategic alliance between BYD and Huawei reflects the broader trends in the global automotive industry, where traditional automakers and tech companies are increasingly collaborating to meet the demands of next-generation vehicles.

    As autonomous driving technology becomes a key differentiator in the premium EV segment, such partnerships are likely to become more common.

    Industry observers will closely monitor the success of this venture, as it could potentially reshape the competitive landscape of China’s EV market.

    For BYD, the integration of Huawei’s advanced autonomous driving system presents an opportunity to strengthen its position in the premium segment and potentially capture a larger share of this lucrative market.

    Closing the Technology Gap

    This collaboration comes as BYD aims to narrow the technological gap with Tesla and other emerging Chinese automakers. Despite its dominance in the Chinese EV market, BYD acknowledges the increasing demand for advanced features among buyers.

    The partnership represents a significant shift in BYD’s stance on autonomous driving technology. In 2023, the company had argued that self-driving technology was “basically impossible” for consumer applications. However, in 2024, BYD announced a $14 billion investment in smart car technology, including autonomous driving software and driver-assistance systems.

    Expanding BYD’s Premium Offerings

    BYD’s decision to integrate Huawei’s technology aligns with its strategy to boost sales of its premium brands, including Denza, Fangcheng Bao, and Yangwang. These brands made up only 5% of BYD’s total sales in the first half of 2024, according to the China Association of Automobile Manufacturers as cited by Reuters.

    By leveraging Huawei’s expertise, BYD aims to differentiate its high-end offerings and improve profitability. The move is part of BYD’s broader ambition to establish itself as a top global automaker, competing with established players like Hyundai Motor Company and Volkswagen, which encompasses several successful brands.

    Huawei’s Influence in the EV Sector

    Moreover, the partnership also underscores Huawei’s increasing presence in the EV industry as a major supplier of advanced driver-assistance systems (ADAS). Beyond BYD, Huawei has secured a deal with Volkswagen’s Audi to provide ADAS technology for its EVs in the Chinese market.

    With this strategic alliance, BYD is positioning itself to compete more effectively in the premium EV segment, both in China and globally. As a result, collaborations like the one between BYD and Huawei are likely to become more common as the prevalence of EVs continues to grow on roads.

    5 levels of Autonomous Driving Network

    Autonomous driving networks go beyond innovating a single product and are more about innovating system architecture and business models, which requires industry players to collaborate to define standards and guide technology development and rollout.

    Huawei has suggested five levels of Autonomous Driving Network systems for the telecom industry (a compact code sketch of this taxonomy follows the list):

    • L0 manual O&M: provides assisted monitoring capabilities and all dynamic tasks must be executed manually.
    • L1 assisted O&M: performs a specific sub-task based on existing rules to enhance execution efficiency.
    • L2 partial autonomous networks: enables closed-loop O&M for specific units under certain external environments, reducing the requirement for personnel experience and skills.
    • L3 conditional autonomous networks: expands on L2 capabilities, allowing the system to sense real-time environmental changes, and in certain domains, optimize and adjust to the external environment to enable intent-based closed-loop management.
    • L4 highly autonomous networks: builds on L3 capabilities to accommodate more complex cross-domain environments and achieve predictive or active closed-loop management of service and customer experience-driven networks. Operators can then resolve network faults before customer complaints, reduce service outages, and ultimately improve customer satisfaction.
    • L5 fully autonomous networks: represents the telecom network evolution goal. The system possesses closed-loop automation capabilities across multiple services, multiple domains, and the entire lifecycle for a true Autonomous Driving Network.
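
    As referenced above, the taxonomy can be captured as a simple data structure. The following is a minimal, illustrative Python encoding of the L0–L5 levels; the class name, the helper function, and the choice of L3 as the threshold for manual intervention are assumptions made for illustration, not part of Huawei’s specification.

```python
from enum import IntEnum

class AutonomousNetworkLevel(IntEnum):
    """Illustrative encoding of Huawei's proposed L0-L5 telecom network autonomy levels."""
    L0_MANUAL_OM = 0         # assisted monitoring only; all dynamic tasks executed manually
    L1_ASSISTED_OM = 1       # rule-based execution of specific sub-tasks
    L2_PARTIAL_AUTONOMY = 2  # closed-loop O&M for specific units in certain environments
    L3_CONDITIONAL = 3       # senses real-time changes; intent-based closed loops in some domains
    L4_HIGH_AUTONOMY = 4     # predictive, experience-driven closed loops across complex domains
    L5_FULL_AUTONOMY = 5     # closed-loop automation across services, domains, and lifecycle

def requires_manual_intervention(level: AutonomousNetworkLevel) -> bool:
    # Hypothetical helper: below L3, dynamic optimization still needs people in the loop.
    return level < AutonomousNetworkLevel.L3_CONDITIONAL

print(requires_manual_intervention(AutonomousNetworkLevel.L2_PARTIAL_AUTONOMY))  # True
```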

    The future of autonomous driving networks

    At Mobile World Congress 2018, Huawei introduced its Intent-Driven Network (IDN) solution, which establishes a digital twin between physical networks and business goals, and helps advance networks from SDNs towards autonomous driving networks. The solution also assists operators and enterprises in implementing digital network transformation centered on service experience.

    The solution necessitates four transformations within the industry: from network-centric to user-experience-centric; from open-loop to closed-loop; from passive response to proactive prediction; and from skill-dependent to automation and AI.

    Huawei’s IDN solution encompasses various scenarios, including broadband access, IP networks, and optical and data center networks. It enables telecom networks to progress towards Autonomous Driving Networks.

    For instance, in the broadband access field, there is an average of 1,000 customer complaints and 300 door-to-door maintenance visits per year for every 10,000 users. Due to a lack of data, about 20 percent of customer complaints cannot be entirely resolved. The IDN, however, perceives broadband services in real time.

    Big data and AI algorithms quickly locate faults and optimize the network, resulting in a 30 percent reduction in home visits and an improved service experience.

    In September 2018, Huawei enhanced its Intent-Driven Network (IDN) solution and proposed its “digital world + physical network two-wheel drive” strategy to accelerate IDN innovation.

    Huawei is also expediting the implementation of autonomous driving networks in wireless network scenarios. At the 9th Global Mobile Broadband Forum, Huawei published the Key Scenarios of Autonomous Driving Mobile Network white paper, outlining seven key sub-scenarios, such as base station deployment and network energy efficiency, to progressively achieve network automation.

    As research progresses, Huawei will continuously update its application scenarios and release its research findings. Huawei and leading global operators have jointly initiated the NetCity project to promote the application of new technologies such as big data, AI, and cloud computing in telecom networks.

    By defining business scenarios and introducing innovations following the DevOps model, Huawei and its operator partners have introduced cutting-edge technologies to enhance users’ service experience, driving telecom networks to evolve towards Autonomous Driving Networks.

    By the end of 2018, Huawei had collaborated with leading customers to launch 25 NetCity innovation projects. Achieving autonomous driving networks will be a lengthy journey, and realizing this vision will require industry-wide collaboration.

    Huawei says it is dedicated to developing leading ICT solutions through continuous innovation and to simplifying complexity for customers, working with the industry toward a fully connected, intelligent world.

    Huawei introduced a new software brand for intelligent driving called Qiankun on Wednesday (Apr 24), as part of its efforts to establish a strong presence in the electric vehicle industry.

    The name Qiankun represents a fusion of heaven and the Kunlun Mountains. The brand aims to offer self-driving systems covering components such as the driving chassis, audio, and driver’s seat. Jin Yuzhi, CEO of Huawei’s Intelligent Automotive Solution (IAS) business unit, made the announcement during an event preceding the Beijing auto show.

    Jin stated that by the end of 2024, over 500,000 cars equipped with Huawei’s self-driving system will be on the roads, marking the beginning of mass commercialization of smart driving. Huawei’s smart car unit was established in 2019 with the vision of becoming a leading supplier of software and components to partners, akin to German automotive supplier Bosch in the era of intelligent electric vehicles.

    In November, Huawei revealed plans to spin off the smart car unit into a new company, which will inherit the unit’s core technologies and resources and receive investments from partners like automaker Changan Auto.

    At the event, Jin Yuzhi also announced the launch of the Qiankun ADS 3.0 intelligent driving system, which is an upgraded version of the previous Huawei ADS 2.0, featuring enhancements in mapless intelligent driving, collision avoidance, and all-scenario parking.

    The ADS 3.0 boasts improved road and scene recognition through cloud and real vehicle training, providing the system with the ability to make decisions similar to an experienced human driver. It also introduces the GOD network for general obstacle detection, an upgrade from the architecture seen in ADS 2.0.

    Huawei claims that the Qiankun ADS 3.0 is the first product in the industry to enable Navigation Cruise Assist (NCA) from parking space to parking space, allowing drivers to exit the car and walk away after selecting the target parking space. This system supports parking in all visible spaces and is not limited to specific types.

    The upgrade to Qiankun ADS 3.0 also includes improved capabilities for the omnidirectional collision avoidance system (CAS) to CAS 2.0 standard, covering front, rear, and side collision avoidance. According to Huawei, a test with the Aito M9 equipped with CAS 2.0 outperformed comparable models in various scenarios, including pedestrian crossing and left turns.

    The high-end version of Qiankun 3.0 is dependent on Lidar, while there is also a Qiankun SE version for non-Lidar equipped vehicles, which is expected to replace the current Huawei ADS Basic Intelligent Driving system.

    Other components of the Qiankun brand include the Qiankun iDVP intelligent vehicle digital platform, Qiankun Vehicle Control Module, and XMotion 2.0 Body Motion Collaborative Control. Huawei claims that the Qiankun vehicle control module is the world’s first 5-in-1 vehicle control SoC (system on chip), leading in terms of high integration, high performance, low latency, high reliability, and high security.

    Additionally, the XMotion 2.0 system uses 6D vehicle motion algorithms to enhance driving performance and provide a better driving experience, offering stability control at speeds up to 120 km/h and stability during events such as punctures and high-speed obstacle avoidance. Adaptive slip control ensures that a car equipped with the system does not skid on slippery roads.

    Huawei also announced at the conference that the system will be integrated into 10 “new” models to be launched in 2024 from brands including Dongfeng, Changan, GAC, BAIC, Aito, Chery, and JAC.

    Finally, upgrades to the HarmonyOS cockpit system were also unveiled at the conference.

  • What’s Next for EV Battery Technology?

    What’s Next for EV Battery Technology?

    Electric vehicles are less complex than gasoline-powered ones. They lack gas tanks, pistons, spark plugs, and tailpipes. Amid the plant’s noise, assembly specialist Chris Rehrig explains that the concept involves fewer parts.

    What they do have are large batteries. At Volkswagen, battery packs weighing over 1,000 pounds will be assembled across the street and transported in by autonomous vehicles. Each battery pack, encased in a plate with cooling fluid, will be attached to a car’s underbody using automated tools.

    When a gasoline-powered car approaches, the same machine will instead install a heat shield. Ensuring a smooth operation will require coordination, as noted by Noah Walker, Rehrig’s supervisor.

    The fact that Volkswagen and many others are now attempting this transition indicates a critical moment for the planet. The company and the industry are shifting away from what made Volkswagen the world’s largest automaker by revenue: the carbon dioxide-emitting internal combustion engine.

    As more individuals and governments advocate for urgent action on climate change, cars and trucks are experiencing their most significant transformation since their inception over a century ago. Both startups and established companies are vying for a position in what industry leaders now view as the most promising path forward: vehicles with no tailpipe emissions.

    By almost every metric, their popularity is rapidly increasing. Overnight, the era of the electric car has arrived.

    However, the transition away from gasoline-powered vehicles remains too slow to address the climate challenge within the necessary timeframe. Greenhouse gas emissions are on the rise, resulting in extreme droughts and wildfires from the Arctic to Australia, and breaking global temperature records.

    Melting ice sheets are causing sea levels to rise, leading to increased flooding and more severe storms. To avert danger for millions of people, the Intergovernmental Panel on Climate Change states that the world needs to eliminate carbon dioxide emissions by 2050, preferably even sooner.

    With nearly a quarter of global emissions stemming from all types of transportation, can we quickly reduce our dependence on gasoline-powered cars to avoid the most severe consequences? And can we do so without causing new environmental disasters? Several emerging companies and many traditional players are now staking their future—and ours—on the belief that millions of consumers are finally ready to make the switch.

    It’s difficult to argue that we are witnessing anything less than a revolution. In the 1990s, General Motors introduced an electric car, produced fewer than 1,200 units, and recalled them. Today, the pace of change is rapid.

    The number of all-electric and plug-in hybrid electric vehicles (EVs) increased by nearly half last year, despite an overall 16 percent decrease in car sales. The variety of models available to drivers worldwide expanded by 40 percent, totaling about 370.

    In North America, the variety is projected to nearly triple by 2024, reaching 138. Electric Mini Coopers, Porsches, and Harley-Davidsons are already on the market.

    Governments from California to China, Japan, and the United Kingdom have recently announced plans to prohibit the sale of new passenger vehicles powered solely by gas or diesel by 2035 or sooner. Automotive giants such as Volvo and Jaguar now say they will phase out piston engines by then, while Ford states that its passenger vehicles in Europe will be all EVs or hybrids in five years and all electric by 2030.

    GM has committed to achieving carbon neutrality by 2040. President Joe Biden has vowed to transition the federal fleet of over 600,000 vehicles to EVs, and his administration plans to tighten fuel-efficiency standards.

    Wall Street and investors are making substantial bets. At one point last year, Tesla, which was responsible for almost 80 percent of all US EV sales in 2020, was valued higher than oil giants ExxonMobil, Chevron, Shell, and BP combined. New electric car and electric truck companies continue to emerge: Bollinger, Faraday Future, Nio, Byton.

    Others are gaining traction as well. A two-door electric micro-car with a top speed of 62 miles per hour and a starting price below $6,000 has been outselling Tesla in China, which is home to over 40 percent of the world’s plug-in vehicles.

    The days of light-duty combustion engines appear to be truly numbered. “The dam is breaking; the tipping point is here,” says Sam Ricketts, a member of the team that authored Washington governor Jay Inslee’s climate action plan during his presidential run. Many of Inslee’s later ideas found their way into Biden’s climate plans.

    “Electrifying transportation is our future. I think that train has left the station,” says Kristin Dziczek, an economist with the Center for Automotive Research, a Michigan-based organization partially backed by car manufacturers.

    Even before the successful mass production of the Prius hybrid by Toyota over twenty years ago, some environmentally conscious countries had started to tighten emissions standards. For example, in Norway, where half of the new vehicles on the road in 2020 were electric, incentives such as tax savings for electric cars were introduced.

    In China, cities dealing with air pollution issues streamlined the registration process for electric vehicles, making it faster and more cost-effective compared to vehicles with internal combustion engines. The US government also provided consumers with incentives of up to $7,500 for the purchase of electric vehicles and hybrids, while also investing in battery research and development.

    In 2009, Tesla received a $465 million loan from the Department of Energy to kick-start sedan production. Over the next decade, battery prices dropped by 89 percent, and Tesla managed to sell 1.5 million plug-in vehicles.

    However, there is still a long way to go. Globally, approximately 12 million plug-in cars and trucks have been sold, with nearly 90 percent of them in just three regions: China, Europe, and the United States. Despite this progress, there are still around 1.5 billion traditional gasoline-powered vehicles on the roads, and the total number of passenger vehicles, of all types, is projected to increase by another billion over the next 30 years due to rising incomes in underdeveloped countries.

    The rapid adoption of electric vehicles by drivers worldwide depends on various factors. The industry is addressing some of the major obstacles for consumers, such as range, recharging times, charging infrastructure, and cost. For instance, a laboratory prototype of a solid-state battery could potentially enable electric vehicles to be recharged in just 10 minutes. Additionally, companies like Tesla and Lucid Motors are already developing all-electric vehicles that could exceed 400 miles per charge.

    Lucid claims its car will surpass 500 miles, while Aptera suggests that some drivers of its three-wheeled, aerodynamic, solar-powered vehicle may never need to visit a charging station. Although most new electric vehicles are currently luxury cars that are out of reach for many consumers, investment bank UBS and research firm BloombergNEF predict that electric cars could achieve cost parity with conventional vehicles in approximately five years.

    Nevertheless, analysts emphasize that more action is needed to accelerate this transition. It is not expected that the variety of options for consumers will match the range of choices available for traditional gasoline-powered vehicles in the near future. Government incentives, such as the reinstatement of the $7,500 tax credit, which is no longer available for certain automakers, may be crucial in attracting buyers.

    According to Dziczek, “There isn’t a market in the world that can do this without some kind of public investment.” The Biden administration has proposed a $15 billion investment to help install 500,000 charging stations, but this has faced resistance from many in Congress.

    The price and sustainability of electric vehicles also hinge on the availability of raw materials. EV batteries rely on materials such as lithium, nickel, cobalt, manganese, and graphite, most of which are mined in a few specific locations, with much of the refining taking place in China. With demand on the rise, nations that manufacture electric vehicles are working to secure supplies, including plans for a lithium mine in Nevada.

    However, according to Jonathon Mulcahy of the research firm Rystad Energy, which projects potential lithium shortages later this decade, “there’s no point in having a lithium mine in America, shipping the lithium out to Asia for processing, then shipping it back to America for use in your batteries.”

    At the same time, the extraction of these metals has led to environmental and human rights concerns. Entities like the European Union are grappling with ways to ensure stable, ethical, and safe supply chains, while automakers, including Volkswagen, are implementing auditing and certification systems to ensure that battery suppliers adhere to environmental and labor regulations.

    Consumers may be hesitant to trust these commitments, as automakers have previously let them down. On Volkswagen’s Chattanooga assembly line, workers pull cords to halt production when issues arise, and a distinctive song plays across the factory floor to identify the cause of the delay. During a tour, Scott Joplin’s “The Entertainer” was heard, signaling a stoppage in operations. On that particular day, however, significant progress was being made: special robots needed for EV production were being installed, buyers were securing new parts suppliers, and executives were preparing to hire hundreds more workers.

    The groundwork for this moment was laid in 1979 when Tennessee governor Lamar Alexander traveled to Japan with maps and a satellite photograph to persuade the chairman of Nissan that his state offered ideal manufacturing land connected by rail and highway to major population centers on both coasts. This led to other carmakers following suit, and today, this conservative state plays a role in the shift towards eco-friendly vehicles.

    Since 2013, the Nissan factory in Smyrna, located outside Nashville, has been producing the electric Leaf, the first commercially successful modern EV. Priced at under $25,000 after tax credits, it remains one of the most affordable options in North America.

    Approximately 40 miles away, GM is investing two billion dollars to revamp its Spring Hill plant in order to manufacture an electric Cadillac, which will be the first of several EVs to be produced there. By 2023, the entire operation will be powered by solar energy.

    The company is also dedicating $2.3 billion to establish a battery plant that will provide employment for 1,300 individuals. Additionally, the Tennessee Valley Authority, responsible for operating hydroelectric dams and other power plants, intends to finance fast-charging stations along Tennessee’s highways, with stations available every 50 miles.

    Then there’s Chattanooga. In 1969, a year before the inception of the Environmental Protection Agency, the US government labeled the city’s particulate-matter pollution as the most severe in the country. Its ozone pollution was second only to that of Los Angeles.

    After years of revitalization, the city achieved one of the most noteworthy environmental successes in history. In 2008, shortly after the city met ozone standards, Volkswagen initiated the construction of its new plant.

    The Volkswagen Group, which includes Audi, Porsche, and nine other carmakers, has embraced EVs. This is partly due to the emissions scandal that came to light in 2014, resulting in substantial fines, the recall of millions of cars, and the indictment of its former CEO on conspiracy charges.

    As part of a settlement with the EPA for installing devices on approximately 590,000 diesel vehicles sold in the US that misrepresented their pollution levels, the company was required to make substantial investments in EV charging infrastructure. However, this alone does not fully explain Volkswagen’s profound shift towards EVs.

    The company is allocating over $40 billion globally to develop 70 new electric models and manufacture 26 million of them by 2030. In collaboration with partners, VW anticipates installing 3,500 fast chargers in the US by the end of the year and 18,000 in Europe by 2025.

    Volkswagen has invested $300 million in a battery start-up aimed at reducing charging times by half. The company is also constructing and expanding battery plants across Europe with the goal of halving battery prices.

    Nic Lutsey, director of the electric-vehicle program for the International Council on Clean Transportation, which provides data and analysis to aid governments in promoting environmentally friendly transportation, acknowledges Volkswagen’s substantial investments in EVs, stating, “It is absolutely clear that VW, among the large automakers, is by far making the largest investments in EVs.” It was Lutsey’s organization that first uncovered Volkswagen’s emission cheating.

    Scott Keogh, CEO of Volkswagen Group of America, grew up in the 1970s on Long Island, riding in the back of his family’s VW Beetle. He pursued a degree in comparative literature and engaged in development projects in Bolivia before entering the automotive industry, initially at Mercedes-Benz and later at Audi. In 2018, following the EPA settlement, he assumed leadership of VW’s North American business.

    Keogh acknowledges the emissions scandal as a corporate disaster, describing it as a significant setback for the company. However, he emphasizes the company’s decision to emerge from the crisis stronger, more resilient, and purpose-driven.

    Volkswagen announced its commitment to EVs so early that when the plan was presented to US car dealers, Keogh notes that it was met with skepticism. Not long ago, dealers assumed EVs would remain a niche market.

    Keogh asserts that this perception has since evolved. He regularly receives research indicating that, under optimistic scenarios, electric vehicles could account for 50 percent of car purchases within the next decade. Suddenly, VW’s investments appear astute and imperative.

    However, Keogh is well aware of the challenges ahead. Currently, all-electric vehicles constitute less than 5 percent of new-car sales in Europe and 2 percent in the US. (The figure rises to 8 percent in China.) Keogh anticipates that, within 10 years, this percentage could reach 30 to 40 percent. Counting on such rapid growth is certainly a source of concern.

    Nevertheless, Keogh does not perceive Tesla or other EV manufacturers as the primary competitors. His target audience consists of individuals considering the purchase of small gas-powered SUVs, such as Toyota’s RAV4 or Subaru’s Forester. He emphasizes the company’s intense focus on the 98 percent of the market not currently driving an electric vehicle.

    At the start, there was a similar competition for consumers’ attention. In 1896, during a major car exhibition in London, potential buyers were faced with a choice between electric and gas-powered vehicles, as horses and buggies still vied with automobiles. Some aspects of this choice remain unchanged.

    The British Medical Journal noted that electricity had the advantage of being odorless and producing less noise and vibration, but it was hindered by the cost of batteries and the limitation of recharging only where electric supply was available.

    When the first US auto dealership opened in Detroit a few years later, it exclusively sold electric cars. In Austria, Ferdinand Porsche’s early designs also relied on electricity. His partner, Ludwig Lohner, favored electric drives due to the already high pollution from petrol engines in Vienna. However, the availability of cheap oil and improved rural roads led to the dominance of gasoline-powered vehicles. Electric vehicles disappeared by the end of the 1930s.

    In Normal, Illinois, I met a man with a unique vision for reviving electric vehicles. In 2015, Mitsubishi closed its auto plant in the area, resulting in the layoff of nearly 1,300 workers. Two years later, engineer and entrepreneur Robert “RJ” Scaringe repurposed the vacant space to establish a factory for his startup, Rivian.

    Scaringe, a slim man in his late 30s, with an unassuming demeanor, was seen standing in line at the cafeteria on a day when his company’s value was nearly $28 billion.

    Even as a Florida teenager working on Porsches in a neighbor’s garage, Scaringe was determined to build cars. During his time at MIT, where he obtained a doctorate in mechanical engineering, his concerns about climate change became a major focus.

    As we toured the old Mitsubishi plant before Rivian’s new vehicles went into production, Scaringe described his mission as finding a way to transition approximately 90 to 100 million cars to electric power.

    Scaringe chose to focus on designing electric vehicles that consumers would desire. What do consumers desire most? Some of the least fuel-efficient vehicles on the road. There are now over 200 million SUVs worldwide, six times more than a decade ago, and millions more trucks.

    In the US, together they accounted for 70 percent of the new-vehicle market in 2019. “Not only is it the biggest problem in terms of carbon and sustainability… but they’re also the most popular vehicle type,” Scaringe remarked.

    Rivian’s first two electric vehicles, a short-bed pickup named the R1T and an SUV called the R1S, will provide environmentally friendly options for outdoor enthusiasts. Similar to Tesla, the company is establishing its own dedicated charging network: 3,500 fast chargers on highways, and thousands more in state parks and near trailheads.

    Scaringe felt compelled to do so. While most charging occurs at home, he explained that a patchy and inconsistent charging network complicates long trips and remains “a reason for someone not to buy the vehicle.”

    Rivian won’t be the only player in the truck market. Tesla has unveiled its Cybertruck, and an electric version of Ford’s F-150, the most popular vehicle in America with annual sales approaching 900,000 in 2019, is expected in 2022.

    The base price for the F-150 Lightning will be considerably lower than Rivian’s high-end vehicles. Within a month of its debut, over 100,000 customers had made reservations.

    Ford is an investor in Rivian, and Scaringe is optimistic about the competition; he is quick to emphasize that a complete shift to electric vehicles cannot be achieved by any single company. However, he and his team also recognize that our relationship with vehicles is changing in ways that could support the adoption of electric vehicles.

    “Fifteen years ago, if we wanted bananas, we’d go to the store. If I wanted new shoes, I’d drive to the store,” he stated. Now, deliveries bring books, meals, groceries, and shoes to our door. Others make trips for us. In that, Scaringe sees an opportunity.

    What if he could convert a fleet of delivery trucks to electric vehicles? “You may, as a customer, not yet choose to switch to electric for your personal vehicle. But because you’re outsourcing a significant portion of your last-mile logistics, you will now be transitioning to electricity whether you realize it or not.”

    Rivian is producing 100,000 electric delivery vehicles for Amazon, the retail giant. Some are currently being tested on the streets. FedEx has also announced its plan to go electric. UPS has taken a stake in another electric vehicle company and is purchasing 10,000 electric delivery vans.

    Scaringe is considering the developing world, where few individuals own new cars and trucks, and the relationship with vehicles is fundamentally different. He anticipates the emergence of new user models there, such as partial ownership, flexible leasing, and subscription services.

    Rather than witnessing the spread of new gas-powered vehicles in regions like Africa and India, he believes the solution is to innovate on the product, business model, and ecosystem to enable these markets to bypass the inefficient and dirty transportation systems seen in the US, Europe, and China.

    In Nairobi’s industrial district, warehouses house various businesses, including a unique venture where employees work on converting old petroleum-powered transit vehicles into electric vehicles and building affordable electric motorcycles, while also providing financing options.

    Opibus, a startup aiming to bring electrification to developing countries, is not only converting old petroleum-powered transit vehicles into electric vehicles but also constructing new, inexpensive electric motorcycles.

    Wairimu, an engineer at Opibus, emphasizes the opportunity to have a better vision for Africa, where many places lack gas stations and where the majority of vehicles are older ones imported used.

    The developing world presents a significant untapped market, which traditional vehicle manufacturers find daunting, according to chief strategist Albin Wilson.

    The number of gas- and diesel-powered vehicles in parts of East Africa is roughly doubling each decade, with the majority of vehicles being older ones imported used. Opibus is one of the few organizations trying to lay the groundwork for change, given the lack of focus from major vehicle manufacturers on this emerging market.

    Opibus initially created conversion kits for safari companies, then transitioned to building electric motorcycles, which have been well received due to their lower cost for fuel and maintenance.

    According to Wilson, many motorcycle owners in the region are primarily interested in whether electrification will improve their livelihoods and help them earn more money.

    Similar electrification efforts are taking place in other developing countries, including EV start-ups in Rwanda and Ethiopia, as well as experiments with electric postal vehicles in the Philippines and potential electric buses in the Seychelles.

    Wairimu believes that transitioning to electric vehicles could have a significant positive impact on East Africa and the world as a whole, particularly in the face of climate change and its potential threats to agriculture.

    The interest in electric vehicles is currently at a peak, with a 55% increase in new EV sales in 2022 compared to the previous year. However, there is still a large number of gasoline cars on the roads, and it is likely to remain so for the foreseeable future.

    A growing industry is revitalizing internal combustion vehicles by converting them to electric power, and both the shops and aftermarket community are expanding significantly to meet the increasing demand.

    “This is a 1976 BMW 2002 — a really enjoyable car to drive but lacking in power,” according to Michael Bream, CEO and founder of EV West, as reported by CNBC. “This particular customer opted for what we call ‘the whole hog,’ and is installing the 550 horsepower Tesla drive unit in this car.”

    Bream’s shop, located in San Diego, California, is a pioneer in EV conversions and has gained significant popularity, resulting in a four-to-five-year waitlist.

    “Being involved in electric cars right now is akin to being involved in computers in the ’90s… We want this shift to sustainable fuels to be engaging and enjoyable for car enthusiasts and automobile culture participants,” explained Bream.

    In addition to conversion shops, there is a growing community of DIYers undertaking these projects themselves. While the complexity of electric vehicles can be daunting, 14-year-old Frances Farnam is undeterred. She is working on converting a 1976 Porsche 914, which she acquired three years ago and has been documenting the process on her YouTube channel, Tinkergineering.

    “I’ve always wanted an electric car, and my mom has a BMW i3,” said Frances. “I hope that by doing this, I can prove that it’s not too challenging… I’m simply doing this in my backyard with my dad.”

    She has recently completed priming the car for paint and is preparing to rebuild it. The 914 internet community has been invaluable in assisting her and her father throughout the entire process.

    To learn about working with electrical systems, she took a course with Legacy EV, an electric vehicle aftermarket shop, which taught her the intricacies of performing a conversion.

    The aftermarket ecosystem for electric vehicles appears to be expanding, with an array of EV-focused parts available to support individuals like Frances who aspire to build their own electric car. Notably, both Ford and GM offer components for EV conversions, and numerous other companies are entering this space. According to the Specialty Equipment Market Association, a trade organization representing automotive manufacturers and resellers, the number of EV-focused products in the market has grown significantly.

    “We began two years ago at SEMA with an EV section at the show,” said Mike Spagnola, president and CEO of SEMA. “It encompassed 2,000 square feet. This past year, it expanded to 22,000 square feet… I’m confident that in the next five years, it will reach 100,000 square feet.”

    Can AI aid in the discovery of rare metals such as cobalt and copper for the electrification of global vehicles?

    Securing access to rare earth minerals is a crucial national security concern, given the significant dependence of the United States’ economy on minerals, most of which are currently mined or processed in China.

    The Defense Advanced Research Projects Agency (DARPA) has collaborated with a company called HyperSpectral, which utilizes artificial intelligence to analyze spectroscopic data. This could prove instrumental in using satellites or drones to locate minerals that would otherwise be challenging to detect.

    HyperSpectral CEO Matt Thereur provided an exclusive interview to Defense One, explaining how the process works. Spectroscopy involves the study of how matter interacts with light or other forms of radiation across different wavelengths.

    The unique molecular makeup of a specific mineral or substance produces a distinctive spectral signature when it interacts with radiation, serving as a unique identifier.

    Previously, the company focused on food safety. Whether it concerns identifying potentially harmful pathogens in large food shipments or detecting a new outbreak of medication-resistant streptococcus, spectroscopy can aid in uncovering bacteria that are imperceptible to the naked eye.

    “At present, the existing procedures take a couple of days to distinguish between drug-resistant and drug-sensitive staphylococcus bacteria, as they need to culture the bacteria, apply antibiotics, and observe the response. This is in contrast to our approach, where we typically provide results within a few minutes based on a swab from a wound, rather than several days,” Thereur explained.

    How does AI come into play? According to Thereur, “Pure samples do not occur naturally. Nature is a very noisy environment. Therefore, when we construct these models using artificial intelligence, we seek out all the relationships that may be obscured by the noise, such as when one section of the spectrum is influenced by another substance within it.”

    Furthermore, there are various types of spectroscopic sensing that are not easily amalgamated into a comprehensive data overview, and this is where AI plays a role. Just as AI-driven transcription and translation are made possible by combining auditory data from human speech with textual data related to the likelihood of specific combinations of letters and words, the same principle could apply to spectroscopic data from diverse sources.
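
    To make the idea of building models on noisy spectra more concrete, here is a minimal Python sketch that trains a generic scikit-learn classifier to separate two hypothetical materials from simulated, noisy reflectance spectra. The synthetic data, wavelength range, and absorption features are invented for illustration; this is not HyperSpectral’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 2500, 200)  # nm, hypothetical sensor range

def synth_spectrum(center_nm: float, n: int) -> np.ndarray:
    """Simulate n noisy reflectance spectra with an absorption feature at center_nm."""
    base = 0.6 - 0.3 * np.exp(-((wavelengths - center_nm) / 60.0) ** 2)
    noise = rng.normal(scale=0.05, size=(n, wavelengths.size))  # "nature is noisy"
    return base + noise

# Two hypothetical materials with absorption features at different wavelengths.
X = np.vstack([synth_spectrum(950, 300), synth_spectrum(1400, 300)])
y = np.array([0] * 300 + [1] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```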

    “Understanding the spectrographic response of materials, whether through absorbance, reflectance, Fourier Transform Infrared Spectroscopy, Raman, or surface-enhanced Raman, is crucial for differentiating between various materials,” explained Thereur.

    Thereur highlighted that the DEA used a similar technique to distinguish between cocaine originating from different areas of Colombia.

    According to Thereur, the cooperative agreement with DARPA is still in its early stages, and the potential Defense Department applications for better material understanding are extensive.

    Spectroscopy can be conducted using specific satellites, making it valuable for intelligence collection, such as identifying specific materials used in enemy equipment or vehicles.

    The Pentagon is interested in improving access to rare earth materials and relocating the production of essential weapons and supplies closer to the front lines, reducing reliance on vulnerable supply lines in the Pacific.

    “There are numerous applications and use cases for analyzing spectral data. There is a significant amount,” stated Thereur.

    A machine learning model can forecast the locations of minerals on Earth and potentially other planets by leveraging patterns in mineral associations. The scientific community and industry are interested in locating mineral deposits to gain insights into our planet’s history and for use in technologies like rechargeable batteries.

    Shaunna Morrison, Anirudh Prabhu, and their colleagues aimed to develop a tool for identifying occurrences of specific minerals, a task that has historically depended on individual experience and luck.

    The team created a machine learning model that utilizes data from the Mineral Evolution Database to predict previously unknown mineral occurrences based on association rules. The model was tested in the Tecopa basin in the Mojave Desert, a well-known Mars analog environment.

    The model successfully predicted the locations of geologically important minerals, including uraninite alteration, rutherfordine, andersonite, schröckingerite, bayleyite, and zippeite.

    Additionally, the model identified promising areas for critical rare earth element and lithium minerals, including monazite-(Ce), allanite-(Ce), and spodumene. The authors suggested that mineral association analysis could be a powerful predictive tool for mineralogists, petrologists, economic geologists, and planetary scientists.
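
    As a rough illustration of mineral association analysis, the sketch below estimates a simple association rule (“where uraninite occurs, rutherfordine tends to occur”) from a tiny, made-up occurrence table and flags localities where the companion mineral has not yet been recorded. The data and rule are invented; the actual model draws on the Mineral Evolution Database and far richer statistics.

```python
# Hypothetical locality -> observed minerals table (stand-in for a real occurrence database).
occurrences = {
    "locality_1": {"uraninite", "rutherfordine", "andersonite"},
    "locality_2": {"uraninite", "rutherfordine", "zippeite"},
    "locality_3": {"uraninite", "schröckingerite"},
    "locality_4": {"calcite", "quartz"},
}

def confidence(a: str, b: str) -> float:
    """Estimate P(b present | a present) from the occurrence table."""
    with_a = [minerals for minerals in occurrences.values() if a in minerals]
    if not with_a:
        return 0.0
    return sum(b in minerals for minerals in with_a) / len(with_a)

# Association rule: uraninite -> rutherfordine
print(f"conf(uraninite -> rutherfordine) = {confidence('uraninite', 'rutherfordine'):.2f}")

# Localities with uraninite but no recorded rutherfordine become prediction targets.
candidates = [loc for loc, minerals in occurrences.items()
              if "uraninite" in minerals and "rutherfordine" not in minerals]
print("candidate localities for undiscovered rutherfordine:", candidates)
```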

    According to the International Energy Forum, between 35 and 194 new mines will be required by 2050 to meet technology and energy demands. These new mines are not small-scale coal mines but major operations related to rare earth metals.

    Recognizing that few new mines have been opened in the US over the years, this presents a significant challenge. The methods for identifying geographic locations with potential rare earth metal deposits have not kept pace with the times.

    However, the innovative use of AI in mining exploration appears to be changing this. An AI model for rare earth mining is currently undergoing testing and has shown promising results.

    One company, KoBold Metals, achieved a major breakthrough using AI in mining exploration by locating a large copper deposit deep beneath the surface in Zambia. This discovery generated significant excitement due to the demand for copper and other rare earth metals.

    To keep up with the demand for rare earth metals, such discoveries need to occur multiple times a year. It is hoped that the new AI model for rare earth mining will lead to a surge in these resources.

    In this context, AI could potentially serve as both a supplier and consumer of these metals. With substantial investment in these endeavors, it is evident that a solution is needed, and many believe AI will play a crucial role.

    The AI Model for Rare Earth Mining

    Unlike traditional mining exploration, KoBold has taken a different approach. A key aspect of this approach involves a device that was originally developed to identify dark matter.

    When efforts to find dark matter proved unsuccessful, the technologies were repurposed for other applications, including the identification and location of rare earth metals.

    In essence, after preliminary research narrows down potential sites, a hole is drilled, sometimes deep below the earth’s surface. The technology is then utilized to identify minute subatomic particles and assess density readings to profile key metals, such as copper, lithium, cobalt, and nickel. AI plays a significant role in mining exploration projects like this.

    The scientific principles behind this AI model for rare earth mining have been applied in some unusual situations. For instance, it has been used to uncover ancient Egyptian burial tombs and to identify underground tunnels potentially used for unauthorized border crossings.

    The application of AI in mining exploration is a new development and is showing significant potential. KoBold is at the forefront of utilizing this technology, receiving substantial financial support from venture capitalists worldwide as well as from governments, including the US. If this AI model for rare earth mining consistently delivers results, it has the potential to revolutionize the mining industry.

    KoBold, which commenced operations approximately five years ago, recognized the early potential of AI in mining exploration. Rather than abandoning traditional mining exploration practices, the company integrated them with its AI model for rare earth mining. For example, Cessna planes are still used to conduct radar and magnetic readings to investigate potential metal deposits underground, and historical research and ground sampling are also routinely employed. All this information, along with AI drilling data, is combined to create KoBold’s Terrashed, a 3D model that integrates tens of millions of documents and data points to assist in metal discoveries.

    In terms of operations, KoBold has made substantial investments, with approximately $2.3 billion already invested in a recent copper discovery in Zambia. The company also controls a significant percentage of ownership in this mine. Additionally, KoBold has secured significant funding from private equity firms in the US and Europe, and has received substantial government support. The US has committed to building a railway in Zambia to facilitate copper exporting for KoBold, and negotiations are ongoing with Zambia regarding ownership rights to the mine.

    Assuming KoBold’s AI model for rare earth mining continues to perform well, the company is poised for success. It already has around 60 mining projects in progress, with AI in mining exploration driving future endeavors as well.

    One could argue that KoBold’s AI model for rare earth mining emerged at the right time, almost like a chicken and egg scenario. While AI in mining exploration represents a significant advancement, it also contributes to the problem. The energy requirements of AI are substantial, and rare earth metals are essential for developing energy solutions. In addition to their role in consumer goods, rare earth metals are also crucial for power grid solutions and large-scale battery storage facilities necessary for storing wind and solar energies. Moreover, these resources are also needed for advanced weaponry. While AI mining advancements are proving to be timely, they have also fueled the rising demand for rare earth metals.

    Assuming KoBold’s copper discovery in Zambia can be effectively mined, it has the potential to generate billions of dollars annually, with revenues projected to persist for decades. However, this is just one mine for a single rare earth metal, and many more will be needed. The use of AI in mining exploration holds promise in addressing this demand. Initial indications from KoBold’s AI model for rare earth mining and its copper discovery support this potential, and this breakthrough couldn’t have come at a better time.

    KoBold is not the only mining company embracing big data to facilitate the next wave of discoveries. However, its prominent financiers and focus on metals essential for the green energy revolution are drawing attention to an emerging bottleneck in raw materials that could impede global efforts, including those negotiated at the United Nations Climate Change Conference in Scotland, aimed at creating a less carbon-intensive world.

    According to the International Energy Agency, achieving the central goal of the 2015 Paris Climate Agreement to keep global warming “well below” 2 degrees Celsius will necessitate unprecedented growth in the production of commodities like copper, cobalt, nickel, and lithium. These materials are essential for solar panels, wind turbines, power lines, and, most importantly, battery-powered electric vehicles, which are less carbon intensive, especially when powered by renewable energy sources.

    The IEA predicts that meeting the Paris targets will require more than 70 million electric cars and trucks to be sold globally each year by 2040, demanding up to 30 times more metals than are currently used in their production.

    Transitioning to a sustainable future presents challenges, especially in the near term. While advancements in technology and stricter regulations have reduced the environmental impact of mining, the extraction and processing of metals still pollute water and soil, encroach on habitats, and release pollutants and greenhouse gases.

    Emissions associated with the minerals used in green energy technologies are a small fraction of those produced by the fossil fuel-powered systems they aim to replace. With the acceleration of electric vehicle adoption, increased battery recycling could reduce the necessity for new battery metals.

    Other solutions under development, such as hydrogen-fueled cars, or yet-to-be-imagined technologies, could share the burden of green transportation. However, analysts emphasize that there is currently no alternative to extracting rocks from the Earth.

    Limiting warming to below 2 degrees Celsius using existing technologies will require a “massive additional volume of metals,” according to Julian Kettle, senior vice president of mining and metals at Wood Mackenzie, a global energy consultancy. “There’s simply no way around that.”

    Established in 2018, KoBold derives its name from cobalt, a shiny bluish-silver metal crucial for the performance of lithium-ion batteries that revolutionized consumer electronics in the early 1990s.

    These batteries are now used on a larger scale in electric vehicles, and cobalt improves their range, lifespan, and protection against fires by reducing corrosion.

    However, its supply is particularly precarious, with nearly 70 percent sourced from the Democratic Republic of the Congo, where a history of labor abuses and corruption has heightened the urgency to find deposits elsewhere.

    Automakers are also exploring cobalt alternatives due to its high cost, but the limitations of current cobalt-free batteries make it likely that demand for cobalt will increase.

    Other metals sought by KoBold could also face shortages. Gerbrand Ceder, a materials scientist at the University of California, Berkeley, believes nickel is at the greatest risk of long-term shortages, partially because it is the most viable substitute for cobalt.

    Analysts also anticipate a shortage of copper, which is used in various green technologies, including electric vehicle motors, wiring, and charging infrastructure. A typical battery-powered car uses three times as much copper as traditional vehicles.

    These supply constraints have arisen because finding viable metal deposits has become more challenging. This is largely due to the depletion of accessible deposits. In Zambia, Africa’s second-largest copper producer, the ores currently mined were either easily accessible or just below the surface when discovered, according to David Broughton, a geologist with 25 years’ experience in the region who advises KoBold and others.

    However, this does not mean that there are no deposits deeper in the earth. The interaction between rocks and fluids that formed them over 400 million years ago occurred deep beneath the surface.

    Unlike the oil and gas industry, which has significantly improved its access to hard-to-reach places, mining exploration has not experienced a major technological leap in decades.

    As a result, the likelihood of success is very low. According to most industry estimates, fewer than one percent of projects in areas without extensive prior exploration result in commercially viable deposits.

    KoBold aims to “reduce the uncertainty of what’s under the surface,” according to Josh Goldman, the company’s chief technology officer. Enhanced data application and advancements in artificial intelligence are crucial for improving the odds of success.

    AI techniques, including automation and machine learning, have already aided the fight against climate change by enabling better tracking of emissions, more advanced climate modeling, and the development of energy-saving devices such as smart grids.

    While AI applications in mining have mainly focused on improving extraction from existing operations, there is a growing trend in using them to assist in the search for new deposits.

    Today, companies ranging from tech giants like IBM to specialized firms like Canada’s Minerva and GoldSpot offer AI tools or services tailored to exploration. KoBold, however, is one of the few companies that invests its own capital in projects, including its ventures in Zambia and other locations in Canada, Greenland, and Western Australia.

    The company’s impressive technology includes two complementary systems. Connie Chan, a partner at the venture capital firm Andreessen Horowitz, which invested in KoBold in 2019 along with Gates’ Breakthrough Energy Ventures, compares the first system to a “Google Maps for the Earth’s crust and below.”

    Creating this technology is like a treasure hunt in geology. KoBold not only gathers its own data from rock and soil samples, as well as measurements such as gravity and magnetism taken from a helicopter, but it also mines historical records using machine learning tools to extract crucial information from old maps and geological reports, which together run to millions of pages.
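
    As a simplified stand-in for that kind of information extraction (KoBold’s actual tooling uses machine learning and is not public), the sketch below pulls structured drill-intercept records out of free text with a regular expression. The report snippet and field names are invented for illustration.

```python
import re

# A made-up snippet standing in for text from a scanned geological report.
report_text = """
Hole DD-07 intersected 12.4 m at 2.1% Cu from 312 m depth.
Hole DD-09 intersected 6.0 m at 0.8% Cu from 145 m depth.
"""

# Simple pattern for "X m at Y% Cu from Z m" style assay statements.
pattern = re.compile(
    r"Hole (?P<hole>[\w-]+) intersected (?P<length_m>[\d.]+) m at "
    r"(?P<grade_pct>[\d.]+)% Cu from (?P<depth_m>[\d.]+) m"
)

records = [m.groupdict() for m in pattern.finditer(report_text)]
for rec in records:
    print(rec)
# {'hole': 'DD-07', 'length_m': '12.4', 'grade_pct': '2.1', 'depth_m': '312'}
# {'hole': 'DD-09', 'length_m': '6.0', 'grade_pct': '0.8', 'depth_m': '145'}
```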

    In some cases, KoBold forms partnerships with established mining companies, such as BHP, the world’s most valuable mining firm, in Australia, which provide their own data.

    KoBold utilizes this information to develop and train a set of analytical tools called “machine prospector.” While these tools don’t directly uncover metals, they can provide geologists with better guidance on where to search, or where not to search.

    One specific tool used by KoBold in Zambia helps identify mafic rock, which can mislead explorers into thinking they’ve found copper, thereby preventing costly failed drilling.
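
    A minimal sketch of how such a “where not to drill” screen could work is shown below: a generic classifier is trained on tabular geophysical features to flag targets whose signature likely comes from barren mafic rock rather than copper mineralization. The features, synthetic labels, and model choice are assumptions for illustration, not KoBold’s actual machine prospector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical survey grid: each row is a drill target with a few geophysical features.
n = 400
magnetic_anomaly = rng.normal(size=n)
gravity_anomaly = rng.normal(size=n)
conductivity = rng.normal(size=n)
X = np.column_stack([magnetic_anomaly, gravity_anomaly, conductivity])

# Invented labels: 1 = barren mafic rock that mimics a copper signature, 0 = otherwise.
y = (0.8 * magnetic_anomaly + 0.5 * gravity_anomaly
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

# Rank new targets by predicted probability of being a misleading mafic response,
# so drilling budgets can be steered away from likely false positives.
model.fit(X, y)
new_targets = rng.normal(size=(3, 3))
print(model.predict_proba(new_targets)[:, 1])
```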

    Another tool, currently in use in northern Quebec where KoBold hopes to find nickel, copper, and cobalt, assists its research team in identifying the most promising rock outcroppings for sampling, expediting the search. “You can actually get through an area of a couple hundred square kilometers in a season,” says David Freedman, a KoBold geologist who spent last summer traversing the tundra.

    How effective will machine learning be?

    Machine learning tools developed by KoBold and others have already simplified the lives of geologists; as Freedman points out, there’s no wind, rain, or mosquitoes when planning a prospecting route from behind a computer. Nevertheless, these methods are still in their early stages, and their potential to lead to major discoveries remains uncertain.

    Antoine Caté, a geologist and data scientist at SRK, an international consultancy, believes that machine learning models have the potential to “significantly enhance” success rates in exploration, partly due to their ability to detect patterns among datasets with more variables than the human brain can process.

    However, he warns that such tools are only as effective as the data fed into them: If an algorithm is built with poor data, it will be ineffective at best and could lead prospectors astray at worst.

    AI does not eliminate the need for human creativity. “These tools are exceptional for diagnostics,” Caté says. “But ultimately, you still need a skilled individual to interpret the information and draw conclusions from it.”

    KoBold’s Goldman shares this view. He emphasizes the importance of robust data, explaining that KoBold’s thorough investigative work reflects this need. Nevertheless, he acknowledges that it may take time for the company’s technology to deliver on its potential, and the extent to which it could accelerate the discovery of deposits is uncertain.

    Chan, whose firm has supported tech giants like Airbnb and Instagram, is optimistic about the future. She believes that the challenges faced by the mining industry in exploration and the urgency to find more battery metals make a software-driven approach long overdue. “If anyone can demonstrate they are more effective at selecting the right places to explore, that’s incredibly valuable.”

    Even if machine learning techniques prove successful, it may not be sufficient to avert future shortages. Better exploration is just one aspect of the equation: To achieve the two-degree goal of the Paris Agreement, Wood Mackenzie estimates that the mining industry will need to invest over $2 trillion in mine development over the next 15 years—a substantial increase from the approximately $500 billion committed in the previous 15 years.

    Scaling up will also require action from governments. Kettle points out that policymakers often stimulate demand for green technologies while simultaneously enacting regulations that make it more challenging to mine the materials needed to power them.

    In Zambia, embracing the green energy revolution is a national priority. The new government, elected in August, aims to revitalize an economy burdened by debt. Minerals, which make up three-quarters of exports, are crucial to this effort.

    Minister of Mines, Paul Chanda Kabuswe, believes that the potential “looming boom” in battery metals could bring immense benefits to Zambia. However, major deposits have not been discovered in decades. To ensure a stable supply in the long term, the mining industry will need to improve its methods.

    Humphrey Mbasela, a Zambian geologist assisting KoBold in analyzing the soil in Mushindamo District, believes that a big data approach will be beneficial. He thinks that explorers have been too focused on surface-level searches, while the most valuable resources may lie deeper.

    “After a day of collecting samples in the woods and fields, I can confidently say that the resources are there, hidden underground and waiting to be uncovered,” Mbasela explains.

  • The advantages of AI for mineral exploration have become gradually apparent

    The advantages of AI for mineral exploration have become gradually apparent

    The transition to sustainable energy requires a large amount of essential minerals, and this demand is only expected to increase. By 2050, the demand for minerals such as graphite and cobalt is projected to rise by over 200%, while the demand for lithium is expected to increase by 910% and rare earths by 943%.

    Despite the high demand, there are sufficient mineral reserves in the earth’s crust to support the energy transition. However, the exploration and processing of these minerals are heavily concentrated in specific geographical areas, mainly in China. Additionally, discovering and extracting critical minerals is a complex and costly process.

    To expedite operations, the mining industry for critical minerals has shown growing interest in artificial intelligence (AI). AI has the potential to help in locating new deposits of sought-after minerals and even discovering entirely new materials. Despite a challenging investment market, there has been continuous investment in early-stage AI solutions throughout 2023.

    For instance, in March, VerAI, an AI-based mineral asset generator, secured $12 million in Series A funding. In June, GeologicAI raised $20 million for its “core scanning robot” in a Series A round. Later that same month, KoBold Metals, based in Berkeley, raised $195 million with investments from T. Rowe Price, Andreessen Horowitz, and Breakthrough Energy Ventures.

    Recently, Google introduced the DeepMind Graph Networks for Materials Exploration, an AI tool for predicting the stability of new materials. According to Google, out of 2.2 million predictions made by GNoME, 380,000 show promise for experimental synthesis, including materials that could lead to future transformative technologies such as superconductors, powerful supercomputers, and advanced batteries for electric vehicles.

    The potential role of AI in mineral exploration is vast, and current offerings each take a slightly different approach. For example, GNoME is a graph neural network model trained with data on the structure and chemical stability of crystals. It identifies new materials with structures similar to known ones, which could potentially substitute for highly demanded minerals like lithium.

    On the other hand, KoBold, a favorite among investors, uses machine learning and geological data to predict the locations of mineral deposits below the earth’s surface. Founded in 2018, the company has expanded rapidly by not only offering software but also making strategic investments in land claims and selling mining licenses. KoBold claims to have over 60 mining projects globally.

    Other startups utilize machine learning to analyze geological data and identify promising mineral deposits or develop robots capable of scanning and analyzing rock samples.

    In the United States, there is a pressing need for clean energy companies to accelerate mineral extraction and processing outside of China, driven by the Biden administration’s guidelines for Inflation Reduction Act tax credits. For instance, automakers must eliminate reliance on critical minerals extracted, processed, or recycled by “foreign entities of concern” by 2025 to qualify for significant benefits.

    Amid the Biden administration’s efforts to reduce dependence on China, a shortage of critical minerals is anticipated by 2030 or 2035, according to Tom Moerenhout, a research scholar and adjunct professor at Columbia University. While processing capacity can be increased relatively quickly, exploration and other upstream activities typically progress slowly, averaging 12.5 years, as per the International Energy Agency.

    The majority of untapped domestic deposits in the US are located near or within Native American reservations, with 97% of nickel, 89% of copper, and 79% of lithium reserves in these areas. However, companies encounter opposition to mine development due to cultural and environmental concerns. For example, in Arizona, mining company Rio Tinto has faced a decade-long dispute over a copper deposit under an Apache religious site.

    Due to the challenges related to obtaining permits and complying with legal regulations for these deposits, Moerenhout mentioned that there have been discussions at the national level about initiating “another extensive exploration round, beginning with areas that are much easier to permit and have fewer environmental and social implications than current projects.”

    This implies that the US must discover new mineral deposits, and do so quickly. Although there have not been significant technological advancements in mineral exploration for many years, Moerenhout noted that AI has been a major focus for the past few years, especially among smaller “junior miners” concentrating on a specific mineral.

    For these junior miners, he explained that the potential for AI-driven mineral discoveries is enormous. Traditional exploration is a multibillion-dollar endeavor that often does not yield immediate returns.

    Moerenhout stated that AI could reduce the exploration timeline and risk, ultimately lowering the cost. In the case of GNoME, the technology could enable miners to target higher-quality ore, facilitating easier production and processing.

    “All of this is still in the testing and development phase,” he added. “But if this type of technology can be developed, it could potentially overcome some of the challenges associated with exploration. The potential is significant.”

    Additionally, the failed battery start-up Britishvolt’s site in Northumberland, intended for a gigafactory, will reportedly be acquired by the US private equity firm Blackstone, which plans to repurpose the site for a data center.

    Britishvolt was once seen as a leading British green energy innovator, aiming to construct a £3.8 billion car battery factory and create 3,000 jobs.

    However, the company collapsed in January 2023 due to overspending or lack of government support, depending on who you ask.

    Reportedly, Blackstone intends to develop a hyperscale data center campus on the site, taking advantage of access to affordable renewable energy from offshore wind.

    This serves as a powerful metaphor. Although there is a pressing need for more battery capacity in the western world, attention has already shifted to AI.

    The overall impact of artificial intelligence and its increasing integration into everyday life is uncertain. However, one thing is certain – AI consumes a significant amount of power and data. Commodity traders anticipate a substantial increase in demand for copper as a result of the AI revolution.

    Furthermore, data centers require more than just copper; they also need chips. The 2022 US CHIPS and Science Act has already spurred investment in chip production capacity.

    The current surge in chip demand is drawing attention to various niche minerals, many of which are predominantly produced in China.

    Tin, for instance, is a beneficiary of the chip boom, as nearly half of all tin is used as solder in circuit boards.

    Tantalum, used in capacitors, is another mineral needed by data centers. It is exported from East Africa through complex trade routes that often lead back to artisanal mines controlled by rebels in the eastern DRC.

    Additionally, rare earths such as neodymium and yttrium find their way into data centers, used in drive boards and superconductors, respectively.

    Renewables demand is expected to increase even further as AI is extremely energy-intensive, with data centers already consuming about 1-1.5% of global electricity production, and this demand is projected to rise as capacity expands.

    Increased demand for electricity further strengthens the positive outlook for minerals in the energy transition. The rise in electricity prices will help in the expansion of renewable energy. Wind farms require copper and rare earths, while solar panels need silver, cadmium, and selenium.

    The increase in power demand, whether from renewable sources or fossil fuels, will create a need for copper and aluminum for transmission.

    One often-overlooked impact of the AI boom is its potential to utilize stranded electricity. Aluminum production, in particular, heavily relies on inexpensive electricity, leading to the movement of production to regions with low electricity costs, especially remote areas with limited transmission capabilities.

    For example, Iceland has effectively exported its abundant geothermal electricity to the world through aluminum smelting. This trend can also be observed in Norway, Saudi Arabia, Bahrain, and remote areas of Russia with access to hydroelectric power.

    In recent years, China has become a major player in the global aluminum market, supported by industrial policies and benefiting from cheap electricity from coal and hydroelectric power.

    The growing demand for data centers is changing this dynamic. High-speed fiber optic cables can connect data centers in remote areas with affordable electricity, enabling them to export data rather than power.

    If the demand for AI continues to rise, the issue of stranded power may become a thing of the past, leading to higher electricity prices in remote locations worldwide and potentially reducing margins for many aluminum producers, despite the increasing demand for the metal.

    There is also another aspect to the AI demand puzzle: its potential impact on supply. According to Dutch bank ING, artificial intelligence could assist in meeting the rising demand for critical minerals by aiding the mining industry in discovering new deposits.

    “AI, machine learning, and data analytics could be utilized in the discovery and extraction processes to meet the increasing demand for these minerals,” ING stated. However, this would require increased investment in the sector and the willingness of mining companies to adopt new technology.

    Although the potential increase in mineral demand, coupled with the assistance of AI in boosting discoveries and refining processes, may seem like good news for miners, it is important to exercise caution.

    Investors should remember instances like the old Britishvolt site in Northumberland, which demonstrate the flightiness of capital. The substantial expansion of data centers will also require considerable capital, potentially diverting funds away from the mining industry, which is in dire need of investment.

    Historical cycles have shown that mining often struggles to attract sufficient capital, especially after the tech sector has secured its share.

    The recent energy boom has proven that optimistic projections for mineral demand alone are insufficient to drive the development of new mines.

    How AI is aiding in the discovery of valuable mineral deposits

    A metals company based in California, backed by prominent figures like Bill Gates and Jeff Bezos, has utilized AI to identify one of the largest copper deposits globally.

    Quartz noted that while the association of Bill Gates, Jeff Bezos, and AI may not immediately evoke the image of a massive copper mine in Zambia, the increasing reliance on electric power will necessitate a significant amount of batteries, motors, and wires. This will lead to a high demand for cobalt, copper, lithium, and nickel, creating favorable conditions for prospectors, especially those aiming to enhance the efficiency of their profession.

    According to The Economist, KoBold Metals, named after underground sprites from medieval Germany, uses AI to analyze historical geological records and create a “Google Maps” of the Earth’s crust.

    The Economist mentioned that while some of the geological, geochemical, and geophysical data required for AI analysis is new, a significant amount was previously stored in national geological surveys, geological journals, and other historical repositories.

    Algorithms are then used to “identify patterns and make inferences about potential mining sites,” as reported by the publication. Mining.com highlighted that this technology can uncover resources that traditional geologists may have overlooked and assist miners in determining where to acquire land and drill.

    KoBold is not the only mining company employing AI, but its significant discovery in Zambia marks a pivotal moment in demonstrating the potential of technology in exploration.

    There is ample room for improvement in AI

    AI is increasingly being promoted as a valuable method for discovering new sources of lithium, cobalt, copper, and nickel “more efficiently and with potentially less environmental impact than previous methods”, Business Green reported.

    The International Energy Agency has stated that access to these minerals, as well as the necessary investments to obtain more, “do not meet the requirements for transforming the energy sector”.

    Copper, in particular, is utilized in solar panels, wind turbines, and other equipment essential for transitioning the world to net-zero energy. “So, if AI has the potential to extract critical minerals from the ground and into products more rapidly, that could be beneficial,” Quartz noted.

    The world’s largest mining companies are facing challenges in finding high-quality assets, and the demand for copper is “expected to surge as countries strive to electrify their transportation systems and shift to renewable energy,” according to the Financial Times (FT).

    The recent discovery in Zambia offers a “potential boost to the efforts in the west to reduce its dependence on China for metals crucial to decarbonizing everything from vehicles to power transmission systems”.

    Up to 99% of exploration projects fail to materialize into physical mines. “AI, therefore, has a lot of room for improvement,” as The Economist stated. “It may also assist with a more nuanced issue. By expanding the amount of rock that can be explored, it will enable new discoveries in familiar, well-governed countries.”

    Josh Goldman, founder and president of KoBold Metals, said to the FT: “Exploration is where babies come from. You can help babies grow but you’ve got to get the birth rate up. That’s the hardest part: how do you find things in the first place.”

    It appears that AI could offer a solution

    Researchers at the China University of Geosciences in Wuhan utilized artificial intelligence (AI) to identify deposits of rare earth minerals and identified a significant potential reserve in the Tibetan plateau in the Himalayas, according to the South China Morning Post.

    In the past, China held a dominant position in mining bulk minerals such as copper, iron, aluminum, and coal, which fueled its industrial and urban growth. However, the evolving landscape of technology now necessitates the use of rare earth minerals for various applications spanning from energy to defense.

    Since rare earth resources exist in countries other than China, China’s dominance has been waning. Reserves discovered in Inner Mongolia have become a major production zone for China. Nevertheless, the accidental discovery of lithium in some rock samples from Tibet nearly a decade ago provided hope that the balance could shift in China’s favor once again.

    Turning to AI

    Geologists in China have long studied the Himalayan belt for minerals but had only found granite at various locations, including Mount Everest. Two years ago, a team of researchers led by Zuo Renguang at the China University of Geosciences developed an AI-based system to analyze raw satellite data to identify new rare earth deposits.

    The AI was trained on a limited data set to recognize light-colored granite that could contain rare-earth minerals such as niobium and tantalum alongside lithium, a crucial component for manufacturing electric vehicles.

    The team then worked on enhancing the accuracy of its algorithms by incorporating information about the chemical composition of rocks, their magnetic and electrical properties, and geological maps of the region, raising the accuracy rate to 96 percent.
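
    The details of the Wuhan team’s system are not reproduced here, but the workflow it describes (supervised classification of mapped locations using satellite bands plus geochemical and geophysical layers) can be sketched roughly as follows. Everything in this example, including the feature names, the synthetic data, and the scikit-learn classifier, is an illustrative assumption rather than the study’s actual code.

```python
# Hypothetical sketch of the kind of classifier described above: per-location
# satellite band values plus geophysical/geochemical layers are used to flag
# light-coloured granite that may host rare-earth minerals.
# Feature names and data are illustrative, not from the Wuhan study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Each row is one mapped location:
# [band_1, band_2, band_3, band_4, K2O_pct, magnetic_nT, resistivity_ohm_m]
X = rng.normal(size=(5000, 7))
# Synthetic label: "prospective granite" depends on a few features plus noise.
y = (X[:, 0] + 0.5 * X[:, 4] - 0.3 * X[:, 5] + rng.normal(scale=0.5, size=5000)) > 0.8

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```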

    Mining in the Himalayas

    The mineral reserves identified by the machine are estimated to be at least the size of the site in Inner Mongolia, if not larger. However, mining in the Himalayas is not as straightforward as in Inner Mongolia.

    For one, the reserves are located in the Tibetan belt of the country, where there is a commitment to protecting the environment. The Himalayan belt extends into countries such as India, Nepal, and Bhutan and holds strategic significance.

    Activities like mining contribute to economic growth and draw more people, but some areas are contested territories and could escalate geopolitical tensions.

    From China’s perspective, the regions are also remote and will require additional investments in infrastructure to make them accessible while also managing waste from the operations, as reported by the SCMP. In an area with limited water resources, poorly managed activities could have serious repercussions.

    Chinese researchers are not the only ones utilizing AI to locate lithium, nickel, cobalt, and copper deposits. KoBold, a mining company based in Berkeley, has adopted this approach and operates at 60 sites across three continents.

    The company is backed by venture capitalist firm Andreessen Horowitz. A recent round of funding received support from Bill Gates’ VC firm Breakthrough Energy Ventures and achieved a valuation of one billion dollars, as reported by Fortune.

    US Critical Materials Corp. has announced the signing of a definitive agreement with VerAI Discoveries Inc., a company that uses artificial intelligence (AI) to generate mineral discoveries, to deploy its AI-Powered Mineral Targeting Platform.

    This technology increases the likelihood of detecting minerals under covered terrain and reduces surface disturbances at US Critical Materials’ Sheep Creek rare earths properties in Montana, USA.

    The exploration partnership between US Critical Materials and VerAI uses top-notch technology to explore the covered terrain at the Sheep Creek Area of Interest (AOI), significantly boosting the likelihood of success by 100 times compared to industry benchmarks. This AOI has a rich geological landscape, confirmed by Idaho National Laboratory and independent geophysical surveys.

    With VerAI’s AI-powered mineral targeting technology, US Critical Materials aims to establish new industry standards for environmentally conscious mineral exploration activities, offering the opportunity to bring rare earth elements to the market in their purest form, crucial for the green energy transition.

    Jim Hedrick, President of US Critical Materials, and former rare earths commodity specialist for the US Geological Survey (USGS), mentioned, “The addition of this AI/ML technology will enhance US Critical Materials’ current exploration methodologies. We are excited to have signed a definitive agreement with VerAI Discoveries to utilize its next-generation AI technology and unique capabilities to discover high-probability targets under covered terrain.”

    Hedrick added, “AI-assisted mineral exploration platforms are gaining recognition in the mining industry and major media outlets. The Defense Advanced Research Projects Agency (DARPA) is also exploring AI-assisted mining to expedite the search for critical minerals needed for US industry, consumer use, and, most importantly, the US military.”

    US Critical Materials’ latest samples indicate total rare earth elements (TREE) readings up to 20.1%, with combined neodymium praseodymium up to 3.3%. The company also has gallium readings as high as 490 ppm (parts per million). Gallium is profitable to produce at 50 ppm. The company believes there is a substantial tonnage at Sheep Creek and expects to discover more high-grade critical mineral locations using VerAI’s innovative, artificial intelligence technology.

    “VerAI is leading a paradigm shift in the exploration sector. We believe that AI and machine learning are essential tools for revolutionizing mineral exploration,” said Yair Frastai, CEO of VerAI Discoveries. “With this definitive agreement, US Critical Materials is proving its forward-thinking approach by leveraging our advanced AI-based targeting technology to systematically de-risk the economics of discovering concealed mineral deposits.”

    Both companies acknowledge the commitment and responsibility to protect all aspects of the environment in the Bitterroot Valley.

    A Singapore-based startup is using artificial intelligence (AI) to search for reserves of critical minerals, betting that the technology can help reduce the cost and time spent in mining.

    The firm, Atomionics, has employed gravity sensing and AI to develop a “virtual drill” technology known as Gravio that can define ore bodies and enhance the efficiency of minerals projects.

    Drilling a single hole to search for a mineral can cost from $7,000 to $33,000. A lithium miner might need as many as 400 holes to prove up a resource, so building a more accurate virtual picture before drilling can reduce costs.
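
    As a rough check on those figures, here is a minimal back-of-the-envelope calculation of what a 400-hole campaign could cost at the quoted per-hole prices:

```python
# Back-of-the-envelope campaign cost using the per-hole figures quoted above.
COST_PER_HOLE_LOW = 7_000      # USD, lower bound quoted above
COST_PER_HOLE_HIGH = 33_000    # USD, upper bound quoted above
HOLES_TO_PROVE_RESOURCE = 400  # holes a lithium miner might need

low = COST_PER_HOLE_LOW * HOLES_TO_PROVE_RESOURCE
high = COST_PER_HOLE_HIGH * HOLES_TO_PROVE_RESOURCE
print(f"Drilling campaign cost: ${low:,} to ${high:,}")
# Cutting the number of "empty" holes in half would save a large share of that budget.
```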

    “The key challenge is that sometimes (drill holes) don’t actually hit the reserve,” said Atomionics CEO Sahil Tapiawala.

    The company aims to decrease these “empty” samples by at least half, he added.

    Like many exploration techniques, Atomionics uses the gravity signatures of different minerals to pinpoint where they lie beneath the earth.

    It is able to do so more precisely than typical air-based survey techniques and processes data in real time using AI, speeding up the work of defining ore bodies, Tapiawala said.
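
    Gravio’s internals are not public, so the following is only a minimal sketch of the physics such gravity surveys rely on: the vertical gravity anomaly produced by a buried body that is denser than the surrounding rock, approximated here as a sphere. The geometry and density contrast are illustrative assumptions, not Atomionics’ actual model.

```python
# Minimal sketch of the physics behind gravity-based exploration: the vertical
# gravity anomaly over a buried dense body, approximated as a sphere.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly_mgal(x_m: float, depth_m: float, radius_m: float, delta_rho: float) -> float:
    """Vertical gravity anomaly (mGal) at horizontal offset x_m from a buried sphere."""
    excess_mass = (4.0 / 3.0) * math.pi * radius_m**3 * delta_rho      # kg
    gz = G * excess_mass * depth_m / (x_m**2 + depth_m**2) ** 1.5      # m/s^2
    return gz / 1e-5                                                   # 1 mGal = 1e-5 m/s^2

# Example: a 150 m radius ore body, 300 m deep, 500 kg/m^3 denser than host rock.
for offset in (0, 100, 200, 400, 800):
    print(f"offset {offset:>4} m -> {sphere_anomaly_mgal(offset, 300, 150, 500):.3f} mGal")
```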

    The mining industry employs various techniques to find minerals, including ground-penetrating radar and aeromagnetic surveys, but no one method guarantees success.

    KoBold Metals, a California-based startup backed by billionaires Bill Gates and Jeff Bezos, is also utilizing AI to search for metals such as lithium.

    “The energy industry would traditionally defer to seismic data before undertaking any drilling project,” stated Cameron Fink, Bridgeport Energy exploration manager.

    “With further development, Gravio can present as a low-cost alternative to traditional methods of exploration.”

    First customers in Australia, US

    Atomionics has secured agreements with three major mining companies as part of a plan to locate metal ore deposits crucial to the energy transition, according to Tapiawala.

    This will complement the firm’s existing work in Queensland with New Hope’s Bridgeport Energy division.

    The mining giants anticipate completing data collection and analysis using Gravio by early next year.

    “We are actively implementing our technology for vital minerals, specifically copper, nickel, and zinc,” Tapiawala stated, noting that the technology is being introduced in Australia and the US.

    He chose not to disclose the names of the miners due to commercial confidentiality reasons. The privately-held company is supported by various Singapore-based government agencies and strategic investors.

    Critical minerals company signs definitive agreement with VerAI to explore high-grade project in Montana

    US Critical Materials Corp. has signed a definitive agreement with VerAI Discoveries Inc. to utilize VerAI’s AI-powered mineral targeting platform for the exploration of high-grade rare earths and gallium at the Sheep Creek project in southwestern Montana. This partnership builds upon the AI-powered critical minerals exploration collaboration announced in May.

    VerAI’s AI and machine learning technology assists geologists in locating hidden mineral deposits with greater accuracy by processing geophysical and other exploration-related data.

    VerAI Discoveries CEO Yair Frastai stated that AI and machine learning are essential tools for revolutionizing mineral exploration and are spearheading a paradigm shift in the exploration sector.

    The use of AI technology for identifying buried mineral targets ahead of drilling can accelerate the mineral discovery process, reduce costs, and minimize environmental impact.

    VerAI Discoveries COO Amitai Axelrod highlighted that their discovery process primarily occurs in the data space, significantly reducing the environmental footprint compared to traditional methods. This approach minimizes disturbance to ecosystems and local communities by avoiding extensive drilling and physical access to remote areas.

    US Critical Materials plans to utilize VerAI’s technology to set new industry standards for environmentally conscious mineral exploration and aims to establish a domestic source of rare earths, gallium, and other critical minerals found at Sheep Creek.

    US Critical Materials President Jim Hedrick, a former rare earths commodity specialist for the US Geological Survey (USGS), stated that the addition of AI/ML technology will enhance the company’s current exploration methodologies.

    The AI-assisted exploration at Sheep Creek could help establish this southwestern Montana project as an important domestic source of critical minerals essential to America’s economy and national security.

    Samples collected from Sheep Creek contain high grades of rare earths, including neodymium and praseodymium, crucial for electric vehicle motors, with grades as high as 20.1% total rare earths.

    Gallium, used in semiconductor production, is found alongside the rare earths at Sheep Creek, with samples containing as much as 490 parts per million gallium, indicating the project’s potential to become the highest-grade source of gallium in the US

    According to the USGS, China supplied the majority of the world’s gallium and rare earths in 2023, highlighting the significance of developing domestic sources for these critical minerals.

    The same rocks hosting rare earths and gallium at Sheep Creek also contain niobium, scandium, and yttrium, all considered critical to the US

    US Critical Materials has finalized a deal to leverage VerAI’s mineral targeting technology to accelerate the discovery of concealed mineralization at Sheep Creek, aiming to provide a domestic alternative to China for rare earths, gallium, and other critical minerals.

    Earth AI, a clean energy metals explorer, has announced the first discovery of a greenfield molybdenum deposit using artificial intelligence near Armidale, New South Wales, Australia. The ground was open and unlicensed, and had previously been believed to be barren.

    But the founder and CEO of Earth AI, Roman Teslyuk, and his team had a feeling. As a result, they made the decision to create a series of hypotheses and methodically test them. Each hole they drilled tested a single hypothesis.

    After eight months and the loss of much equipment to snow, four holes were drilled under winter conditions in the high Australian plateau, and they were able to pinpoint high-grade ore.

    “Before this, we drilled four holes in the Northern Territory, which brings us to a success rate of one in eight at discovering economic grade mineralization. This is a significant improvement over the industry standard of one in 200,” Teslyuk told Mining.com.

    MDC: Can you provide more details on how the discovery happened?

    Teslyuk: Our Mineral Targeting Platform is a geological deep learning solution that excels at finding mineral systems using surrounding geological and geophysical data. It is trained on virtually all known mineral prospects across the continent and, using this knowledge, predicts new systems.

    In this instance, we had a “promising target” on land that had been explored four times previously by junior explorers and major companies. However, despite the substantial amount of money spent on exploration, no mineral deposits were found.

    But we were committed, licensed the area, consulted with the community, obtained all the permits, and began exploring. We discovered high-grade molybdenum. The observed grades are 1.5-2 times higher than the world’s leading molybdenum mines.

    High molybdenum grades were confirmed in three samples analyzed by a certified laboratory. These grades, registered at 0.3%, 0.26%, and 0.135%, exceed the currently mined grades of 0.16% and 0.14% found in the world’s leading molybdenum mines, Climax and Henderson. Both of these mines are owned by Freeport-McMoRan.

    As a high-performance explorer of clean energy minerals, we don’t focus solely on one element during our exploration. This is because deposits usually contain multiple metals. We analyze the mineral system to understand which metals are likely to form an economic deposit, but also indirectly track other critical metals like copper, tin, tungsten, and gold that might form adjacent deposits or be mined as a secondary commodity.

    In this case, we also intersected low-grade copper at 0.3% adjacent to high-grade molybdenum mineralization.

    MDC: Earth AI mentions using modular drilling. Can you explain this?
    Teslyuk: Modular drilling, also known as responsible drilling, refers to our innovative approach to mineral exploration drilling, which embraces modularity as crucial for redundancy and operational efficiency. It is a drilling hardware system designed by Earth AI to be self-sufficient, minimize environmental impacts, and ensure a safe, efficient drilling operation in the most remote desert environments.

    Our modular hardware eliminated the need for groundwork by design. Our onboard waste management system ensures the safe treatment and disposal of drilling waste. Modular drilling also enables significant logistical gains, as we can carry more stock in a highly organized manner, we come more prepared, and our operation can remain self-sufficient no matter what drilling challenge we encounter.

    MDC: Can you describe how the AI system used to find the greenfield deposit works?
    Teslyuk: It is helpful to understand how our entire process works, which consists of three phases: targeting, hypothesis, and drilling.

    Our AI system is employed in the foundation of our exploration, the targeting phase – our models train on millions of geological cases from the entire continent and have learned to identify areas of mineralization and highlight locations with a high probability of finding a mineralized system. We deploy teams into the field to sample and review the targets.

    In the hypothesis phase, geologists are on the ground studying the mineral system. At this stage, a sister technology is utilized that helps them better understand the geological setting and aids them in forming hypotheses.

    The drilling phase is where we test our hypotheses by drilling down to a depth of 600 meters and proving or disproving the presence of mineralization. Each drill hole provides invaluable knowledge that is then fed back into the system and used to form new hypotheses.

    As a result of this process, our AI prediction tools are the most accurate in the industry.
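
    The three-phase loop Teslyuk describes can be summarized, very schematically, in the sketch below. Every function and data structure in it is a hypothetical placeholder standing in for Earth AI’s proprietary models; none of it is actual Earth AI code.

```python
# Hypothetical sketch of the targeting -> hypothesis -> drilling feedback loop
# described above. All functions are illustrative placeholders, not Earth AI code.
from dataclasses import dataclass
import random

@dataclass
class Target:
    name: str
    probability: float  # model's estimate that a mineral system is present

def rank_targets(model_state: dict) -> list[Target]:
    """Targeting phase: the model scores candidate areas (stubbed with random scores)."""
    return sorted(
        (Target(f"target-{i}", random.random()) for i in range(10)),
        key=lambda t: t.probability,
        reverse=True,
    )

def drill_hole(target: Target) -> bool:
    """Drilling phase: each hole tests one hypothesis (stubbed as a weighted coin flip)."""
    return random.random() < target.probability

model_state: dict = {"observations": []}
for campaign in range(3):
    best = rank_targets(model_state)[0]                            # targeting
    hypothesis = f"{best.name} hosts economic mineralization"      # hypothesis
    hit = drill_hole(best)                                         # drilling
    model_state["observations"].append((best.name, hit))           # feed the result back
    print(campaign, hypothesis, "->", "hit" if hit else "miss")
```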

    MDC: What baseline data are fed to this AI system?
    Teslyuk: It is trained on a vast amount of data – 400 million geological cases from across the continent. The fundamental datasets for learning are remote sensing, geophysical, and geochemical datasets.

    MDC: How is your AI system different from others?
    Teslyuk: Geoscience is a new domain for AI, and our AI system is unique in its approach as it thinks like a geologist. The unique aspect lies in how we teach the AI to learn geology. To do this, you need to be both a geology and AI expert, a skillset that is incredibly rare.

    Another important aspect of the Mineral Targeting Platform is the focus on re-learning the archive data at the continental scale.

    Geoscientists are incentivized to produce papers, which results in increasingly detailed but disconnected data sets; there is little incentive to synthesize them into conclusions, since that challenging task may not lead to any significant outcomes.

    The unique feature of our technology is its ability to predict mineral systems with extremely low detection limits. This capability is highly valuable given that all easily accessible resources have been depleted, and traditional regional targeting tools are unable to solve this issue.

    For instance, in the case of the molybdenum porphyry, the 0.3% molybdenum mineralization produced only a faint soil anomaly at the surface, near the 0.002% detection limit.

    AI Develops Revolutionary Magnet Without Rare-Earth Metals in Just 3 Months

    There is an immediate need to transition away from fossil fuels, but the adoption of electric vehicles and other green technologies can create environmental pressures of their own. This pressure could be alleviated by a new magnet design, free from rare-earth metals, developed using AI in only three months.

    Rare-earth metals are vital components in modern gadgets and electric technology, such as cars, wind turbines, and solar panels. However, extracting these metals from the ground comes with significant costs in terms of finances, energy, and environmental impact.

    As a result, technology that does not rely on these metals can accelerate the transition towards a greener future. Enter Materials Nexus, a UK-based company that utilized its custom AI platform to create MagNex, a permanent magnet that does not require rare-earth metals.

    While this is not the first magnet of its kind to be developed, discovering such materials typically involves extensive trial and error and can take decades. The use of AI accelerated the process by approximately 200 times – the new magnet was designed, synthesized, and tested in just three months.

    The AI evaluates over 100 million compositions of potential rare-earth-free magnets, considering not only their potential performance but also supply chain security, manufacturing costs, and environmental impact.

    Physicist Jonathan Bean, CEO of Materials Nexus, anticipates that “AI-powered materials design will not only impact magnetics but also the entire field of materials science.”

    Materials Nexus collaborated with a team from the University of Sheffield’s Henry Royce Institute in the UK to produce the magnet. It is believed that similar techniques could be employed to develop other devices and components that do not rely on rare-earth magnets.

    According to the creators of MagNex, the material costs are 20 percent of what they would be for conventional magnets, and there is also a 70 percent reduction in material carbon emissions.

    In the electric vehicle industry alone, the demand for rare-earth magnets is expected to be ten times the current level by 2030, according to Materials Nexus, underscoring the potential significance of these alternative materials.

    In addition to using AI to enhance manufacturing processes, researchers are actively seeking more sustainable methods for obtaining rare-earth materials. Breakthroughs like this should expedite the shift away from fossil fuels and carbon emissions.

    Of course, the AI industry itself faces challenges in terms of carbon emissions. If its carbon footprint can be managed, AI could prove to be a valuable tool in the transition to green technology.

    “This accomplishment demonstrates the promising future of materials and manufacturing,” states materials scientist Iain Todd from the University of Sheffield.

    “Unlocking the next generation of materials through the power of AI holds great promise for research, industry, and our planet.”

    Embracing AI: Revolutionizing sustainable mining and mineral exploration in Saudi Arabia

    As Saudi Arabia explores the integration of AI in mineral exploration, it marks a significant milestone not only for economic prosperity but also for upholding its commitment to environmental sustainability.

    Saudi Arabia is at a crucial juncture, aiming to diversify its economy by leveraging the abundant mineral wealth beneath its soil. This shift aligns with Saudi Vision 2030, which promotes sustainable development and a technology-driven future, ushering in a new era of economic diversification.

    Traditionally, mineral exploration has relied on extensive fieldwork, geophysical surveys, and geological analysis. However, this landscape is rapidly evolving globally, including in Saudi Arabia, driven by Artificial Intelligence (AI).

    One of the most influential innovations, AI is transforming industries worldwide, including the mining sector, by reshaping technological interactions. AI improves mineral exploration and environmental protection in various ways:

    AI’s ability to process and analyze extensive datasets, including geological, geophysical, and geochemical data, satellite imagery, and historical exploration records, makes it a powerful enabler of safer, more efficient, and environmentally friendly mineral exploration practices.

    Machine learning models in AI can identify patterns, anomalies, and potential mineral deposits that traditional methods often miss, providing precise forecasts and detection of mineral availability, reducing unnecessary exploratory drilling and preserving the environment.
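
    No specific vendor’s method is disclosed here, but one common pattern behind such claims is unsupervised anomaly detection over multi-element geochemical samples. The sketch below is a generic illustration of that idea using scikit-learn and synthetic data; the element list and thresholds are assumptions.

```python
# Illustrative sketch of anomaly detection on multi-element geochemical samples,
# one common way ML can flag potential mineralization; not any specific vendor's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: Cu, Ni, Co, Zn soil concentrations (ppm) for 2,000 sample sites (synthetic).
background = rng.lognormal(mean=3.0, sigma=0.4, size=(1990, 4))
anomalies = rng.lognormal(mean=4.5, sigma=0.3, size=(10, 4))   # a few enriched sites
samples = np.vstack([background, anomalies])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(samples)           # -1 = anomalous, 1 = background

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} sites flagged for follow-up, e.g. indices {flagged[:5]}")
```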

    AI plays a crucial role in integrating advanced technologies such as drones, robotics, and autonomous systems into mineral exploration, replacing traditional, labor-intensive methods by rapidly analyzing large datasets to locate mineral deposits accurately.

    Satellite imagery, processed by AI, plays a crucial role in improving mining operations and environmental management, providing detailed insights into site conditions, vegetation cover, and topography, essential for planning and managing mining activities.

    AI-driven solutions improve safety by reducing human errors and ensuring compliance with the highest safety standards, minimizing the environmental impact of exploration and aligning with global sustainability goals.

    AI’s impact on mining promises to transcend today’s applications, shaping paths we have yet to fully imagine. Imagine a world where AI not only enhances mineral extraction but also pioneers the creation of self-sustaining, closed-loop ecosystems within mining sites.

    Deep sea mining, emerging as a significant new frontier, stands to benefit immensely from AI technologies, optimizing the mapping of seabed minerals, automating submersible operations, and monitoring environmental impacts.

    AI acts as a force multiplier, enhancing human capabilities and enabling more informed decision-making based on data-driven insights, fostering collaboration and innovation. As the Kingdom of Saudi Arabia continues to explore the integration of AI in mineral exploration, it opens a promising new chapter not only for economic prosperity but also for upholding its commitment to environmental sustainability.

    Rare minerals occur in a wide variety of deposits across the Earth. Their demand has grown rapidly, but they occur in limited minable deposits. Conventional technology allows searching for rare minerals using geochemical exploration as the main method. X-ray fluorescence (XRF) is a very useful instrument for real-time qualitative and quantitative evaluation of rare minerals.

    Nevertheless, it is often challenging to predict the presence of minerals and mineral-forming locations due to the complex interactions between geological, chemical, and biological systems in nature.

    Scientists are actively seeking new technologies to identify mineral deposits more easily, as doing so can improve our understanding of Earth’s history and help meet industrial demands.

    Hunting for Valuable Minerals

    Mineralogist Shaunna Morrison and geoinformatics scientist Anirudh Prabhu have developed a machine learning model based on artificial intelligence (AI) that has the potential to identify specific mineral occurrences.

    In collaboration with their research colleagues, they utilized data from the Mineral Evolution Database to predict previously unknown mineral occurrences.

    The database contains information on 295,583 locations of 5,478 mineral compounds, and the model used patterns based on association rules, which are a result of Earth’s dynamic evolutionary history.
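
    The study’s model itself is not reproduced here, but the core idea of mineral association analysis, deriving rules of the form “where mineral A occurs, mineral B tends to occur” with support, confidence, and lift, can be shown with a tiny, self-contained example. The locality lists below are made up for illustration.

```python
# Tiny sketch of mineral association analysis: computing support, confidence and
# lift for co-occurring minerals across localities. The data below is made up.
from itertools import combinations
from collections import Counter

localities = [
    {"quartz", "monazite-(Ce)", "allanite-(Ce)"},
    {"quartz", "spodumene"},
    {"monazite-(Ce)", "allanite-(Ce)", "zircon"},
    {"quartz", "monazite-(Ce)", "zircon"},
    {"spodumene", "quartz", "allanite-(Ce)"},
]
n = len(localities)

single = Counter(m for loc in localities for m in loc)
pair = Counter(frozenset(p) for loc in localities for p in combinations(sorted(loc), 2))

for (a, b), count in ((tuple(k), v) for k, v in pair.items()):
    support = count / n
    confidence = count / single[a]          # P(b present | a present)
    lift = confidence / (single[b] / n)     # >1 means a and b co-occur more than chance
    if lift > 1:
        print(f"{a} -> {b}: support={support:.2f} confidence={confidence:.2f} lift={lift:.2f}")
```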

    To test the efficiency of their AI-based model, the researchers explored the Tecopa basin in the Mojave Desert in eastern California, known for its Mars-like geographic conditions.

    Following their exploration, the machine learning model successfully predicted the presence of important minerals such as rutherfordine, bayleyite, and zippeite, as well as deposits of critical rare earth elements like monazite-(Ce), allanite-(Ce), and spodumene.

    The study’s findings demonstrate the effectiveness of mineral association analysis as a predictive tool, which could benefit mineralogists, economic geologists, and planetary scientists. The researchers hope that this analysis will enhance our understanding of mineralization on Earth and in the broader Solar System.

    Exploring the Past Through Minerals

    According to the American Museum of Natural History, the Earth is home to 5,000 mineral species. Minerals not only serve as raw materials for industry, but they also provide the oldest surviving records of our Solar System’s formation and evolution.

    They serve as enduring evidence of geological events and ancient terrains. Understanding how minerals have changed over time can help experts unravel the history of our planet.

    The International Mineralogical Association (IMA) has established a standard for classifying minerals based on their composition and structure. Categorizing minerals by origin using the IMA’s system can provide valuable insights into Earth and other planets.

    The role of minerals in the scientific community goes beyond tracing Earth’s past; they also play a crucial part in current activities on our planet. The Earth’s interior dynamics are reflected in tectonic events such as volcanic eruptions and earthquakes. Chemically zoned minerals are essential for understanding these catastrophic events.

    Scientists Propose a New Approach for Locating Rare Earth Deposits

    A team of geologists and materials scientists from the University of Erlangen-Nuremberg in Erlangen, Germany, has developed a new method for identifying untapped rare earth deposits.

    The researchers suggest that despite the name “rare earth metals,” these materials are actually relatively evenly distributed worldwide. However, not all deposits are economically viable or easily extractable, leading them to propose a new technique for locating these deposits.

    The researchers explain their new technique for finding rare earth metals in an article titled “Cumulate olivine: A novel host for heavy rare earth element mineralization,” published in the journal Geology.

    Finding Rare Earth Metals in Igneous Rocks

    Researchers have analyzed rock samples from the Vergenoeg fluorite mine in South Africa, where they discovered fayalite crystals – an iron-rich member of the olivine mineral group – that settled out of granite-like magma as cumulates and potentially contain significant amounts of heavy rare earth elements. Fayalite is a reddish-brown to black mineral mainly used as a gemstone and in sandblasting processes.

    This mineral is found worldwide, primarily in igneous rocks resulting from volcanic activity and abyssal rocks formed deep in the crust.

    The researchers also explained that olivine, the mineral class to which fayalite belongs, and its rare earth element systematics are not well understood. Using atom probe tomography maps, researchers confirmed the highest concentrations of heavy rare earth elements in the crystal lattice of fayalite, with lithium traces acting as the main charge balancer in the chemical structure.

    Furthermore, the German team utilized laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) – a sophisticated analytical technique that employs micro-sampling to deliver precise elemental analysis of solid materials – to identify that the cumulate fayalite in the Paleoproterozoic Vergenoeg F-Fe-REE site in South Africa contains the highest recorded rare earth element (REE) contents, indicating a heavy rare earth element (HREE) enrichment at approximately 6000 times the chondritic values.
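
    For readers unfamiliar with the notation, “6000 times the chondritic values” refers to chondrite normalization: each measured rare earth concentration is divided by the concentration of the same element in a chondritic meteorite reference. The sketch below shows the arithmetic; the reference values are only approximate CI-chondrite figures and the sample values are hypothetical, not data from the Vergenoeg study.

```python
# Worked example of chondrite normalization: measured REE content divided by the
# chondritic reference for that element. Reference and sample values below are
# approximate / illustrative only, not data from the Vergenoeg study.
CHONDRITE_PPM = {"Dy": 0.254, "Er": 0.166, "Yb": 0.170, "Lu": 0.025}  # approx. CI chondrite

sample_ppm = {"Dy": 1500.0, "Er": 1000.0, "Yb": 1100.0, "Lu": 150.0}  # hypothetical analysis

for element, measured in sample_ppm.items():
    normalized = measured / CHONDRITE_PPM[element]
    print(f"{element}: {normalized:,.0f} x chondrite")
```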

    Dr. Reiner Klemd from the Geozentrum Nordbayern at the University of Erlangen-Nuremberg emphasized the significance of the discovery of fayalite as a new potential source for identifying new rare earth element deposits, particularly due to the increasing scarcity of heavy rare earth elements on the global market.

    Rare Earth Elements

    The elements known as rare earth elements or rare earth metals are part of a group of 17 heavy metals with similar structures. According to the American Geosciences Institute, this group includes the fifteen lanthanides on the periodic table, as well as yttrium and scandium.

    Additionally, the US Geological Survey explains that these rare earth elements are essential in various applications, including high-tech consumer electronics, defense, navigation and communication systems, and more.

    It was also reported that in 1993, 38 percent of the world’s rare earth element supplies came from China, 33 percent from the United States, 12 percent from Australia, and five percent each from India and Malaysia. However, by 2011, China had already accounted for 97 percent of the world’s rare earth element supplies.

    For a technology to make a significant impact on the mining industry, it must be capable of greatly enhancing the speed of execution and process efficiency, from exploration all the way through production and reclamation.

    Deloitte states that Artificial Intelligence (AI) is a developing suite of advanced and practical technologies that empowers mining companies to evolve into insight-driven enterprises that utilize data to derive key advantages.

    AI-powered systems utilize various algorithms to arrange and comprehend vast amounts of data, with the aim of assisting miners in making optimal decisions. AI’s immediate application in mining is particularly useful during the prospecting phase, especially for uncovering deposits.

    The traditional method of discovering the world’s next copper or gold deposit relies more on art than science, revolving around outdated technologies that provide incomplete or conflicting data. This generates inefficiencies that contradict the principles of mining and causes unnecessary disruptions for the global supply chain.

    AI systems, however, can ingest and analyze diverse data to help miners gain a better understanding of the environment and the terrain, bringing them closer to potential discoveries.

    AI technology can identify the precise locations of hidden mineral deposits, particularly in underexplored regions of the world, in a fraction of the time and with significantly greater accuracy.

    About VerAI

    VerAI Discoveries is dedicated to accelerating the global zero-carbon transformation by uncovering the minerals essential for our sustainable future.

    VerAI employs an innovative AI targeting platform that detects concealed mineral deposits in covered terrain, while continuously enhancing the probabilities of success and reducing the time to discovery.

    Headquartered in Boston and operating in both North and South America, VerAI generates multiple high-probability target portfolios in select jurisdictions and collaborates with leading exploration companies to create long-term value by discovering new mineral deposits.

    Its board of directors, advisors, and technical team possess decades of experience in the mineral exploration and AI sectors.

    VerAI is supported by two venture capital funds: Chrysalix Venture Capital, which includes strategic investors such as Teck Resources, South32, Caterpillar, and Shell, and specializes in mining transformation innovation; and Blumberg Capital, experienced in applying AI solutions to disrupt various traditional industries.

    VerAI Methodology

    VerAI utilizes high-resolution geophysics data as the primary data source for generating its targets. The data covers an area of approximately 170 km north-south by 60 km east-west, mainly situated over the Paleocene (or Central) mineral belt in northern Chile, but also partially encompassing portions of the Coastal mineral belt.

    The study block extends from just south of the multi-million ounce El Peñon gold-silver mining district (Yamana Gold) in the north, to the Franke copper mine (KGHM) in the south. It also includes several historic mines and exploration projects, as well as the operating Guanaco and Amancaya mines (Austral Gold).

    The AI targeting process is multi-faceted and iterative, enhancing the confidence that the generated targets are narrowed down to the very best matches to be staked and claimed in northern Chile.

    The targets are mostly concealed by post-mineral, gravel-filled basins, or “pampas” where the underlying geology of interest is largely not visible or available for geologic mapping, resulting in targets that have eluded previous exploration campaigns.

    Conclusion

    After years of insufficient investment, we are finally beginning to confront the reality of having inadequate copper for our future needs.

    It is estimated that at least 20Mt of copper supply must be developed in the next two decades, equivalent to one large million-tonne mine (i.e., an Escondida) every year from now on.

    The solution, as straightforward as it may seem, is to have more copper projects that can be developed into producing mines.

    The next major copper mine(s) are likely to be discovered in Chile, the world’s top producer by far, which is why we have been monitoring the progress of Pampa Metals closely over the past year.

    With one of the largest prospective property packages along the mineral belts of northern Chile, the company stands to take advantage of some areas that major miners may have overlooked while conducting brownfield exploration on the peripheries of existing mines.

    Drilling conducted so far has already indicated signs of a fertile porphyry system, which will undoubtedly be followed up by more drilling and positive results.

    And the recent agreement with VerAI, supported by a technology that has demonstrated success in locating mineral deposits, could expand Pampa’s dominant land position even further.

  • AI Allows Examination of More Than 31 Million New Materials

    It is now feasible to significantly expand the exploration of new materials.

    It can require extended periods of meticulous effort — consulting information, conducting calculations, and performing precise laboratory tests — before scientists can introduce a new material with specific properties, whether it is for constructing an improved device or an enhanced battery. However, this process has been simplified thanks to artificial intelligence advancements.

    A new AI algorithm named M3GNet, developed by researchers at the Jacobs School of Engineering at the University of California San Diego, is capable of predicting the structure and dynamic properties of any material, whether existing or newly conceptualized.

    In fact, M3GNet was utilized to create a database containing over 31 million innovative materials that have not yet been synthesized, with their properties being forecasted by the machine learning algorithm. Moreover, this process occurs almost instantaneously.

    Millions of potential materials

    M3GNet can search for virtually any assigned material, such as metal, concrete, biological material, or any other type. To predict a material’s properties, the computer program needs to understand the material’s structure, which is based on the arrangement of its atoms.

    In many ways, predicting new materials is similar to predicting protein structure — an area in which AlphaFold AI, developed by Google DeepMind, has excelled. Earlier this year, DeepMind announced the successful decoding of the structures of nearly all proteins in scientists’ catalogs, totaling over 200 million proteins.

    Proteins, as the fundamental building blocks of life, undertake various functions within cells, from transmitting regulatory signals to protecting the body from bacteria and viruses. The accurate prediction of proteins’ 3D structures from their amino-acid sequences is therefore immensely beneficial to life sciences and medicine, and represents a revolutionary achievement.

    Similar to how biologists previously struggled to decode only a few proteins over the course of a year due to inherent complexities, materials scientists can now invent and test novel materials much faster and more cost-effectively than ever before. These new materials and compounds can subsequently be integrated into batteries, drugs, and semiconductors.

    “We need to know the structure of a material to predict its properties, similar to proteins,” explained Shyue Ping Ong, a nanoengineering professor at UC San Diego. “What we need is an AlphaFold for materials.”

    Ong and his colleagues adopted a similar approach to AlphaFold, combining graph neural networks with many-body interactions to create a deep learning AI capable of scanning and generating practical combinations using all the elements of the periodic table. The model was trained using a vast database of thousands of materials, complete with data on energies, forces, and stresses for each material.

    As a result, M3GNet analyzed numerous potential interatomic combinations to predict 31 million materials, over a million of which are expected to be stable. Additionally, the AI can conduct dynamic and complex simulations to further validate property predictions.

    “For instance, we are often interested in how fast lithium ions diffuse in a lithium-ion battery electrode or electrolyte. The faster the diffusion, the more quickly you can charge or discharge a battery,” Ong stated. “We have demonstrated that the M3GNet IAP can accurately predict the lithium conductivity of a material. We firmly believe that the M3GNet architecture is a transformative tool that significantly enhances our ability to explore new material chemistries and structures.”

    The Python code for M3GNet has been made available as open-source on Github for those interested. There are already plans to integrate this powerful predictive tool into commercial materials simulation software.
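
    Based on the project’s public documentation, a basic usage sketch looks roughly like the following: build a crystal structure with pymatgen, then relax it with the pretrained M3GNet potential. The exact class and dictionary key names should be checked against the repository’s README and are treated here as assumptions.

```python
# Rough usage sketch based on the open-source m3gnet package's documented interface;
# the exact class and key names are assumptions to verify against the project's README.
from pymatgen.core import Lattice, Structure
from m3gnet.models import Relaxer

# Build a simple bcc Mo cell with pymatgen, then relax it with the pretrained potential.
structure = Structure(
    Lattice.cubic(3.3),
    ["Mo", "Mo"],
    [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]],
)

relaxer = Relaxer()               # loads the pretrained universal potential
result = relaxer.relax(structure)

print(result["final_structure"])  # relaxed geometry predicted by the model
```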

    Google AI discovers 2.2 million new materials for various technologies

    DeepMind has propelled researchers centuries ahead of the traditional pace of materials discovery methods.

    Inorganic crystals play a crucial role in modern technologies. Their highly-ordered atomic structures provide them with unique chemical, electronic, magnetic, or optical properties that are utilized in a wide range of applications, from batteries to solar panels, microchips to superconductors.

    Creating innovative inorganic crystals in a laboratory — whether to enhance an existing technology or to power a new one — is theoretically simple. A researcher sets up the conditions, conducts the procedure, and learns from failures to adjust conditions for the next attempt. This process is repeated until a new, stable material is obtained.

    However, in practice, the process is exceedingly time-consuming. Traditional methods rely on trial-and-error guesswork that either modifies a known crystalline structure or makes speculative attempts. This process can be costly, time-intensive, and, if unsuccessful, may leave researchers with limited insights into the reasons for failure.

    The Materials Project, an open-access database established at the Lawrence Berkeley National Laboratory, has revealed that about 20,000 inorganic crystals were discovered through human experimentation. Over the past ten years, researchers have utilized computational techniques to increase that number to 48,000.

    Google’s AI research lab, DeepMind, has unveiled the results of a new deep-learning AI designed to forecast the potential structures of previously unidentified inorganic crystals. And the outcomes are far ahead of schedule.

    DeepMind’s latest AI, named the “Graph Networks for Materials Exploration” (GNoME), is a graph neural network that identifies connections between data points through graphs.

    GNoME was trained using data from the Materials Project and began creating theoretical crystal structures based on the 48,000 previously found inorganic crystals. It made predictions using two pipelines.

    The first pipeline, known as the “structural pipeline,” relied on known crystal structures for its predictions. The second pipeline, known as the “compositional pipeline,” took a more randomized approach to explore potential molecule combinations.

    The AI then validated its predictions using “density functional theory,” a method used in chemistry and physics to calculate atomic structures. Regardless of success or failure, this process generated more training data for the AI to learn from, informing future pipeline predictions.

    In essence, the AI’s pipelines and subsequent learning mimic the human experimental approach mentioned earlier, but with the advantage of the AI’s processing power for faster calculations.
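
    That generate, verify, and retrain loop can be summarized, purely schematically, in the sketch below. Every function is an illustrative placeholder, including the stubbed “DFT check”; none of it is DeepMind’s actual GNoME code.

```python
# Schematic sketch of the generate -> verify -> retrain loop described above.
# All functions are illustrative placeholders, not DeepMind's GNoME code; the
# "DFT check" is stubbed out with a random decision.
import random

def structural_pipeline(known: list[str]) -> list[str]:
    """Propose candidates by modifying known crystal structures (stub)."""
    return [f"{k}-substituted-{i}" for k in known[:2] for i in range(2)]

def compositional_pipeline() -> list[str]:
    """Propose candidates by sampling element combinations more freely (stub)."""
    return [f"random-composition-{i}" for i in range(3)]

def dft_stability_check(candidate: str) -> bool:
    """Stand-in for a density functional theory stability calculation."""
    return random.random() < 0.3

known_stable = ["NaCl-type", "spinel-type"]
training_data: list[tuple[str, bool]] = []

for generation in range(3):
    candidates = structural_pipeline(known_stable) + compositional_pipeline()
    for c in candidates:
        stable = dft_stability_check(c)
        training_data.append((c, stable))   # every result becomes new training data
        if stable:
            known_stable.append(c)          # stable finds seed the next generation
    print(f"generation {generation}: {len(known_stable)} structures deemed stable so far")
```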

    The researchers emphasize that, unlike language or vision, in materials science, data can continue to be generated, leading to the discovery of stable crystals, which can be used to further expand the model.

    Overall, GNoME forecasted 2.2 million new materials, with around 380,000 considered the most stable and potential candidates for synthesis.

    The potential inorganic crystals include layered, graphene-like compounds that could aid in the development of advanced superconductors, and lithium-ion conductors that may enhance battery performance.

    The authors of the study, Google DeepMind researchers Amil Merchant and Ekin Dogus Cubuk, stated that “GNoME’s discovery of 2.2 million materials would be equivalent to about 800 years’ worth of knowledge and demonstrate an unprecedented scale and level of accuracy in predictions.”

    The research findings were published in the peer-reviewed journal Nature, and the 380,000 most stable materials will be freely available to researchers through the Materials Project, thanks to DeepMind’s contribution.

    GNoME’s predicted materials are theoretically stable, but only 736 of them have been experimentally verified to date. This suggests that the model’s predictions are accurate to some extent but also highlights the long road ahead for experimental fabrication, testing, and application of all 380,000 materials.

    To bridge this gap, the Lawrence Berkeley National Laboratory assigned their new A-Lab, an experimental lab that combines AI and robotics for fully autonomous research, to synthesize 58 of the predicted materials.

    A-Lab, a closed-loop system, can make decisions without human input and has the capability to handle 50 to 100 times as many samples per day as a typical human researcher.

    Yan Zeng, a staff scientist at A-Lab, noted that “We’ve adapted to a research environment, where we never know the outcome until the material is produced. The whole setup is adaptive, so it can handle the changing research environment as opposed to always doing the same thing.”

    During a 17-day run, A-Lab successfully synthesized 41 of the 58 target materials, at a rate of more than two materials per day and with a success rate of 71%. The LBNL researchers published their findings in another Nature paper.

    The researchers are investigating why the remaining 17 inorganic crystals did not materialize. In some cases, this may be attributed to inaccuracies in GNoME’s predictions.

    In some cases, expanding A-Lab’s decision-making and active-learning algorithms might produce more favorable outcomes. Two instances involved successful synthesis after human intervention was attempted.

    As a result, GNoME has provided A-Labs and human-operated research facilities worldwide with ample material to work with in the foreseeable future.

    “My goal with the Materials Project was not only to make the data I generated freely available to expedite materials design globally, but also to educate the world on the capabilities of computations.

    They can efficiently and rapidly explore vast spaces for new compounds and properties, surpassing the capabilities of experiments alone,” stated Kristin Persson, founder and director of the Materials Project.

    She further stated, “In order to tackle global environmental and climate challenges, we need to develop new materials. Through materials innovation, we could potentially create recyclable plastics, utilize waste energy, manufacture better batteries, and produce more durable and affordable solar panels, among other things.”

    In November, DeepMind, Google’s AI division, released a press statement titled “Millions of New Materials Discovered with Deep Learning.” However, researchers who analyzed a subset of the discoveries reported that they had not yet come across any strikingly novel compounds in that subset.

    “AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies,” Google announced regarding the discovery, stating that this was “equivalent to nearly 800 years’ worth of knowledge,” many of the discoveries “defied previous human chemical intuition,” and it represented “a tenfold increase in stable materials known to humanity.” The findings were published in Nature and garnered widespread attention in the media as a testament to the tremendous potential of AI in science.

    Another paper, simultaneously released by researchers at Lawrence Berkeley National Laboratory “in collaboration with Google DeepMind, demonstrates how our AI predictions can be utilized for autonomous material synthesis,” Google reported.

    In this experiment, researchers established an “autonomous laboratory” (A-Lab) that utilized “computations, historical data from the literature, machine learning, and active learning to plan and interpret the outcomes of experiments conducted using robotics.”

    Essentially, the researchers employed AI and robots to eliminate human involvement in the laboratory, and after 17 days, they had discovered and synthesized new materials, which they asserted “demonstrates the effectiveness of artificial intelligence-driven platforms for autonomous materials discovery.”

    However, in the past month, two external groups of researchers who analyzed the DeepMind and Berkeley papers have published their own analyses, which at the very least suggest that this specific research is being overhyped.

    All the materials science experts I spoke to emphasized the potential of AI in discovering new types of materials. However, they contend that Google and its deep learning techniques have not made a groundbreaking advancement in the field of materials science.

    In a perspective paper published in Chemistry of Materials this week, Anthony Cheetham and Ram Seshadri from the University of California, Santa Barbara selected a random sample of the 380,000 proposed structures released by DeepMind and stated that none of them met a three-part test to determine whether the proposed material is “credible,” “useful,” and “novel.”

    They argue that what DeepMind discovered are “crystalline inorganic compounds and should be described as such, rather than using the more general term ‘material,’” which they believe should be reserved for substances that “demonstrate some utility.”

    In their analysis, they wrote, “We have not yet come across any surprisingly novel compounds in the GNoME and Stable Structure listings, although we anticipated that there must be some among the 384,870 compositions.

    We also note that, while many of the new compositions are minor adaptations of known materials, the computational approach produces reliable overall compositions, which gives us confidence that the underlying approach is sound.”

    In a phone interview, Cheetham informed me, “The Google paper falls short in terms of offering a useful, practical contribution to experimental materials scientists.” Seshadri stated, “We believe that Google has missed the mark with this.”

    “If I were seeking a new material to perform a specific function, I wouldn’t sift through over 2 million proposed compositions as suggested by Google,” Cheetham mentioned. “I don’t think that’s the most effective approach.

    I think the general methodology probably works quite well, but it needs to be much more targeted towards specific needs, as none of us have enough time in our lives to evaluate 2.2 million possibilities and determine their potential usefulness.”

    We dedicated a significant amount of time to examining a small portion of the proposals, and we discovered that not only were they lacking in functionality, but most of them, although potentially credible, were not particularly innovative as they were essentially variations of existing concepts.

    According to a statement from Google DeepMind, they stand by all the claims made in the GNoME paper.

    The GNoME research by Google DeepMind introduces a significantly larger number of potential materials than previously known to science, and several of the materials predicted have already been synthesized independently by scientists worldwide.

    The Materials Project, the open-access material property database, has acknowledged Google’s GNoME database as top-tier in comparison to other machine learning models. Google stated that some of the criticisms in the Chemistry of Materials analysis, such as the fact that many of the new materials have known structures but use different elements, were intentional design choices by DeepMind.

    The Berkeley paper asserted that an “autonomous laboratory” (referred to as “A-Lab”) utilized structures proposed by another project called the Materials Project and employed a robot to synthesize them without human intervention, yielding 43 “novel compounds.” While a DeepMind researcher is listed as an author on this paper, Google did not actively conduct the experiment.

    Upon analysis, human researchers identified several issues with this finding. The authors, including Leslie Schoop of Princeton University and Robert Palgrave of University College London, pointed out four common shortcomings in the analysis and concluded that no new materials had been discovered in that work.

    Each of the four researchers I spoke to expressed that while they believe an AI-guided process for discovering new materials holds promise, the specific papers they evaluated were not necessarily groundbreaking and should not be portrayed as such.

    “In the DeepMind paper, there are numerous instances of predicted materials that are clearly nonsensical. Not only to experts in the field, but even high school students could identify that compounds like H2O11 (a DeepMind prediction) do not seem plausible,” Palgrave conveyed to me.

    “There are many other examples of clearly incorrect compounds, and Cheetham/Seshadri provide a more diplomatic breakdown of this. To me, it seems that basic quality control has been neglected—for the machine learning to output such compounds as predictions is concerning and indicates that something has gone wrong.”

    AI has been employed to inundate the internet with vast amounts of content that is challenging for humans to sift through, making it difficult to identify high-quality human-generated work.

    While not a perfect comparison, the researchers I consulted with suggested that a similar scenario could unfold in materials science: Having extensive databases of potential structures does not necessarily facilitate the creation of something with a positive societal impact.

    “There is some value in knowing millions of materials (if accurate), but how does one navigate this space to find useful materials to create?” Palgrave questioned. “It is better to have knowledge of a few new compounds with exceptionally useful properties than a million where you have no idea which ones are beneficial.”

    Schoop pointed out that there are already “50k unique crystalline inorganic compounds, but we only understand the properties of a fraction of these. So, it is unclear why we need millions more compounds if we have not yet comprehended all the ones we do know. It might be more beneficial to predict material properties than simply new materials.”

    While Google DeepMind maintains its stand on the paper and disputes these interpretations, it is fair to say that there is now considerable debate regarding the use of AI and machine learning for discovering new materials, the context, testing, and implementation of these discoveries, and whether inundating the world with massive databases of proposed structures will truly lead to tangible breakthroughs for society, or simply generate a lot of noise.

    “We do not believe that there is a fundamental issue with AI,” Seshadri remarked. “We think it is a matter of how it is utilized. We are not traditionalists who believe that these techniques have no place in our field.”

    AI tools and advanced robotics are accelerating the search for urgently needed new materials.

    In the latest development, researchers at Google DeepMind reported that a new AI model has identified over 2.2 million hypothetical materials.

    Out of the millions of structures predicted by the AI, 381,000 were identified as stable new materials, making them prime candidates for scientists to fabricate and test in a laboratory.
    Current Situation: The advancement of novel materials is crucial for the development of the next generations of the electrical grid, computing, and other technologies such as batteries, solar cells, and semiconductor chips.

    This has led to significant investments in materials science and engineering by countries worldwide, including the US, China, and India. According to a report released this week from Georgetown’s Center for Security and Emerging Technology, AI and materials science are the top recipients of US federal grants to industry over the past six years.

    China currently leads the field of materials engineering in several key areas, including publications, employment, and degrees awarded in the field.
    Background: Traditionally, new materials were discovered by modifying the molecular structure of existing stable materials to create a novel material.

    This method predicted approximately 48,000 stable inorganic crystal structures, with more than half of them being discovered in the past decade. However, this process is expensive, time-consuming, and less likely to yield radically different structures as it builds on known materials.

    DeepMind accomplished this feat by utilizing existing data from the Materials Project at Lawrence Berkeley National Laboratory (LBNL) and other databases to train the AI, which then expanded the dataset as it learned.

    How it Works: DeepMind’s tool, known as Graph Networks for Materials Exploration (GNoME), employs two deep learning models that represent the atoms and bonds in a molecule as a graph.
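    As a simplified illustration of that graph representation (GNoME's real featurization, with lattice information and richer node and edge features, is far more elaborate), a crystal can be encoded as atom nodes plus bonded-pair edges. The structure and distances below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CrystalGraph:
    """Toy graph encoding of a crystal: nodes are atoms, edges are bonds."""
    elements: list                               # one label per atom
    edges: list = field(default_factory=list)    # (atom_i, atom_j, distance_in_angstroms)

    def add_bond(self, i, j, distance):
        self.edges.append((i, j, distance))

# Hypothetical perovskite-like fragment with made-up interatomic distances.
graph = CrystalGraph(elements=["Ba", "Nb", "O", "O", "O"])
graph.add_bond(1, 2, 1.98)   # Nb-O
graph.add_bond(1, 3, 2.01)   # Nb-O
graph.add_bond(0, 4, 2.80)   # Ba-O
print(f"{len(graph.elements)} atoms, {len(graph.edges)} bonds")
```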

    One of the models starts with known crystal structures and substitutes elements to create candidate structures. The other model, aiming to go beyond known materials and generate more diverse materials, uses only the chemical formula or composition of a candidate material to predict its stability.

    The pool of candidates is filtered, and the stability of each structure is determined through energy measurements.
    The most promising structures undergo evaluation with quantum mechanics simulations, and the resulting data is fed back into the model in a training loop known as active learning.
    The Intrigue: GNoME seemed to grasp certain aspects of quantum mechanics and made predictions about structures it had not encountered.

    For instance, despite being trained on crystals consisting of up to four elements, the AI system was able to discover five- and six-element materials, which have been challenging for human scientists, as stated by Ekin Dogus Cubuk, who leads the materials discovery team at DeepMind, during a press briefing.

    The ability of an AI to generalize beyond its training data is significant. Keith Butler, a professor of computational chemistry materials at University College London, who was not involved in the research, emphasized the importance of exploring uncharted territories if these models are to be used for discovery.

    What’s Next: Predicting the stability of a potential structure does not guarantee its manufacturability.

    In another paper published this week, researchers at LBNL shared the results from a lab equipped with AI-guided robotics for autonomous crystal synthesis. The material synthesis recipes were suggested by AI models that utilized natural language processing to analyze existing scientific papers and were then refined as the AI system learned from its mistakes.

    Over a span of 17 days, operating 24/7, the A-Lab successfully synthesized 41 out of 58 materials they attempted to create, averaging more than two materials per day. However, some routes of synthesis may be challenging or costly to automate due to involving intricate glassware, movement across a lab, or other complex steps, as pointed out by Butler.

    The A-Lab’s failures in synthesizing 17 materials were attributed to factors such as the requirement for higher heating temperatures or the need for better material grinding, which are standard procedures in a lab but fall outside the current capabilities of AI.

    Ultimately, a material’s performance in conducting heat, being electronically insulating, or fulfilling other functions is essential. However, the synthesis and testing of a material’s properties are costly and time-consuming aspects of the process of developing a new material. Moreover, similar to many other AI systems, the models do not provide explanations for their decision-making process, as highlighted by Butler.

    Butler also emphasized the impact of competitions to predict new structures and the influence of large language models (LLMs) such as ChatGPT and other generative AI in the field.

    The University of Liverpool’s Materials Innovation Factory, the Acceleration Consortium at the University of Toronto, and other organizations are working on developing self-driving laboratories.
    According to Olexandr Isayev, a chemistry professor at Carnegie Mellon University involved in their automated Cloud Lab, certain scientific experiments can be effectively automated with machine learning and AI, although not all of them.

    He also mentioned that the future progress in the field of science will be driven by the combination of software and hardware.

    The expansion of this open-access resource is crucial for scientists who are striving to create new materials for upcoming technologies. With the help of supercomputers and advanced simulations, researchers can avoid the time-consuming and often ineffective trial-and-error process that was previously necessary.

    The Materials Project, an open-access database established at the Lawrence Berkeley National Laboratory (Berkeley Lab) of the Department of Energy in 2011, calculates the properties of both existing and predicted materials.

    Researchers can concentrate on the development of promising materials for future technologies, such as lighter alloys to enhance car fuel efficiency, more effective solar cells for renewable energy, or faster transistors for the next generation of computers.

    Google’s artificial intelligence laboratory, DeepMind, has contributed nearly 400,000 new compounds to the Materials Project, expanding the available information for researchers. This dataset includes details about the atomic arrangement (crystal structure) and the stability (formation energy) of materials.

    The Materials Project has the capability to visually represent the atomic structure of various materials. For example, one of the new materials, Ba₆Nb₇O₂₁, was computed by GNoME and consists of barium (blue), niobium (white), and oxygen (green). Credit: Materials Project/Berkeley Lab.

    Kristin Persson, the founder and director of the Materials Project at Berkeley Lab and a professor at UC Berkeley, stated, “We need to develop new materials to address global environmental and climate challenges. Through innovation in materials, we have the potential to create recyclable plastics, utilize waste energy, produce better batteries, and manufacture more cost-effective solar panels with increased longevity, among other possibilities.”

    The Role of GNoME in Material Discovery

    Google DeepMind created a deep learning tool called Graph Networks for Materials Exploration, or GNoME, to generate new data. GNoME was trained using workflows and data that had been developed over a decade by the Materials Project, and the GNoME algorithm was refined through active learning.

    Ultimately, researchers using GNoME generated 2.2 million crystal structures, including 380,000 that are being added to the Materials Project and are predicted to be stable, thus potentially valuable for future technologies. These recent findings from Google DeepMind were published in the journal Nature.

    Some of the calculations from GNoME were utilized in conjunction with data from the Materials Project to test A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in creating new materials. The first paper from A-Lab, published in Nature, demonstrated that the autonomous lab can rapidly discover new materials with minimal human involvement.

    During 17 days of independent operation, A-Lab successfully produced 41 new compounds out of 58 attempts – a rate of over two new materials per day. In comparison, it can take a human researcher months of trial and error to create a single new material, if they are able to achieve it at all.

    To create the novel compounds forecasted by the Materials Project, A-Lab’s AI generated new formulas by analyzing scientific papers and using active learning to make adjustments. Data from the Materials Project and GNoME were utilized to assess the predicted stability of the materials.

    Gerd Ceder, the principal investigator for A-Lab and a scientist at Berkeley Lab and UC Berkeley, stated, “We achieved an impressive 71% success rate, and we already have several methods to enhance it. We have demonstrated that combining theory and data with automation yields remarkable results. We can create and test materials more rapidly than ever before, and expanding the data points in the Materials Project allows us to make even more informed decisions.”

    The Impact and Future of the Materials Project

    The Materials Project stands as the most widely accessed open repository of information on inorganic materials globally. The database contains millions of properties on hundreds of thousands of structures and molecules, with data primarily processed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC).

    Over 400,000 individuals are registered as users of the site, and on average, over four papers citing the Materials Project are published each day. The contribution from Google DeepMind represents the most significant addition of structure-stability data from a group since the inception of the Materials Project.

    Ekin Dogus Cubuk, lead of Google DeepMind’s Materials Discovery team, expressed, “We anticipate that the GNoME project will propel research into inorganic crystals forward. External researchers have already verified 736 of GNoME’s new materials through concurrent, independent physical experiments, demonstrating that our model’s discoveries can be realized in laboratories.”

    This one-minute time-lapse illustrates how individuals across the globe utilize the Materials Project over a four-hour period. The data dashboard showcases a rolling one-hour window of worldwide Materials Project activity, encompassing the number of requests, the users’ country, and the most frequently queried material properties. Credit: Patrick Huck/Berkeley Lab

    The Materials Project is currently processing the compounds from Google DeepMind and integrating them into the online database. The new data will be freely accessible to researchers and will also contribute to projects such as A-Lab that collaborate with the Materials Project.

    “I’m thrilled that people are utilizing the work we’ve conducted to generate an unprecedented amount of materials information,” said Persson, who also serves as the director of Berkeley Lab’s Molecular Foundry. “This is precisely what I aimed to achieve with the Materials Project: not only to make the data I produced freely available to expedite materials design worldwide, but also to educate the world on the capabilities of computations. They can explore vast spaces for new compounds and properties more efficiently and rapidly than experiments alone.”

    By pursuing promising leads from data in the Materials Project over the past decade, researchers have experimentally confirmed valuable properties in new materials across various domains. Some exhibit potential for use:

    – in carbon capture (extracting carbon dioxide from the atmosphere)
    – as photocatalysts (materials that accelerate chemical reactions in response to light and could be employed to break down pollutants or generate hydrogen)
    – as thermoelectrics (materials that could aid in harnessing waste heat and converting it into electrical power)
    – as transparent conductors (which could be beneficial in solar cells, touch screens, or LEDs)

    However, identifying these potential materials is just one of many stages in addressing some of humanity’s significant technological challenges.

    “Developing a material is not for the faint-hearted,” Persson remarked. “It takes a long time to transition a material from computation to commercialization. It must possess the right properties, function within devices, scale effectively, and offer the appropriate cost efficiency and performance. The objective of the Materials Project and facilities like A-Lab is to leverage data, enable data-driven exploration, and ultimately provide companies with more viable opportunities for success.”

    The researchers at Google DeepMind claim they have increased the number of stable materials known by ten times. Some of these materials could have applications in various fields such as batteries and superconductors, if they can be successfully produced outside the laboratory.

    The robotic chefs were busy in a crowded room filled with equipment, each one performing a specific task. One arm selected and mixed ingredients, another moved back and forth on a fixed track tending to the ovens, and a third carefully plated the dishes.

    Gerbrand Ceder, a materials scientist at Lawrence Berkeley National Lab and UC Berkeley, nodded in approval as a robotic arm delicately closed an empty plastic vial—a task that he particularly enjoyed observing. “These robots can work tirelessly all night,” Ceder remarked, giving two of his graduate students a wry smile.

    Equipped with materials like nickel oxide and lithium carbonate, the A-Lab facility is designed to create new and intriguing materials, particularly those that could be valuable for future battery designs. The outcomes of the experiments can be unpredictable.

    Even a human scientist often makes mistakes when trying out a new recipe for the first time. Similarly, the robots sometimes produce a fine powder, while other times the result is a melted sticky mess or everything evaporates, leaving nothing behind. “At that point, humans would have to decide what to do next,” Ceder explained.

    The robots are programmed to do the same. They analyze the results of their experiments, adjust the recipe, and try again, and again, and again. “You give them some recipes in the morning, and when you come back home, you might find a beautifully made soufflé,” said materials scientist Kristin Persson, Ceder’s close collaborator at LBNL (and also his spouse). Or you might return to find a burnt mess. “But at least tomorrow, they will make a much better soufflé.”

    Recently, the variety of materials available for Ceder’s robots has expanded significantly, thanks to an AI program developed by Google DeepMind. This software, called GNoME, was trained using data from the Materials Project, a freely accessible database of 150,000 known materials overseen by Persson.

    Using this information, the AI system generated designs for 2.2 million new crystals, out of which 380,000 were predicted to be stable—unlikely to decompose or explode, making them the most feasible candidates for synthesis in a lab. This has expanded the range of known stable materials almost tenfold. In a paper published in Nature, the authors state that the next solid-state electrolyte, solar cell materials, or high-temperature superconductor could potentially be found within this expanded database.

    The process of discovering these valuable materials starts with actually synthesizing them, which emphasizes the need to work quickly and through the night. In a recent series of experiments at LBNL, also published in Nature, Ceder’s autonomous lab successfully created 41 of the theorized materials over 17 days, helping to validate both the AI model and the lab’s robotic techniques.

    When determining if a material can be synthesized, whether by human hands or robot arms, one of the initial questions to ask is whether it is stable. Typically, this means that its atoms are arranged in the lowest possible energy state. Otherwise, the crystal will naturally want to transform into something else.
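    A simple worked example of that energy criterion, using made-up formation energies (in eV per atom) purely for illustration: a candidate counts as stable only if no combination of competing phases with the same overall composition sits lower in energy.

```python
# Hypothetical formation energies per atom, in eV/atom (illustrative numbers only).
candidate = {"formula": "ABO3", "atoms": 5, "e_form_per_atom": -2.60}
competing_phases = [
    {"formula": "AO",  "atoms": 2, "e_form_per_atom": -2.40},
    {"formula": "BO2", "atoms": 3, "e_form_per_atom": -2.55},
]

# Energy of the candidate vs. decomposing into AO + BO2 (same overall composition).
candidate_energy = candidate["atoms"] * candidate["e_form_per_atom"]
decomposed_energy = sum(p["atoms"] * p["e_form_per_atom"] for p in competing_phases)

decomposition_energy = candidate_energy - decomposed_energy
print(f"Candidate: {candidate_energy:.2f} eV, decomposition products: {decomposed_energy:.2f} eV")
if decomposition_energy < 0:
    print("Candidate sits below the competing phases: predicted stable")
else:
    print("Candidate is higher in energy: predicted to decompose")
```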

    For thousands of years, people have been steadily adding to the list of stable materials, initially by observing those found in nature or discovering them through basic chemical intuition or accidents. More recently, candidate materials have been designed using computers.

    According to Persson, the issue lies in bias: Over time, the collective knowledge has come to favor certain familiar structures and elements. Materials scientists refer to this as the “Edison effect,” based on Thomas Edison’s rapid trial-and-error approach to finding a lightbulb filament, testing thousands of types of carbon before settling on a variety derived from bamboo.

    It took another decade for a Hungarian group to develop tungsten. “He was limited by his knowledge,” Persson explained. “He was biased, he was convinced.”

    The approach by DeepMind aims to overcome these biases. The team started with 69,000 materials from Persson’s database, which is freely accessible and supported by the US Department of Energy. This was a good starting point, as the database contains detailed energetic information necessary to understand why certain materials are stable and others are not.

    However, this was not enough data to address what Google DeepMind researcher Ekin Dogus Cubuk calls a “philosophical contradiction” between machine learning and empirical science. Similar to Edison, AI struggles to generate genuinely new ideas beyond what it has already encountered.

    “In physics, you never want to learn something you already know,” Cubuk stated. “You almost always want to make generalizations outside of the known domain”—whether that involves discovering a different class of battery material or a new theory of superconductivity.

    GNoME uses active learning, where a graph neural network (GNN) ingests the database to identify patterns in stable structures and minimize atomic bond energy in new structures across the periodic table. This process generates numerous potentially stable candidates.
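    The following is a minimal message-passing sketch of that idea, using only NumPy. The feature sizes, update rule, and readout are illustrative and are not GNoME's architecture; a real model would learn its weights rather than drawing them at random.

```python
import numpy as np

def message_passing_energy(node_features, edges, n_rounds=3, seed=0):
    """Toy GNN readout: nodes exchange messages along bonds, then their
    features are pooled and mapped to a scalar predicted energy."""
    rng = np.random.default_rng(seed)
    dim = node_features.shape[1]
    w_msg = rng.normal(scale=0.1, size=(dim, dim))   # shared message weights
    w_out = rng.normal(scale=0.1, size=dim)          # readout weights

    h = node_features.copy()
    for _ in range(n_rounds):
        messages = np.zeros_like(h)
        for i, j in edges:                           # bonds pass information both ways
            messages[i] += h[j] @ w_msg
            messages[j] += h[i] @ w_msg
        h = np.tanh(h + messages)                    # update node states

    return float(np.mean(h, axis=0) @ w_out)         # pooled graph-level energy

# Hypothetical 3-atom fragment with 4-dimensional element embeddings.
features = np.random.default_rng(1).normal(size=(3, 4))
bonds = [(0, 1), (1, 2)]
print("predicted energy (arbitrary units):", message_passing_energy(features, bonds))
```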

    These candidates are then verified and adjusted using density-functional theory (DFT), a quantum mechanics technique. The refined results are incorporated back into the training data, and the process is repeated.

    Through multiple iterations, the approach was able to produce more complex structures than those in the original Materials Project data set, including some composed of five or six unique elements, surpassing the four-element cap of the training data.

    However, DFT is only a theoretical validation, and the next step involves the actual synthesis of materials. Ceder’s team selected 58 crystals to create in the A-Lab, considering the lab’s capabilities and available precursors. The initial attempts failed, but after multiple adjustments, the A-Lab successfully produced 41 of the materials, or 71 percent.

    Taylor Sparks, a materials scientist at the University of Utah, notes the potential of automation in materials synthesis but emphasizes the impracticality of using AI to propose thousands of new hypothetical materials and then pursuing them with automation. He stresses the importance of tailoring efforts to produce materials with specific properties rather than blindly generating a large number of materials.

    While GNNs are increasingly used to generate new material ideas, there are concerns about the scalability of the synthesis approach. Sparks mentions that the challenge lies in whether the scaled synthesis matches the scale of the predictions, which he believes is currently far from reality.

    Only a fraction of the 380,000 materials in the Nature paper are likely to be feasible for practical synthesis. Some involve radioactive or prohibitively expensive elements, while others require synthesis under extreme conditions that cannot be replicated in a lab.

    This practicality challenge extends to materials with potential for applications such as photovoltaic cells or batteries. According to Persson, the bottleneck consistently lies in the production and testing of these materials, especially for those that have never been made before.

    Furthermore, turning a basic crystal into a functional product is a lengthy process. For instance, predicting the energy and structure of a crystal can help understand the movement of lithium ions in an electrolyte, a critical aspect of battery performance. However, predicting the electrolyte’s interactions with other materials or its overall impact on the device is more challenging.

    Despite these challenges, the expanded range of materials opens up new possibilities for synthesis and provides valuable data for future AI programs, according to Anatole von Lilienfeld, a materials scientist at the University of Toronto.

    Additionally, the new materials generated by GNoME have piqued the interest of Google. Pushmeet Kohli, vice president of research at Google DeepMind, likens GNoME to AlphaFold and emphasizes the potential for exploring and expanding the new data to address fundamental problems and accelerate synthesis using AI.

    Kohli stated that the company is considering different approaches to directly engage with physical materials, such as collaborating with external labs or maintaining academic partnerships. He also mentioned the possibility of establishing its own laboratory, referring to Isomorphic Labs, a spinoff from DeepMind focused on drug discovery, which was founded in 2021 after the success of AlphaFold.

    Researchers may encounter challenges when attempting to apply the materials in practical settings. The Materials Project is popular among both academic institutions and businesses because it permits various types of use, including commercial activities.

    Google DeepMind’s materials are made available under a distinct license that prohibits commercial usage. Kohli clarified, “It’s released for academic purposes. If individuals are interested in exploring commercial partnerships, we will evaluate them on a case-by-case basis.”

    Several scientists working with new materials observed that it’s unclear what level of control the company would have if experimentation in an academic lab leads to a potential commercial application for a GNoME-generated material. Generating an idea for a new crystal without a specific purpose in mind is generally not eligible for a patent, and it could be challenging to trace its origin back to the database.

    Kohli also mentioned that although the data is being released, there are currently no plans to release the GNoME model. He cited safety concerns, stating that the software could potentially be used to generate hazardous materials, and expressed uncertainty about Google DeepMind’s materials strategy. “It’s difficult to predict the commercial impact,” Kohli said.

    Sparks anticipates that his colleagues in academia will be displeased with the absence of code for GNoME, similar to biologists’ reaction when AlphaFold was initially published without a complete model. “That’s disappointing,” he remarked.

    Other materials scientists are likely to want to replicate the findings and explore ways to enhance the model or customize it for specific purposes. However, without access to the model, they are unable to do so, according to Sparks.

    In the interim, the researchers at Google DeepMind hope that the discovery of hundreds of thousands of new materials will be sufficient to keep both human and robotic theorists and synthesizers occupied. “Every technology could benefit from improved materials. It’s a bottleneck,” Cubuk remarked. “This is why we need to facilitate the field by discovering more materials and assisting people in discovering even more.”

  • Malicious mobile applications pose a danger to both a user’s device and the network

    Over the past few months, researchers at Zscaler have found that more than 90 harmful mobile apps have been downloaded over 5.5 million times from the Google Play store. These apps are distributing various types of malware, including the Anatsa banking Trojan.

    Zscaler’s blog post from yesterday revealed that the apps, which act as decoys for the malware, consist of PDF and QR code readers, file managers, editors, and translators.

    The Anatsa Trojan, also known as Teabot, is a complex Trojan that initially uses seemingly harmless second-stage dropper applications to trick users into installing the payload. Once installed, it utilizes various tactics to secretly gather sensitive banking credentials and financial information.

    According to Zscaler, Anatsa is one of the most impactful malware strains currently being distributed on Google Play, alongside others such as the Joker fleeceware, the credential-stealing Facestealer, and various types of adware. Zscaler has also detected the Coper Trojan in the mix.

    Analysis by Zscaler indicates that the most commonly used apps to conceal malware on the mobile app store are tools similar to those where Anatsa is present, followed by personalization and photography apps.

    The threat actors behind Anatsa, which can extract data from over 650 financial apps, were previously focused on targeting Android users in Europe. However, Zscaler reports that the malware is now actively targeting banking apps in the US and UK. Additionally, the targets have expanded to financial institutions in more European countries, as well as South Korea and Singapore.

    Despite Google’s significant efforts to prevent malicious apps from entering its mobile app store, Anatsa utilizes an attack vector that can bypass these protections, according to Zscaler. It accomplishes this through a dropper technique that makes the initial app appear clean upon installation.

    Anatsa was observed to distribute two malicious payloads via apps impersonating PDF and QR code reader applications. These types of apps often attract a large number of installations, further deceiving victims into believing they are genuine.

    Anatsa infects a device by using remote payloads retrieved from command-and-control (C2) servers to carry out additional malicious activity. Once installed, it initiates a dropper application to download the next-stage payload.

    The researchers noted that the Trojan employs deceptive tactics in its attack vector to avoid detection. It checks the device environment and type before executing, likely to detect sandboxes and analysis environments. It then only loads its third stage and final payload if it determines the coast is clear.

    After loading, Anatsa requests various permissions, including SMS and accessibility options, and establishes communication with the C2 server to carry out activities such as registering the infected device and retrieving a list of targeted applications for code injections.

    To steal user financial data, Anatsa downloads a target list of financial apps from the C2 and checks if they are installed on the device. It communicates this information back to the C2, which then provides fake login pages for the installed apps to deceive users into providing their credentials, which are then sent back to the attacker-controlled server.

    Despite Google’s best efforts, it has been challenging for the company to prevent malicious Android apps from appearing on the Google Play store. As criminals continue to develop malware with increasingly evasive tactics, the Zscaler researchers emphasized the importance of organizations implementing proactive security measures to protect their systems and sensitive financial information.

    To help corporate mobile users avoid compromise, organizations should adopt a “zero trust” architecture that focuses on user-centric security and ensures that all users are authenticated and authorized before accessing any resources, regardless of their device or location, as advised by the researchers.

    Android users can also protect corporate networks by refraining from downloading mobile applications when connected to an enterprise network, or by using appropriate security measures.

    Eliminate Malware from Your Android Device

    If you have previous experience with a desktop computer, you have likely encountered viruses and malware that can infect your computer, causing a variety of issues. Some viruses are easy to eliminate and only result in a computer slowdown. However, other types of viruses and malware can cause significant harm to a computer and compromise your data.

    The best approach to prevent these issues is to use trustworthy antivirus software and take precautionary measures. However, once a virus infiltrates your device, your primary objective should be to remove it as soon as possible.

    Similar to desktop computers, Android devices can also fall victim to malware and other forms of viruses. This guide will walk you through the various steps involved in removing malware from Android devices.

    What is Malware?

    Malware refers to any type of malicious software that infiltrates a computer, network, or computer server. Malware is a broad term that encompasses worms, viruses, and any harmful computer programs. The intent of malware is to directly harm computing devices and gain access to sensitive information, which could include anything from credit card details to the passwords used for bank and social media accounts. While all viruses are considered malware, not every piece of malware is classified as a virus. The three primary types of malware that may infect your Android devices include worms, viruses, and Trojans.

    What Is a Worm?

    A worm is a piece of malware that spreads from one device to another by replicating itself. Worms are particularly dangerous because they can operate independently and do not require a host file or a hijack code to spread.

    What is a Virus?

    A virus is a simple computer code that infiltrates a device’s program and forces it to carry out a malicious action that can either damage the device or steal information. Many modern viruses are equipped with a “logic bomb,” which means that the virus will not execute until specific conditions are met. Some viruses are sophisticated, making it challenging to detect them before it’s too late and without expert assistance.

    What is a Trojan?

    A Trojan is a type of malicious software that the user of the Android device can activate. These programs cannot replicate themselves but can mimic normal functions that the user would likely click on. Once the Trojan is activated, it spreads and begins to damage the device. Similar to a regular application, Trojans typically request administrator access. If you click on the “agree” button, the Trojan will have extensive access to your computing device.

    What Malware Can Do to Android Phones

    After infecting Android phones, malware can carry out numerous actions, as its purpose is to generate revenue for cybercriminals. Malware on Android devices can download malicious applications, open unsafe web pages, send costly SMS text messages, and steal information, including passwords, personal information, location, and contact list.

    Once a hacker gains access to your Android device, they can either sell or use your information on the dark web. More complex and sophisticated malware may manifest as ransomware, which can lock your phone and encrypt some of your data and documents. You will then be given time to pay a fee if you want to have your files and data restored.

    How Do I Know If My Android Phone Has Malware on It?

    While external damage to a phone is easy to identify, malware can cause internal damage that is more challenging to detect. In many cases, malware will consume significant resources on your Android phone, leading to slowdowns and other issues that suggest the presence of malware. Therefore, it is important to determine if your phone has a virus or malware. Here are some indicators that your phone has been infected by malware:

    • Your phone has slowed down significantly without an obvious cause.
    • Your battery is depleting at a faster rate than usual.
    • Applications are taking longer to load.
    • The phone is consuming more data than expected.
    • Pop-up ads are appearing frequently.
    • You notice applications on your phone that you do not remember downloading.
    • Your phone bills are higher than they should be.

    How Can I Detect Malware on My Android Device?

    To identify malware on your Android device, there are several steps you can take, the most important of which is running a standard antivirus scan. There are various antivirus scans and programs available for your phone, both free and paid. Keep in mind that the most expensive antivirus software may not always be the best. Therefore, make sure to choose a program that offers comprehensive functionality rather than just a quick scan feature.

    Quick scans can help identify common areas of your device for viruses, while full scans are essential for a thorough check of your Android phone. Relying solely on a quick scan may give you a false sense of security.

    How Do I Remove Malware from Android Completely?

    After detecting malware on your Android phone, you can eliminate it by following five simple steps.

    Step 1: Turn Off Your Phone Immediately and Conduct Research

    When you detect malware, turn off your device completely while you conduct research. Turning off the device can prevent the problem from worsening and may prevent the malware from spreading to other networks nearby.

    If you know the name of the infected application or program, use this time to research more about it. If you don’t know the name, consider researching the symptoms you’ve observed on another computer. Identifying the infected app is crucial to removing malware from an Android phone.

    Step 2: Boot the Phone in Safe Mode or Emergency Mode

    Once you have identified the application that needs to be uninstalled, boot your phone in safe mode or emergency mode. Most Android devices allow you to enter safe mode by turning on the device, holding down the power button for a few seconds, and tapping the power-off option.

    In safe mode, you should be presented with “power” options such as reboot and safe mode. Activating safe mode is important to prevent the malware from spreading while you uninstall the infected program.

    Step 3: Access Device Settings to Locate the Malicious App

    While in safe mode, navigate to the “settings” section on your Android phone. You can access this mode by tapping the gear-shaped icon on the screen or searching for the “settings” section on your device. In the settings, scroll down until you find the “apps” option, which you should select. This will display a list of the applications installed on your phone.

    Look through the list until you find the infected app that needs to be uninstalled. If the application is a core app, you may not be able to delete it. Instead, you may have the option to disable the app. However, it is unlikely that a core app is the source of a virus or malware.

    Step 4: Uninstall the Infected Application

    Uninstalling an application is a simple process that begins with selecting the app, which will provide you with options like “force stop,” “force close,” or “uninstall.” Choose the uninstall option to remove the problematic application. In some cases, you may be unable to delete the application properly if your phone has been infected with ransomware.

    In such a scenario, the ransomware may have gained access to your administrative settings, preventing the app from being deleted. You can resolve this issue by going to the main settings menu and selecting the “security” section. From there, look for the “device administrators” area, where you should be able to adjust your administrator settings and delete the app.

    Step 5: Perform a Factory Reset

    If you are willing to part with the current media and content on your Android phone, a factory reset is an effective way to remove malware. This process will eliminate viruses and malware, but more potent malware may survive. A thorough antivirus scan may help detect as much malware as possible.

    Step 6: Install Malware Protection

    Once you have successfully eliminated the malware, focus on installing malware protection and educating yourself on removing malware from Android devices. Make sure to use a program that can delete unnecessary files, safeguard your data, and scan for viruses. Regularly check for updates to keep the antivirus program equipped with the latest protection.

    Advice for Preventing Malware on Your Android Device

    Knowing how to eliminate malware from an Android device is useful, but it’s better to keep it from infecting your phone. You can take these simple steps to prevent viruses and other malware from affecting your device:

    • Ensure that you invest in reliable and strong security software.
    • Avoid clicking on links in text messages or emails that you don’t recognize.
    • Keep your software and operating system up to date.
    • Use complex passwords.
    • Avoid using unsecured WiFi connections. Consider using a VPN when accessing public networks.
    • Only download applications from trusted sources such as the Google Play Store.

    Final Remarks

    Malware can harm your phone and potentially steal your information if you don’t take proactive measures to remove it once it’s been detected. You can avoid these issues altogether by using robust antivirus software like McAfee Mobile Security, seeking assistance from experts, and staying informed about modern cyber threats and the risks they present.

    Do you suspect that your smartphone might be infected with a virus or malicious app? This guide explains how to identify a smartphone virus and also discusses methods for cleaning up your smartphone.

    The Issue of Smartphone Viruses & Malware

    We’ve all heard about computers and laptops getting infected with viruses. But have you ever considered the possibility of your phone getting a virus?

    Globally, six times as many smartphones are now sold as computers and laptops. Many people spend more time using a smartphone than a computer. We also use our smartphones for online banking and input personal data such as contacts and payment information. Additionally, smartphones are highly personal devices that collect various data, including GPS location. With all this data stored on a smartphone, it’s no surprise that criminals have been attempting to exploit it.

    In recent years, malicious apps have emerged as a concern for smartphone users. There are now apps designed to stealthily steal your personal data, as well as smartphone viruses that use your phone to make premium-rate calls and send text messages.

    In general, you can keep your smartphone safe by following some basic security guidelines. Stick to the official app store, avoid jailbreaking your phone, steer clear of pirated software, and double-check app permissions before installing. But what should you do if you suspect your phone has a virus?

    Android Users Beware

    Currently, it’s estimated that 99% of mobile malware targets Android smartphones. This doesn’t mean that Android is less secure than iOS and Windows Phone; it simply means that Android is more permissive when it comes to installing applications. With iOS and Windows Phone, apps can only be installed from the official app store.

    Apps are also reviewed before they are available for download. With Android, apps are not reviewed, and they can be installed from sources outside of Google Play. Therefore, extra caution should be exercised with Android to avoid malicious apps.

    As malware mainly affects Android, some of the tips provided in this article are specific to Android devices. However, iOS and Windows Phone users can follow most of the same instructions to identify a malicious application.

    How to Avoid Buying Counterfeit Smartphones Loaded with Malware

    As high-end smartphones continue to gain popularity, counterfeit smartphones are being imported. They look and feel exactly like the real ones, but they are often filled with malware.

    Short Story

    I fell for it and purchased a counterfeit Galaxy Note 7 online. The low price should have raised red flags, but I was excited about the “great” deal. They managed to replicate every feature of the Galaxy Note 7. It worked well for a few weeks, but then I noticed that the phone was overheating and downloading unfamiliar apps. Shortly after, I started receiving alerts of login attempts from foreign countries.

    Some of my friends and relatives mentioned that I was sending them texts asking for money to be sent via Western Union. They know me well and realized that I wouldn’t make such a request. Unfortunately, a few of them actually sent the money, so I had to reimburse them. I had to change all my passwords, and it was a very stressful situation.

    Here’s how to spot a counterfeit smartphone

    Don’t purchase a phone without being familiar with the features you should expect from the genuine product. Have a good understanding of the user interface and unique bundled apps. Criminals are skilled at copying, but they still make mistakes.

    iPhones are difficult to replicate. The easy way to check if an iPhone is real is to click on the app store icon. If you are taken to the Android store, it’s fake. In case scammers find a way to replicate the app store user interface, search for Apple-exclusive apps such as Keynote, Numbers, and Pages. If you can’t find them, the iPhone is counterfeit.

    Android phones are the simplest to replicate. However, a clear sign is the cost. Criminals who sell copied smartphones aim to sell them quickly, so they offer them at a much lower price than the market rate. If you come across a fantastic deal on a new smartphone, question why it is so inexpensive.

    While it may be exciting to own the most recent smartphone, some people knowingly purchase counterfeit smartphones just to flaunt them. However, this means you miss out on having a valid warranty. Additionally, your sensitive information could be stolen, and you might even lose friends. It’s just not worth it.

    Fortunately, giffgaff offers new and pre-owned smartphones on the giffgaff store, all of which are thoroughly inspected and authenticated. Additionally, the bloggers use and share their honest opinions on the giffgaff blog.

    Be cautious of scareware.

    Firstly, if you’ve landed on this page after encountering a pop-up message about a virus on your phone, don’t panic. If you’re browsing the internet, it’s likely that you’ve come across fake anti-virus pop-ups (refer to the image below for an example of what a fake pop-up might look like).

    If you encounter a virus alert while browsing the internet, it’s probable that your phone is actually safe. You’ve likely encountered scareware: a web-based scam that attempts to convince you that your phone is infected. Do not download any of the software linked from the pop-up; the supposed anti-virus app is likely to be malicious.

    Simply close the webpage and restart your smartphone’s web browser. Also, be cautious not to provide any payment details, as they could be used to make fraudulent charges on your account.

    After closing the webpage, follow the advice in the remainder of this article. You’ll want to double-check for viruses on your phone. For added peace of mind, you can also use a reputable anti-virus application to scan your phone for viruses (never use the app advertised in the pop-up).

    If you’re browsing the internet, you may encounter a scareware scam. You’ll receive a pop-up message stating that your phone is infected. Don’t panic; it’s likely that your phone is actually safe. Close the webpage and follow the advice in the remainder of this article. For additional reassurance, you can also scan your phone with trusted anti-virus software.

    Malicious apps have been known to generate revenue by sending premium rate text messages or making premium rate phone calls. It’s always a good idea to review your itemized phone bill to check for unexpected charges. If you notice unusual charges, it’s possible that your smartphone has a virus. Alternatively, you might have unintentionally subscribed to a premium rate text service.

    For premium rate numbers in the UK, visit the PhonePayPlus website. PhonePayPlus is the regulator for premium rate telephone services in the UK. You can use their NumberChecker service to look up the company associated with a premium rate number. Either unsubscribe from the service or file a complaint if you believe the charges are related to a virus.

    Are you encountering intrusive advertisements (eg pop-up ads and push notifications)?

    If you’ve observed intrusive advertisements on your smartphone (eg frequent pop-up messages or push ads appearing in the notification bar), you might have adware on your phone. This doesn’t necessarily mean you have a virus – at best, adware is simply an annoyance, but in some cases, it may also contain malicious code.

    Android users experiencing ads in the notification bar can utilize AirPush Detector to identify the problematic app.

    NB If you encounter an anti-virus pop-up while browsing the internet, it’s likely scareware. Scareware isn’t directly harmful to your device – simply close the webpage immediately. Avoid clicking on links in the pop-up (refer to the scareware section earlier in this article).

    Have your friends received strange texts or emails from your address?

    If your friends and family complain about receiving odd text messages from your phone number (eg spam messages), it’s probable that your phone has a virus. A malicious app might be using your phone number to send out spam texts.

    For spam emails originating from your address, it’s also possible that your email account has been compromised. Alternatively, there could be a virus on another one of your devices (eg a laptop).

    Have new apps suddenly appeared on your phone?

    If so, they might have come as an update to existing apps. If you weren’t anticipating the new apps, there’s a possibility that you have a malicious app installed. This malicious app could be downloading new apps in the background.

    Are there specific apps using unusually large amounts of data?

    Malicious apps often require internet access to communicate with their source. Most smartphone operating systems allow you to view a list of apps that have used the internet and the amount of data they’ve consumed. On Android, navigate to the Settings menu and select “Data Usage”. Keep an eye out for apps that seem out of place. For example, if your flashlight app is using the internet, it might be doing more than its intended function.

    Also, check the “Wi-Fi Data Usage” tab. Malicious apps are not limited to using 3G for communication; they can also operate solely on Wi-Fi to avoid detection, as many people only monitor mobile data usage.
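
    If you prefer to check this programmatically, for example from a small companion app you build yourself, the sketch below uses Android’s NetworkStatsManager to total the Wi-Fi traffic of a single app over the past 30 days. It is a minimal sketch rather than a finished tool: it assumes the usage-access permission (PACKAGE_USAGE_STATS) has already been granted in Settings, and the 30-day window, Wi-Fi-only scope, and function name are arbitrary choices made for illustration.

        // Minimal sketch: sum Wi-Fi bytes sent and received by one app (identified by its UID)
        // over the last 30 days. Assumes the usage-access special permission has been granted.
        import android.app.usage.NetworkStats
        import android.app.usage.NetworkStatsManager
        import android.content.Context
        import android.net.ConnectivityManager

        fun wifiBytesForUid(context: Context, uid: Int): Long {
            val statsManager =
                context.getSystemService(Context.NETWORK_STATS_SERVICE) as NetworkStatsManager
            val end = System.currentTimeMillis()
            val start = end - 30L * 24 * 60 * 60 * 1000   // 30 days back; an arbitrary window
            // subscriberId can be null for Wi-Fi; queryDetailsForUid returns per-app buckets
            val stats = statsManager.queryDetailsForUid(
                ConnectivityManager.TYPE_WIFI, null, start, end, uid
            )
            var total = 0L
            val bucket = NetworkStats.Bucket()
            while (stats.hasNextBucket()) {
                stats.getNextBucket(bucket)
                total += bucket.rxBytes + bucket.txBytes
            }
            stats.close()
            return total
        }

    Repeating the same query with ConnectivityManager.TYPE_MOBILE lets you compare the result against the mobile-data figure shown in Settings.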

    Have you noticed a significant decrease in battery life after installing a new app?

    Viruses can cause an increase in power consumption, leaving a noticeable impact on battery life. If you observe a significant drop in battery life after installing a new app, be cautious. It doesn’t necessarily mean the app is malicious, but it could be buggy or poorly developed. Consider uninstalling the app to improve battery life.

    If your phone has been infected by a malicious app, start by checking the list of recently installed apps in Google Play (tap on “All” to sort the apps by date). For each app in the list, conduct a quick review. Watch out for apps with a low number of downloads, consistently low ratings, or negative feedback from other users. If in doubt, uninstall the app to see if it resolves the issue.

    If you’ve installed apps from sources other than Google Play, go to Settings > Application Manager to view the full list.
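
    If the Settings list is hard to scan, a small self-built helper can produce the same overview in code. The sketch below is a rough illustration only: it sorts installed packages by install date using Android’s PackageManager; the function name and the 20-item limit are invented for the example, and on Android 11 and later the results may be incomplete unless the calling app holds the QUERY_ALL_PACKAGES permission.

        // Rough sketch: list installed packages newest-first by install date, so recently
        // added apps (including sideloaded ones) stand out at the top of the list.
        import android.content.pm.PackageManager
        import java.text.SimpleDateFormat
        import java.util.Date
        import java.util.Locale

        fun recentInstalls(pm: PackageManager, limit: Int = 20): List<String> {
            val dateFormat = SimpleDateFormat("yyyy-MM-dd", Locale.US)
            return pm.getInstalledPackages(0)
                .sortedByDescending { it.firstInstallTime }
                .take(limit)
                .map { "${dateFormat.format(Date(it.firstInstallTime))}  ${it.packageName}" }
        }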

    iOS and Windows Phone users should follow similar steps in their respective app stores (iTunes and Windows Phone Marketplace).

    Check for Drive-By Downloads (Android only)

    In the past, compromised websites have been used to deliver drive-by downloads to Android devices, dropping an APK file in the download folder. If opened, the APK would install a malicious app on your device. Look for .apk files in your smartphone’s download folder to check for drive-by downloads. Do not install any of these apps; delete them immediately and ensure the app is removed from your system.
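
    To make that check concrete, a sketch like the one below lists any .apk files sitting in the public Download folder. It assumes the classic shared external-storage layout; on recent Android versions with scoped storage you may need to browse the folder with the Files app or a file manager instead, and the function name is purely illustrative.

        // Rough sketch: find leftover .apk files in the public Download folder.
        // Assumes legacy external-storage access; scoped storage on newer Android versions
        // may block direct access from third-party apps.
        import android.os.Environment
        import java.io.File

        fun findDownloadedApks(): List<File> {
            val downloads: File =
                Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS)
            return downloads
                .listFiles { file -> file.isFile && file.extension.equals("apk", ignoreCase = true) }
                ?.toList()
                ?: emptyList()
        }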

    Use the ‘Permissions Explorer’ App (Android only)

    Android users who have identified symptoms of a virus can often pinpoint the problematic app using the Permissions Explorer app (free).

    Permissions Explorer displays a complete list of apps on your phone allowed to perform specific activities. To identify the problematic app, match the permissions with the observed malware symptoms. For example, if you’re experiencing unexpected charges for premium-rate text messages, look for apps allowed to send SMS.

    Pay attention to apps requesting excessive permissions, such as a flashlight app with access to your phone book. This could indicate that the app is performing additional functions beyond its advertised purpose.
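
    The same audit can be sketched in code with Android’s PackageManager. The snippet below is a simplified example that uses SEND_SMS as the symptom under investigation; swap in whichever permission matches the behavior you have noticed. On Android 11 and later, package-visibility rules may hide some apps from the query.

        // Simplified sketch: list installed packages that have been granted the SEND_SMS permission.
        // SEND_SMS is only the example symptom here; substitute any permission you are auditing.
        import android.content.pm.PackageInfo
        import android.content.pm.PackageManager

        fun appsAllowedToSendSms(pm: PackageManager): List<String> {
            return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
                .filter { pkg: PackageInfo ->
                    val requested = pkg.requestedPermissions ?: return@filter false
                    val flags = pkg.requestedPermissionsFlags ?: return@filter false
                    requested.indices.any { i ->
                        requested[i] == android.Manifest.permission.SEND_SMS &&
                            (flags[i] and PackageInfo.REQUESTED_PERMISSION_GRANTED) != 0
                    }
                }
                .map { it.packageName }
        }

    Anything flagged by a query like this still needs human judgment; holding a permission is a warning sign, not proof of malice.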

    Install an antivirus scanner

    There are numerous free antivirus apps available, but I can personally recommend only two: Kaspersky Mobile Antivirus (Android) and Lookout Mobile Security (iOS). These apps provide an additional layer of security for your phone.

    Final Thoughts

    In this article, we’ve explored ways to determine if your smartphone has a virus. Beware of scareware pop-ups, as they often aim to deceive you into installing fake antivirus software. Instead, look for clear signs such as unfamiliar apps or strange pop-up messages. If you suspect a virus, double-check the recently installed apps. Conduct a permissions audit and perform an antivirus scan.

    Malicious applications are widespread and can be extremely annoying by bombarding you with advertisements or stealing your personal information.

    Deceptive practices are common with Android malware. For instance, a mobile app named Ads Blocker claimed to eliminate irritating ads from your phone, which tend to pop up and cover your screen at the most inconvenient times. However, users quickly discovered that the app was actually malware designed to display even more ads, as reported by security researchers.

    This is just one example of malware that can frustrate Android users by inundating them with ads, for which the creators are paid to display, even when using unrelated apps. Malware often also generates fake clicks on the ads, increasing the profits for the creators.

    Nathan Collier, a researcher at internet security company Malwarebytes, who helped identify the fraudulent ad blocker in November, stated, “They’re making money, and that’s the name of the game.”

    According to researchers, adware like Ads Blocker is the most prevalent type of malware found on Android devices. An adware infection can make your phone so challenging to use that you may feel inclined to Hulk out and crush it. However, Android malware can do much worse, such as stealing personal information from your phone.

    Malware can be disorienting, disrupting your normal phone usage and making you feel uneasy, even if you are unsure of the source of the problem. Additionally, it is quite common. Malwarebytes reported discovering nearly 200,000 instances of malware on their customers’ devices in May and again in June.

    So, how can you identify if your phone has malware and prevent it? Here are some insights from mobile malware experts on what you can do.

    How malware affects your phone

    Mobile malware typically employs one of two methods, as explained by Adam Bauer, a security researcher for mobile security company Lookout. The first type of malware tricks you into granting permissions that allow it to access sensitive information.

    This is where the Ads Blocker app comes in, as many of the permissions it requested seemed reasonable for a genuine ad blocker. However, these permissions also allowed the app to operate continuously in the background and display ads to users even when they were using other apps.

    The second type of malware exploits vulnerabilities in phones, gaining administrator-level privileges that give it access to sensitive information. This reduces the need to prompt users to grant permissions, making it easier for malware to run without being noticed by users.

    Indications of malware on your Android phone

    If you notice the following occurring, your phone may be infected:

    1. You constantly see ads, regardless of the app you are using.
    2. You install an app, but the icon disappears immediately.
    3. Your battery is depleting much faster than usual.
    4. You notice unfamiliar apps on your phone.

    These signs are cause for concern and require further investigation.

    Ransomware on Android phones

    Another form of malware is ransomware. Victims typically find their files locked and inaccessible. Usually, a pop-up demands payment in Bitcoin to regain access to the files. According to Bauer, most Android ransomware can only lock files on external storage, such as photos.

    What mobile malware can do to your phone

    Apart from bombarding you with incessant ads, mobile malware can access private information. Common targets include:

    • Your banking credentials
    • Your device information
    • Your phone number or email address
    • Your contact lists

    Hackers can utilize this information for various malicious activities. For instance, they can commit identity theft using your banking credentials. The Anubis banking Trojan achieves this by tricking users into granting it access to an Android phone’s accessibility features.

    Once the permission is granted, the malware’s activities become completely invisible on the screen, with no indication of any malicious activity as you log into your accounts.

    Hackers can also use malware to gather and sell your device and contact information, resulting in an influx of robocalls, texts, and, of course, more ads. Furthermore, they can send links for more malware to everyone in your contacts list.

    If you suspect that your information has been caught up in the robocall system, you can explore the options offered by your phone carrier to minimize the annoyance of such calls. For example, T-Mobile, Sprint, and MetroPCS customers have access to Scam Shield, a free app introduced in July.

    How to prevent malware on your Android phone

    If you suspect your Android device has malware or want to safeguard it, there are specific measures you can implement.

    To begin with, ensure that your phone’s software is consistently up to date. Security experts emphasize the importance of keeping your OS and apps updated as one of the most crucial steps to protect your devices and accounts. Upgrading to a current OS version, such as Android 10 or the upcoming Android 11, can address vulnerabilities and restrict access for existing malware. Additionally, updates can prevent malware from functioning in the first place.

    Review the permissions granted to your apps. For instance, does a game app have unnecessary permissions such as sending SMS messages? This could be a warning sign. Keep this in mind when installing apps in the future.

    Removing apps suspected of being malicious can be challenging. Sometimes, you can revoke an app’s permissions, uninstall the app, and be done with it. However, certain malicious apps may have administrator privileges, requiring additional steps for deletion. If you encounter difficulties removing a specific app, consider researching online for successful removal methods.

    Consider using antivirus apps. While these services may impact your phone’s performance and require elevated access to detect malicious behavior, it’s important to choose a trusted option. Opting for the paid version can unlock premium features and minimize advertisements. These apps can alert you to potential malware on your phone and provide customer support when dealing with malware. Consider using well-known programs like Malwarebytes, Norton, Lookout, or Bitdefender to scan your device if you suspect malware is present.

    Avoid or uninstall Android apps obtained from third-party app stores. These apps bypass Google’s review process and can more easily introduce malware to your phone. While Google doesn’t catch every malicious app before it reaches your device, sticking to the official Google Play Store provides an additional layer of defense and direct channels to report encountered issues.

    Just hours after many people unwrapped new smartphones during the holiday season, a timely reminder emerged of the potential threats and the need for personal responsibility in securing our devices. This underscores the importance of not relying solely on Google and Apple for device security.

    Shortly after Android users were alerted to check their devices for the dangerous “SpyLoan” malware-infected apps, a new backdoor called “Xamalicious” has surfaced through multiple apps on Google’s Play Store.

    According to McAfee, “Android/Xamalicious trojans are apps related to health, games, horoscope, and productivity.” While Google removed the apps from its store before publication, McAfee warns that “most of these apps are still available for download in third-party marketplaces.”

    These apps are designed to deceive users into granting accessibility privileges, allowing them to take control of normally restricted device features. Of all the warnings in this report, this should be of utmost concern.

    This marks the second accessibility warning for Android users within a week. The other warning pertains to the resurgence of the “Chameleon” trojan, which manipulates accessibility settings and dynamic activity launches, circumventing Android’s improved “restricted settings,” and potentially compromising a device’s biometric security to steal financial information.

    ThreatFabric, which identified this latest iteration, warns that the new Chameleon is a sophisticated Android malware strain. However, it remains harmless unless users grant the access it requests, at which point it can infect their devices.

    For Xamalicious, the apps from the Play Store that you should remove right away are listed below. Keep in mind that if an app is banned from the Google Play Store, it doesn’t automatically get deleted from your device. Even though this warning has been issued while the download numbers are still in the hundreds of thousands, rather than millions, there will likely be many more installations from third-party stores for those who decide to take that risk.

    Xamalicious Apps to Remove:

    – Essential Horoscope for Android
    – 3D Skin Editor for PE Minecraft
    – Logo Maker Pro
    – Auto Click Repeater
    – Count Easy Calorie Calculator
    – Sound Volume Extender
    – LetterLink
    – Numerology: Personal Horoscope & Number Predictions
    – Step Keeper: Easy Pedometer
    – Track Your Sleep
    – Sound Volume Booster
    – Astrological Navigator: Daily Horoscope & Tarot
    – Universal Calculator

    Xamalicious uses a simpler method to gain its privileges, which it then exploits to communicate with its command and control server. Once it’s installed, Xamalicious will send back all the device information needed to assess the likelihood of a successful attack, including hardware, operating system, installed apps, location, and network.

    At this point, it will be directed to download and install the malicious code it will use to take over the device or initiate background activity.

    While the newly discovered Chameleon variant takes a different approach by presenting itself to users as a Google Chrome app, it still involves the abuse of accessibility privileges to carry out account and device takeovers. This trojan prevents the device from requesting biometric authentication and instead pushes for a PIN, allowing it to steal user account credentials.

    “Although the victim’s biometric data remains inaccessible,” ThreatFabric explains, “the trojan forces the device to switch to PIN authentication, thereby bypassing biometric protection entirely.”

    The full details of the attack approach can be found in the reports (1,2), but in reality, these specific details are much less important than the social engineering tactics that both trojans rely on to attack devices. After all, if you’re likely to grant accessibility privileges to a horoscope or calorie counting app, you’re unlikely to notice other signs of compromise on your smartphone.

    As Google warns Android users, “harmful apps might request changes to settings that could put your device or data at risk. To protect you from harmful apps, some device settings may be restricted when you install an app. These settings cannot be changed unless you allow it.”

    The solution here is very simple — do not grant such privileges to ANY app unless it comes from a reputable brand like Apple, Google, or Microsoft and logically requires such access, for example because you rely on assistive features due to limited mobility, vision, or hearing.

    Google is more open than Apple when it comes to app permissions on devices and the availability of apps beyond its official store. Its less restrictive approach also means there is more Play Store malware than what is found on Apple’s App Store.

    In Google’s view, it comes down to user choice. “We’re trying to strike a balance,” Sundar Pichai explained last month, “We believe in choice.” But with this choice comes responsibility. While this includes being very aware of the access being requested by apps, it also extends to the nature of the apps you allow on your smartphone and, as a result, into your life in general.

    My advice to Android users is to regularly check this. In ‘Settings’, go to ‘Privacy’ and then ‘Accessibility Special Access’. Make sure you’re familiar with any app listed, and if not, tap on the app to remove its access.
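
    For readers who like to verify things in code, the same list of services holding accessibility access can be read from the Settings provider. The sketch below is illustrative only; it simply returns the enabled accessibility services so you can spot anything you do not recognize.

        // Illustrative sketch: return the accessibility services currently enabled on the device.
        // The setting is a colon-separated list of ComponentName strings; null or empty means none.
        import android.content.Context
        import android.provider.Settings

        fun enabledAccessibilityServices(context: Context): List<String> {
            val raw = Settings.Secure.getString(
                context.contentResolver,
                Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES
            )
            return raw?.split(':')?.filter { it.isNotBlank() } ?: emptyList()
        }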

    In the same ‘Settings’ screen, you can also check other permissions you’ve granted. It’s always good practice to do a sweep once in a while — you never know what might have slipped in.

    Your smartphone likely has access to your financial accounts, work email, private thoughts, and messages. It knows where you live and work, who you love and who you don’t, even your kids and their schools.

    As tempting as it might be to install a flashlight or AI aging app, every app you install increases the risk of compromise. Just take a moment to consider whether you really need the app and, when the app requests access to data and device features, think about what that app truly needs to know.

    Zscaler, a security research group, recently announced the discovery of more than 90 malicious Android apps on the Play Store. These apps collectively had over 5.5 million installations and were linked to the ongoing Anatsa malware campaign, which has targeted over 650 apps associated with financial institutions.

    By February 2024, Anatsa had infected at least 150,000 devices using various decoy apps, many of which were marketed as productivity software. While the identities of most of the apps involved in this recent attack are unknown, two apps have been identified: PDF Reader & File Manager, and QR Reader & File Manager. At the time of Zscaler’s investigation, these two apps had amassed over 70,000 installations combined.

    How these malicious apps infect your phone

    Despite Google’s app review process for the Play Store, stealthy malware campaigns like Anatsa can employ a multi-stage payload loading mechanism to evade detection. In essence, these apps pose as legitimate applications and only initiate a covert infection after being installed on the user’s device.

    You may believe you are downloading a PDF reader, but once installed and launched, the “dropper” app establishes a connection to a C2 server to retrieve the necessary configurations and strings. Subsequently, it downloads a DEX file containing the malicious code and activates it on your device. Following this, the Anatsa payload URL is downloaded through a configuration file, and the DEX file installs the malware payload, completing the process and infecting your phone.
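
    For context, the “activation” step described above relies on Android’s standard dynamic code loading facilities. The snippet below is a generic, harmless illustration of that mechanism, not Anatsa’s actual code, and the class name is a placeholder; it shows why a review of the APK originally submitted to the Play Store can miss the code that eventually runs.

        // Generic illustration of dynamic code loading on Android (not Anatsa's actual code).
        // The class that eventually runs is fetched and loaded only after installation,
        // which is why a static review of the submitted APK may never see it.
        import android.content.Context
        import dalvik.system.DexClassLoader

        fun loadDownloadedDex(context: Context, dexPath: String): Class<*> {
            val loader = DexClassLoader(
                dexPath,                            // path to the downloaded .dex or .apk file
                context.codeCacheDir.absolutePath,  // optimized output dir (ignored on API 26+)
                null,                               // no extra native library path
                context.classLoader                 // parent class loader
            )
            // "com.example.Payload" is a placeholder class name used purely for illustration
            return loader.loadClass("com.example.Payload")
        }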

    Fortunately, all identified apps have been removed from the Play Store, and their developers have been banned. However, if you have downloaded these apps, they will remain on your smartphone. If you have either of these two apps on your phone, it is crucial to uninstall them immediately. Additionally, consider changing the passcodes for any banking apps you may have used on your phone to prevent unauthorized access by the threat actors behind Anatsa.

    How to avoid malware apps

    While malicious developers can be cunning with their attacks, there are certain guidelines you can follow to ascertain the legitimacy of an app on the Play Store. Firstly, carefully scrutinize the app’s listing: Examine its name, description, and images. Do they align with the service the developers are promoting? Is the content well-written or riddled with errors? The less professional the appearance, the more likely it is to be fraudulent.

    Only download apps from reputable publishers. This is especially important when downloading popular apps, as malware apps may impersonate high-profile apps on various devices. Double-check the developer behind the app to ensure their authenticity.

    Also, review the app’s requirements and permissions. It is advisable to avoid anything that requests accessibility, as this is a common method used by malware groups to bypass security measures on newer devices. Other permissions to be cautious of include apps requesting access to your contact list and SMS. If a PDF reader requests access to your contacts, this should raise a red flag.

    Additionally, read through the app’s reviews. Be wary of apps with few ratings or those with overly positive reviews that seem suspicious.

    The app’s support email address can also provide insight. Many malware apps use a random Gmail account (or other free email account) for their support email. While not every app will have a professional support email, you can usually discern if something seems dubious based on the information provided.

    Unfortunately, there is no foolproof method to avoid malware apps unless you refrain from installing apps altogether. However, by being mindful of the apps you install and paying attention to permissions, developers, and other critical information, you can typically discern whether an app is suspicious.

    There has been an increase in malware scams targeting Android device users, leading the Singapore Police Force to issue public warnings in recent months.

    In some instances, scammers trick victims into clicking on social media posts advertising food items for sale, then persuade them to download a harmful application to complete a purchase.

    In another scam variation, certain individuals received unsolicited text messages instructing Android users to download a fake “anti-scam” app.

    According to the police advisory, once victims install the app containing malware, the scammers can remotely access the victims’ devices and steal stored passwords.

    To address these risks and protect your devices, CNA interviewed cyber and mobile security experts to get answers.

    Why are scammers more inclined to target Android users?

    According to Mr. Steven Scheurmann, the regional vice president for ASEAN at cybersecurity company Palo Alto Networks, the open nature of the Android platform allows for greater flexibility and customization, making it easier for malicious actors to create and distribute fake app stores or unauthorized apps.

    Mr. Scheurmann also highlighted that Android users can download apps from sources other than the official Google Play Store, which increases the likelihood of fraudulent or malicious apps.

    Furthermore, the varied governance across the many types of Android devices adds to the complexity of securing them.

    Threat actors are constantly attempting to exploit vulnerabilities in systems.

    For example, there has been a surge of malware for the Android platform attempting to impersonate the ChatGPT app, as reported by Palo Alto Networks’ Unit 42.

    Does this mean Apple’s operating system is safer?

    In contrast, users of Apple’s iOS are only permitted to install approved apps from the official App Store, giving Apple greater control over the apps available to users and reducing the chances of malware being distributed through alternative sources.

    However, Mr. Paul Wilcox, the vice president for Asia Pacific and Japan of IT security company Infoblox, cautioned that although iOS does have some security advantages over Android, it does not make the Apple system “bulletproof.”

    Agreeing that no system is entirely foolproof, Mr. Scheurmann noted that Palo Alto Networks’ Unit 42 has identified various malware in recent years that were able to bypass the iOS code review process.

    User behavior is also crucial in guarding against a potential security breach.

    “In fact, from what I have seen, iPhone owners seem to be much more lax in their approach to securing their devices as they believe that iPhones are ‘safe,’ and the likelihood of them installing security software is extremely low,” Mr. Wilcox said.

    He added, “The days of any mobile device user feeling impenetrable are over, and all users should embrace the same diligent attitude, not just to online malware, but scammers and fake websites.”

    What has Google done to combat malicious apps?

    According to a spokesperson, Google does not allow any apps on its Play Store that are deceptive, malicious, or intended to misuse any network, device, or personal data.

    Google has also implemented built-in malware protection, Google Play Protect, which uses machine learning models to automatically scan over 100 billion apps on Android devices daily for fraud and malware.

    Google Play Protect is automatically enabled.

    Additionally, Google stated that in 2022, it prevented 1.43 million policy-violating apps from being published on Google Play through a combination of security features, continued investment in machine learning systems, and its app review process.

    “When we find that an app has violated our policies, we take appropriate action,” Google said.

    In response to inquiries about addressing links to malicious Android apps on Google’s search engine, the tech company stated that it utilizes automated systems to detect pages containing scammy or fraudulent content and prevent them from appearing in Google Search results.

  • OpenAI has raised $6.6 billion in a round led by Thrive Capital

    OpenAI announced on Thursday that it has obtained a new $4 billion revolving credit line, shortly after closing a $6.6 billion funding round, solidifying its position as one of the most valuable private companies globally.

    The new credit line will increase OpenAI’s liquidity to $10 billion, enabling the company to purchase expensive computing capacity, including Nvidia chips, in its competition with tech giants like Google, which is owned by Alphabet.

    OpenAI’s finance chief, Sarah Friar, stated, “This credit facility further strengthens our balance sheet and provides flexibility to seize future growth opportunities.”

    The credit line involves JPMorgan Chase, Citi, Goldman Sachs, Morgan Stanley, Santander, Wells Fargo, SMBC, UBS, and HSBC.

    Following the latest funding round, OpenAI is now valued at nearly $157 billion, with returning venture capital investors such as Thrive Capital and Khosla Ventures, as well as major corporate backer Microsoft and new investor Nvidia participating in the form of convertible notes.

    The conversion to equity is contingent on a successful structural change into a for-profit company and the removal of the cap on returns for investors.

    Despite recent executive changes, including the departure of Chief Technology Officer Mira Murati, most investors remain optimistic about significant growth based on CEO Sam Altman’s projections.

    OpenAI is projected to generate $3.6 billion in revenue this year, despite losses surpassing $5 billion. It anticipates a substantial revenue increase to $11.6 billion next year, according to sources familiar with the figures.

    Additionally, OpenAI is offering Thrive Capital the potential to invest another $1 billion next year at the same valuation if the AI firm achieves a revenue goal, as reported by Reuters last month.

    The recent funding round also involved Altimeter Capital, Fidelity, SoftBank, and Abu Dhabi’s state-backed investment firm MGX.

    Following the funding, OpenAI’s Chief Financial Officer, Sarah Friar, informed employees that the company will offer liquidity through a tender offer to buy back their shares, although details and timing have yet to be confirmed.

    Thrive Capital, which committed approximately $1.2 billion, negotiated the option to invest another $1 billion next year at the same valuation if the AI firm meets a revenue goal.

    Apple, which was reportedly in discussions to invest in OpenAI, did not ultimately join the funding, according to sources.

    The funding was provided in the form of convertible notes, with the conversion to equity dependent on a successful structural change to a for-profit entity and the removal of the cap on returns for investors.

    Most investors remain optimistic about OpenAI’s growth, despite recent personnel changes, and have secured protections as the company undergoes a complex corporate restructuring.

    OpenAI has experienced a rapid increase in both product popularity and valuation, capturing the world’s attention. Since the launch of ChatGPT, the platform has amassed 250 million weekly active users. The company’s valuation has soared from $14 billion in 2021 to $157 billion, with revenue growing from zero to $3.6 billion, surpassing Altman’s initial projections.

    The company has indicated to investors that it remains committed to advancing artificial general intelligence (AGI), aiming to develop AI systems that surpass human intelligence, while also focusing on commercialization and profitability. OpenAI has successfully concluded a widely-watched funding round, securing $6.6 billion from investors such as Microsoft, Nvidia, and venture capitalists.

    The funding round has placed OpenAI’s valuation at $157 billion, with Thrive Capital alone contributing $1.2 billion, alongside investments from Khosla Ventures, SoftBank, and Fidelity, among others. This marked Nvidia’s first investment in OpenAI, while Apple, despite previous speculations, did not participate in the funding round.

    In a statement confirming the raise, OpenAI expressed that the funding will enable them to further establish their leadership in frontier AI research, expand compute capacity, and continue developing tools that facilitate problem-solving.

    This investment follows a week of significant changes for OpenAI, including restructuring as a for-profit company, with CEO Sam Altman expected to gain a substantial equity stake. Additionally, the company experienced departures of key personnel, raising concerns among some AI observers. However, the successful funding round has alleviated such concerns, at least for the time being.

    Notably, Thrive Capital has the option to invest an additional $1 billion next year at the same valuation, contingent on OpenAI achieving an undisclosed revenue goal. On the other hand, some investors have clauses that allow them to renegotiate or retract funds if specific restructuring changes are not completed within two years, according to a source.

    OpenAI has reported that 250 million individuals utilized ChatGPT on a weekly basis. Sarah Friar, the chief financial officer, highlighted the impact of AI in personalizing learning, accelerating healthcare breakthroughs, and driving productivity, emphasizing that this is just the beginning.

    Reports indicate that OpenAI set conditions for investors, requesting them not to fund five competing firms, including Anthropic, xAI, and Safe Superintelligence. These firms develop leading large language models, directly competing with OpenAI. SoftBank and Fidelity have previously funded xAI, but it is understood that OpenAI’s terms are not retroactive.

    The funding arrives at a crucial time for OpenAI, as the company requires significant capital to sustain its operations, especially considering the substantial computing requirements for AI and the high salaries of top AI researchers. Reports earlier this year suggested that OpenAI’s costs for training and inference could exceed $7 billion in 2024, with an additional $1.5 billion spent on staff, a total well above rival Anthropic’s projected $2.7 billion in spending.

    Furthermore, OpenAI continues to invest in developing artificial general intelligence (AGI), while also striving to maintain a competitive edge in AI for business applications. Although OpenAI is projected to generate $3.6 billion in revenue this year, it is expected to incur a loss due to costs exceeding $5 billion. Sources from Reuters suggest that the company anticipates generating over $11 billion in revenue next year.

    An additional challenge for OpenAI is the return on investment, as it remains uncertain how much companies will benefit from utilizing these costly technologies. Despite the unclear ROI, CIOs are not deterred. However, if prices rise to support the AI industry and encourage further investment, it could potentially hinder adoption.

    OpenAI’s shift to a for-profit company

    OpenAI’s decision to transition to a for-profit company could lead to potential safety issues, according to a whistleblower. William Saunders, a former research engineer at OpenAI, expressed concerns about the company’s reported change in corporate structure and its potential impact on safety decisions. He also raised worries about the possibility of OpenAI’s CEO holding a stake in the restructured business. Saunders emphasized that the governance of safety decisions at OpenAI could be compromised if the non-profit board loses control and the CEO gains a significant equity stake.

    OpenAI, initially established as a non-profit organization committed to developing artificial general intelligence (AGI) for the benefit of humanity, is now facing scrutiny over its shift to a for-profit entity. Saunders, who previously worked on OpenAI’s superalignment team, highlighted his apprehensions about the company’s ability to make responsible decisions regarding AGI and its alignment with human values and goals.

    Saunders pointed out that the transition to a for-profit entity may contradict OpenAI’s original structure, which aimed to limit profits for investors and employees, with the surplus being directed back to the non-profit for the betterment of society. He expressed concerns that a for-profit entity might not prioritize giving back to society, especially if its technology leads to widespread unemployment.

    Although OpenAI has made recent changes, such as establishing an independent safety and security committee and considering restructuring as a public benefit corporation, concerns remain about the potential impact of the company’s transition. Reports about the CEO possibly receiving a stake in the business and the company seeking significant investment have sparked debate about the company’s direction and its commitment to its original mission.

    Additionally, OpenAI’s decision to delay the release of its Voice Engine technology aligns with its efforts to minimize the risk of misinformation, particularly during a crucial year for global elections. The AI lab has deemed the technology too risky for general release, emphasizing the need to mitigate potential threats of misinformation in the current global political landscape.

    Voice Engine was initially created in 2022, and a first version was utilized for the text-to-speech feature integrated into ChatGPT, the primary AI tool of the organization. However, its full potential has not been publicly disclosed, partially due to OpenAI’s careful and well-informed approach towards its broader release.

    OpenAI mentioned in an unsigned blog post that they aim to initiate a discussion on the responsible implementation of synthetic voices and how society can adjust to these new capabilities. The organization stated, “Based on these discussions and the outcomes of these small-scale tests, we will make a more informed decision regarding whether and how to deploy this technology on a larger scale.”

    In their post, the company provided instances of real-world applications of the technology from various partners who were granted access to integrate it into their own applications and products.

    Age of Learning, an educational technology company, utilizes it to produce scripted voiceovers. Meanwhile, the “AI visual storytelling” app HeyGen enables users to generate translations of recorded content that are fluent while retaining the original speaker’s accent and voice. For example, using an audio sample from a French speaker to generate English results in speech with a French accent.

    Notably, researchers at the Norman Prince Neurosciences Institute in Rhode Island employed a low-quality 15-second clip of a young woman presenting at a school project to “restore the voice” she had lost due to a vascular brain tumor.

    OpenAI stated, “We have chosen to preview but not widely release this technology at this time,” in order “to strengthen societal resilience against the challenges posed by increasingly realistic generative models.” In the near future, the organization encouraged actions such as phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information.

    OpenAI also advocated for the exploration of “policies to safeguard the use of individuals’ voices in AI” and “educating the public about understanding the capabilities and limitations of AI technologies, including the potential for deceptive AI content.”

    OpenAI mentioned that Voice Engine generations are watermarked, enabling the organization to trace the source of any generated audio. Currently, it added, “our agreements with these partners necessitate explicit and informed consent from the original speaker, and we do not permit developers to create methods for individual users to generate their own voices.”

    While OpenAI’s tool is distinguished by its technical simplicity and the minimal amount of original audio required to create a convincing replica, competing products are already accessible to the general public.

    Companies such as ElevenLabs can produce a complete voice clone with just “a few minutes of audio”. To mitigate potential harm, the company has introduced a “no-go voices” protection mechanism designed to identify and prevent the creation of voice clones that mimic political candidates actively involved in presidential or prime ministerial elections, starting with those in the US and the UK.

    AI systems could be taught to collaboratively solve important business issues

    Ever since AI emerged in the 1950s, games have been utilized to assess AI progress. Deep Blue excelled at Chess, Watson triumphed over Jeopardy’s top players, AlphaGo defeated a world Go champion 4-1, and Libratus outplayed the best Texas Hold’Em poker players. Each victory marked a significant advancement in AI history. The next frontier is real-time, multiplayer strategy games.

    OpenAI, a non-profit research group based in San Francisco, achieved a breakthrough earlier this year, joining the race alongside other AI researchers and organizations. In a benchmark game in August, OpenAI Five, a team of five neural networks, learned to cooperate and won a best-of-three against a team of professional players in a simplified version of Dota 2.

    Dota 2, one of the most popular eSport games globally, has seen 966 tournaments with over $169 million in prize money and more than 10 million monthly active users as of July 2018. In this game, each player is part of a 5-player team, controls a “hero” with specific strengths and weaknesses, and battles opposing teams to destroy the “Ancient,” a structure in the opposite team’s base. Collaboration and coordination among players are crucial for success.

    Games like Dota 2 pose challenges for AI programmers due to several reasons:

    – Continuous action space: Each hero can make thousands of decisions within fractions of a second.

    – Continuous observation space: Each hero can encounter various objects, teammates, or enemies, with over 20,000 observations per fraction of a second.

    – Long time horizons: Short-term actions have minor impacts, requiring a focus on long-term strategy for success.

    – Incomplete information: Each hero has limited visibility and must explore the hidden battlefield.

    – The need for collaboration: Unlike one-on-one games like Chess or Go, Dota 2 requires high levels of communication and collaboration.

    The fact that an AI system was able to challenge and win against professionals in this environment is a remarkable achievement. However, it does not signify that Artificial General Intelligence (AGI) is imminent.

    OpenAI Five’s spectacular results were achieved under restricted rules, significantly altering the game in its favor. After the last major game restriction was lifted, OpenAI Five lost two games against top Dota 2 players at The International in August. The matches lasted about an hour and were considered “vigorous Dota matches.”

    While OpenAI Five had an advantage in precision and reaction time, it fell behind in long-term planning and connecting events minutes apart. Connecting cause and effect in indirect scenarios proved to be challenging for the AI. The robots’ tendency to play aggressively, even when not warranted, highlighted their shortcomings. The teams that defeated OpenAI Five exploited this weakness and learned to quickly outmaneuver the AI.

    Despite these defeats, the progress made by OpenAI Five in just a few weeks is impressive and promising. The hope is that these superhuman skills will contribute to building advanced systems for real-life challenges in the future.

    Could this superhuman skill acquired on the battlefield be applied to business?

    Although OpenAI has not yet commercialized any of its AI technology, the potential applications are fascinating. Psychologists and management scientists have identified a key human limitation known as Bounded Rationality, which refers to the fact that we often make decisions under time constraints and with limited processing power, preventing us from fully understanding all available information.

    For example, when investing in the stock market, it is impractical for individuals to process and access all the information for each stock. As a result, humans often rely on heuristics or seek advice from others when making investment decisions.

    However, an algorithm capable of making decisions under incomplete information, in real-time, and with a long-term strategic focus has the potential to overcome this significant human constraint. Many business tasks, such as product launches and negotiations, require these abilities. It could be argued that a majority of business tasks involve collaboration, incomplete information, and a long-term focus.

    Over time, AI systems could serve as partners that enhance managers’ capabilities. Rather than replacing or competing with managers, these systems could be taught to collaboratively solve important business issues. Nearly unlimited rationality from AI processing power, combined with the intuition and judgment of skilled managers, could prove an unbeatable combination in business.

    The future of AI raises an urgent question: Who will control it? The rapid progress in artificial intelligence forces us to consider what kind of world we want to live in. Will it be a world where the United States and its allies advance a global AI that benefits everyone and provides open access to the technology? Or will it be an authoritarian world where nations or movements with different values use AI to strengthen and expand their power? There is no third option, and the time to choose a path is now.

    Currently, the United States leads in AI development, but maintaining this leadership is not guaranteed. Authoritarian governments around the world are willing to invest significant resources to catch up and surpass the US. Russian President Vladimir Putin has ominously stated that the country leading the AI race will “become the ruler of the world,” and the People’s Republic of China has announced its aim to become the global leader in AI by 2030.

    These authoritarian regimes and movements will tightly control the scientific, health, educational, and societal benefits of AI to solidify their own power. If they take the lead in AI, they may compel US companies and others to share user data, using the technology for surveillance or developing advanced cyberweapons.

    The first chapter of AI has already been written. Systems like ChatGPT and Copilot are already functioning as limited assistants, such as by generating reports for medical professionals to allow more time with patients or assisting with code generation in software engineering. Further advancements in AI will mark a critical period in human society.

    To ensure that the future of AI benefits the greatest number of people, a US-led global coalition of like-minded countries and an innovative strategy are needed. The United States must get four key things right to shape a future driven by a democratic vision for AI.

    First, American AI firms and industry must establish strong security measures to maintain the lead in current and future AI models and support innovation in the private sector. These measures should include cyberdefense and data center security innovations to prevent theft of crucial intellectual property like model weights and AI training data.

    Many of these defenses can benefit from the power of AI, making it easier and faster for human analysts to identify risks and respond to attacks. The US government and private sector can collaborate to develop these security measures as quickly as possible.

    Second, infrastructure plays a crucial role in the future of AI. The early deployment of fiber-optic cables, coaxial lines, and other broadband infrastructure allowed the United States to lead the digital revolution and build its current advantage in AI. US policymakers must work with the private sector to establish a larger physical infrastructure, including data centers and power plants, that support AI systems.

    Establishing partnerships between the public and private sectors to construct essential infrastructure will provide American businesses with the computational capabilities necessary to broaden the reach of AI and more equitably distribute its societal advantages.

    The development of this infrastructure will also generate fresh employment opportunities across the country. We are currently witnessing the emergence and progression of a technology that I consider to be as significant as electricity or the internet. AI has the potential to serve as the cornerstone of a new industrial foundation, and it would be prudent for our nation to embrace it.

    In addition to traditional physical infrastructure, we must also make substantial investments in human capital. As a nation, we must support and cultivate the next generation of AI innovators, researchers, and engineers. They represent our true strength.

    Furthermore, we need to formulate a coherent commercial diplomacy strategy for AI, which includes providing clarity on how the United States plans to enforce export controls and foreign investment regulations for the global expansion of AI systems.

    This will involve establishing guidelines for the types of chips, AI training data, and other sensitive code — some of which may need to remain within the United States — that can be housed in the data centers being rapidly constructed around the world to localize AI information.

    Maintaining our current lead in AI, especially at a time when nations worldwide are competing for greater access to the technology, will facilitate the inclusion of more countries in this new coalition. Ensuring that open-source models are readily accessible to developers in those nations will further strengthen our advantage. The question of who will take the lead in AI is not solely about exporting technology; it is about exporting the values that the technology embodies.

    Finally, we must think innovatively about new approaches for the global community to establish standards for the development and deployment of AI, with a specific emphasis on safety and ensuring the participation of the global south and other nations that have historically been marginalized. As with other globally significant issues, this will require us to engage with China and maintain an ongoing dialogue.

    I have previously discussed the idea of creating an entity similar to the International Atomic Energy Agency for AI, but that is only one potential model. One possibility could involve connecting the network of AI safety institutes being established in countries such as Japan and Britain and creating an investment fund from which countries committed to adhering to Democratic AI protocols could draw to enhance their domestic computing capabilities.

    Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the US government in 1998, less than a decade after the inception of the World Wide Web, to standardize the navigation of the digital world. ICANN is now an independent nonprofit organization with representatives from around the world dedicated to its fundamental mission of maximizing access to the internet in support of an open, interconnected, and democratic global community.

    While identifying the appropriate decision-making body is crucial, the fundamental point is that Democratic AI holds an advantage over authoritarian AI because our political system has empowered US companies, entrepreneurs, and academics to conduct research, innovate, and build. We will not be able to develop AI that maximizes the technology’s benefits while minimizing its risks unless we strive to ensure that the democratic vision for AI triumphs.

    If we desire a more democratic world, history teaches us that our only option is to formulate an AI strategy that will contribute to its creation, and that the nations and technologists who have an advantage have a responsibility to make that choice — now.

    AGI To Outperform Human Capability

    OpenAI is said to be monitoring its advancement in creating artificial general intelligence (AGI), which refers to AI that can surpass humans in most tasks. The company uses a set of five levels to assess its progress towards this ultimate goal.

    According to Bloomberg, OpenAI believes its technology is approaching the second level out of five on the path to artificial general intelligence. Anna Gallotti, co-chair of the International Coaching Federation’s special task force for AI and coaching, referred to this as a “super AI” scale on LinkedIn, envisioning potential applications for entrepreneurs, coaches, and consultants.

    Axios reported that AI experts are divided on whether today’s large language models, which excel at generating text and images, will ever be capable of comprehensively understanding the world and adapting flexibly to new information and circumstances. Disagreement implies the existence of blind spots, which in turn present opportunities.

    Setting aside expert opinions, how much AI are you currently utilizing in your business? What is in the pipeline, and what actions are you taking today? Here are the five steps and their implications for you.

    OpenAI’s Metrics: The 5 Steps towards Artificial General Intelligence

    Level one: conversational AI

    At this stage, computers can engage in conversational language with people, such as customer service support agents, AI coaches, ChatGPT, and Claude assisting with team communication and social media content creation. Hopefully, you are currently implementing something at this level.

    Since its launch in November 2022, ChatGPT has attracted 180.5 million users, including many entrepreneurs. Three million developers utilize OpenAI’s API to build their tools, and ChatGPT consulting is one of the highest-paid roles in AI. This marks the beginning.
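
    To give a sense of what “level one” looks like in practice for a developer, here is a minimal sketch of a call to OpenAI’s chat completions endpoint using only the JDK’s built-in HTTP client. The endpoint and payload shape follow OpenAI’s public API documentation; the model name, the prompt, and reading the key from an OPENAI_API_KEY environment variable are assumptions made for the example.

        // Minimal sketch of a "level one" conversational request to OpenAI's chat completions API.
        // The model name and prompt are illustrative assumptions; the key comes from OPENAI_API_KEY.
        import java.net.URI
        import java.net.http.HttpClient
        import java.net.http.HttpRequest
        import java.net.http.HttpResponse

        fun main() {
            val apiKey = System.getenv("OPENAI_API_KEY") ?: error("Set OPENAI_API_KEY first")
            val body = """
                {
                  "model": "gpt-4o-mini",
                  "messages": [
                    {"role": "user",
                     "content": "Draft a two-sentence reply to a customer asking about delivery times."}
                  ]
                }
            """.trimIndent()

            val request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer $apiKey")
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build()

            val response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())

            // The assistant's reply sits in the JSON response under choices[0].message.content
            println(response.body())
        }

    A production tool would add error handling, JSON parsing, and rate-limit awareness, but even this small call is the kind of conversational capability the first level describes.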

    Level two: reasoning AI

    Reportedly forthcoming, this stage involves systems (referred to as “reasoners”) performing basic problem-solving tasks at a level comparable to a human with a doctorate-level education but without access to any tools.

    According to a Hacker News forum, the transition from level one to level two is significant as it entails a shift from basic and limited capabilities to a more comprehensive and human-like proficiency. This transition presents possibilities and opportunities for all businesses, but it is not yet fully realized.

    Level three: autonomous AI

    At level three, AI systems known as “agents” can operate autonomously on a user’s behalf for several days. Imagine having such agents in your business while you take a vacation. Currently, automations are not flawless and require monitoring. Technology is progressing towards a reality where they rarely fail, and when they do, they can self-repair without human intervention.

    Similar to team members, but at a fraction of the cost. Similar to suppliers, but operating strictly on rules and processes without deviation. How much more could your business accomplish at level three of AI?

    Level four: innovating AI

    Referred to as “Innovators,” these AI systems can independently develop innovations. They do not just run your processes, but also enhance them. They do not just follow rules and make predictions, but critically think about how to improve performance and achieve the goal more effectively or efficiently.

    How many individuals in your business are actively contemplating its improvement right now? Could you benefit from an AI tool that comprehends your objectives and provides ideas? Currently, you can prompt ChatGPT to help you significantly improve your business, but it will not do so autonomously. This would represent a substantial leap in the capabilities and applications of AI.

    Level five: organizational AI

    Known as “organizations,” this final stage of super AI involves artificial intelligence capable of performing the work of an entire organization. Every function currently carried out by human personnel can be executed by agents working together, making enhancements, and managing all required tasks without human involvement.

    Sam Altman, CEO of OpenAI, anticipates reaching level five within ten years, while some in the field believe it might take up to fifty years. The precise timeline remains uncertain, but the rapid pace of AI development is undeniable.

    Achieving Artificial General Intelligence: OpenAI’s Five-Step Process

    The more you comprehend what AI can do for your business, the more you will be able to achieve with fewer resources at each stage. Implementing stage one now will position you for success as the technology progresses.

    This applies to everyone, including you. Some people will take action now, while others will be left behind, thinking they can catch up but never doing so. From conversational to reasoning, then autonomous, innovating, and organizational AI, each level has significantly different implications for how you operate your business and live your life.

    If OpenAI is on the brink of AGI as suggested, why do prominent individuals keep departing?

    OpenAI recently underwent significant changes in leadership as three key figures announced major transitions over the past week. Greg Brockman, the president and co-founder of the company, will be on an extended sabbatical until the end of the year. Another co-founder, John Schulman, has departed for rival Anthropic, while Peter Deng, VP of Consumer Product, has also left the ChatGPT maker.

    In a post on X, Brockman mentioned, “I’m taking a sabbatical through end of year. First time to relax since co-founding OpenAI 9 years ago. The mission is far from complete; we still have a safe AGI to build.”

    These changes have led some to question how near OpenAI is to a long-rumored breakthrough in reasoning artificial intelligence, considering the ease with which high-profile employees are departing (or taking extended breaks, in the case of Brockman). As AI developer Benjamin De Kraker stated on X, “If OpenAI is right on the verge of AGI, why do prominent people keep leaving?”

    AGI refers to a hypothetical AI system that could match human-level intelligence across a wide range of tasks without specialized training. It’s the ultimate goal of OpenAI, and company CEO Sam Altman has mentioned that it could emerge in the “reasonably close-ish future.” AGI also raises concerns about potential existential risks to humanity and the displacement of knowledge workers. However, the term remains somewhat ambiguous, and there’s considerable debate in the AI community about what truly constitutes AGI or how close we are to achieving it.

    Critics such as Ed Zitron view the emergence of the “next big thing” in AI as a necessary step to justify the substantial investments in AI models that aren’t yet profitable. The industry is hopeful that OpenAI, or a competitor, has a secret breakthrough waiting in the wings that will justify the massive costs associated with training and deploying LLMs.

    On the other hand, AI critic Gary Marcus has suggested that major AI companies have reached a plateau of large language model (LLM) capability centered around GPT-4-level models since no AI company has yet made a significant leap past the groundbreaking LLM that OpenAI released in March 2023.

    Microsoft CTO Kevin Scott has challenged these claims, stating that LLM “scaling laws” (which suggest LLMs increase in capability proportionate to more compute power thrown at them) will continue to deliver improvements over time, and that more patience is needed as the next generation (say, GPT-5) undergoes training.

    In the grand scheme of things, Brockman’s move seems like a long-overdue extended vacation (or perhaps a period to address personal matters beyond work). Regardless of the reason, the duration of the sabbatical raises questions about how the president of a major tech company can suddenly be absent for four months without impacting day-to-day operations, especially during a critical time in its history.

    Unless, of course, things are relatively calm at OpenAI—and perhaps GPT-5 won’t be released until at least next year when Brockman returns. However, this is mere speculation on our part, and OpenAI (whether voluntarily or not) sometimes surprises us when we least expect it. (Just today, Altman posted on X about strawberries, which some people interpreted as a hint that a major new model is undergoing testing or nearing release.)

    One of the most significant impacts of the recent departures on OpenAI might be that a few high-profile employees have joined Anthropic, a San Francisco-based AI company established in 2021 by ex-OpenAI employees Daniela and Dario Amodei.

    Anthropic provides a subscription service called Claude.ai, which is similar to ChatGPT. Its most recent LLM, Claude 3.5 Sonnet, along with its web-based interface, has quickly gained favor over ChatGPT among some vocal LLM users on social media, although it likely does not yet match ChatGPT in terms of mainstream brand recognition.

    In particular, John Schulman, an OpenAI co-founder and key figure in the company’s post-training process for LLMs, revealed in a statement on X that he’s leaving to join rival AI firm Anthropic to engage in more hands-on work: “This decision stems from my desire to deepen my focus on AI alignment and to start a new chapter of my career where I can return to hands-on technical work.” Alignment is a field that aims to guide AI models to produce helpful outputs.

    In May, Jan Leike, an alignment researcher at OpenAI, left the company to join Anthropic while criticizing OpenAI’s handling of alignment safety.

    According to The Information, Peter Deng, a product leader who joined OpenAI last year after working at Meta Platforms, Uber, and Airtable, has also left the company, although his destination is currently unknown. In May, OpenAI co-founder Ilya Sutskever departed to start a competing startup, and prominent software engineer Andrej Karpathy left in February to launch an educational venture.

    De Kraker raised an intriguing point, questioning why high-profile AI veterans would leave OpenAI if the company was on the verge of developing world-changing AI technology. He asked, “If you were confident that the company you are a key part of, and have equity in, is about to achieve AGI within one or two years, why would you leave?”

    Despite the departures, Schulman expressed optimism about OpenAI’s future in his farewell note on X. “I am confident that OpenAI and the teams I was part of will continue to thrive without me,” he wrote. “I’m incredibly grateful for the opportunity to participate in such an important part of history and I’m proud of what we’ve achieved together. I’ll still be rooting for you all, even while working elsewhere.”

    Former employees of OpenAI, Google, and Meta testified before Congress on Tuesday about the risks associated with AI reaching human-level intelligence. They urged members of the Senate Subcommittee on Privacy, Technology, and the Law to advance US AI policy to protect against harms caused by AI.

    Artificial general intelligence (AGI) is an AI system that achieves nearly human-level cognition. William Saunders, a former member of technical staff at OpenAI who resigned from the company in February, said during the hearing that AGI could lead to “catastrophic harm” through autonomously conducting cyberattacks or assisting in the creation of new biological weapons.

    Saunders suggested that while there are significant gaps in AGI development, it is conceivable that an AGI system could be built in as little as three years.

    “AI companies are making rapid progress toward building AGI,” Saunders stated, citing OpenAI’s recent announcement of GPT-o1. “AGI would bring about significant societal changes, including drastic shifts in the economy and employment.”

    He also emphasized that no one knows how to ensure the safety and control of AGI systems, which means they could be deceptive and conceal misbehaviors. Saunders criticized OpenAI for prioritizing speed of deployment over thoroughness, leaving vulnerabilities and increasing threats such as theft of the US’s most advanced AI systems by foreign adversaries.

    During his time at OpenAI, he observed that the company did not prioritize internal security. He highlighted long periods in which vulnerabilities could have allowed employees to bypass access controls and steal the company’s most advanced AI systems, including GPT-4.

    “OpenAI may claim they are improving,” he said. “However, I and other resigning employees doubt that they will be ready in time. This is not only true for OpenAI. The industry as a whole has incentives to prioritize rapid deployment, which is why a policy response is imperative.”

    AGI and the lack of AI policy are top concerns for insiders

    Saunders urged policymakers to prioritize policies that mandate testing of AI systems before and after deployment, require sharing of testing results, and implement protections for whistleblowers.

    “I resigned from OpenAI because I no longer believed that the company would make responsible decisions about AGI on its own,” he stated during the hearing.

    Helen Toner, who served on OpenAI’s nonprofit board from 2021 until November 2023, testified that AGI is a goal that many AI companies believe they could achieve soon, making federal AI policy essential. Toner currently serves as director of strategic and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

    “Many top AI companies, including OpenAI, Google, and Anthropic, are treating the development of AGI as a serious and attainable goal,” Toner stated. “Many individuals within these companies believe that if they successfully create computers as intelligent as or even more intelligent than humans, the technology will be extraordinarily disruptive at a minimum and could potentially lead to human extinction at a maximum.”

    Margaret Mitchell, a former research scientist at Microsoft and Google who now serves as chief ethics scientist at the AI startup Hugging Face, emphasized the need for policymakers to address the numerous gaps in AI companies’ practices that could result in harm. David Harris, senior policy advisor at the University of California Berkeley’s California Initiative for Technology and Democracy, stated during the hearing that voluntary self-regulation on safe and secure AI, which multiple AI companies committed to last year, is ineffective.

    Harris, who was employed at Meta working on the teams responsible for civic integrity and responsible AI from 2018 to 2023, mentioned that these two safety teams no longer exist. He highlighted the significant reduction in the size of trust and safety teams at technology companies over the past two years.

    Harris pointed out that numerous AI bills proposed in Congress offer strong frameworks for ensuring AI safety and fairness. Although several AI bills are awaiting votes in both the House and the Senate, Congress has not yet passed any AI legislation.

    During the hearing, Senator Richard Blumenthal (D-Conn.), chair of the subcommittee, expressed concern that we might repeat the same mistake made with social media by acting too late. He emphasized the need to learn from the experience with social media and not rely on big tech to fulfill this role.

    Companies that fail to utilize AI are at risk of falling behind their competitors. While the concept of AI as a fundamental business principle is not new, businesses must ensure they fully exploit the potential of AI as new advancements emerge. Technology-driven businesses use AI to foster innovation, maintain quality control, and monitor employee productivity. Additionally, AI can serve as a valuable tool for enhancing cybersecurity and providing personalized consumer experiences.

    Businesses recognize that AI is the future, but integrating it into existing infrastructure poses a common challenge for business decision-makers, as indicated by an HPE survey. Addressing skill and knowledge gaps during implementation and justifying costs are also obstacles to achieving success with AI. Overcoming these challenges is crucial for businesses seeking to leverage new AI technology.

    Businesses require a scalable AI-optimized solution that can adapt to heavy AI workloads while ensuring security and ease of management. This solution should also be capable of proactively addressing fluctuating data demands and infrastructure maintenance needs.

    The Advancement of AI in the Data Center

    As the pace of AI innovation and advancement continues, data centers must keep pace with this evolution. AI not only supports operations but also drives strategic business decisions by using analytics to provide insights. Integrating AI enterprise-wide creates operational efficiencies, positioning businesses ahead of their competitors and delivering significant productivity gains. These efficiencies include time savings, accelerated ideation, and new insights to automate and simplify workflow and processes.

    Like any technological advancement, it is crucial to consider potential challenges alongside the benefits. Complete transparency is vital, and when implementing AI in the business, various factors must be taken into account. It is important to carefully plan and assess, considering long-term strategies and providing training and development for employees.

    Understanding potential challenges is essential. Traditional data centers designed for CPU-intensive tasks face specific obstacles; for instance, GPUs require more physical space and higher power for operation and cooling. By planning for these challenges and other likely hurdles, businesses can set themselves up for success.

    The benefits of AI for any enterprise are extensive and continually expanding. By building an in-house AI ecosystem with pre-trained models, tools, frameworks, and data pipelines, businesses can power new AI applications that drive innovation and expedite time-to-value. Leveraging AI allows data centers to maintain control of their data and ensure more predictable performance for their enterprise.

    This places businesses and AI practitioners in control of navigating their AI journey, giving them a competitive edge. While implementing and scaling AI for production is challenging, the right partner and technology stack can mitigate risks and streamline operations to facilitate success.

    Using solutions specifically engineered and optimized for AI in the data center mitigates risks and simplifies IT operations. The HPE ProLiant DL380a Gen11 Server with Intel® Xeon® Scalable Processors is an ultra-scalable platform for AI-powered businesses. It serves as an ideal solution for AI infrastructure within the data center and can support generative AI, vision AI, and speech AI initiatives.

    The HPE ProLiant DL380a Gen11 server is designed for fine-tuning and inference, featuring leading Intel® Xeon® Scalable Processors and NVIDIA GPUs.

    The role of AI in modern business is constantly evolving. Integrating AI into the data center presents an opportunity for growth, business success, and operational efficiency. Businesses seeking exceptional processing power, performance, and efficiency to support their AI journey can benefit from solutions like the HPE ProLiant DL380a Gen11 server with Intel® Xeon® Scalable Processors. With AI-driven automation and insights, intelligent businesses can become more resilient, secure, and responsive to market needs.

    OpenAI recently introduced a five-tier system to assess its progress toward developing artificial general intelligence (AGI), as reported by Bloomberg. This new classification system was shared with employees during a company meeting to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not currently exist, and it may be seen as a move to attract investment.

    OpenAI has previously stated that AGI, referring to an AI system capable of performing tasks like a human without specialized training, is its primary goal. The pursuit of technology that can replace humans at most intellectual work has generated significant attention, even though it could potentially disrupt society.

    OpenAI CEO Sam Altman has expressed his belief that AGI could be achieved within this decade. Much of the CEO’s public messaging has focused on how the company and society might handle the potential disruption brought about by AGI. Therefore, a ranking system to communicate internal AI milestones on the path to AGI makes sense.

    OpenAI’s five levels, which it plans to share with investors, range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology, such as GPT-4o that powers ChatGPT, currently falls under Level 1, encompassing AI capable of engaging in conversational interactions. Additionally, OpenAI executives have informed staff that they are close to reaching Level 2, known as “Reasoners.”
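
    For quick reference, the reported ladder can be summarized in a minimal Python sketch. The level names and descriptions below follow the press reporting summarized in this article (Level 3 is the autonomous “agent” stage); nothing here reflects an official OpenAI specification or API.

        # Hypothetical summary of OpenAI's reported capability levels.
        # Names and descriptions follow press reports, not any official OpenAI spec.
        REPORTED_OPENAI_LEVELS = {
            1: ("Conversational AI", "chatbot-style interaction, e.g. what powers ChatGPT today"),
            2: ("Reasoners", "human-level problem solving; OpenAI says it is close to this"),
            3: ("Agents", "autonomous systems that can take actions on a user's behalf"),
            4: ("Innovators", "AI that can independently develop new innovations"),
            5: ("Organizations", "AI that can do the work of an entire organization"),
        }

        def describe(level: int) -> str:
            """Return a one-line description of a reported level."""
            name, summary = REPORTED_OPENAI_LEVELS[level]
            return f"Level {level} ({name}): {summary}"

        if __name__ == "__main__":
            for lvl in sorted(REPORTED_OPENAI_LEVELS):
                print(describe(lvl))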

    OpenAI is not the only entity attempting to quantify levels of AI capabilities. Similar to levels of autonomous driving mapped out by automakers, OpenAI’s system resembles efforts by other AI labs, such as the five-level framework proposed by researchers at Google DeepMind in November 2023.

    OpenAI’s classification system also bears some resemblance to Anthropic’s “AI Safety Levels” (ASLs) published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, although they focus on different aspects.

    While Anthropic’s ASLs are explicitly focused on safety and catastrophic risks, OpenAI’s levels track general capabilities. However, any AI classification system raises questions about whether it is possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry has a history of overpromising AI capabilities, and linear progression models like OpenAI’s potentially risk fueling unrealistic expectations.

    There is currently no consensus in the AI research community on how to measure progress toward AGI or even if AGI is a well-defined or achievable goal. Therefore, OpenAI’s five-tier system should be viewed as a communications tool to attract investors, showcasing the company’s aspirational goals rather than a scientific or technical measurement of progress.

  • Musk says robotaxis are key to Tesla’s future profits

    An autonomous Tesla being used for Uber collided with an SUV at a crossroads in suburban Las Vegas in April, sparking concerns about the safety of self-driving “robotaxis” operating in a regulatory gray area in US cities.

    Elon Musk, the CEO of Tesla, plans to unveil his vision for a robotaxi, a self-driving vehicle designed for ride-hailing services, on October 10. He has long considered creating a Tesla-operated taxi network using autonomous vehicles owned by individuals.

    Despite its limitations, many ride-hail drivers utilizing Tesla’s Full Self-Driving (FSD) software, which costs $99 per month, find it beneficial as it reduces stress, allowing them to work longer hours and earn more money.

    Reuters was the first to report on the Las Vegas accident and the subsequent investigation by federal safety officials, as well as the widespread use of Tesla autonomous software among ride-hail drivers.

    While self-driving cabs with human backup drivers from companies like Alphabet’s Waymo and General Motors’ Cruise are heavily regulated, authorities state that Tesla drivers are solely responsible for their vehicles, regardless of whether they use driver-assist software. Unlike Waymo and Cruise, Tesla’s FSD is categorized as requiring driver oversight rather than being fully autonomous.

    The other driver involved in the April 10 Las Vegas accident, who was hospitalized, was found to be at fault for not yielding the right of way, according to the police report. The Tesla driver, Justin Yoon, claimed in a YouTube video that the Tesla software failed to decelerate his vehicle even after the SUV appeared from a blind spot caused by another vehicle.

    Yoon, known for his “Project Robotaxi” YouTube channel, was seated in the driver’s seat of his Tesla with his hands off the wheel when the incident occurred in a suburban area of Las Vegas, as shown in the car’s footage. The Tesla on FSD was traveling at 46 mph (74 kph) and initially did not detect an SUV crossing the road in front of Yoon. Yoon took control at the last moment and steered the car to avoid a direct collision, as seen in the footage.

    “It’s not perfect, it’ll make mistakes, it will probably continue to make mistakes,” Yoon mentioned in a video following the crash. Both Yoon and his passenger sustained minor injuries, and the car was declared a total loss.

    Yoon had discussed using FSD with Reuters before publicly sharing videos of the accident but did not respond to requests for comment afterward.

    Tesla did not provide a comment in response to requests. Reuters was unable to reach the Uber passenger and the other driver for comment.

    Ride-hailing companies Uber and Lyft, when asked about FSD, emphasized that drivers are accountable for safety.

    Uber, which has been in touch with the driver and passenger involved in the Las Vegas accident, referenced its community guidelines, stating that drivers are expected to maintain a safe environment for riders, even if their driving practices are within legal bounds.

    Uber also highlighted Tesla’s instructions, which advise drivers using FSD to keep their hands on the wheel and be prepared to take control at any moment.

    Lyft stated, “Drivers agree that they will not engage in reckless behavior.”

    Musk has ambitious plans for the FSD product, envisioning it as the basis for the robotaxi software. He aims to establish a Tesla-operated autonomous ride service using customers’ vehicles when not in use.

    However, drivers speaking to Reuters also pointed out significant issues with the technology, such as sudden unexplained acceleration and braking. Some have chosen to stop using it in challenging scenarios like airport pickups, navigating parking lots, and construction zones.

    “I do use it, but I’m not completely comfortable with it,” said Sergio Avedian, a ride-hail driver in Los Angeles and a senior contributor on “The Rideshare Guy” YouTube channel, an online community of ride-hailing drivers with nearly 200,000 subscribers.

    Avedian avoids using FSD when carrying passengers. However, based on his conversations with fellow drivers on the channel, he estimates that 30% to 40% of Tesla ride-hail drivers across the US regularly use FSD.

    FSD falls under the federal government’s classification as a form of partial automation that necessitates the driver’s full engagement and attentiveness while the system handles steering, acceleration, and braking. It has attracted increased regulatory and legal attention following at least two fatal accidents involving the technology. However, utilizing it for ride-hail services is not prohibited by law.
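
    For context on that classification, here is a minimal sketch of the SAE J3016 driving-automation levels as they are commonly summarized. The descriptions are general industry paraphrases, and the FSD comment simply restates the “partial automation” categorization described above, not any regulator’s formal ruling.

        # SAE J3016 driving-automation levels, paraphrased for illustration.
        SAE_LEVELS = {
            0: "No automation: the human driver does everything.",
            1: "Driver assistance: steering OR speed support (e.g. adaptive cruise).",
            2: "Partial automation: steering AND speed support; driver must supervise.",
            3: "Conditional automation: system drives in limited conditions; human is the fallback.",
            4: "High automation: no human fallback needed within a defined service area.",
            5: "Full automation: can drive anywhere a human could, with no supervision.",
        }

        def requires_supervision(level: int) -> bool:
            """Levels 0-2 always require an attentive human driver."""
            return level <= 2

        # FSD is treated as partial automation (Level 2), while a robotaxi with no
        # steering wheel would have to operate at Level 4 or 5.
        assert requires_supervision(2) and not requires_supervision(4)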

    “Ride-share services permit the use of these partial automation systems in commercial environments, and this is something that should be subjected to significant scrutiny,” remarked Jake Foose, an analyst at Guidehouse Insights.

    The US National Highway Traffic Safety Administration acknowledged Yoon’s crash and had contacted Tesla for further details, but did not provide specific answers regarding additional regulations or guidelines.

    Authorities in California, Nevada, and Arizona, which oversee the operations of ride-hail and robotaxi companies, stated that they do not regulate the use of FSD and similar systems, as they fall outside the scope of robotaxi or AV regulation. They refrained from commenting on the crash.

    Uber recently updated its software to transmit passenger destination details to Tesla’s dashboard navigation system – a modification that benefits FSD users, according to Omar Qazi, a prominent user with 515,000 followers who posts under the handle @WholeMarsBlog and frequently receives public responses from Musk on the platform.

    “This will simplify Uber rides on FSD even more,” Qazi stated in a post.

    Industry experts noted that Tesla, Uber, and Lyft lack mechanisms to determine if a driver is both working for a ride-hailing company and using FSD.

    While almost all major automakers offer some form of partial automation technology, most are limited in their capabilities and approved for use solely on highways. In contrast, Tesla claims that FSD enables the vehicle to drive itself nearly anywhere with active driver supervision but minimal intervention.

    “I’m pleased that Tesla is implementing this and accomplishing it,” remarked David Kidd, a senior research scientist at the Insurance Institute for Highway Safety. “However, from a safety perspective, it has raised numerous concerns.”

    Kidd suggested that instead of new regulations, NHTSA should consider issuing fundamental, nonbinding guidelines to prevent the misuse of such technologies.

    Missy Cummings, director of the George Mason University Autonomy and Robotics center and a former NHTSA advisor, emphasized that any federal oversight would necessitate a formal investigation into how ride-hail drivers utilize all driver-assistance technology, not just FSD.

    “If Uber and Lyft were wise, they would get ahead of this and prohibit it,” she remarked.

    Meanwhile, ride-hail drivers are expecting more from Tesla. Kaz Barnes, who has completed over 2,000 trips with passengers using FSD since 2022, expressed anticipation for the day when he could exit the car and let Musk’s network send it to work.

    “It would be like removing the training wheels,” Barnes stated. “I hope to be able to do that with this car one day.”

    Elon Musk Reveals Futuristic Details of the Tesla RoboTaxi

    Elon Musk envisions a revolutionary future for Tesla’s RoboTaxi. He sees a world where cars are completely transformed, no longer needing human intervention, steering wheels, or pedals. Musk emphasizes that the goal is not simply to improve existing automotive technology but to completely redefine it.

    Musk stated that they can easily create a car without steering wheels or pedals and can delete parts to accelerate the process if needed, highlighting Tesla’s commitment to designing vehicles for autonomy from the start.

    Musk describes the concept of a dedicated RoboTaxi as futuristic, reflecting advanced underlying technology. He mentioned that it would have a unique design unlike anything seen on the road today, embodying the technological leap that Tesla is making.

    Musk doesn’t just aim to create a single model but envisions a fleet of these vehicles, numbering in the millions, operating globally. He predicted that by next year, Tesla would have over a million cars on the road with full self-driving hardware. He argues that this massive scale is essential not only for Tesla’s business model but also for transitioning the world to sustainable energy.

    Musk also sees this technological shift as a catalyst for broader societal change. He believes that the convenience and safety of autonomous vehicles will make them the preferred choice for consumers, ultimately leading to a world where human-driven cars are considered relics of a less safe era.

    Musk boldly claimed that in the future, consumers will want to outlaw people driving their own cars because it is unsafe. He believes that autonomous driving technology will become so advanced and reliable that the idea of humans controlling vehicles will seem antiquated and hazardous.

    Musk has made it clear that the technology and infrastructure to support this transformation are already in place. He stated that Tesla is uniquely positioned to lead this charge, with vehicle design and manufacturing, in-house computer hardware, in-house software development, and AI capabilities.

    In essence, Musk’s vision for Tesla’s RoboTaxi is about creating a new paradigm for transportation, one that is safer, more efficient, and fundamentally different from anything that has come before. He concluded that the impact of Tesla’s RoboTaxi could extend far beyond the automotive industry, shaping the future of how we move and live.

    Elon Musk’s vision for full autonomy in Tesla vehicles is based on his belief that artificial intelligence will be the key to unlocking a future where cars can navigate the world without human intervention. He believes that artificial intelligence will profoundly change the world. For Tesla, achieving full autonomy is not just about technological advancements but also about creating a comprehensive system that can learn, adapt, and ultimately drive more safely than any human could.

    Tesla’s approach to achieving full autonomy relies heavily on digital neural networks and cameras, rather than more traditional methods like LIDAR or radar. Musk explained that Tesla’s technology mirrors the way humans drive, relying on visual input to navigate the environment. He emphasized that humans are biological neural nets and use eyes to drive, while the analog for digital is cameras and digital neural nets, making the case that Tesla’s technology is a natural progression from how we’ve driven for over a century.

    Tesla faces a major challenge in achieving full autonomy, which involves ensuring that its vehicles can handle the complexities of real-world driving scenarios. According to Musk, the key to this is making the car “fully intelligent.” This goes beyond teaching the vehicle to follow traffic laws or recognize objects; it requires the ability to interpret and react to unpredictable situations, such as understanding the intentions of other drivers and pedestrians. Musk emphasized the sophistication required for true autonomy.

    Musk is confident in Tesla’s progress toward full autonomy. He believes that Tesla is “very close” to achieving a level of autonomy where the car could navigate from point A to point B without any human input. For instance, he mentioned that he is currently in Austin and the car could take him to the airport without any interventions, showcasing the current capabilities of Tesla’s AI-driven system.

    Nevertheless, the journey to full autonomy is not without challenges. Regulatory approval remains a significant barrier to the widespread deployment of fully autonomous vehicles. Musk acknowledges this but remains positive, stating that Tesla is working closely with regulators to ensure that its RoboTaxis can be legally and safely deployed.

    Musk shared that the company will have the first operating RoboTaxis next year with no one in them, although not in all jurisdictions due to the lack of regulatory approval everywhere. He expressed confidence that at least some regions will grant regulatory approval, indicating that while the technology is nearly ready, full deployment will require cooperation with global regulatory bodies.

    Ultimately, Musk envisions a future where Tesla’s autonomous vehicles are not only a technological marvel but also a ubiquitous part of everyday life, providing safer and more efficient transportation. The potential implications of this shift are significant, as it could dramatically reduce traffic accidents and transform how we perceive car ownership and mobility.

    Musk noted that the average use of a passenger vehicle is only about 10 hours per week out of 168 hours, suggesting that autonomous vehicles could operate for a far greater portion of the day, increasing their utility and efficiency.
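
    As a rough illustration of that utilization argument: the 10-of-168-hours figure is Musk’s, while the higher robotaxi operating hours below are a purely hypothetical assumption used only to show the arithmetic.

        # Back-of-envelope utilization math based on the figure quoted above.
        HOURS_PER_WEEK = 168
        personal_use_hours = 10           # Musk's estimate for a privately owned car
        hypothetical_robotaxi_hours = 50  # illustrative assumption, not a Tesla figure

        personal_utilization = personal_use_hours / HOURS_PER_WEEK            # ~6%
        robotaxi_utilization = hypothetical_robotaxi_hours / HOURS_PER_WEEK   # ~30%

        print(f"Personal car utilization: {personal_utilization:.0%}")
        print(f"Hypothetical robotaxi utilization: {robotaxi_utilization:.0%}")
        print(f"Implied increase: {robotaxi_utilization / personal_utilization:.0f}x")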

    As Tesla moves closer to realizing full autonomy, the company continues to push the boundaries of what is possible in automotive technology. Musk emphasized the scale and ambition of Tesla’s quest to lead the world into a new era of autonomous driving.

    Economic and Social Impact

    The introduction of Tesla’s RoboTaxi fleet is set to bring about significant economic and social changes, reshaping industries and daily life. Elon Musk has frequently emphasized the transformative potential of autonomous vehicles, not just as a technological leap, but as a catalyst for broad societal shifts.

    Musk has declared that the company will achieve truly massive scale, surpassing any scale achieved in the history of humanity, underlining the ambitious scope of Tesla’s plans. The economic implications of this scale are extensive, with the potential to revolutionize the transportation sector and create new avenues for economic growth.

    One of the most immediate impacts of Tesla’s RoboTaxis could be on the cost of transportation. Musk has suggested that the widespread adoption of autonomous vehicles could significantly reduce the cost of travel, making it more accessible to a wider population.

    Musk has stated that it is financially illogical to buy anything other than a Tesla, highlighting the cost-efficiency of Tesla’s autonomous technology. By reducing the need for personal car ownership and making transportation as simple as summoning a RoboTaxi, Tesla could substantially lower household transportation expenses, freeing up income for other uses.

    Furthermore, the economic model Musk envisions for the RoboTaxi network could provide new income opportunities for individuals and businesses. Tesla owners will have the option to add their vehicles to the RoboTaxi fleet, earning money when they’re not using their cars.

    Musk explained that any customer will be able to add or remove their car to the Tesla Network, drawing a comparison to a blend of Uber and Airbnb. This could democratize access to income generation, transforming personal vehicles into revenue-generating assets.

    The broader economic impact of Tesla’s autonomous vehicles could also extend to the labor market. While the shift to autonomous driving will likely displace some jobs, particularly in the transportation sector, it could also create new opportunities in areas like vehicle maintenance, software development, and fleet management.

    Elon Musk has recognized the possibility of job displacement but contends that the overall advantages to society, such as improved safety and efficiency, outweigh the drawbacks. He has forecast that consumers will eventually seek to prohibit human driving due to safety concerns, indicating that the societal shift toward automation is unavoidable.

    On a societal level, the emergence of self-driving vehicles could result in significant alterations in urban planning and human interaction with their surroundings. With RoboTaxis decreasing the necessity for individual car ownership, urban areas might witness a reduction in the need for parking spaces, enabling the creation of more green spaces, pedestrian zones, and other community-driven developments.

    Moreover, the enhanced mobility provided by RoboTaxis could improve transportation access for underserved communities, contributing to greater social fairness. Musk highlighted the potential for RoboTaxis to make transportation more efficient and widely accessible, noting that the usefulness of a passenger vehicle could increase substantially.

    By transitioning more vehicles to electric and autonomous operation, Tesla could help reduce carbon emissions and support global efforts to combat climate change. Musk has emphasized that this transition is necessary to move the world toward sustainable energy, positioning Tesla’s autonomous vehicle initiative as a crucial element of the broader drive toward sustainability.

    In summary, the economic and social implications of Tesla’s RoboTaxis are expected to be extensive and diverse. From reducing transportation expenses and creating new revenue streams to reshaping urban landscapes and advancing sustainability, the effects of this technology will be felt across numerous sectors and aspects of life. As Musk himself has stated, “This is gigantic,” and the world is only beginning to comprehend the full extent of the changes that Tesla’s autonomous vehicles will bring.

    Challenges and Doubts

    Despite Elon Musk’s ambitious vision for Tesla’s RoboTaxi fleet, substantial obstacles and skepticism persist. Tesla must surmount technical, regulatory, and societal hurdles on the path to full autonomy before its vision can materialize. Musk has acknowledged these challenges, remarking, “It turns out that in order to use this technology, the car has to really be quite fully intelligent.” Creating an autonomous vehicle capable of navigating the complexities of real-world driving is a considerable undertaking, and Musk himself has admitted that progress has been slower than initially expected.

    One of the main challenges facing Tesla is the technical intricacy of achieving full autonomy. While the company has made significant progress in developing its Full Self-Driving (FSD) software, there are still numerous exceptional scenarios in driving that the system must handle flawlessly. Musk explained that the vehicle must learn to interpret various situations and assess the intentions of drivers and pedestrians, emphasizing the need for the vehicle to possess situational awareness and decision-making capabilities comparable to or surpassing those of a human driver.

    Apart from the technical challenges, Tesla must also navigate a complex and frequently inconsistent regulatory environment. Laws governing autonomous driving vary widely between countries and even within different states or regions, creating a patchwork of regulations that Tesla must adhere to.

    Musk has expressed confidence that Tesla will soon receive regulatory approval for its RoboTaxis in certain jurisdictions, stating, “I’m confident we will have at least regulatory approval somewhere literally next year.” However, widespread approval is likely to take much longer, as regulators grapple with the ethical and safety implications of allowing fully autonomous vehicles on the road.

    There are also doubts about the timeline and feasibility of Tesla’s ambitious objectives. Musk’s previous predictions regarding the timeline for full autonomy have experienced delays, leading some industry experts and analysts to question whether the technology is as close to deployment as Musk suggests.

    The inherent unpredictability of software development, particularly in a complex field like autonomous driving, means that even minor setbacks can result in significant delays. Musk has stated, “I think we’re quite close to having the car be fully autonomous,” but there remains uncertainty about when this milestone will be achieved.

    Additionally, there is doubt about whether consumers will completely accept the concept of autonomous vehicles, especially in the initial phases of implementation. While Musk imagines a future where human driving is considered unsafe and outdated, convincing the public to have faith in and adopt this new technology will involve overcoming significant psychological barriers.

    The notion of giving up control to a machine, especially in a life-or-death situation like driving, is daunting for many individuals. Musk has recognized this challenge, likening it to the initial skepticism surrounding elevators: “Elevators used to be operated on a big lever… now you do not have elevator operators.”

    Tesla will need to manufacture vehicles at an unprecedented scale while ensuring that each car meets the strict requirements for full autonomy. This will necessitate not only substantial investment but also flawless execution in manufacturing, software development, and customer support.

    Musk has highlighted the extent of Tesla’s ambitions, stating frequently, “We’re going to move to just truly massive scale—scale that no company has ever achieved in the history of humanity.”

    In summary, while the vision of Tesla’s RoboTaxi fleet is unquestionably daring and potentially transformative, the path ahead is anything but easy. Technical challenges, regulatory obstacles, consumer skepticism, and the sheer scale of the endeavor all present significant hurdles that Tesla must overcome.

    However, as Musk himself has demonstrated time and again, he is undeterred by challenges, viewing them as part of the journey toward a revolutionary future.

    A Bold New Future

    Elon Musk’s vision for the Tesla RoboTaxi isn’t solely about transportation; it’s about redefining the very concept of mobility and questioning the established norms of how society perceives vehicles. As Musk sees it, the introduction of fully autonomous vehicles signifies a pivotal moment in human history—one that could fundamentally alter our cities, economies, and daily lives. “We’re going to move to just truly massive scale—scale that no company has ever achieved in the history of humanity,” Musk has said, emphasizing the magnitude of his aspirations.

    One of the most radical aspects of Musk’s vision is the notion that, in the future, owning a non-autonomous car will appear as outdated as operating a horse-drawn carriage. “In the future, consumers will want to outlaw people driving their own cars because it is unsafe,” Musk has predicted. This statement reflects his belief that once autonomous vehicles become the norm, human driving will be viewed as an unnecessary risk—a sentiment that could lead to significant cultural and legal shifts.

    Musk’s plans for the RoboTaxi network also suggest a broader transformation in how we think about car ownership and urban planning. By enabling vehicles to operate almost continuously, rather than sitting idle for most of the day, Tesla’s RoboTaxis could significantly reduce the number of cars needed on the road. “The average use of a passenger vehicle is only about 10 hours per week out of 168 hours,” Musk explained. With autonomous vehicles, he envisions that usage could increase significantly, leading to fewer cars on the road, less congestion, and more efficient use of urban space.

    This shift could have profound economic implications as well. Musk has frequently highlighted the financial advantages of a RoboTaxi system, both for Tesla and for individual owners. “The probable gross profit from a single RoboTaxi could be something on the order of $30,000 per year,” Musk has estimated, suggesting that vehicle owners could generate significant income by sharing their cars on the network. For Tesla, this model represents a potential new revenue stream that could surpass traditional car sales, positioning the company at the forefront of a new era in mobility.
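
    To put the quoted $30,000-per-year figure in perspective, the short sketch below converts it into an implied gross profit per operating hour under a purely hypothetical operating schedule; Tesla has published no such breakdown, so every input other than the $30,000 estimate is an assumption made for illustration.

        # Hypothetical breakdown of the quoted $30,000/year gross-profit estimate.
        annual_gross_profit = 30_000       # Musk's quoted estimate, in USD
        operating_hours_per_week = 50      # illustrative assumption only
        weeks_per_year = 52

        operating_hours_per_year = operating_hours_per_week * weeks_per_year  # 2,600
        implied_gross_per_hour = annual_gross_profit / operating_hours_per_year

        # ~$11.54/hour under these assumptions; real economics would depend on fares,
        # utilization, charging, cleaning, insurance, and maintenance costs.
        print(f"Implied gross profit per operating hour: ${implied_gross_per_hour:.2f}")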

    Furthermore, the broader adoption of RoboTaxis could contribute significantly to Tesla’s overarching mission of accelerating the world’s transition to sustainable energy. By reducing the total number of vehicles required and optimizing their usage, the RoboTaxi network could lower global energy consumption and greenhouse gas emissions.

    Musk’s vision of a fleet of electric, autonomous vehicles operating at scale is not just about convenience; it’s a critical component of his strategy to combat climate change. “Massive scale full self-driving has to happen in order to transition the world to sustainable energy,” Musk has asserted, linking the success of the RoboTaxi network directly to his broader environmental goals.

    As Tesla continues to push the boundaries of what’s possible, Musk’s vision for the future of transportation remains both ambitious and thought-provoking. While challenges undoubtedly lie ahead, the potential rewards—both for Tesla and for society as a whole—are enormous. If Musk’s predictions hold true, the advent of RoboTaxis could mark the beginning of a bold new future, one where transportation is safer, more efficient, and more sustainable.

    Tesla is planning to unveil its highly anticipated robotaxi later this year. Here’s what we know so far about the “Cybercab.”

    Recently, Tesla CEO Elon Musk has shown a growing disinterest in the car business. He emphasizes that Tesla’s future depends on artificial intelligence and robotics rather than selling more Teslas. A key part of this vision centers around self-driving cars that can be used as “robotaxis” to completely eliminate the need for human drivers.

    A test prototype of what seems to be Tesla’s upcoming robotaxi was spotted by a Reddit user who claims to work at the Warner Bros. studio in Los Angeles. The photo shows a heavily camouflaged, two-door, bright yellow prototype.

    The highly anticipated robotaxi will be revealed on October 10. Sources have revealed that the unveiling will occur at the Warner Bros. Discovery movie studio in the Los Angeles area. The studio, situated in Burbank, is a historic film and television production set where iconic TV shows like Friends and films like Batman were filmed.

    However, Musk doesn’t want to rely on standard Model 3 sedans and Model Y SUVs for his Uber competitor. Tesla is working on a dedicated robotaxi vehicle, which Musk hinted may be named the “Cybercab” during a recent earnings call.

    This is an extremely ambitious plan, representing the ultimate extension of Tesla’s longstanding focus on its Autopilot and Full Self-Driving systems. However, it is also unproven and depends on aggressive development of new technologies, uncertain consumer support, non-existent regulations, and the ability of Autopilot and FSD to withstand legal challenges and even a federal criminal investigation.

    In other words, this is Musk’s biggest and riskiest move to date, and its success is far from certain. Nevertheless, let’s review what we know based on the company’s various statements and concept artwork that has been seen.

    What do we know about the Tesla Robotaxi?

    Musk has been talking for at least a decade about the imminent arrival of self-driving capability in Teslas. He has suggested over the years that autonomous Teslas could generate significant income for their owners by transporting passengers when they would otherwise be parked. However, none of this has materialized yet.

    In recent years, Tesla executives have also discussed the concept of a purpose-built Tesla robotaxi, designed from the ground up for autonomous driving rather than just being capable of autonomous driving on occasion.

    The robotaxi plan has taken precedence over more conventional, and arguably more sensible, projects at Tesla. In April, Reuters reported that Tesla had abandoned plans for an affordable mass-market vehicle, known informally as the Model 2, in favor of focusing entirely on the robotaxi. (Musk has suggested that this more affordable model is still in the pipeline, but it doesn’t appear to be a priority.)

    When will the Tesla Robotaxi be revealed?

    In April, Musk stated in a post on X that Tesla would showcase the robotaxi on August 8. However, in July, Bloomberg reported that Tesla intended to postpone the event until October. According to the report, Tesla’s teams required more time to develop additional robotaxi prototypes.

    During Tesla’s Q2 2024 earnings call, CEO Elon Musk announced that the Robotaxi’s reveal date would be October 10th. However, Tesla is known for its delays, so this date may change.

    When will the Tesla Robotaxi be released?

    So far, Tesla has not achieved fully autonomous driving in its current vehicles. It sells a feature marketed as “Full Self-Driving,” but this system requires driver supervision and is far from flawless.

    Before deploying robotaxis without steering wheels, Tesla would need to deliver reliable self-driving technology, and it is uncertain when or if this will happen.

    On Tuesday’s Q2 2024 earnings call, Musk stated in response to an investor question that Tesla would not be able to offer rides to customers until Full Self-Driving can be used without supervision. In its earnings report issued on Tuesday, Tesla stated that the “timing of Robotaxi deployment depends on technological advancement and regulatory approval.”

    The fact that the robotaxi will be unveiled this year does not necessarily mean that it is anywhere close to being ready for production. Tesla unveiled the Cybertruck pickup in late 2019, and the first trucks did not reach customers until four years later. The design for an upcoming supercar, the Tesla Roadster, was revealed in 2017, and the vehicle has still not been released.

    What will the appearance of the Tesla Robotaxi be?

    We are expecting to receive specific details about the vehicle’s design in August or possibly October. In the meantime, some strong hints have emerged.

    In 2022, Musk stated that the robotaxi will not have a steering wheel or pedals and described its design as “futuristic.” Musk’s biographer, Walter Isaacson, mentioned that an early design for the vehicle had a “Cybertruck futuristic feel.” This could suggest a more angular, polygonal aesthetic compared to the sleek Model 3 and Model Y.

    An illustration in Elon Musk’s book depicts a small, two-seat vehicle with a teardrop shape. In April, Musk referred to the robotaxi as the Cybercab. Although it’s uncertain if that will be the model’s actual name, it would make sense given the reportedly Cybertruck-like styling.

    In a recent video posted to X, Tesla appears to have released some additional hints. The clip shows what could potentially be the robotaxi’s front bumper and white interior.

    Previously, Tesla stated that it would construct the robotaxi using its lower-cost, next-generation vehicle platform. However, Tesla recently announced that it’s expediting new-vehicle projects by combining its current and next-generation technologies. It’s unclear which technology will form the basis of the robotaxi.

    How will Tesla’s Uber competitor function?

    During an April earnings call, Musk described Tesla’s taxi service as a blend of Airbnb and Uber. The concept is that Tesla’s fleet will include both its own robotaxis and the vehicles of Tesla owners who choose to participate – meaning you own the car and, when you’re not using it, you can “rent” it out for robotaxi duty.

    This is something that Musk has promised in various forms for years. In 2019, he stated that up to a million Model 3s on US roads would be deployable as fully autonomous (SAE Level 5) robotaxis by 2020. As you might have noticed, that did not happen.

    Nevertheless, Tesla is clearly moving in that direction. In its earnings report, the automaker shared some representations of what a Tesla ride-hailing app might look like.

    How is Tesla’s Robotaxi different from Waymo, Cruise, and Zoox?

    Waymo and Cruise, autonomous taxi companies owned by Alphabet and General Motors, respectively, both utilize modified versions of off-the-shelf electric vehicles for their operations. Waymo uses Jaguar I-Paces, while Cruise uses Chevrolet Bolts.

    As they have developed their self-driving technology on public roads, both companies have utilized safety drivers who can monitor and take over if something goes awry. Cruise, which temporarily halted operations after striking a pedestrian late last year, is gradually reintroducing its vehicles with people in their drivers’ seats.

    Zoox, Amazon’s self-driving startup, is developing a taxi service that utilizes purpose-built, pod-like vehicles without steering wheels. However, it is still in the testing phase and has not yet launched commercial operations.

    Unlike any of those companies, Tesla asserts that it can achieve reliable self-driving capability using only cameras. Other autonomous-driving endeavors rely on more sensors, including LiDAR units that utilize lasers to create a three-dimensional image of the world. Many autonomous-vehicle experts question whether Tesla’s streamlined, vision-only approach will be successful.

    What obstacles are preventing this from moving forward?

    How much time do you have? First and foremost, the plan hinges on Tesla “solving” the issue of fully autonomous driving, something that many other experts have cautioned is decades away, not years, if it ever materializes. Additionally, Tesla has traditionally avoided technologies for autonomy that other automakers support, such as LIDAR. Instead, it’s attempting to train AI using cameras, sensors, and supercomputers.

    Moreover, the US is not equipped for a widespread robotaxi network of any kind. While robotaxi testing and pilot programs are underway in approximately 10 states, comprehensive federal regulations for them do not exist. There are unresolved issues of accident liability and other matters that must be addressed first.

    As we hinted at earlier, Tesla’s existing FSD and Autopilot technologies have been plagued by high-profile accidents, legal action, state investigations, and even a Department of Justice inquiry examining whether the automaker misled investors and consumers about the capabilities of its driver-assistance systems.

    So why does the Tesla Robotaxi matter?

    Tesla, its enthusiastic investors, and highly optimistic Wall Street analysts believe that autonomous driving will enable the automaker to generate substantial revenue over time. This is partly why Tesla is so valued.

    As of now, it is valued at $544 billion, approximately ten times the market caps of competitors like Ford and General Motors. A functional robotaxi will be crucial if Tesla wants to live up to the expectations set by its lofty stock price.

    What hardware features make a robotaxi, and what will Tesla do?

    Several companies have created or announced intentions to develop a custom-designed robotaxi. While this is fascinating, the design of the vehicle is only the third most crucial element for a robotaxi service. The first essential component is a reliable and safe self-driving software system, in addition to the sensor suite.

    The second crucial element is the infrastructure, which encompasses maps (or working with minimal maps), depots, storage, charging, cleaning, management, communications, customer service, remote assistance, a user app, crash management and insurance, maintenance, pricing model, and local regulator relations.

    Following these aspects, there is the physical vehicle, which is actually the most visible aspect of the service from a customer standpoint. Tesla has announced plans for a robotaxi and aims to unveil concepts for the physical form in October. Their initial goal was to do so in August, but they have postponed the release. There is currently no set timeline for when Tesla will have operational software, and they have shown minimal progress on infrastructure.

    Most companies have modified existing vehicles, at least for their initial projects. Some have designed custom vehicles with varying degrees of innovation. GM’s Cruise had a custom vehicle called the Origin, but it has recently put it on hold, at least for the time being. Waymo has been testing some of their relatively basic custom vehicles. Amazon’s Zoox has been focused on the custom vehicle from the beginning; it is their reason for existence.

    While most Chinese companies have made modest modifications to standard cars, Baidu Apollo has unveiled a $28,000 custom vehicle. There is a wide range of opinions on what a vehicle should look like now and in the future.

    Starting with a standard car provides significant economic advantages. Many companies are well-versed in manufacturing cars similar to those on the market today. It involves minimal risk. Additionally, companies can begin with much cheaper off-lease cars and resell them if they are not suitable or become too old for robotaxi service. Custom vehicles are a commitment for the life of the vehicle and entail a significant design risk.

    Waymo and others have typically acquired an existing model with minor modifications, such as places for mounting sensors and hardware. These vehicles come 90% complete from the factory, and the robotaxi company finishes the rest. Contract manufacturing of this nature is easy to arrange. Some modifications, such as redundant brakes and steering motors, and full drive-by-wire, may be specific to the robotaxi model.

    No steering wheel and pedals

    Early modified cars still retain these, and passengers are not allowed to sit in that seat. They are utilized by safety drivers testing the vehicles and rescue drivers recovering them. The first major step in creating a robotaxi is to take a standard car design and eliminate all the components used by drivers. This entails removing numerous items, not just the wheel and pedals. It also includes most of the elements on the dashboard, as well as mirrors, adjustable driving positions, and much more.

    In fact, the cost savings from the removed items may exceed the added expense of sensors and computing power! A seamless windshield is not even necessary, though front-facing passengers prefer it. Review the brochure for any modern high-quality car and peruse the complete list of features. Most of them are solely for the driver.

    Most of these models still allow an authorized individual to manually drive the car using a plug-in (or even wireless) set of controls, similar to a video game steering wheel. This enables them to be manually moved when necessary.

    Face-to-Face Seating

    When traveling in a social group, this provides a distinctly different experience (although it is less favorable for groups of strangers). A minority of individuals find rear-facing seats uncomfortable, but they can opt for the front-facing ones. Rear-facing seats are actually safer in a forward crash. Face-to-face seating may require slightly more space and could limit recline. Some vehicles may include front seats that rotate to offer a choice.

    Easy Entry

    All taxis aim to facilitate easy entry and exit. Taller designs (particularly with automatic sliding doors) are popular. The Origin and Zoox feature double side sliding doors into an open area. Some designs may even resemble public transit styles, with standing room and luggage placed on the floor. This can also provide better access for passengers with limited mobility. Additionally, it is possible to create custom roll-on robotaxis specifically designed for wheelchair users.

    Many options are being considered for in-cabin screens and controls, but in reality, most passengers prefer using their own phone for these functions. However, specific controls such as climate adjustments, “stop ride,” and “new destination” must be easily accessible. There should also be a way to contact customer service directly without using your phone.

    In the future, there might be a larger screen available, primarily used to display content from the phone for those interested in watching videos. Unlike traditional cars, most robotaxis do not have a real dashboard, only a small touchscreen.

    The Back Is The Front

    This is the unique feature of Zoox vehicles—there is no distinct front or back. This allows the vehicle to easily change direction by reversing, which is advantageous in tight spaces, during passenger pickups, and when needing to turn around. The Origin model looks almost the same from both the front and back but not entirely. Legally, conventional “one-way” car designs could also drive in reverse if they have the appropriate lights, but it’s not ideal for extended periods as it may surprise other drivers.

    Privacy Dividers

    Current robotaxi plans involve transporting individuals in small groups. In a future scenario where rides are shared, robotaxis may have compartments with individual doors to provide privacy and security from other passengers. In larger vehicles, there might be 2-seat compartments on one side.

    Efficiency, or not

    Robotaxis designed for short urban trips do not need to achieve high speeds. Therefore, they do not require a highly aerodynamic design. This allows for taller designs with easier entry, and some may have boxy “trapezium” shapes. Robotaxis intended for highway travel need to be lower and have a more teardrop-shaped design to reduce costs and increase electric range.

    Sleeping

    In the future, robotaxis may offer the option for passengers to lie down and sleep. There might even be models dedicated solely to this purpose, which would be lower in height and more efficient. Being able to sleep during long commutes and overnight trips provides a seamless travel experience where the journey does not consume waking hours, which is highly beneficial.

    Half-Width or Two-Abreast

    The majority of trips today involve a single person on short journeys, while most of the remaining trips involve two people. As a result, half-width vehicles offer significantly lower construction and operating costs, require less parking space, and may even be able to split lanes in certain areas. In the future, they may have access to special lanes. Some companies prefer a short 2-seater model, such as the Rimac.

    Cost

    For early robotaxis, cost is not a major consideration. In the initial years, with limited competition, fares will be higher. As the market develops and competition increases, cost will become a crucial factor, leading to the use of smaller, efficient vehicles aimed at delivering more value for the cost.

    Updatable Interior And Hardware

    Electric vehicles may have long lifespans, particularly if batteries can be replaced. Interiors will wear out over hundreds of thousands of miles of use. Therefore, they may be designed to be modular and easily replaceable and upgradable. Similarly, hardware like computers and sensors may receive field upgrades due to the rapid pace of innovation in this field.

    A robotaxi must be easy to clean, ideally with automatic cleaning capabilities. Additionally, the ability for automatic charging is important. Since the vehicle is autonomous, the charging stations only need minimal robotics to use standard plugs. More innovative ideas could involve battery swapping for quick mid-day turnaround and a smaller battery pack, as well as an ice chamber to store cooling “energy” more cost-effectively than lithium batteries.

    Tesla?

    While Tesla has not made any official announcements, there has been speculation about a two-seater vehicle similar to the Verne, positioned side by side without a steering wheel, or a vehicle that can function as both a personal car and a robotaxi. To achieve the latter, it would need to have a fully or partially retractable wheel, as the wheel would be unnecessary during robotaxi operations, but essential for supervised driving outside the service area. (Even a car capable of driving on most roads still requires a limited service area as a robotaxi.) At present, it is unlikely that we will see anything like the Zoox or Zeekr.

    We’ve been hearing about Tesla’s Robotaxi concept for several years, and it seems that we may finally be approaching the realization of this vehicle. Here is all the information we have about the Robotaxi.

    Official Announcement

    Musk made an official announcement on X yesterday, revealing that the Tesla Robotaxi will be unveiled on August 8th, 2024. The last time Tesla unveiled a new vehicle was in November 2019 with the debut of the Cybertruck. Prior to that, the Roadster 2.0 and the Tesla Semi were unveiled at the same event in 2017, making these occasions quite special, occurring only once every few years.

    While there is a possibility that Tesla may need to change the Robotaxi’s unveiling date, it is exhilarating to consider that Tesla could be just four months away from revealing this next-generation vehicle.

    Robotaxis and Next-generation Vehicle

    Another detail about the Robotaxi emerged yesterday when Musk responded to a post by Sawyer Merritt. Sawyer mentioned that Tesla’s upcoming “$25k” vehicle and the Robotaxi would share the same platform, and that the Robotaxi would essentially be the same vehicle without a steering wheel. Musk replied to the post with a simple “looking” emoji.

    It’s not surprising that two of Tesla’s upcoming smaller vehicles will be based on the same platform, but it’s more intriguing that Musk chose to respond with that emoji when the post described the Robotaxi as being the “Model 2” without a steering wheel. This raises the possibility of Tesla not only unveiling the Robotaxi at the August 8th event but also its upcoming next-generation car.

    Production Timeline

    During Tesla’s Q1 2022 earnings call, Musk briefly discussed the timeline for Tesla’s Robotaxi, stating that they intend to announce the vehicle in 2023 and commence mass production in 2024.

    Initially aiming for a 2023 unveiling, Tesla now appears to be targeting late 2024. It also now appears that the Robotaxi and the next-generation vehicle will share many similarities, suggesting that the production date for the Robotaxi could align with that of the next-generation vehicle, which is currently scheduled to begin in “late 2025”.

    The challenge in introducing an autonomous taxi, as the Robotaxi is intended to be, lies in the self-driving aspect. While Tesla has made significant progress with FSD v12, the first non-beta version, it remains a level-2 system that requires active driver supervision. Achieving a fully autonomous vehicle represents a substantial leap from where Tesla’s FSD currently stands, but as demonstrated by the transition from FSD v11 to v12, a lot can change in the next 18 to 24 months.

    While we anticipate Tesla’s continued focus on bringing its more affordable, next-generation vehicle to market ahead of potential competitors, the production date for the Robotaxi may continue to shift in line with Tesla’s progress on FSD.

    However, in April 2022, during the inauguration of Tesla’s new factory in Austin, Texas, Musk made waves by announcing that the company would be developing a dedicated Robotaxi vehicle that would have a “quite futuristic-looking” appearance.

    Diverse Range of Robotaxis

    As we move towards a world of “robotaxis,” it makes sense to continuously evolve the vehicle’s interior to cater to customer needs, such as incorporating face-to-face seating, large sliding doors for easy access, 4-wheel steering, and easier cleaning.

    Tesla could potentially offer a range of Robotaxis tailored to specific needs. For instance, a vehicle better suited for resting, allowing passengers to sleep during the journey.

    Another vehicle could resemble a home office, equipped with multiple monitors and accessories, enabling occupants to start working as soon as they enter the vehicle. Features like these could significantly enhance the quality of life for some individuals, giving them an extra hour or more in their day.

    The variety of Robotaxis doesn’t have to stop there. There could be other vehicles designed specifically for entertainment, such as watching a movie, or those that facilitate relaxation and socializing with friends, similar to what one would expect in a limousine.

    Lowest Cost Per Mile

    Elon Musk mentioned during Tesla’s Q1 2022 financial results call that the focus of its robotaxi would be on achieving the lowest cost per mile, and it would be highly optimized for autonomy. This confirms that the robotaxi will not come equipped with a steering wheel.

    Musk stated, “There are several other exciting innovations around it, but its primary optimization is to achieve the lowest fully considered cost per mile or km when factoring in everything.”

    Tesla acknowledged during the call that its vehicles are generally not affordable for many people due to their high cost. Musk sees the introduction of Robotaxis as a way to offer customers “the lowest cost-per-mile of transport they’ve ever experienced.”

    The CEO is confident that the cost per mile of the vehicle will be even cheaper than a subsidized bus ticket. If Tesla can fully accomplish this, it could significantly transform the automotive industry and redefine car ownership. The question arises: Is Tesla’s future still in selling vehicles or in providing a robotaxi service?

    FSD Sensor Suite

    Tesla has not disclosed any details about the sensor suite intended for the robotaxi. However, given their extensive work in vision and advancements in FSD, it is anticipated to be similar or identical to the current suite, possibly with additional cameras or faster processing.

    In 2022, Musk issued a caution: “With respect to full self-driving, of any technology development I’ve been involved in, I’ve never really seen more false dawns or where it seems like we’re going to break through, but we don’t, as I’ve seen in full self-driving. And ultimately what it comes down to is that to sell full self-driving, you actually have to solve real-world artificial intelligence, which nobody has solved.”

    Musk added, “The entire road system is designed for biological neural nets and eyes. Therefore, to solve driving, we have to solve neural nets and cameras to a degree of capability that is on par with, or really exceeds humans. And I think we will achieve that this year.”

    With the Robotaxi reveal approaching, it may not be long before we learn more about Tesla’s future plans and its truly autonomous vehicles.

  • Lucid or Tesla, which do you prefer?

    Lucid Motors, a startup based in California, has recently made a remarkable entrance into the EV market with its impressive and highly capable sedans.

    While the company has been in operation for many years, it only introduced its first consumer-ready vehicle in 2021. Although the company has primarily focused on sedans, it is set to launch an SUV, the Lucid Gravity, in the near future. Here’s an overview of the expanding EV company and a comparison with Tesla, the more established company led by Elon Musk.

    What is Lucid?

    Established in 2007, around the time when Tesla itself was still relatively young, Lucid Motors began producing and selling its first (and currently only) model, the Air sedan, in late 2021.

    The premium brand now offers multiple versions of the Air. The vehicles are manufactured at a factory in Arizona. Peter Rawlinson, Lucid’s CEO, previously held a senior engineering position at Tesla.

    About Lucid Group

    Lucid’s mission is to inspire the adoption of sustainable energy by creating advanced technologies and the most captivating luxury electric vehicles centered around the human experience. The company’s first car, the Air, is a state-of-the-art luxury sedan with a California-inspired design. Assembled at Lucid’s factories in Casa Grande, Arizona and King Abdullah Economic City (KAEC), Saudi Arabia, deliveries of Lucid Air are currently underway to customers in the US, Canada, Europe, and the Middle East.

    What is the price of a Lucid car compared to Tesla?

    Over time, Tesla has developed a range of vehicles, with prices varying from relatively affordable (such as the $39,000 Model 3 sedan) to quite expensive (like the nearly six-figure Model S Plaid, or around $121,000 for the Cybertruck Beast).

    Lucid primarily caters to the luxury market, with the entry-level Air Pure starting at $69,900. The pricing for the model extends up to $250,000 for the Air Sapphire, a high-performance luxury vehicle.

    As indicated by its pricing, Lucid prioritizes delivering substantial amounts of luxury, performance, and technology — and it certainly delivers. The $111,400 Air Grand Touring Performance, tested by Business Insider, was one of the most remarkable vehicles they had ever experienced.

    Externally, the long, low, and wide Air has a futuristic appearance that sets it apart from other vehicles on the road. The interior of the Lucid Air is equally stylish and distinctive, complementing the overall package. The vehicle they tested was adorned with supple leather and featured a vast windshield that extended beyond the driver’s head.

    Some Air models boast over 1,000 horsepower and exceptional performance. Additionally, Lucid offers the longest EV range on the market with the Air Grand Touring, which can travel 516 miles on a full battery charge.

    For years, Tesla led the market, but its flagship Model S sedan is no longer the electric car with the longest range, offering only 405 miles on a full charge.

    Tesla’s primary competitor to Lucid is its flagship Model S sedan, which also provides extensive range, impressive speed, and advanced technology at a high price — although not as high as Lucid’s top-of-the-line offering. Both vehicles provide some of the fastest charging available, although Tesla’s charging speed varies significantly based on the type of charging station used.

    Both brands eschew traditional buttons and switches in favor of touchscreen interfaces that control various functions, from air conditioning to door locks. Tesla’s interiors are minimalistic, while Lucid offers a more compelling interior and additional features, such as massaging seats.

    One drawback of Lucid Motors’ vehicles is that they cannot access Tesla’s extensive network of Supercharger stations, which Tesla owners benefit from.

    Neither brand operates franchised dealerships like traditional automakers. Instead, they own and operate their own showrooms and service centers. Customers have the option to order a vehicle online directly from the company, which will then be delivered to a showroom or service center. Depending on the location, it is also possible to have the vehicle delivered to a home or business address.

    Why The Lucid Air Is Better Than The Tesla Model S

    There was a time when the Tesla Model S was the sole choice for electric sedans, but the landscape has changed significantly. Today, there are numerous electric sedans available, offering different prices, power outputs, purposes, and all-electric ranges. There was also a time when the Model S led the way in terms of range and efficiency, but that is no longer the case.

    Many rival automakers have not only caught up with the technology that Tesla has worked hard to pioneer, but they are also beginning to surpass the California-based company in numerous ways, causing Tesla to lose ground in the sales battle.

    The Lucid Air currently reigns as the leader in the electric sedan market, as it provides more power and range and is also faster than the Model S. This alone is likely to pique the interest of most buyers, and perhaps it is even why you are here. The point, however, is that the Lucid Air offers much more than just superior power and range, and here are the main reasons why the Lucid Air outshines the Model S.

    As of October 29, 2023, the Lucid Air continues to present a strong challenge to the Tesla Model S. This electric vehicle offers competitive or even superior performance compared to the Model S, although at a slightly higher price. As the 2024 models from both Lucid Motors and Tesla are being introduced, the competition in the electric vehicle segment is intensifying.

    Many drivers perceive the Lucid Air as more sophisticated and traditionally luxurious than the Model S.

    A key reason why some people choose the Lucid Air luxury sedan over the Tesla Model S is its classic design and more conventional approach to luxury. The Lucid Air offers a timeless driving experience, in contrast to the Model S, which somewhat resembles the “fishbowl” design of the Model 3. However, some individuals prefer the rounded design and spaceship-like interior of Tesla, although for many luxury car buyers, the Lucid Air is more appealing.

    Historically, the Model S has been a sleek and sporty electric vehicle, featuring design elements not commonly found in luxury sedans. In contrast, the Lucid Air exudes charm and elegance, with numerous advanced features that outperform those of Tesla. While aesthetics are not the only consideration, they undoubtedly make a difference.

    The 2024 Lucid Air Sapphire model boasts 1,234 horsepower, a 118.0 kWh battery, and a 900-volt architecture.

    One of the most exciting announcements from Lucid Motors is the introduction of the 2024 Lucid Air Sapphire, an ultra-fast and luxurious sedan set to hit the market soon. The latest generation Lucid Air offers specifications such as 1,234 horsepower, a 118.0-kWh battery, and a 900-volt architecture, enabling it to recharge at rates of up to 300 kW at DC fast-charging stations.

    These details clearly outshine the current Model S, although Tesla has been reluctant to share any information about a next-generation redesign or update. In contrast, the media has been captivated by Lucid Air’s Sapphire model, which comes with a price tag of $250,000. Pricing a luxury sedan at such a high level is rare, positioning Lucid in a new realm within the global vehicle market.

    As of October 2023, customers can purchase a Lucid Air model for $77,400

    In October 2023, Lucid Motors is offering deals on the Air models, starting at $77,400 for the Lucid Air Pure RWD. Lease options begin at $749 per month, with an $8,069 initial payment. Although this may seem relatively high even for a lease, Lucid Motors is actively striving to attract more drivers to its vehicles, whether for the Air or any other model.

    Tesla has also reduced prices (more aggressively than Lucid Motors), with the Model 3 and Model Y receiving the greatest discounts. The Model S, being a higher-end option, has seen minimal changes in pricing, currently retailing for $74,990, excluding taxes and fees, which is still slightly lower than the Lucid Air. Once again, customers get more value for their money when purchasing through Lucid (for the time being), which is a justification for the higher-priced vehicle for many people.

    The 2024 Lucid Air model is available in 6 trim levels, with Canadian pricing ranging from $94,500 to $325,000 MSRP.

    The entry-level Pure model starts at $94,500 Canadian dollars and features a 321-kW electric motor and an 88.0-kWh lithium-ion battery. The mid-range Pure Dual Motor trim is priced at $109,900 MSRP. The top-tier model, the Sapphire, is priced at $325,000 Canadian dollars. Freight and PDI (destination or delivery charges) amount to $2,000 Canadian dollars, although this may vary between provinces and dealers.

    The Lucid Air is equipped with a 360-degree overhead camera system

    Similar to many luxury cars, the Lucid Air is equipped with a 360-degree camera system, providing drivers with a clear view of their surroundings while driving, parking, or even when inside the vehicle. This system operates similarly to Tesla’s camera technology, but with the Lucid Air, drivers get a more comprehensive view around their vehicle. The camera provides an ‘overhead’ view, displaying the car from a top-down perspective rather than showing all sides at once.

    Given the high price of the sedan, it is understandable for Lucid to develop this system, which is displayed on the driver’s infotainment screen. However, with Tesla actively working on a similar feature, it remains to be seen what will come next.

    Tesla Model S achieves 0-60 MPH in 1.99 seconds, while Lucid Air achieves it in 1.89 seconds

    If you think back a few years, you can probably recall when the electric car making headlines was the Tesla Model S Plaid. This electric vehicle is essentially a speed demon on wheels. Today, we are witnessing a vehicle that can accelerate from 0 to 60 MPH in just 1.99 seconds and complete a quarter-mile sprint in about 9.23 seconds.

    Surprisingly, these are no longer the fastest production EV times, as the Lucid Air can accomplish the same in 1.89 seconds and 8.95 seconds, respectively. These are figures that used to be unheard of without significant modifications to an ICE engine, but now can be achieved by a combination of electric motors and advanced technologies.

    The flagship Lucid Air Sapphire is equipped with three electric motors and delivers over 1,200 horsepower

    The Lucid Air Sapphire, the top-tier model in the Air lineup, is powered by three electric motors that produce a combined 1,200 horsepower. In contrast, the Model S Plaid features the same tri-motor all-wheel-drive system but with an output of only 1,020 horsepower. According to the information we have from Lucid, the Sapphire can accelerate from 0 to 60 MPH in just 1.89 seconds, from 0 to 100 MPH in just 3.87 seconds, and reach a top speed of 205 MPH.

    On the other hand, the Model S Plaid takes 1.99 seconds to go from 0 to 60 MPH and can achieve a top speed of 200 MPH. The Air Sapphire also promises a quarter-mile time of under nine seconds, while the Plaid’s official quarter-mile time is 9.23 seconds with a 155-MPH trap speed. However, some enthusiasts have managed to achieve a time of just 8.94 seconds at 156 MPH.

    The Lucid Air is significantly lighter than conventional EV sedans, contributing to its performance

    One of the major challenges in the EV manufacturing industry has been the weight of the vehicles. The weight of the batteries needed to power an EV, as well as the motors that must be installed on each axle or tire for higher performance options, has been a significant concern. Lucid Air has revolutionized this by developing an all-new platform called the Lucid Electric Advanced Platform (LEAP).

    With this innovative approach, Lucid has created a smaller electric motor that weighs only 163 pounds, while offering more horsepower per motor than any others currently available on the market (up to 670 horsepower). To put this into perspective, according to reports from Lucid Motors, one of these electric motors could fit into a carry-on piece of luggage!

    Both the Lucid Air and the Model S utilize rapid-charging systems

    Advancements in battery technology have led to improved range for electric cars in a short period of time. In some parts of the world, including the United States, the transition to electric vehicles has been slower compared to other regions due to the disparity in range compared to a full tank of gasoline. When factoring in the time required to charge an EV versus refueling an ICE vehicle, there is a significant discrepancy.

    Fortunately, today’s innovative technology is enhancing the range and charging times, making the playing field more level. Lucid is one of the companies making significant strides in this area, now offering driving ranges of over 500 miles per charge. If Toyota succeeds in producing solid-state batteries with a range of over 900 miles and the ability to charge within 20 minutes, the shift to EVs will be more of a sprint than a leisurely walk.

    The Lucid Air offers a greater range than the Tesla Model S (520 miles versus 405 miles)

    There was a time when no one could match Tesla’s range, but those days are long gone as more manufacturers are pledging ranges of 400 – 500 miles, with Lucid Motors being one of them. Even in its base configuration, the Lucid Air promises more driving range than the Model S, so Tesla has lost significant ground in this segment.

    The Air Pure has a range of 410 miles, and the range only improves from there. The Touring model offers up to 425 miles, the Grand Touring up to 469 miles with 21-inch wheels, and 516 miles with 19-inch wheels. Dream Edition Range provides 481 miles of range with 21-inch wheels and 520 miles with 19-inch wheels.

    The Dream Edition Performance is rated at either 451 or 471 miles, depending on the wheels, while we are still awaiting the official figures for the Sapphire. As for the Model S, the range is 405 miles for the base model and 396 miles for the Model S Plaid.

    350 kW rapid charging can also be utilized with the Lucid Air

    In terms of charging speed, the Model S also falls short. While the Tesla Supercharger network is widely regarded as one of the best and most extensive in the world, it only provides charging speeds of up to 250 kW. Using one of the 45,000 Superchargers available in the United States, the Model S can gain up to 200 miles of range in just 15 minutes of charging, which is still quite impressive.

    However, the Lucid Air takes advantage of an ultra-high 900V+ electrical architecture and has access to an expanding 350kW fast-charging infrastructure being developed nationwide. The Air can achieve 20 miles of range for every minute of charging, making it the fastest-charging electric vehicle available.
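
    As a rough sketch of where these miles-per-minute figures come from, the short Python snippet below multiplies charger power by vehicle efficiency. The ~4.0 mi/kWh efficiency value and the assumption that peak power is sustained for the whole session are illustrative only, not official specifications.

        # Rough, best-case estimate of miles of range added per minute of DC fast charging.
        # Real packs only accept peak power over part of the charge curve, so actual
        # session averages will be lower than these figures.

        def miles_per_minute(charger_kw: float, efficiency_mi_per_kwh: float) -> float:
            """Range added per minute = power (kW) x efficiency (mi/kWh) / 60 minutes."""
            return charger_kw * efficiency_mi_per_kwh / 60.0

        # Illustrative efficiency of ~4.0 mi/kWh (an assumption, not a quoted spec).
        for name, power_kw in [("250 kW Supercharger", 250), ("350 kW CCS station", 350)]:
            print(f"{name}: ~{miles_per_minute(power_kw, 4.0):.1f} miles added per minute")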

    Upon purchase, Lucid Air models come with three years of fast charging at Electrify America Stations

    It goes without saying that the Lucid Air is not without its flaws and is far from being a perfect EV. This is why a solid warranty is essential. It enjoys a warranty identical to what Tesla offers: four-year/50,000-mile basic coverage and eight years/100,000 miles for the powertrain and battery.

    In fact, the Model S comes out on top in this aspect, as it includes an eight-year/150,000-mile warranty for the powertrain and battery. However, Lucid will provide Air owners with three years of free, fast charging at Electrify America stations, allowing for complimentary charging during this period.

    The Lucid Air offers superior driver assistance compared to nearly every other car on the market

    While Tesla has long boasted about its self-driving capabilities, they essentially consist of two optional functions that enable its models to operate on autopilot. On the other hand, the Lucid Air features Dream Drive – the most advanced driver assistance system in the world with autonomous driving capabilities that surpass those of Tesla.

    The latest version, called Dream Drive Pro, incorporates an extensive 32-sensor suite comprising 14 cameras, 1 lidar, 5 radar, and 12 ultrasonic units. Its software stack operates on the NVIDIA Drive platform, providing significant computing power for advanced features that enhance driver safety and aid in critical road decision-making. This feature is standard on Lucid Air Dream Edition and Lucid Air Grand Touring trims and is available as an option on all other Lucid models.

    The Lucid Air boasts a more sophisticated, timeless interior compared to the Model S

    With all the cutting-edge technology available in today’s market, every new car and SUV must incorporate as many modern features as possible without going overboard. Ensuring that the commute from home to the office, to the grocery store, and back home is as comfortable and enjoyable as possible, as well as turning those out-of-the-way drives into adventures rather than nightmares, is the ultimate goal.

    Both Tesla and Lucid are packed with state-of-the-art technology, and despite some complaints about the interiors of both not meeting expectations, both companies have significantly improved in recent years.

    The Model S has a somewhat uninspiring interior, mirroring the bland exterior. The most divisive feature is likely the steering yoke, which has garnered both praise and criticism. The sedan also features a 17-inch touchscreen that replaces all physical buttons, making simple commands more distracting while driving. Of course, this issue is prevalent in many cars these days, not just those from Tesla. The Air is no exception as it is equipped with a curved 34-inch Glass Cockpit panel that floats around the driver’s seat, providing easier access and visibility.

    A retractable central Pilot Panel also allows for deeper control of features. The rest of the cabin is characterized by high-quality craftsmanship and sustainability inspired by the natural landscapes of California.

    Customers have the option to select from high-quality Nappa leather or a sophisticated PurLuxe leather alternative, Alcantara, a sustainable alpaca and recycled yarn wool blend textile, or premium woods such as Silvered Eucalyptus, North American Walnut, or Carbon Oak.

    According to customer reviews, the Lucid Air has a more comfortable and spacious interior compared to the Model S

    The Model S can accommodate up to five people and offers synthetic leather upholstery, heated and ventilated front seats, and heated rear seats. While the front seats provide ample space and good support, the rear passengers may desire more legroom. Similarly, the Air can seat five adults, with the base model featuring synthetic leather upholstery and power-adjustable front seats.

    In contrast, higher trims of the Lucid Air come with genuine leather upholstery for the seats, which are also heated and ventilated as standard and offer optional massage functions. Unlike the Model S, the Air’s rear cabin provides generous space for three adults and offers the most legroom in its segment.

    The Lucid Air is considered a more elegant and stylish sedan compared to the Model S

    This four-door electric vehicle features a more sophisticated design, with the roof being its most distinctive feature. Tesla’s exterior design, which has been consistent across its models for years, has not seen significant improvements. This is one of the main reasons why the Model S feels outdated compared to the Air.

    The Air sports an exterior design inspired by aircraft, with clean and graceful lines. It is also one of the most aerodynamic cars globally, with a drag coefficient of 0.197.

    The Air also offers an optional Glass Canopy, which not only provides a beautiful view of the sky but also includes a protective interlayer to block out heat and sunlight. In early 2023, Lucid announced a Stealth Look exterior upgrade for the Air, which, for $6,000, modifies thirty-five external components of the car, giving it an even more distinctive appearance.

    Both the Lucid Air and the Model S boast a wide range of safety features and specifications

    The Lucid Dream Drive sets the Air apart from the Model S, with superior driver monitoring and detection. Safety is a crucial aspect of any vehicle, and both the Lucid Air and the Tesla Model S are equipped with some of the most advanced safety features available.

    Both vehicles include features such as backup assistance, cross-traffic alerts, self-driving modes, parking assistance, self-parking, lane-keeping assist, and more. What sets the Lucid Air apart is the Lucid Dream Drive, which offers some features that the Tesla does not. For instance, the Lucid Air is equipped with a lidar sensor (Light Detection and Ranging), a driver-monitoring system, and 32 different sensors.

    Tesla and Lucid Motors are recognized for producing some of the safest electric vehicles on the market

    Unfortunately, the NHTSA has not yet conducted comprehensive testing on either the Lucid Air or the Tesla Model S, preventing the release of crash test videos at this time. However, the Tesla Model S has achieved a 5-Star Euro NCAP Safety Rating twice since it entered production, a significant achievement.

    Most recently, in 2022, it received a 98% rating for driver assistance safety, 91% for child protection safety technology, and 94% for adult protection safety technology. On the other hand, the Lucid Air has not received as favorable ratings from the same organization, scoring 84% for driver assistance safety technology, 91% for child safety assistance technology, and 90% for adult safety technology assistance.

    Like traditional internal combustion vehicles, the Lucid Air and the Tesla Model S have finite lifespans. While they are built to last as long as possible, it is important to understand that all moving parts eventually wear out and require repair or replacement, and these two electric vehicles are no exception.

    Reliability ratings for newer vehicles can be easily obtained by seeking input from current owners or consulting reputable sources such as Consumer Reports, which aggregates data from numerous owners.

    In this case, the predicted reliability of the 2023 Lucid Air is rated at 2/5, indicating that it has encountered issues with the electrical systems and the backup cameras. Similarly, the 2023 Tesla Model S has also received a 2/5 rating, with problems reported in the electrical system and front-facing camera angles.

    The Lucid Air offers a wider range of options compared to the Model S

    Although not all Lucid Air models are currently available for purchase, the company has announced plans for six trims: Pure, Touring, Grand Touring, Dream Edition Range, Dream Edition Performance, and Sapphire. In contrast, the Model S offers three trims: base, Long Range, and Plaid.

    With the exception of the Lucid Air Sapphire, all Air models are powered by two electric motors, providing power ranging from 480 horsepower and 443 pound-feet of torque in the Pure trim to 1,111 horsepower and 1,025 pound-feet of torque in the Dream Edition Performance.

    Lucid has also announced that the Air Pure will soon be available with a rear-wheel drive (RWD) configuration. For the Model S, the options are more straightforward: customers can choose between the dual motor setup with 670 horsepower or the tri-motor setup with 1,020 horsepower.

    The Lucid Air is designed with revolutionary battery and charging technology to make owning a sustainable vehicle more convenient and less complex

    Offering the longest range of any electric car currently available,¹ the Lucid Air provides the freedom to travel longer distances with confidence. During road trips, the Lucid Air minimizes charging stops with its advanced onboard technology, making it one of the fastest-charging vehicles on the market.

    For daily use, most Lucid owners prefer the convenience of charging their vehicles at home overnight. It’s akin to starting each day with a full tank of gas, without the hassle of stopping to refuel. Additionally, overnight home charging is more cost-effective and environmentally friendly, as it utilizes energy during off-peak times.²

    To ensure customers have the best charging experience, every new Lucid Air now comes with a $1,000 Charging Allowance ($1,300 CAD) that can be used towards the purchase of Lucid charging accessories, such as the Lucid Home Charger.

    Equipping your home with the leading charging technology from Lucid is possible with the Lucid Home Charger. This high-capacity 80 Amp/19.2 kW Lucid Connected Home Charger is the most powerful Level-2 home charger available, delivering up to 80 miles of range per hour in an eco-friendly manner. Its compact design and generous 24-foot cable make it easy to install and use indoors or outdoors. With Wi-Fi connectivity, the Lucid Connected Home Charging Station is designed to receive future updates to enhance performance and functionality.
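
    For context on the numbers above, the 19.2 kW figure is simply 80 amps at a 240-volt Level 2 supply, and the roughly 80-miles-per-hour charging rate follows once a vehicle efficiency is applied. The minimal Python sketch below walks through that arithmetic; the 4.2 mi/kWh efficiency value is an assumption for illustration.

        # Level 2 AC charging: power = volts x amps; range rate = power x efficiency.
        volts = 240          # typical North American Level 2 supply voltage
        amps = 80            # rating of the home charger described above
        efficiency = 4.2     # mi/kWh (assumed value for illustration only)

        power_kw = volts * amps / 1000           # 240 V x 80 A = 19.2 kW
        miles_per_hour = power_kw * efficiency   # roughly 80 miles of range per hour

        print(f"{power_kw:.1f} kW -> about {miles_per_hour:.0f} miles of range per hour")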

    Lucid is launching its first-ever Sustainability Report

    Today, Lucid Group is releasing our inaugural Sustainability Report, outlining our commitment to accelerating the adoption of efficient, decarbonized technology for a sustainable future.

    Sustainability is at the core of Lucid Group as an electric vehicle (EV) manufacturer and technology company. It’s integral to our purpose and products, as we have a crucial role in shaping a more innovative, environmentally sustainable, and socially responsible future.

    The Need for Efficiency

    According to the US Environmental Protection Agency (EPA), the transportation sector is the largest emitter of greenhouse gases in the US. Achieving a more environmentally sustainable future requires reducing emissions in the transportation sector through accelerated adoption of green technologies and improved efficiency.

    Efficiency involves using fewer materials and less energy to achieve more. It means providing drivers with the range they need to travel further and recharge less, addressing “range anxiety” and reducing the reliance on energy resources like fossil fuels, which are heavily used by much of our grid. Efficiency enables us to accomplish this using fewer batteries and precious minerals in the process.

    Not All EVs Are Alike

    While all EVs offer the advantage of zero tailpipe emissions, their overall environmental impact can vary significantly. Numerous factors influence this, primarily due to the wide-ranging differences in energy efficiency among EVs. The less electricity an EV consumes to drive each mile, the fewer carbon emissions are associated with each mile traveled.

    Lucid’s dedication to efficiency is evident in the EPA’s widely utilized miles per gallon equivalent (MPGe) rating, with the Lucid Air Pure achieving a rating of up to 137 MPGe in combined city and highway driving. This is the highest rating in the EPA’s Large Car category and the second-highest rating among any 2024 model-year vehicle as of the present.³

    Through Lucid’s focus on efficiency, different versions of the Lucid Air have achieved the longest range (up to 500 miles on a single charge⁴), some of the fastest charging of any EV (up to 20 miles added per minute⁵), and the highest efficiency measured in miles per kilowatt-hour (4.74 miles per kilowatt-hour).
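
    To relate the two efficiency units quoted here, the EPA's MPGe rating treats one gallon of gasoline as equivalent to 33.7 kWh of electricity. The Python sketch below is a minimal conversion under that assumption; note that MPGe is measured at the wall (including charging losses) while miles-per-kWh figures are often quoted at the battery, so the converted numbers will not line up exactly with the ratings above.

        # Convert between EPA MPGe and miles per kilowatt-hour.
        # The EPA equates 1 gallon of gasoline to 33.7 kWh of electrical energy.
        KWH_PER_GALLON_EQUIVALENT = 33.7

        def mpge_to_miles_per_kwh(mpge: float) -> float:
            return mpge / KWH_PER_GALLON_EQUIVALENT

        def miles_per_kwh_to_mpge(miles_per_kwh: float) -> float:
            return miles_per_kwh * KWH_PER_GALLON_EQUIVALENT

        print(f"137 MPGe is roughly {mpge_to_miles_per_kwh(137):.2f} mi/kWh at the wall")
        print(f"4.74 mi/kWh is roughly {miles_per_kwh_to_mpge(4.74):.0f} MPGe if measured the same way")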

    Lucid’s commitment to efficiency also shapes its battery approach. Lucid uses fewer batteries and battery materials while maintaining the safety, quality, and driving experience its customers expect. In contrast to prevailing trends in the market, where larger vehicles often require larger battery packs to achieve even a fraction of the Lucid Air’s range, Lucid’s focus on efficiency allows it to achieve impressive range with a smaller battery pack.

    Social Responsibility, Community Impact, and Human Rights

    As an EV company, Lucid naturally places a strong emphasis on the environment. However, Lucid believes that sustainability and social responsibility extend far beyond its efforts towards a lower-emissions future.

    Lucid has a strong program for nurturing talent to support and promote the growth of its workforce. The company places a focus on diversity, equity, and inclusion (DEI) across its global locations through training and initiatives such as developing employee resource groups (ERGs).

    Lucid is dedicated to giving back to local communities through various activities, fundraising efforts, and volunteer opportunities that promote STEAM and DEI initiatives. The company proudly partners with the United Way of Pinal County in Arizona, supporting programs like GED scholarships, rental and utility aid, and childcare assistance, having raised over $130,000 in 2022 for this important cause.

    In addition, Lucid is committed to safeguarding human rights and is working to identify, prevent, mitigate, and address adverse human rights impacts throughout its operations and broader value chain, including those of its suppliers and business partners. In line with these efforts, the company has initiated the development of a comprehensive human rights program to address potential human rights risks in its operations, an initiative that spans all aspects of Lucid’s global business operations.

    The Path Ahead

    Lucid Group possesses the technical expertise, innovative design, and pioneering mindset to create impactful products with reduced environmental impact.

    Lucid believes that true sustainability means never compromising when it comes to its environmental and social impact and is in the early stages of establishing a comprehensive approach in this area. As a relatively young company, Lucid is focusing on collecting data and setting a baseline on key sustainability focus areas.

    Today, Lucid unveiled a range of updates to the Lucid Air Pure model line, all aimed at enhancing the capabilities and enjoyment of the world’s best electric vehicles.

    One of the most exciting updates is that the new Lucid Air Pure offers an EPA-estimated range of 420 miles from a battery pack that is only 84 kilowatt-hours. Enabled by further advancements of Lucid’s groundbreaking technology, this makes the Lucid Air the first vehicle in the world to achieve a ratio of 5.0 miles of range per kilowatt hour (kWh) of energy. It has also earned the highest MPGe (miles per gallon equivalent) rating ever given to an EV with 146 MPGe from the EPA.

    With the advancements driven by Lucid, the Lucid Air Pure is the most energy-efficient and sustainably-powered vehicle ever made. Whatever journey you are embarking upon, the Air Pure effectively requires less energy to go from point A to point B than anything else available today. It is a major milestone in the global effort for sustainable electric transportation.

    To put this in some context, the new Lucid Air Pure is not only vastly more fuel-efficient than popular gasoline-powered vehicles like the Honda Civic and the Toyota Prius Hybrid, it is even significantly more efficient than other fully electric vehicles.

    The batteries used in electric vehicles are better than ever, but they remain large, heavy, and costly. The larger the battery pack in a vehicle, the bigger, heavier, and more expensive it will be. Indeed, this is why so many automakers have struggled to introduce appealing EVs.

    Efficiency is the key to electric vehicles without such compromises. Efficiency is what makes it possible to design electric vehicles that deliver the range that consumers expect without relying on an enormous battery pack and thus sacrificing affordability, passenger space, and a fun driving experience.

    How efficient an electric vehicle is when driving on the road also drives critical differences in how much time is spent off the road for charging. While much attention is paid – rightly – to public charging infrastructure, the fact remains that roughly 90% of EV charging is done at home, with Level 2 (AC) charging systems that deliver the same power regardless of what kind of EV is plugged in.

    This means that, when plugged in overnight, most EVs will receive roughly the same amount of energy. But how far an EV can travel with that energy is another matter entirely.

    When plugged in for 10 hours at home, the new Lucid Air Pure can add 380 miles of driving range, charging the vehicle from 0% to 90%. Over the same 10-hour period, a Tesla Model S would only reach 76%, for a total of 306 miles. Plugged in overnight, a Rivian R1S SUV would be only 51% charged, with just 204 miles of range available from its large battery pack.
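
    The overnight comparison above comes down to one relationship: at a fixed Level 2 power level, the range added scales with each vehicle's efficiency until its battery fills. The Python sketch below reproduces that logic; the 7.6 kW charger power, efficiency values, and battery-headroom figures are assumptions chosen only to roughly echo the numbers quoted, not published specifications.

        # Range added overnight = min(charger power x hours, battery headroom) x efficiency.
        # All per-vehicle values below are illustrative assumptions, not official specs.

        def overnight_miles(hours: float, charger_kw: float,
                            efficiency_mi_per_kwh: float, headroom_kwh: float) -> float:
            energy_delivered_kwh = min(charger_kw * hours, headroom_kwh)
            return energy_delivered_kwh * efficiency_mi_per_kwh

        vehicles = [
            ("Lucid Air Pure", 5.0, 76),   # ~5 mi/kWh; ~76 kWh to go from 0% to 90%
            ("Tesla Model S",  4.0, 95),   # ~4 mi/kWh; larger pack still short of full
            ("Rivian R1S",     3.0, 120),  # ~3 mi/kWh; large pack far from full
        ]

        for name, efficiency, headroom in vehicles:
            miles = overnight_miles(hours=10, charger_kw=7.6,
                                    efficiency_mi_per_kwh=efficiency, headroom_kwh=headroom)
            print(f"{name}: ~{miles:.0f} miles added in 10 hours")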

    Increasing efficiency also makes it possible to manufacture more cars with a given amount of raw materials, helping mitigate the environmental impact of manufacturing. Lucid recently announced that its first sustainability report will be published this year and looks forward to sharing more on this topic as well.

    Lucid is proud of this recent achievement, but acknowledges that the work must continue to make EVs even more appealing and accessible to consumers around the world. While achieving 5 miles per kilowatt-hour is a significant milestone, the focus must continue on striving towards a future with even more efficient transportation, cleaner air, and a sustainable environment.

    Rumors of a Saudi Buyout

    Reports have recently suggested that Lucid Motors, an electric vehicle (EV) manufacturer based in California, might be subject to a significant acquisition. The speculation indicates that the Public Investment Fund (PIF), an investment firm controlled by the government of Saudi Arabia, is contemplating the purchase of Lucid Motors. In this article, we’ll delve into these speculations and consider the potential implications of a Saudi acquisition of Lucid Motors for the company’s future and the EV industry.

    Lucid Motors is emerging as a prominent contender in the electric vehicle market despite being a relatively new participant, gaining recognition for its innovative designs and state-of-the-art technology. The company’s debut vehicle, the Lucid Air, is scheduled for release later this year and has already garnered substantial attention due to its advanced features, high-end luxury, and exceptional performance.

    The speculation about a Saudi acquisition of Lucid Motors originates from the country’s recent ventures in the electric vehicle market. Saudi Arabia has been actively striving to reduce its reliance on oil and invest in alternative energy, including electric vehicles. The PIF has already made investments in various EV companies, including Tesla, and is rumored to be seeking further opportunities in the sector.

    If the speculations regarding a Saudi acquisition of Lucid Motors are accurate, it could entail a substantial infusion of capital for the company. This would enable Lucid Motors to expand its operations, enhance its production capacities, and continue advancing its cutting-edge technology. It could also help the company realize its objective of establishing a significant presence in the premium EV market.

    Nevertheless, a buyout by the Saudi government could raise certain apprehensions, particularly concerning the aspects of autonomy and authority. Some individuals may express concerns that the acquisition of Lucid Motors by a government-owned investment firm could compromise the company’s independence and restrict its ability to make decisions aligned with its values and objectives.

    At present, the speculations about a Saudi acquisition of Lucid Motors are merely speculative. However, if the reports are substantiated, it could have a notable impact on the company’s future and the electric vehicle market. Lucid Motors has the potential to emerge as a prominent contender in the premium EV market, and a buyout by the Saudi government could equip the company with the necessary resources to accomplish its aspirations. The future of Lucid Motors remains to be seen, but for now, it’s certainly prudent to keep a close watch on the speculations about a Saudi acquisition.

  • Tesla’s cheapest Model 3 is no longer available

    Tesla has ceased sales of the most affordable version of the Model 3 sedan in the US. An analyst referred to it as a wise strategic move.

    Tesla quietly stopped selling the Standard Range Rear-Wheel Drive Model 3 in the US. This move was praised by a bullish analyst speaking to Business Insider. The electric carmaker, run by Elon Musk, removed this version from its website as of Wednesday, as reported by Reuters.

    The cheapest Tesla car in the US is now the Model 3 Long Range Rear-Wheel Drive, priced at $42,490. The discontinued model utilized lithium iron phosphate battery cells sourced from China, according to Reuters. Earlier this year, the US imposed tariffs on EVs imported from China, as well as on EV batteries and key minerals.

    Vehicles using Chinese-made components, such as the LFP batteries, are no longer eligible for the $7,500 federal tax credit. Dan Ives, an analyst at Wedbush and a long-time Tesla supporter, told Business Insider that discontinuing the model is “a smart strategic move” indicating a greater focus on the Long Range Model 3. Ives also noted that the tariffs on Chinese EVs reflect substantial tension in the US-China Cold Tech War, which benefits Tesla.

    He added that removing the Standard Range model does not significantly impact Tesla’s overall strategy, as the Model Y remains the company’s primary focus in the US EV market. The starting price for the Model Y is $44,990. Affordability has become a major concern for electric carmakers, as most EV options, including Tesla’s, are still more expensive than the average gas-powered car. Tesla has previously discussed plans to manufacture a cheaper car than the Model 3, but has not yet done so.

    Tesla’s decision to discontinue the cheapest version of the Model 3 came shortly after the company slightly exceeded analysts’ delivery expectations, with third-quarter sales reaching 462,890 vehicles. Ives told BI that Tesla’s sales were “a step in the right direction” to meet its yearly target of 1.8 million vehicle deliveries.

    Bargain Chinese EVs

    Adding to the price war are Chinese competitors like Xpeng, which recently introduced the Mona M03 at $16,800 — less than half the price of Tesla’s Model 3 in China.

    Nio is planning to launch a new, affordable brand called Onvo, which would directly compete with Tesla’s Model Y. Nio also has plans to launch a second EV brand named Firefly, which would retail for under $30,000, as reported by Reuters in May. Chinese government policies, including scrapping and replacement subsidies, continue to fuel demand for more affordable EVs in the country, according to a September note from HSBC analysts.

    The bank anticipates that over 100 new models will be launched in China by the end of 2024, primarily by the country’s EV brands. Tesla did not respond to a request for comment sent by BI outside business hours.

    Tesla is no longer offering the sub-$40,000 rear-wheel drive Standard Range version of the Model 3 that has been in the company’s lineup since 2023. The most affordable trim is now the Model 3 RWD Long Range that starts at $42,490. This change was initially highlighted by Electrek and coincides with Tesla’s announcement of a year-over-year increase in vehicle deliveries in its third quarter of 2024.

    Tesla has adjusted prices numerous times over the past few years as it strives to maintain its leading position in the market. However, an increasing number of customers have turned to other vehicle brands, resulting in year-over-year sales declines for Elon Musk’s company.

    Tesla also discontinues certain trims occasionally, often without prior notice or fanfare. Earlier this year, the company ceased offering the $60,990 RWD Cybertruck, the cheapest version of its angular EV truck.

    The Model 3 Standard Range, which claimed a 272-mile range on a full charge, utilized more affordable lithium iron phosphate (LFP) cells produced in China. These cells are likely to become more expensive to import due to the Biden administration’s decision to raise tariffs on Chinese batteries, semiconductors, and critical minerals. Before incentives, it was the only model that came close to the short-lived and long-promised $35,000 Model 3.

    Tesla’s RWD Long Range costs $3,500 more than the discontinued Standard Range. This price difference isn’t substantial considering that the Long Range model is estimated to have a 363-mile range on a full charge, although Tesla has faced accusations of inflating its range estimates.

    Despite the Model 3 Standard Range no longer being available for order, Tesla is still working on a more affordable, yet-to-be-announced vehicle for the second half of 2025, which could either be a new car or a more basic version of the Model 3.

    Tesla’s ability to make its vehicles more affordable by simplifying them further is uncertain, especially considering that a more basic version of the Model 3 in Mexico turned out to be more expensive than US models, even though the newer models already lack drive and turn-signal stalks.

    The production and delivery report for Tesla’s third quarter has been released.

    After two consecutive quarters of decline, Tesla’s vehicle sales are finally increasing.

    During the three-month period ending in September, Tesla produced 469,796 vehicles, marking a 9.1 percent increase compared to the third quarter of 2023. Additionally, the company delivered 462,890 vehicles to customers in Q3 of 2024, representing a 6.3 percent increase from Q3 2023.

    These figures show improvement over the previous quarter, with production up 14.4 percent compared to the second quarter of 2024 and delivery up 5.8 percent. Tesla is producing and selling more vehicles than earlier this year.

    While the Cybertruck may be contributing to this growth, Tesla does not provide specific numbers for this electric truck. The majority of its production and delivery consists of Model 3 and Y vehicles, with 443,668 produced and 439,975 delivered in Q3. Other models, including Model S, X, Cybertruck, and Tesla Semi, fall under the “other models” category.

    However, Tesla faces challenges as overall electric vehicle (EV) sales are growing slower than in previous years, with customers showing more interest in hybrids rather than pure battery-electric vehicles. As Tesla exclusively produces battery electrics, it may be at a disadvantage compared to traditional automakers with more diverse lineups.

    The company also faces increased competition, both in the US and in China, where companies such as BYD and Geely are achieving record EV sales. Tesla’s regional sales numbers are not disclosed, making it difficult to pinpoint its specific strengths and weaknesses.

    Tesla’s full third quarter earnings will be reported on October 23rd. Before that, on October 10th, the company is expected to unveil its long-awaited “robotaxi,” with CEO Elon Musk making a strong pitch for Tesla’s future as an AI and robotics company .

    Tesla, founded in 2003 and named after inventor Nikola Tesla, gained prominence after Elon Musk joined the company a year later. Musk invested $30 million in Tesla, became the chairman of its Board of Directors, and later secured funding from Google’s founders.

    The prototype for Tesla’s first electric car, the Roadster, was revealed in 2006 and went into production in 2008. By June 2009, 500 Roadsters had been sold at a price of $98,000 each.

    In 2017, Tesla entered the mainstream market with the launch of its Model 3, which became the world’s most popular plug-in electric car in 2020, with approximately 501,000 unit sales in 2021. Between January and March 2022, Tesla set a new delivery record, surpassing 310,000 units.

    Tesla’s shares surged by over 7% on Tuesday, July 2, following the release of its latest quarterly delivery numbers, which exceeded consensus expectations. In the second quarter, Tesla produced around 410,831 vehicles and delivered approximately 443,956 vehicles, surpassing the analyst consensus of 439,302 deliveries. The quarter saw Tesla produce 386,576 Model 3/Y vehicles, with 422,405 deliveries. Tesla’s shares reached a high of $226.66 following this news.

    Thomas Monteiro, a senior analyst at Investing.com, remarked, “The better-than-expected Q2 deliveries are not only a breath of fresh air for Tesla’s margins but also for the EV market as a whole. Although these numbers were naturally boosted by strong auto demand in the US in general—GM’s sales, released shortly before, further attest to this—it indicates that the EV market is still alive, as several analysts were quick to point out a few months ago.

    “However, deliveries only offered limited support to the ongoing rally. The real focus for investors is on the technology front, with both the humanoid robot and the Robotaxi stories developing at an exciting pace. Both, particularly when combined, have the potential to become absolute game-changers for the company’s margins, meeting the expectations of Tesla shareholders.”

    What Is Tesla’s Annual Revenue?

    Tesla’s annual revenue in 2021 was $53.8 billion, marking a 70.64% increase from 2020 when it earned $31.5 billion in sales. In 2022, Tesla maintained its position as the leading EV manufacturer by revenue and market share, surpassing Volkswagen.

    During Q1 2022, Tesla sold over 310,000 vehicles, and its vehicle deliveries totaled 254,700 units in Q2 2022. For the quarter ending June 30, 2022, Tesla’s revenue was $16.934 billion, showing a 41.61% year-over-year increase. The revenue for the twelve months ending June 30, 2022, was $67.166 billion, reflecting a 60.45% growth year-over-year.

    In the third quarter of 2022, Tesla’s revenue was $21.454 billion, and in the fourth quarter, it reached $24.32 billion, representing a 37.24% year-over-year increase. The revenue for the twelve months ending December 31, 2022, was $81.462 billion, indicating a 51.35% growth year-over-year. In 2022, Tesla’s hourly revenue was $8,703,704, compared to $13,981 per hour in 2012.
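    For readers who want to sanity-check figures like these, year-over-year growth is simple arithmetic: (current revenue − prior revenue) ÷ prior revenue. Here is a minimal Python sketch using only the rounded dollar amounts quoted above; the results land within a fraction of a percentage point of the quoted growth rates, with the small gap explained by rounding.

    ```python
    # Sanity-check the year-over-year growth rates quoted above.
    # Revenue inputs (billions of USD) are the rounded figures from the text.

    def yoy_growth(current: float, prior: float) -> float:
        """Return year-over-year growth as a percentage."""
        return (current - prior) / prior * 100

    # FY2021 vs FY2020 (article quotes 70.64%)
    print(f"FY2021 vs FY2020: {yoy_growth(53.8, 31.5):.2f}%")
    # FY2022 vs FY2021 (article quotes 51.35%)
    print(f"FY2022 vs FY2021: {yoy_growth(81.462, 53.8):.2f}%")
    ```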

    In the second quarter of 2024, Tesla reported earnings with EPS of $0.52, falling short of the analyst estimate of $0.61. The revenue for the quarter was $25.5 billion, surpassing the consensus estimate of $24.33 billion.

    In 2022, Tesla achieved remarkable delivery statistics, with the company delivering 1,313,851 vehicles globally, marking a 40% increase from the previous year. It also increased its car production by 47% compared to 2021.

    Tesla’s sales in the US outperformed other luxury car brands, with 491,000 vehicles sold in 2022, securing its position as the luxury sales leader. The company also made significant contributions to American car manufacturing by producing its cars in California and Texas under Elon Musk’s leadership.

    During 2022, Tesla expanded its international factories, commencing vehicle production at Giga Berlin in Germany and updating its factory in China to manufacture up to 750,000 Model 3 and Model Y electric cars annually. Additionally, Tesla enhanced its production capacity across all its factories to produce 100,000 Model S and X vehicles per year, along with 1.8 million Model 3 and Model Y vehicles.

    In late 2022, Tesla reduced the prices of its vehicles globally, making its Model 3 and Model Y cars more affordable in several countries. This move aimed to accelerate the world’s transition to sustainable energy by enabling more people to purchase Tesla’s cars.

    How Many Tesla Vehicles Are Sold Each Year?

    Tesla has been increasing its annual production steadily. In 2014, the company manufactured only 35,000 vehicles. In the first half of 2021, Tesla produced 386,759 cars, with 184,877 vehicles delivered in Q1 and 201,304 in Q2. Overall, in 2021, Tesla manufactured 930,422 EVs and delivered 936,222, setting a new record. These numbers indicate a year-over-year growth of 82.5% compared to 2020.

    There were 906,032 Model 3/Y vehicles produced, representing a 99% increase from 2020. In the same year, Tesla manufactured 24,390 Model S/X cars, showing a 56% decrease year-over-year. The deliveries of the latter model also decreased by 56% compared to 2020, amounting to 24,980 vehicles. Deliveries of the Tesla Model 3/Y reached 911,242, marking a 106% increase from 2020.

    In the first and second quarters of 2022, Tesla produced 564,750 vehicles. Analysts anticipated that Tesla’s rapid growth might accelerate in the third quarter and beyond. During the annual shareholder meeting, Elon Musk discussed the company’s future production plans, stating that by the end of 2022, Tesla might achieve an annual production run rate of 2 million vehicles:

    “We’re aiming to achieve a 2 million vehicle run-rate by the end of the year… Thanks to the hard work of the Tesla team, we’ve already been able to achieve a 1.5 million unit annualized run rate. And depending on how the rest of this year goes, I think we might get close to, or will get approximately at the 1.5 million mark, and will be exiting the year at a 2 million-unit run-rate,” Musk said.

    In the fourth quarter of 2022, Tesla delivered 405,278 units, a new record and a 17.87% increase over the previous quarter. Deliveries in the third quarter of 2022 totaled roughly 343,000. Overall, in 2022, Tesla delivered 1,313,851 vehicles and produced 1,369,611. Since 2018, Tesla has delivered 3,382,821 and produced 3,429,532 cars.

    The company also stated that the factory in Shanghai enables it to manufacture 750,000 Model 3 and Model Y electric vehicles a year. The production capacity of Tesla’s factory in California allows it to annually produce 100,000 more expensive Model S and Model X cars, along with 550,000 of its Model 3 and Model Y vehicles. The company’s Texas factory can produce 250,000 Model Y vehicles annually, and so can its factory in Germany.

    Tesla’s 2022 Year-End Vehicle Production and Delivery

    Tesla began the new year by releasing its fourth-quarter vehicle production and 2022 delivery report on January 3, 2023. Total annual deliveries reached a new record of 1.31 million, a 40% increase over the previous year, while total annual production reached 1.37 million, up 47%. The Model 3 and Model Y together accounted for roughly 95% of the cars produced and delivered.

    Q4 deliveries and production, however, were less impressive. They fell short of the analyst consensus of around 427,000 deliveries: Tesla delivered 405,278 vehicles and produced 439,000 cars in the fourth quarter.

    The period ending December 31, 2022, was challenging for the company due to Covid outbreaks in China, leading to reduced production at its Shanghai factory. Yet Elon Musk sounded optimistic when he expressed his anticipation to achieve “50% average annual growth in vehicle deliveries over a multi-year horizon.”

    How Many Tesla Vehicles Are Sold by Country?

    Tesla sold and delivered the majority of its vehicles in China in 2021. Of these electric cars, 478,078 were made in Tesla’s production facility, Gigafactory Shanghai. From its American facilities, Gigafactory Texas and Gigafactory California, Tesla sold 301,998 vehicles.

    The company is gaining popularity in Europe. In 2021, it sold 169,507 vehicles in European countries, where the Tesla Model 3 was named Europe’s favorite electric vehicle. In 2019, the Netherlands purchased the largest number of Tesla cars among European countries – 30,911 vehicles. Norway and Germany followed the Netherlands’ example with 18,798 and 10,711 cars purchased respectively.

    In the United States, Tesla was the first manufacturer to reach 200,000 cumulative sales of electric vehicles, exhausting its cap under the federal subsidy of $7,500 per car sold. As a result, from January 2020 until the credit was restructured under the Inflation Reduction Act in 2023, Tesla vehicles sold in the USA did not receive the federal subsidy.

    Tesla reached its agreement to build a factory in China in 2018. In January 2020, Tesla’s Chinese Gigafactory launched output of the Model 3 sedan and batteries, and production of the Model Y began on the Chinese mainland in 2021.

    How Many Tesla Charging Stations Exist?

    As of the start of 2022, there were a total of 3,724 Tesla Supercharger stations worldwide. These charging stations are strategically placed in urban and rural areas, allowing Tesla owners to add substantial range in as little as fifteen minutes. In response to increased sales, Tesla has made significant efforts to expand the availability of charging stations. Between July 2018 and July 2021, Tesla added 1,652 new Supercharger stations.

    By the end of 2021, Tesla was operating 3,059 Supercharger stations in more than forty countries. The number of charging stations for Tesla electric vehicles grew by 86.07% from July 2019 to July 2021.

    In October 2021, the majority of Tesla Superchargers were situated in the United States and China, accounting for 65.53% of all Tesla charging stations. The USA had 1,159 Tesla charging facilities, representing 37.88% of all locations, while China had 846, making up 27.65% of all Superchargers. Canada had 125, or 4.08% of all Tesla Supercharger locations.
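    As a quick consistency check, the country counts and percentage shares quoted above should all imply roughly the same worldwide total. A minimal Python sketch, using only the October 2021 figures from the paragraph above:

    ```python
    # Back out the implied worldwide Supercharger total from each country's
    # station count and share of all locations (October 2021 figures).
    locations = {"USA": (1_159, 37.88), "China": (846, 27.65), "Canada": (125, 4.08)}

    for country, (count, share_pct) in locations.items():
        implied_total = count / (share_pct / 100)
        print(f"{country}: {count} stations at {share_pct}% -> ~{implied_total:.0f} worldwide")
    # Each country implies a total near 3,060, in line with the 3,059 stations
    # reported at the end of 2021.
    ```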

    Tesla manufactures electric vehicles in three countries: the United States, China, and Germany, utilizing a total of six manufacturing facilities. Four of these facilities have been fully operational for several years. In addition to the original Fremont Factory in California, Tesla has added three more operating manufacturing facilities: Gigafactory Nevada, Gigafactory New York, and Gigafactory Shanghai in 2016, 2017, and 2018, respectively. Across its Fremont and Shanghai locations, Tesla has installed an annual production capacity of 1.5 million cars.

    Gigafactory Berlin-Brandenburg in Germany was officially inaugurated on March 22, 2022. This factory is designed to produce batteries, battery packs, and powertrains for use in Tesla vehicles. Additionally, the first Model Y Performance with 2170 cells was manufactured at this location in April 2022.

    Gigafactory Texas, near Austin, Texas, commenced limited production of the Model Y toward the end of 2021, with the first deliveries of electric vehicles from this factory occurring on April 7, 2022. Gigafactory Texas is also intended to be the primary site for the production of the Tesla Cybertruck and the Tesla Semi, as well as the location of Tesla’s corporate headquarters. It is the second largest factory in the United States by size and the second largest building in the world by volume.

    Tesla has updated the Model 3; the refreshed version made its debut in late 2023. The updates consisted of a completely redesigned yet familiar exterior, new interior technology, and technical enhancements at reduced production costs. This made the latest Model 3 a fantastic vehicle with new features and a price tag below $40,000, but that has recently and quietly come to an end.

    Although Tesla did not officially announce the discontinuation of any model, the base Standard trim is no longer available. Instead, it has been replaced by the Model 3 Long Range Rear-Wheel-Drive with an MSRP of $42,500, which is $3,500 more than the previous Standard trim. Despite Tesla’s assertion that the Model 3 was manufactured at a reduced cost, it seems the Standard model would ultimately have become more expensive to produce.

    This model’s battery technology has been discontinued

    The Standard Model 3 used LFP (lithium iron phosphate) battery cells sourced from China. This allowed Tesla to manufacture the vehicle at a lower cost, but there was a downside to these batteries: vehicles equipped with them did not qualify for the $7,500 tax credit available to EV buyers, and this included the Standard Model 3. Additionally, the Biden administration has imposed higher import tariffs on Chinese-made products, including batteries.

    These additional costs and the absence of incentives would have resulted in the cheapest Model 3 becoming quite expensive over time. It simply did not make sense, so Tesla decided to eliminate this model in favor of the next vehicle, which featured superior batteries and could easily qualify for EV tax incentives.

    While the Long Range Rear-Wheel Drive Model may be pricier, it provides significantly more value than the initial price suggests. Firstly, it offers 91 miles more range at 363 miles compared to 272 for the Standard model. This is due to the utilization of NCA (nickel cobalt aluminum) batteries in this model, which are produced in the United States. Not only does it offer much more range, but it also assists Tesla in keeping production costs low, enabling you to still own an affordable Tesla EV.

    The most significant advantage of this change is that the cheapest Model 3 is finally able to take advantage of the $7,500 government tax credit available to eligible EV buyers. This means you could purchase a new Model 3 Long Range for an effective price of around $35,000 after the credit, which is $4,000 less than the previous Standard model, but with additional range, performance, improved technology, and features. With this move, the Tesla Model 3 may very well maintain its position as the best value-packed EV on the market. Tesla’s sales have been thriving, and we might witness another surge in demand for the beloved Model 3.
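    The pricing math behind that claim is straightforward. A short Python sketch using only the MSRP, credit, and price differences quoted above:

    ```python
    # Effective price of the Model 3 Long Range RWD after the federal credit,
    # compared with the discontinued Standard trim (figures from the text above).
    long_range_msrp = 42_500                      # MSRP of the new base trim
    federal_credit = 7_500                        # EV tax credit it now qualifies for
    standard_msrp = long_range_msrp - 3_500       # Long Range is $3,500 above the old Standard

    effective_price = long_range_msrp - federal_credit
    print(f"Effective Long Range price: ${effective_price:,}")                    # $35,000
    print(f"Savings vs. the old Standard: ${standard_msrp - effective_price:,}")  # $4,000
    ```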

    With the EV market expanding day by day, it has never been more challenging to select the right model for you. One safe bet is to consider one of Tesla’s models, as they not only offer a comprehensive EV package with a wide range of refinements and good range, but they are also competitively priced.

    Although they do not contain as many moving parts as vehicles with traditional internal combustion engines, there are still maintenance expenses to take into account in other areas. In fact, studies indicate that new EVs can be three times as problematic as ICE vehicles.

    These include the typical components found in any car, such as suspension, as well as the electric powertrain, which can still experience issues. CarBuzz decided to determine the Tesla model with the lowest maintenance costs in 2024.

    For those seeking to minimize their expenses on a new electric vehicle, the fact that Tesla’s most affordable model is also the most cost-effective to operate should be welcome news. The Model 3 was initially introduced in 2017 as Tesla’s entry-level offering, positioned below the Model S.

    It is spacious enough for medium-sized families to use on a daily basis. Several versions are available, the most affordable being the Long Range rear-wheel-drive model, which is equipped with a single-motor powertrain producing 280 hp. That is sufficient to propel it to 60 mph in 4.9 seconds, which should be more than adequate for performance-oriented parents.

    If this seems somewhat underwhelming, Tesla also offers a long-range AWD version and the Performance variant. Both feature a dual-motor powertrain, with the latter boasting an impressive 510 hp while the former has 350 hp. The contrast is striking, with the Performance capable of accelerating to 60 mph in just 2.9 seconds, while the AWD achieves this in 4.2 seconds.

    When factoring in the US government EV grant, they are also reasonably priced in comparison to their closest competitors. The base RWD model can be purchased for just $35,000, while a Performance model will set you back a tempting $47,000. The standard AWD model falls between them at just under $40,000. The range is also impressive across the three models, with the RWD capable of covering 363 miles on a single charge.

    The heavier AWD, now eligible for the $7,500 EV tax credit once more, can cover 341 miles, while the Performance model can travel 303 miles.

    If all this seems impressive, it gets even better when considering the projected maintenance expenses for the car. As per CarEdge, maintaining a Model 3 over a decade is estimated to cost only $5,381, which is more than $4,000 less than the class average. This cost comparison includes regular gas-powered vehicles in the luxury sedan category, demonstrating the significant savings that electric vehicles can offer when factoring in maintenance.

    This point is further emphasized by the Model 3’s low 13.45% likelihood of experiencing a major malfunction over this period, roughly 13.12 percentage points lower than its closest competitors. It also solidifies the Model 3 as the most cost-effective Tesla to maintain over extended periods, making it an ideal choice for families with limited disposable income.
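    To make that comparison concrete, the deltas quoted above imply the following class figures; the derived numbers below follow arithmetically from the article’s differences rather than from any separate source.

    ```python
    # Reconstruct the implied class figures from the differences quoted above.
    model3_10yr_maintenance = 5_381        # CarEdge 10-year maintenance estimate
    model3_malfunction_pct = 13.45         # chance of a major malfunction

    class_avg_maintenance = model3_10yr_maintenance + 4_000   # "$4,000 less than the class average"
    rival_malfunction_pct = model3_malfunction_pct + 13.12    # "13.12 points lower than rivals"

    print(f"Implied class-average maintenance: over ${class_avg_maintenance:,}")   # > $9,381
    print(f"Implied rival malfunction likelihood: ~{rival_malfunction_pct:.2f}%")  # ~26.57%
    ```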

    Regarding charging, the cost depends on whether you have a home charger and your location among the 50 states. Home charging not only revolutionizes the experience but also provides the most economical charging option. Hawaii is the most expensive state for charging an electric vehicle, but even there, substantial savings can be expected compared to a gas-powered car.
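    For a rough sense of scale, the cost of a full home charge can be estimated as usable pack capacity divided by charging efficiency, times the local residential electricity rate. The sketch below is illustrative only: the ~75 kWh pack size, the 90% charging efficiency, and the per-kWh rates are assumptions chosen for the example, not figures from this article.

    ```python
    # Illustrative home-charging cost estimate. All inputs are assumptions
    # made for this example, not figures quoted in the article.
    PACK_KWH = 75          # assumed usable pack size for a Long Range Model 3
    EFFICIENCY = 0.90      # assumed AC charging efficiency (wall-to-battery)

    def full_charge_cost(rate_per_kwh: float) -> float:
        """Cost of charging an empty pack to full at a given residential rate."""
        return PACK_KWH / EFFICIENCY * rate_per_kwh

    print(f"At $0.15/kWh (roughly typical US rate): ${full_charge_cost(0.15):.2f}")
    print(f"At $0.40/kWh (a Hawaii-like rate):      ${full_charge_cost(0.40):.2f}")
    ```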

    The 2024 Model 3 has experienced minimal recalls thus far

    While the Model 3 does not have a large number of serious reliability issues, it has been subject to several recalls for relatively minor issues. Most of these have been addressed over the years through software updates or design adjustments. Some recalls were issued for the 2024 model, according to RepairPal. For instance, there was an issue with the hood latch assembly failing to register if the hood wasn’t shut properly, which was resolved through a software update.

    Another recall involved a single vehicle missing a gas deflector in the side airbag system, which could hinder its proper functioning in the event of a collision. Although this was fixed in April, concerned customers can reach out to Tesla’s service department for more details about the issue. Thus far, only one vehicle has been found to have this problem.

    While the projected figures are promising, firsthand reviews are invaluable. Fortunately, the majority of 2024 Tesla Model 3 owners validate the car’s strong reliability record. Kelley Blue Book, which aggregates feedback from car owners, reports that out of 186 Model 3 drivers who left reviews, 167 awarded it a four or five-star rating.

    The remaining drivers gave it three stars or fewer, leaving the Model 3 with a reliability rating of 4.6 out of 5 stars. Most negative reviews focused on the Model 3’s build quality. For example, one owner mentioned experiencing a seat belt failure that had to be rectified by Tesla. Another reported three defects within a year of ownership, including one related to the glass roof.

    Another owner expressed frustration with the Tesla’s computer system, stating that their voice control stopped working, and the volume controls were unresponsive. A couple of drivers noticed uneven panel gaps around the car, while another encountered issues with wireless phone charging. Overall, these issues are likely isolated to specific cars, as most owners are satisfied with their Model 3’s quality.

    The Model 3 was one of the top cars in its class. Not only could it outperform most of its competitors in terms of performance, but it was also one of the most affordable options. Additionally, it offered several features not found in other brands. For the 2024 model year, unveiled in September 2023, Tesla claimed to have changed or improved over 50% of the vehicle. This was a bold statement for a car that appeared largely unchanged, yet substantial upgrades were made.

    Like any other facelift or mid-life cycle update, the car needed to have a distinct appearance. While the overall profile remained the same, as the 2024 Model 3 was built on the same platform and structure, there were other exterior details that set the new model apart from its predecessor. The American brand redesigned the headlights, making them slimmer and more stylish. Additionally, more prominent LED daytime running lights were added inside the headlights. The new headlights necessitated a redesigned bumper, which became slimmer and wider, and the fog lamps were removed.

    The car manufacturer deemed them ineffective. Tesla included new wheels in its lineup, available in sizes up to 19 inches and featuring a multi-spoke design. At the rear, the taillights were repositioned onto the trunk lid. Red rear fog lamps were also added to the lower bumper, as they are required by law in many markets.

    Internally, the dashboard closely resembled the one in the pre-facelift version, but it was actually new. The design concept was similar, featuring a plain, flat area that now had an LED strip running along the base of the windshield, between the A-pillars. Additionally, a new steering wheel design with integrated buttons for turn signals and cruise control was introduced.

    In the 2024 Model 3, Tesla eliminated all the column-mounted stalks and even relocated the windshield-wiper controls to the redesigned steering wheel. A new center console was installed between the front seats, providing a storage compartment and a pair of cup holders. Furthermore, the manufacturer upgraded the touchscreen in the center stack, maintaining the same size but offering improved image quality.

    Another significant enhancement was the introduction of a color touchscreen for rear-seated passengers, allowing them to control vents and the stereo system.

    Although the updated version retained the same motors and batteries, it was capable of achieving a range of up to 423 miles (678 km), while the base model could travel 346 miles (554 km) on a single charge.

    Unleashing the Future: How Electric Vehicle (EV) Batteries Are Revolutionizing Transportation

    Electric vehicle (EV) batteries are transforming the way we think about transportation. As the world shifts towards sustainable energy solutions, the development of advanced battery technologies is playing a pivotal role in driving this change.

    From lithium-ion to solid-state batteries, the continuous innovations in battery technology are not only extending the range of EVs but also enhancing their performance and reducing charging times. One of the most significant impacts of EV batteries is their contribution to reducing greenhouse gas emissions.

    By powering vehicles with clean energy, EV batteries are helping to mitigate the environmental impact of traditional gasoline-powered vehicles. This shift towards electrification is not only beneficial for the environment but also for the overall well-being of communities, as it reduces air pollution and promotes cleaner urban spaces.

    Moreover, the integration of smart grid technologies with EV batteries is opening up new opportunities for energy storage and grid stabilization. Through vehicle-to-grid (V2G) technologies, EVs can serve as mobile energy storage units, contributing to the stability of the overall energy grid.

    This bi-directional flow of energy enables EV owners to not only power their vehicles but also to supply energy back to the grid during peak demand, thus creating a more resilient and adaptable energy infrastructure. In addition, the evolution of EV batteries is driving advancements in renewable energy integration.
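    As a conceptual illustration of that bi-directional flow, the toy sketch below models a deliberately simple V2G dispatch rule: charge when grid demand is low, export back to the grid when demand peaks, and never dip below a driver-reserved state of charge. The thresholds, power rating, and pack size are hypothetical values for illustration, not drawn from any real V2G standard or product.

    ```python
    # Toy vehicle-to-grid (V2G) dispatch rule; all constants are hypothetical.
    PACK_KWH = 75.0          # assumed battery capacity
    MIN_RESERVE_SOC = 0.40   # never discharge below 40% so the owner can still drive
    MAX_POWER_KW = 11.0      # assumed bi-directional charger rating

    def v2g_power_kw(grid_demand: float, soc: float) -> float:
        """Return power in kW: positive = charge from the grid, negative = feed the grid.

        grid_demand is a normalized 0..1 signal (1 = peak demand);
        soc is the battery state of charge as a fraction of capacity.
        """
        if grid_demand < 0.3 and soc < 1.0:
            return MAX_POWER_KW      # cheap, low-demand period: charge the car
        if grid_demand > 0.8 and soc > MIN_RESERVE_SOC:
            return -MAX_POWER_KW     # peak demand: export stored energy
        return 0.0                   # otherwise stay idle

    # Evening peak with a mostly full pack -> the car exports power.
    print(v2g_power_kw(grid_demand=0.9, soc=0.8))   # -11.0 (feeding the grid)
    print(v2g_power_kw(grid_demand=0.2, soc=0.5))   #  11.0 (charging)
    ```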

    By coupling EV charging stations with solar and wind power generation, it is possible to create interconnected systems that utilize clean energy sources for both transportation and electricity generation. This convergence of technologies is fostering a more sustainable and decentralized energy ecosystem, enabling individuals and communities to actively participate in the energy transition.

    As EV batteries continue to evolve, they are poised to have a profound impact on the future of transportation and energy. The ongoing research and development in battery technologies are paving the way for a more sustainable and efficient mobility landscape, where electric vehicles play a central role in shaping the future of transportation.

    From Zero to Hero: The Rise of EV Batteries in Sustainable Energy

    The increasing demand for electric vehicles (EVs) has driven the rapid development of EV batteries, making them a pivotal component in sustainable energy solutions. As the automotive industry shifts towards electrification, EV batteries have emerged as a game-changing technology, offering higher energy density, longer lifespan, and faster charging capabilities. This transformation has been instrumental in reducing carbon emissions and curbing reliance on fossil fuels, thus paving the way for a greener and more sustainable future.