I attempted to rely solely on the Vision Pro for a week of work, and it surpassed my expectations once I connected it to my laptop.
The Apple Vision Pro is the most remarkable mixed reality headset I have ever used. It is enjoyable for gaming and watching movies, and its eye and hand tracking are impressive. At a price of $3,500, however, one would expect it to offer more than just entertainment.
Considering its cost is equivalent to that of a fully equipped MacBook, one would hope to be able to use it for productivity purposes.
I have been using the Vision Pro for a few months, and for the past week I have been using it in place of my traditional PC setup to assess its productivity potential. The good news is that the Vision Pro has the power and flexibility to function as a virtual office.
The bad news is that extra equipment is required to fully realize that potential.
To get real work done, you need to add a MacBook.
Initially, I attempted to work on the Vision Pro without any additional equipment. This seemed feasible, since it offers power similar to the iPad Pro, the 2022 MacBook Pro, and the 2023 MacBook Air, and visionOS is based on both iPadOS and macOS. In design and compatibility, however, it leans much closer to the iPad.
The Vision Pro encounters similar challenges as the iPad Pro when it comes to serious work, and these challenges are even more pronounced on the headset. iPadOS presents difficulties with multitasking and managing multiple apps simultaneously.
Managing window placement, multitasking with multiple apps and desktops, and even simply knowing which apps are open is extremely challenging without a task manager or a macOS-like dock with indicators for running apps.
The iPad Pro has a dock without indicators, but the Vision Pro lacks a dock altogether; users need to access an iPhone-like app list to browse apps, and it does not indicate which apps are open.
In summary, I do not recommend relying solely on the Vision Pro for work purposes.
To simplify the process, I used a MacBook Air and tested the Mac Virtual Display feature. Connecting to the Mac via Mac Virtual Display is straightforward, although not as seamless as Apple claims.
While wearing the Vision Pro, you simply look up to access the quick menu, open settings, and select the Mac Virtual Display icon. If the Mac and Vision Pro are on the same Wi-Fi network and signed into the same Apple account, you can select the Mac and connect instantly.
The process is fast and simple, and I have no complaints about it. It is supposed to be even more streamlined, though: the Vision Pro should display a large “Connect” button floating over the Mac when you look at it.
I have seen the button appear a few times, but not consistently, and most of the time it does not appear. Nevertheless, manually connecting through the quick menu is almost as smooth.
Once connected, the Mac Virtual Display presents the Mac’s screen as a floating window that can be repositioned and resized within the headset. Although smart glasses like the Rokid Max and the Viture One, which cost a sixth of the price, offer similar functionality, the Vision Pro has distinct advantages.
Firstly, the Mac Virtual Display window can be moved and resized, and it will remain fixed in that position even when moving around. Whether you want it to float just above your MacBook or cover your wall like a large TV, it is easy to position and resize. It will remain in place even if you get up and move around.
The Vision Pro surpasses other smart glasses by allowing the use of apps while using Mac Virtual Display.
While multitasking on the Vision Pro alone is challenging, managing all your essential tools in macOS on one large screen, with a video window open to the left and a chat window open to the right, is easy.
Keyboard and mouse control worked well while connected to the MacBook, but I couldn’t use my mouse outside of the Mac Virtual Display window; the Vision Pro doesn’t support any form of mouse input itself.
A Magic Trackpad, however, works across both: you can swipe the cursor between the MacBook screen and Vision Pro apps.
Importantly, physical keyboard input from the MacBook was translated to the Vision Pro. I could type in my MacBook apps and then switch to a separate app on the Vision Pro and start typing there with the same keyboard.
Using your eyes and fingers to type on the Vision Pro’s virtual keyboard is acceptable for a few words, but for longer sentences, a physical keyboard is necessary.
Coming from a PC setup with an ultrawide monitor and previously using two monitors, I was disappointed to discover a significant limitation in Mac Virtual Display: only one screen is available.
Even with multiple desktops through macOS’ Mission Control, they cannot be distributed to multiple windows on the Vision Pro. You can still set other apps around you and run them alongside the Mac Virtual Display window, but you’re limited to Vision Pro apps.
On the positive side, you can choose from various resolutions including 4K and 5K (5,120 by 2,880), surpassing the 2,560-by-1,440 screen of my MacBook Air.
Less significant but still somewhat irritating, the Mac Virtual Display connection doesn’t detect the Vision Pro’s Persona feature as a webcam feed. If you take a video call on the MacBook, others will only see your headset-covered face.
To use Persona for calls, you need a browser window or a videoconferencing app running on the Vision Pro itself.
It took some experimentation to figure out the best configuration for me, but I ultimately settled on the Mac Virtual Display in front of me, a Safari window behind it for taking video calls with Persona, a few Vision Pro communications apps to my right, and the Television app showing a virtual screen playing music to my left.
I really enjoyed working in this virtual office. Even with only one screen for my tools on the laptop, being able to make it as big as I wanted and place it anywhere around me was a huge advantage.
I could still run browsers, communications software, and other apps outside of the Mac Virtual Display window through the Vision Pro itself, and they all worked together very well.
Keyboard controls between apps were generally very smooth, and my clipboard was shared between the Vision Pro and the MacBook, allowing me to copy a URL from a message and drop it on my desktop (which came in handy for iCloud links with large Vision Pro recordings).
The experience wasn’t perfect, and I encountered some hiccups. Occasionally, the Mac Virtual Display window would indicate that the connection was interrupted.
Interestingly, this didn’t prevent me from using the MacBook through the Vision Pro, but it did stop my keyboard inputs from registering in Vision Pro apps until the error message disappeared.
Chrome on the MacBook consistently crashed when I removed the Vision Pro, which didn’t happen when I physically closed the laptop or manually disconnected from it. These are relatively minor inconveniences that can be smoothed out over time.
One issue you’ll likely face when working on the Vision Pro is the discomfort of long-term use. While the Vision Pro can run indefinitely when plugged in and the MacBook can last a solid 16 hours without power, I could only tolerate wearing the headset for 90 minutes at a time.
Removing it after that duration left me with a bit of eye strain and a headache for a short period. The 20-20-20 rule of looking away from a screen at something 20 feet away for 20 seconds every 20 minutes is even more important for a view-replacing headset like the Vision Pro.
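That cadence is easy to automate if you want a nudge to actually take the breaks. Below is a minimal Python sketch of a 20-20-20 reminder schedule; it is a generic illustration I wrote for this purpose, not a feature of visionOS or macOS:

```python
from datetime import datetime, timedelta

def twenty_rule_breaks(start: datetime, session_minutes: int) -> list[datetime]:
    """Return the times to pause and look at something 20 feet away
    for 20 seconds: one reminder every 20 minutes (the 20-20-20 rule)."""
    return [
        start + timedelta(minutes=m)
        for m in range(20, session_minutes + 1, 20)
    ]

# Example: a 90-minute headset session starting at 9:00 gets
# reminders at 9:20, 9:40, 10:00, and 10:20.
breaks = twenty_rule_breaks(datetime(2024, 5, 6, 9, 0), 90)
print([b.strftime("%H:%M") for b in breaks])
```

Feeding these times into any calendar or notification tool would give you the breaks without having to watch the clock inside the headset.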
Following a demonstration lasting approximately 30 minutes that covered the key features available for testing, I left with the firm belief that Apple has introduced a significant advancement in the capabilities and implementation of XR, or mixed reality, with its new Apple Vision Pro.
To clarify, I am not asserting that it fulfills all its promises, introduces a genuinely new computing paradigm, or achieves any of the other far-reaching goals Apple has set for its release. I will require ample time with the device beyond a guided demonstration.
However, I have experience with nearly every major VR headset and AR device since the Oculus DK1 in 2013 up to the most recent generations of Quest and Vive headsets. I have explored all the experiences and attempts to popularize XR.
I have witnessed successful social, narrative, and gaming experiences such as Gorilla Tag, VRChat, and Cosmonious High, as well as emotionally impactful first-person experiences created by Sundance filmmakers that shed light on the human (or animal) condition.
Nevertheless, none of them possess the advantages that Apple brings to the table with Apple Vision Pro, including 5,000 patents filed over the past few years and access to a vast pool of talent and capital.
Every aspect of this device reflects Apple-level ambition. Whether it will become the “next computing mode” remains uncertain, but the dedication behind each decision is evident. No corners have been cut, and full-fledged engineering is on display.
The hardware is impressive: 23 million pixels spread across the two panels, significantly more than most consumers have encountered in other headsets. The optics are superior, the headband is comfortable and easily adjustable, and there is a top strap for alleviating weight.
Apple has stated that it is still deciding which light seal (the cloth shroud) options to include at launch, but the default one was comfortable for me. It intends to offer a variety of sizes and shapes to accommodate different face shapes.
The power connector features a clever design as well, using internal pins for power with an external twist lock to secure the connection.
For people who need vision correction, there is also a magnetic lens solution covering some (but not all) prescriptions. The onboarding experience includes automatic eye-relief calibration that aligns the lenses with the center of your eyes, eliminating the need for manual adjustments.
The main frame and glass piece look satisfactory, although it’s worth noting that they are quite substantial in size. Not necessarily heavy, but certainly noticeable.
If you have any experience with VR, you are likely aware of the two significant obstacles that most people encounter: nausea caused by latency and the sense of isolation during prolonged sessions wearing a device over your eyes.
Apple has directly addressed both of these challenges. The R1 chip, working alongside the M2, streams images to the displays within 12 milliseconds, and I observed no judder or frame drops. There was a slight motion blur in the passthrough mode, but it was not distracting. Windows rendered sharply and moved swiftly.
Naturally, Apple’s ability to mitigate these issues stems from a plethora of entirely new and original hardware. Every aspect of this device showcases a new idea, a new technology, or a new implementation.
However, all these innovations come at a cost: at $3,500, it exceeds high-end expectations and firmly places the device in the power user category for early adopters.
Here’s what Apple has accomplished exceptionally well compared to other headsets:
The eye tracking and gesture control are nearly flawless. The hand gestures are detected from anywhere around the headset, including on your lap or resting low and away on a chair or couch. Many other hand-tracking interfaces require you to keep your hands raised in front of you, which can be tiring.
Apple has incorporated dedicated high-resolution cameras on the bottom of the device specifically to track your hands. Similarly, an eye-tracking array inside ensures that, after calibration, nearly everything you look at is precisely highlighted. A simple, low-effort tap of your fingers, and it works.
Passthrough plays a crucial role. It’s vital to have a real-time 4K view of the surrounding environment, including any people nearby, when using VR or AR for extended periods.
Most people have a primal instinct that makes them extremely uneasy when they can’t see their surroundings for an extended period.
Being able to see your surroundings through the headset should make longer sessions more likely. Additionally, a clever mechanism automatically reveals a person approaching you through your content, alerting you to their presence.
The exterior eyes, which change appearance based on your activity, also serve as a helpful cue for those outside.
The high resolution ensures that text is easily readable. Apple’s positioning of this as a full-fledged computing device only makes sense if the text is legible.
Previous “virtual desktop” setups relied on panels and lenses that presented a blurry view, making it difficult to read text for an extended period.
In many cases, it was physically uncomfortable to do so. With the Apple Vision Pro, text is incredibly sharp and readable at all sizes and distances within your space.
There were several pleasantly surprising moments during my brief time with the headset. Apart from the display’s sharpness and the responsive interface, the entire suite of samples demonstrated meticulous attention to detail.
The Persona feature works. I had serious doubts about Apple’s ability to create a functional digital avatar based solely on a scan of your face using the Vision Pro headset. Those doubts were unfounded.
I would say that the digital version it creates for your avatar in FaceTime calls and other areas successfully bridges the uncanny valley.
It’s not flawless, but the skin tension and muscle movement are accurate; machine learning models drive the expressions into a full range of facial movement; and the brief interactions I had with a live person on a call (and it was live, I verified by asking off-script questions) did not feel unsettling or strange. It worked.
It’s sharp. I’ll reiterate, it’s extremely sharp. It handles demos like the 3D dinosaur with incredible detail down to the texture level and beyond.
3D movies look great on it. James Cameron probably had a moment when he saw “Avatar: The Way of Water” on the Apple Vision Pro.
This device is perfectly designed to showcase the 3D format, and it can play such films almost immediately, so there will likely be a substantial library of 3D movies that breathes new life into the format.
The 3D photos and videos you can capture directly with the Apple Vision Pro also look excellent, but I didn’t have the chance to capture any myself, so I can’t comment on that experience. Is shooting them awkward? Hard to say.
The setup process is simple and seamless. A few minutes and you’re ready to go. Very Apple.
Yes, it’s as impressive as it looks. The output of the interface and the various apps is so remarkable that Apple used them directly from the device in its keynote.
The interface is vibrant and bold and feels present due to its interaction with other windows, casting shadows on the ground, and reacting to lighting conditions.
Overall, I’m cautious about making sweeping claims regarding whether the Apple Vision Pro will deliver on Apple’s promises about the advent of spatial computing.
I’ve had too little time with it, and it’s not even finished — Apple is still refining aspects such as the light shroud and various software elements.
However, it is undeniably well-executed. It represents the ideal XR headset. Now, we’ll have to wait and see what developers and Apple achieve over the next few months and how the public responds.
A recent leak suggests that mass production of the Apple Vision Pro 2 is in progress.
The Apple Vision Pro 2 is scheduled to commence mass production in 2025, despite previous reports indicating otherwise. The original Vision Pro, Apple’s AR headset, did not perform well in the market, with sales struggling to reach 100,000 units by July 2024.
Apple intends to introduce new features to enhance the popularity of the sequel. One of these features is a new M5 chipset, expected to enhance the headset’s performance.
Contrary to earlier rumors of production cessation due to low demand for the original Vision Pro, analyst Ming-Chi Kuo from TF International Securities believes that mass production of the new M5 chipset-equipped AR headset will begin in the second half of 2025. Apple aims to make the Vision Pro 2 more cost-effective, potentially appealing to a broader customer base.
Kuo also anticipates minimal, if any, changes to the design of the AR headset, which would reduce production costs. This strategic move would leverage the fresh and appealing design of the Vision Pro, with its innovative EyeSight outward display and modern, futuristic, high-end aesthetic.
New chip, new enhancements
According to Kuo, the M5 chipset will enhance the Apple Intelligence experience. The projected launch date of the Apple Vision Pro 2 suggests that the M5 chipset may utilize TSMC’s N3P node, although this is not confirmed.
In an effort to control production costs, Apple will not utilize its more advanced 2nm chipsets. These chipsets were initially expected to be used for manufacturing next-generation iPhone chips like the A19 and A19 Pro, but it appears that these products will also stick with Apple’s N3P node (3 nm).
While not as cutting-edge as the 2nm chipsets, the 3nm chipset is still efficient and powerful.
The high cost of the Apple Vision Pro, starting at $3,500 (£2,800, AU$5,300), is often cited as a reason for its low sales figures. Other reasons include a perceived lack of content for the device, as well as comfort, wearability, and the intuitiveness of the gesture-based control.
There is still much unknown about the specifications of the Apple Vision Pro 2, but if Apple can deliver the proposed M5 chipset in a more affordable headset, it could be a success for the company.
The Vision Pro 2 is reportedly set to be released by the end of next year, featuring an M5 chip and designed for AI ‘from the ground up’ (as Apple might say). This news is promising, and I believe it’s the right move for Apple.
It has been clear for some time that Apple’s vision for its Vision products is long-term.
AR and VR are still in the early stages of adoption. However, the challenge many tech companies face is how to develop the technology and platform without having devices in the market.
So, earlier this year, Apple released the Vision Pro. While it has not been a major success or significantly contributed to the company’s bottom line, it is a tangible product. Developers are creating applications for it, and technologies like visionOS, Immersive Video, and Spatial photos are expanding. Slowly, the Vision Pro is making a ‘spatial computing’ future more feasible.
The objective: appealing to the masses
Ultimately, Apple aims for its Vision products to become a major success and the next big thing. It wants spatial computing to become mainstream.
To achieve this goal, at the very least, a Vision product needs to be:
Lighter
More versatile
Less expensive
Therefore, reports that Apple’s priority is not the Vision Pro 2, but instead a more affordable Vision device, make a lot of sense.
While Apple focuses on the non-Pro version of its Vision line, it is crucial to keep the Vision Pro at the forefront of innovation.
This is where the latest report becomes relevant.
The Vision Pro 2 is receiving the necessary upgrades, and perhaps more
Previously, I suggested that while Apple is concentrating on a less expensive Vision device, it should at least equip the current Vision Pro with an M4 and leave it at that.
It appears that this is precisely what will happen, except it will feature an M5 instead.
Reportedly, the Vision Pro 2 will include an M5 chip with a strong focus on Apple Intelligence.
And I say: great!
Apple’s focus on Apple Intelligence is evident, and the absence of this feature in visionOS for the $3,500 Vision Pro is disappointing, given its otherwise advanced capabilities.
If Apple were to introduce a new Vision Pro in 2025 with an M5 chip and integrate several Apple Intelligence features into visionOS 3, it would generate the necessary excitement for the platform.
Meanwhile, the company can continue prioritizing the more affordable Vision product, as it has a better chance of achieving widespread success.
For now, it’s crucial for the Vision Pro to remain appealing to early adopters and the curious, and the rumored updates should help achieve this.
According to Apple analyst Ming-Chi Kuo, a new version of the Vision Pro headset is being developed and is expected to begin mass production in the second half of 2025.
Kuo suggests that the most significant change in the upcoming model will be the inclusion of Apple’s M5 chip, a substantial upgrade from the current Vision Pro’s M2 chip. This enhancement is expected to significantly boost the device’s computing power, particularly in terms of integrated Apple Intelligence features.
Despite the upgraded internals, Kuo reports that other hardware specifications and the overall design of the Vision Pro will remain largely unchanged. This approach may help Apple manage production costs, although the price point is anticipated to remain close to the current $3,499 starting price.
Kuo emphasizes that if the new version introduces compelling use cases, it could propel Apple’s spatial computing platform toward mainstream adoption. He also speculated on the potential integration of advanced AI models, such as text-to-video capabilities similar to OpenAI’s Sora, which could greatly enhance the Vision Pro experience.
According to Bloomberg’s Mark Gurman, Apple is planning to incorporate Apple Intelligence features into the Vision Pro headset in the future. While the device is capable of running on-device AI functions such as writing tools, notification summaries, and an enhanced Siri, these features are not expected to be available in 2024. Instead, Apple may be saving the Apple Intelligence integration for visionOS 3, potentially launching in 2025.
Apple’s exploration of a new product category includes venturing into robotics. Additionally, the company is preparing new iPads and accompanying accessories for a May release, the Vision Pro is set to receive another Personas upgrade, and there has been a significant management change at Apple.
Just a year ago, Apple’s future product pipeline seemed abundant. The Vision Pro had not yet been introduced, smart home devices were in development, and the Apple electric car project seemed to be gaining traction.
Today’s situation is markedly different. While the Vision Pro is now available for purchase, it has not achieved widespread popularity. The Apple vehicle project has been scrapped, along with efforts to develop next-generation smartwatch screens.
The performance improvements of processors have begun to level off, and the company is lagging behind in the smart home market.
To compound the situation, Apple’s competitors, such as Microsoft Corp. and Alphabet Inc.’s Google, have made significant progress in generative AI, much to the excitement of consumers and investors. Meanwhile, Apple has remained relatively inactive.
Apple’s business is heavily reliant on the iPhone, which contributes to more than half of its revenue. Sales in that market have stagnated, underscoring the importance of finding a major new product category.
Apple has faced similar challenges in the past. The iMac revitalized the company in the late 1990s, the iPod propelled it into consumer electronics in the early 2000s, and the iPhone transformed Apple into the industry giant it is today. The iPad further solidified its position in our lives.
While Apple is starting to generate more revenue from online services and other offerings, it remains fundamentally a company focused on devices. During the most recent holiday season, the majority of its revenue was derived from products such as the iPhone, Mac, iPad, Apple Watch, and AirPods.
Ultimately, services like the App Store, TV+, and Apple One bundles depend on the iPhone and other devices to function. This underscores the importance of staying at the forefront of hardware innovation.
An Apple vehicle was seen as the “ultimate mobile device,” and it’s clear why that possibility was exciting. The auto industry has thin margins, but the vehicles could have sold for $100,000 each.
Even if Apple sold only a fraction of Tesla Inc.’s unit volume, that could have amounted to a $50 billion business (roughly equivalent to the iPad and Mac combined).
The Vision Pro headset introduced Apple to the mixed-reality category, which the company calls spatial computing. However, its greatest potential might be in replacing the Mac and iPad, rather than creating an entirely new source of revenue.
For the device to gain any significant traction, the company will need to produce a more affordable model and ideally bring it to market within the next two years.
Then there’s the smart home sector, where Apple still has large aspirations. It has discussed automating household functions and offering an updated Apple TV set-top box with a built-in camera for FaceTime video calls and gesture-based controls. And all the technology will seamlessly integrate with both the iPhone and Vision Pro.
One aspect of the plan is a lightweight smart display — something similar to a basic iPad. Such a device could be moved from room to room as needed and connected to charging hubs located around the house. Apple has initiated small-scale test production of the screens for this product, but has not made a decision on whether to proceed.
Establishing a unified smart home strategy remains a goal for Apple, but fulfilling the vision has proven challenging. The need to complete the Vision Pro took priority, diverting resources away from smart home efforts.
But now that the Vision Pro has been released and the electric car project has been canceled, Apple has more capacity to refocus on the home. And there’s an exciting potential opportunity in that area. As reported recently, Apple is exploring the concept of creating personal robotic devices infused with artificial intelligence.
The company has internal teams within its hardware engineering and AI divisions exploring robotics. One recent project involved a home robot that could follow a person around the home.
Some involved in the effort have even suggested that Apple could delve into humanoid technology and develop a machine capable of handling household chores. However, such advancements are likely a decade away, and it doesn’t seem that Apple has committed to moving in that direction.
A more immediate move into robotics would be a device that Apple has been working on for several years: a tabletop product that utilizes a robotic arm to move around a display.
The arm could be used to mimic a person on the other side of a FaceTime call, adjusting the screen to replicate a nod or a shake of the head. However, this device also lacks unified support from Apple’s executive team.
So for now, Apple will likely make more gradual improvements to its current lineup: new device sizes, colors, and configurations, in addition to accessories that could generate more revenue from the iPhone. This has largely been the key to the company’s success during Tim Cook’s tenure as CEO.
But with robotics and AI advancing every year, there’s still hope that something from the Apple lab could eventually make its way into consumers’ living rooms.
2024 is shaping up to be the year of the iPad. The new iPads are finally on the horizon. You can mark early May on your calendar if you — like many Power On readers, apparently — have been eagerly anticipating an upgraded tablet.
On the agenda is the overhauled iPad Pro, an iPad Air, a new Magic Keyboard, and an Apple Pencil. In total, this launch is set to be one of the most extensive updates to the Apple tablet in a single day.
And it’s been a long time coming, especially for the iPad Pro. That model hasn’t received a substantial update since 2018.
For those seeking more specific timing, I’m informed that the launch will likely take place the week of May 6. Another indication of this: Apple retail stores are gearing up to receive new product marketing materials later that week.
This is usually a sign that a new product release is imminent. It’s also worth noting — as I reported at the end of March — that the intricate new iPad screens are the reason behind the roughly one-month delay from the initial March release plan.
Regardless, the new lineup is expected to increase sales, but I’m uncertain whether it will address the broader challenges faced by the iPad. As a frequent user of a Mac and iPhone, and now a Vision Pro for watching videos, I find the iPad increasingly irrelevant.
The device isn’t sufficiently capable to fully replace a Mac for everyday tasks, and its software still has significant room for improvement. Hopefully, the introduction of iPadOS 18 will bring about substantial enhancements, making the device a true alternative to a Mac.
Setting aside software considerations, the hardware upgrades in the new iPads mark some of the most significant changes in the product’s history. For the first time, Apple will be transitioning its tablet screens to OLED, or organic light-emitting diode, a technology already utilized in the iPhone.
Reportedly, this technology looks stunning on larger displays, taking the experience that iPhone users have had since 2017 to a whole new level. However, one downside to this transition is that the new models will likely come with higher price points, according to the information I’ve received. The current iPad Pro starts at $799.
Additionally, the company is working on new iterations of the entry-level iPad and iPad mini, but they are not expected to be released before the end of the year at the earliest. The new lower-end iPad will likely be a cost-reduced version of the 10th generation model from 2022, while the update for the iPad mini is expected to mainly involve a processor upgrade.
Looking further ahead, Apple engineers are exploring the possibility of foldable iPads. However, this initiative is still in its early stages, and the company has yet to find a way to create foldable screens without the crease seen on similar devices from Samsung Electronics Co. and others.
I’ve been cautioned that if Apple is unable to solve this issue, it might decide to abandon the concept of foldable iPads altogether. Nevertheless, there’s still time.
Apple has introduced more realistic Personas for the Vision Pro, while visionOS 1.2 is currently undergoing testing. The visionOS 1.1 update was released a few weeks ago, and Apple has just added a new feature: Spatial Personas. These are advanced avatars that create the sensation of being in the same room as other people during FaceTime calls (in contrast to the original Personas, which felt more like being confined in a frosted glass box).
Ironically, the initial beta version of visionOS 1.2 was released last week and brought almost no new features. (In fact, two of the original environments that were included with the Vision Pro on Feb. 2 are still not functional.)
I have tested the new Spatial Personas, which are still in beta, with two different individuals for several minutes. I am extremely impressed — I would even go so far as to say that Apple’s communications and marketing teams have not fully highlighted this feature so far. It’s incredibly impressive and unlike anything I have experienced before.
In fact, it’s so impressive that the absence of this feature in the initial Vision Pro launch likely held back the product. If you have a Vision Pro (and somehow know someone else with one), you absolutely have to try it.
Why did Kevin Lynch, the head of Apple Watch, transition to the company’s AI group? One of the behind-the-scenes stories that was overshadowed by the cancellation of the Apple car is the change in Kevin Lynch’s role, who led the project in recent years.
For about ten years, Lynch reported to Apple’s Chief Operating Officer, Jeff Williams. In addition to overseeing the car project, he has been in charge of software engineering for the Apple Watch under Williams.
In an unexpected move, Lynch has now started reporting to John Giannandrea, Apple’s AI chief. Lynch and Williams still have oversight of the Apple Watch, leading to the question: Why was this change necessary?
Those close to the situation believe that Lynch’s move is intended to bring clarity to an area that has posed challenges for Apple: AI. This is something Apple also attempted to address with the car project.
Lynch initially joined that project in 2021, a few months before the project’s leader, Doug Field, stepped down to lead the electric vehicle efforts at Ford Motor Co. Within the company, Lynch is seen as a highly skilled engineering manager.
With AI, it’s no secret that Apple has been struggling to develop large language models and other tools that can compete with the best in the industry. If Giannandrea were to eventually leave the company, Lynch — who has been due for a promotion to the senior vice president level — could be well-positioned to step into his role.