Apple’s Silence at "The Talk Show" Will Speak Volumes

If you follow Apple at all, you’ve likely heard the news: Apple’s skipping John Gruber’s The Talk Show Live at WWDC this year. For a decade, it’s been tradition for Apple executives to join John on stage at the annual conference to recap the keynote and dive deeper into the announcements. It is one of my favorite parts of that week, something I look forward to every year.
John has an uncanny ability to get Apple executives to just be human for a bit. It often sounds like a couple of guys just out for a beer, which is the exact context I want to hear folks like Craig Federighi in.
But this year is different. No one from Apple is showing up—not Craig, not Joz, not Phil, not JG, not Mike Rockwell. This all follows John’s timely, astute, and deeply appreciated piece “Something is Rotten in the State of Cupertino.” One can’t help but conclude that Apple is uninterested in letting anyone hold its feet to the fire.
It’s not entirely surprising that Apple doesn’t want to participate, but I had held out hope they wouldn’t retreat into defensiveness, or worse, pettiness. I had hoped that instead of hiding, they would take the hits, own up to their failures over the past year, and try to offer some hope to developers and fans.
After all, this part of WWDC week is really for the fans.
Apple refusing to participate feels like more than just a snub of Gruber; it’s a missed chance to engage with the very community that cares the most.
I’m not even directly involved, and I still feel burned by this decision.
Even without Apple executives participating, I am still excited for The Talk Show Live. I’m hoping we get a fun cast of familiar characters. Though I still think that it would be absolutely epic for Sam Altman and Jony Ive to roll up and steal the spotlight.
P.S. This situation reminds me of the first time Phil Schiller went on The Talk Show Live in 2015. Instead of shying away from criticism of Apple’s software quality from folks like Marco Arment, he addressed it head-on, even making light of the situation while taking the claims seriously. It was a masterclass. Why can’t Joz or Craig address the AI problem this way? I’m not sure the company is capable of acting like that anymore.
A Tale of Two IOs: What it Looks Like When Apple Doesn't Lead

We are at more than just an inflection point; we’re at a moment where the global technological order may be about to fundamentally change. Don’t think about this just as an “iPhone moment” but as an “Apple acquires NeXT” event. A NeXTus event, if you will. When Apple acquired NeXT, it kicked off a series of technological shifts that were completely unforeseen. It led to products like the iMac, iPod, iPhone, iPad, and Apple Watch and to technologies like Cocoa, Cocoa Touch, AirPort, and Apple Silicon, among countless others. That acquisition fundamentally altered the course of human history and changed the technological landscape. I believe that the events of last week may have done the same.
The Earthquake in Mountain View
I was sitting at Google I/O (thanks Google for inviting me!) with some fellow creators and friends when the news broke that OpenAI was acquiring Jony Ive’s startup, io. Despite being in California, there was no actual earthquake, but you could feel the ground move in that moment. I am already a firm believer that we are experiencing what folks experienced in the late 70s and early 80s at the dawn of the personal computer revolution. Only this time it isn’t Steve Jobs and Bill Gates leading the charge; it’s Sam Altman and Demis Hassabis of Google DeepMind.
Google put on an amazing show this year, delivering AI products that already work and new hardware that I was able to actually try for myself. Google released new models that can create incredible output, agentic tools like Gemini in Chrome and Project Mariner (which I am running on my Mac right now), AR glasses powered by Gemini that I got to go hands-on with, and ideas like personal context that integrate with all of their apps to personalize responses. It felt like the opposite of what we have experienced with Apple Intelligence over the past twelve months. I had already been feeling a sense of dread for Apple, but now it is mixed with excitement about the latest Google announcements. Google is not just showing off amazing ideas; they are shipping some of the best products in the industry. While having drinks with my former colleagues from The Verge, I mentioned that the day felt like a “funeral for Apple” simply because I cannot imagine Apple shipping products that come close to what I saw in Mountain View. Despite the obvious hyperbole, there was clearly a shift afoot.

After Google spent the day beating Apple senseless, Jony Ive landed the knockout punch the next day. The man who designed the iPhone, Steve Jobs’ “spiritual partner,” has declared his previous creations to be “legacy products.” He went so far as to say that the past 30 years of work, which include the iMac, iPod, iPhone, iPad, and Apple Watch, have led to what he is now building with Sam Altman. That is extraordinary, and as a lifelong Apple acolyte it is the clearest signal that the river might truly be in danger of drying up.
The Fall of a Titan
The stewards of Steve Jobs’ baby appear to have gotten too comfortable: obsessed with services revenue, locked in brutal fights with the developers they used to cherish, and distracted by products that either can’t get to market or arrive half-baked. That is a terrifying combination of factors for an Apple fan, compounded by the inevitable war that Tim Cook now has to wage with the White House over manufacturing.
Another person who has a vested interest in this new venture is Steve Jobs’ widow. Laurene Powell Jobs invested in the new company recently acquired by OpenAI through her firm Emerson Collective. That means that Sam Altman and OpenAI have the backing of both Steve Jobs’ life partner and his spiritual partner. They have amassed a fleet of former Apple designers and builders who helped shape the world-changing products we all know and love, like Scott Cannon, Evans Hankey, and Tang Tan. They are thinking big, being imaginative and thoughtful, and they clearly have a decisive vision for the way forward. Apple, by contrast, appears focused on squeezing as much money out of its existing customers as it can.

A close friend whose opinion I deeply value texted me the other day saying, “How does it feel for Apple’s best product to be a TV show now?” That stung, but you can argue that at this moment it is largely true. It is also indicative of the services shift. No one is talking about the latest iPhone or iPad, but about Severance and The Studio. They may be able to finance top-tier content, but can they make the devices of the future anymore? They can design beautiful operating systems and sleek hardware, but that won’t matter without AI. Well, it won’t matter to them; they like to own the whole widget. Apple makes the best computers, but now it does so for other developers’ software. They seem focused on the wrong things at the worst possible time. Services required Apple to make a small philosophical shift, but AI and this current moment require a large structural one. A brain transplant, if you will.
The iPhone may rule the US market now; there is no denying Apple’s supremacy when it comes to market share and cash on hand. But that can’t last forever. It feels like they are coasting and almost lost. I feel a sense of distrust towards Apple after the Apple Intelligence debacle. They promised us a future they seemingly cannot deliver. They never shipped their car. Vision Pro may be packed with incredible technology, but it has an extraordinary number of shortcomings, from its physical weight to its pricing strategy to its overall positioning. All of this has led to a vibe shift.

Of course, you will always have young fanboys, blinded by what they perceive as love for the brand but is actually cultish loyalty to the company. I once was one of those young fanboys, so I speak from experience. I don’t want to quibble too much about it, but there is a big difference between loving Apple the brand and loving Apple the company. Loving the brand is believing in Steve’s vision, whereas loving the company too often descends into making excuses for every decision, good or bad. Believing in the brand means believing in human-centric design, taste, wonder, and invention. The company, lately, represents margin maximization and defensive posturing. You can see the vibe shift I mentioned earlier on display in John Gruber’s “Something is Rotten in the State of Cupertino” and John Siracusa’s “Apple Turnaround” pieces. Both are absolutely essential reads.
A New Creative Covenant
The natural path now for lovers of the Apple brand and of Steve’s vision is no longer to obsess over what version of Apple exists today; it is to help envision a new future with what might be a new brand that cares about the ideals Apple seems to be increasingly abandoning. I think we are about to see Jony Ive shape Sam Altman’s vision into something beautiful and human. Something that, as Altman put it, Steve would be “damn proud” of. It is worth noting that Jobs and Altman did in fact meet many years ago during Sam’s Loopt era, and it would be hard to find anyone who understood Steve’s product sensibilities better than Jony, who obviously spent an enormous amount of time with him. The announcement of the acquisition essentially signaled that Jony believes Apple may begin to fade into history, that Sam is the closest thing to a modern Steve Jobs, and that his best work is still ahead of him.

You can argue until you’re red in the face about the butterfly keyboard, the 18K gold Apple Watch, a mouse that charges upside down, and a pencil that sticks out of the iPad. But none of that matters when they’re mere blips on his resume. When Tim Cook granted Jony what was essentially complete control over Apple’s product strategy, he inadvertently hurt the lineup. By giving him more freedom, he may have actually stifled him. The best work doesn’t come from perfect executive harmony and endless options; it comes from competition and from creatively working inside constraints. Jony was also lacking an editor. No one left at Apple quite has Steve’s taste or sensibilities, as much as I hate to admit it after years of telling myself they did, though there’s no doubt they’re trying their best to replicate them. If Sam Altman is the closest person to a next-generation Steve Jobs, as Jony seems to believe, then he now has the kind of editor he needs. He has the creative partnership and clout needed to bring about more iPhone-caliber products.
The Paths Forward
Given all of this, can Apple reposition itself? How can it get back on track? Apple was already arguably three years behind; after Google I/O and this acquisition, they may be five. This could ultimately pan out differently than previous Apple products, where they traditionally haven’t been first but have been best. The old rulebook may not apply here. Google was able to catch up because they invented the transformer architecture but simply hadn’t productized it as quickly as OpenAI. Apple effectively had nothing, and the situation was made worse by executive squabbles and corporate dynamics. What Apple needs to do is chart an entirely new course. The obvious answer to Apple’s problem has been for them to make a significant acquisition, whether that be Anthropic, Mistral, Thinking Machines Lab, or Safe Superintelligence. While there is certainly some merit to that idea, particularly when it comes to giving new product people who really understand AI some autonomy at scale, it just might not be the right move. Yes, Apple needs fresh thinkers, new blood that grew up admiring people like Steve, and folks who really understand the gravity of the moment. But I don’t know if they are quite ready for that. Apple clearly didn’t see the shift coming, but how they ultimately respond to that failure is what will actually define them for the next several decades.
So we have established that it is unlikely the Cupertino company would acquire an existing organization, especially Anthropic, which is a public benefit corporation and carries the baggage of massive investments from Amazon and Google. I also just don’t think they want to integrate a giant external organization or spend more money than they ever have on a single endeavor.
I would argue they need to do something unusual, dare I say uncomfortable. I see two different paths. The first is to deepen their partnership with Google, let Gemini rip across the iPhone, and in doing so pair the two biggest phone companies in the world together to fight the coming onslaught of AI gadgets. The second path is simply openness. Apple is notorious for being locked down, a walled garden as many like to say. What if they created new tools for model builders to integrate their products into iOS at a deep level? They could let third-party apps take full advantage of App Intents, Shortcuts, and other built-in tools. Default app selection has already been a massive improvement to iOS, but what if you could take that even further? I am not suggesting Apple open iOS in the traditional sense; this isn’t about sideloading. Just give model builders the tools they need to make unhindered experiences.

Now I know what you’re thinking. The first path could be philosophically troublesome and, given the current state of the default search engine agreement between the two companies, legally fraught. The second path requires Apple to do two things: repair its relationship with developers so that they are incentivized to build for the platform beyond its sheer scale, and shift resources away from first-party products. Neither solution is ideal, and both would face copious challenges. But they might just be more feasible than an acquisition, a leadership change, or an attempt to race to build comparable products themselves. Resources might be better spent on helping the AI labs than competing with them. At least for now.

This could actually prove to be a really good test for Apple, with Apple Maps as the precedent. As the company began to wind down its relationship with Google in the early 2010s, it built its own mapping solution behind the scenes. Obviously the launch didn’t go as planned. But what if they let model builders rip through the ecosystem and quietly spent the next several years in a state of hibernation, building a great future competitor to ChatGPT and Gemini? To do this they would need to not start from scratch, but build on top of existing models. The old Apple way of doing everything on its own might just not work here.
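To make the second path concrete: the plumbing for this already exists in miniature. Here is a minimal sketch of how a third-party app exposes an action through Apple’s App Intents framework, the kind of hook a model builder could theoretically be allowed to call. The intent and its parameters are hypothetical, not anything Apple has announced:

```swift
import AppIntents

// Hypothetical intent: lets an assistant invoke a third-party app's
// "create note" action directly, with no UI round trip. The name and
// parameters are illustrative only.
struct CreateNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Note"

    @Parameter(title: "Title")
    var noteTitle: String

    @Parameter(title: "Body")
    var noteBody: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The app's own storage call would go here.
        return .result(dialog: "Created note \(noteTitle).")
    }
}
```

Today intents like this are invoked through Siri, Shortcuts, and Spotlight; the openness path essentially amounts to letting a Gemini or a ChatGPT call them too.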
The Make or Break Moment
There are bigger structural questions to be posed, particularly surrounding leadership, but I’m not necessarily convinced that replacing Tim Cook or even Craig Federighi would miraculously solve all problems. They have been excellent executives. But it’s unclear if they are suited for this war. I believe the now well-known Tim Cook leadership style, focused on perfect harmony, is no longer what Apple needs. It needs a return to the competitive, nonstop, dare I say hardcore environment that Steve Jobs fostered. No more coasting. Call it intense, maybe even brutal—but the pressure cooker forged the future once, and it just might again. Every lifelong Apple observer knows this is how it works. This is going to be a brutal battle and they are still acting like it is peacetime.
For decades the war for technological supremacy was waged between Apple and Microsoft. Then Microsoft was upended by Google. Now, Apple is at risk of being supplanted by OpenAI. The soul of Apple isn’t dead—it just might live somewhere else now. They can reclaim it, but it is going to take everything they have. This is Apple’s moment to choose: fade into legacy or fight to the death for relevance. They aren’t leading the conversation anymore, but there is time to change that.
Is Memory Going to Become the New iMessage Lock-in?
Chances are that if you are reading this, you know all about iMessage lock-in. In case you do not, here is the gist: iMessage serves as both a technical and a cultural lock-in keeping users on iPhone, making it more complicated to switch to an Android device. Of course it is entirely possible to go through all of the steps to switch, but there is so much friction that most simply choose not to. A few weeks ago, OpenAI rolled out an advanced memory system to ChatGPT, making it so that the LLM can remember all of your previous conversations together. It is a tremendously useful feature that I have already written about. But over the past several weeks, I have wanted to once again explore using other models in my workflow. I had been pretty steady with ChatGPT, but the latest version of Gemini and new apps like Raycast for iPhone have sparked my interest. I have been attempting to use them in my daily life, but ChatGPT’s extensive knowledge of me, my interests, my current life situation, and the projects I have been working on with it has created a great deal of, you guessed it, friction.
I can ask ChatGPT about things we were discussing the other day, reference things on the fly, and pick up where I left off without having to return to the previous chat thread. But when I open up Gemini or Claude, they know nothing about me. They cannot reference previous chats, and they certainly cannot be hot-swapped into your workflow if you are a heavy ChatGPT user. Those other models just are not in tune with you the way that ChatGPT is after some extended use.

This leads me to the main difference between memory lock-in and iMessage lock-in. iMessage lock-in is an Apple-specific issue widely argued about across the industry. Switching from Android to iPhone is easy; Google does not lock users into a proprietary messaging tool that you have to disable before you switch. There is also no cultural stigma associated with switching to iPhone from Android. I would argue that in most cases, the general public in the US would go straight to “congratulations, you upgraded to an iPhone!” Blue bubbles are a big part of the culture in the US, but they are a uniquely social thing. Memory is a personal thing. No one else cares, at least for now, which LLM you choose to use in your daily life. Except for you. You care. Each one is unique in its own way. But none of them at the moment match the power of ChatGPT with its memory. So, a few years from now, maybe even sooner, you can easily imagine a world where each of the LLM providers locks you in by making memory exclusive.
We can catch this before it happens though. How? Memory should be exportable and importable from every provider. We do not need a new kind of proprietary format or anything like that. We just need OpenAI, Google, Anthropic, xAI, Microsoft, and others to implement memory and incorporate a way to migrate all of your chat history over in a click or two. It does not even have to be easy; it just needs to be possible.
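To underline how little is actually being asked for here, consider a sketch of what a portable memory export could look like, expressed as Swift Codable types. Every field is my own invention for illustration; no provider has published a schema like this:

```swift
import Foundation

// Hypothetical interchange format for portable assistant memory.
// None of these fields correspond to a real provider's schema; this
// is just a sketch of how little structure the problem needs.
struct MemoryExport: Codable {
    let provider: String          // e.g. "openai", "google", "anthropic"
    let exportedAt: Date
    let memories: [MemoryItem]
}

struct MemoryItem: Codable {
    let text: String              // "User is building a SwiftUI task manager"
    let createdAt: Date
    let sourceConversationID: String?  // optional pointer back to the chat
}
```

A JSON file of roughly this shape, plus an import button on the other side, is essentially the whole feature.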
This might seem like I am getting ahead of myself, particularly because ChatGPT is the only option that has memory like this. Others have ways to manually store information about you, but those just are not natural and they generally have small context windows. I hope that once all of these companies finally implement memory (to me it seems essential), they build in these migration tools. I imagine there are conversations to this effect happening in conference rooms in San Francisco; the companies probably do not want to make it easy to switch, because it would mean they would have to compete continuously for users. But here is the thing: I do not want this new cohort of tech giants to end up coasting the way that legacy ones often have because of similar factors. If they really are all the future of technology, and I think they are, they should do things differently.
Raycast for iPhone is the Launch Center Successor I've Always Wanted

Before it was sort of abandoned, I was a heavy user of Launch Center Pro. Apple enthusiasts and power users alike will almost certainly remember the powerful shortcut tool from Contrast that let you build out an extensive grid of quick ways to get things done. The app is technically still available on the App Store, but it has not been updated in years. I stopped using it once it appeared to fall by the wayside and after Apple began truly supercharging its own solution, Shortcuts. Launch Center Pro was very much a manually configurable tool, but had it kept up with the times, I imagine it would look something like the new Raycast for iOS. Funny enough, Raycast for iOS almost resembles the original version of Launch Center from 2011. We have sort of come full circle.
Raycast is a fantastic Mac tool. I use it as a Spotlight replacement both because of its powerful AI features and because it just does more things. Spotlight is, in essence, a launcher. It is typically seen as a search tool, but it can technically answer some questions, do math, and get you places faster than you would have gotten there otherwise. Raycast is a supercharged launcher. When it was first announced that they were building an iOS app, I was sort of puzzled. It did not make a whole lot of sense to me given that you cannot replace Spotlight on iOS. But the company did not just take Raycast for Mac and port it. They built something entirely new, something arguably better.
Raycast for iOS is dead simple. You get a prompt bar, a couple of shortcuts, a search bar, and an area for your favorite things. Unlike an app such as Launch Center Pro, you do not necessarily need to start by manually building actions. You can simply start by talking to the many AI models that are built in. You will quickly see that there is a wide array of actions that can parse content from other apps via the share sheet too. The company has tried to turn an app into a sort of system framework that takes really good advantage of extensibility. So out of the gate, you can easily pin your favorite models and actions. You can even use Raycast as your notes app, which means you can pin your favorite notes or add the ability to quickly create new ones. There is also a handy snippet tool for saving text blurbs, code blocks, and bits of information you need to copy and paste frequently. It does a lot of things out of the box. It had already earned a spot in my iPhone’s dock by this point. But then I discovered just how much you can do with a feature called Quicklinks.
Quicklinks, at first glance, is really just a place to save frequently visited websites, a glorified bookmarks manager if you will. But it can do so much more than that if you know what you are doing. This is where the Launch Center Pro comparison really comes into view. The iPhone has a deep system layer of URL schemes that many apps use in a variety of ways. In 2025, they are nowhere near as powerful as they used to be. For example, there are no more URL schemes to jump into individual sections of the Settings app. But plenty of things are still useful, even if it is just opening an app via a classic web link.
I have configured my Raycast to do a couple of things so far. First and foremost, it can launch into a todo list that I have made, and below it I have placed a quick action to start a new note. The third item is where things get going. Using the right URL, I made it so that I can directly open the new tweet action within the X app. The top three favorite actions appear on the small and medium home screen widgets, so it is crucial to choose carefully which ones sit at the top.

I have included Anthropic’s Claude in the favorites list, primarily because it is my go-to for writing and I would rather use it inside of Raycast than in the official app now. Below Claude is a shortcut to launch my heavy rotation mix playlist on Apple Music. You can simply copy and paste a share link from virtually any of Apple’s services and place it in Raycast Quicklinks. You could create shortcuts to publications in Apple News, shows on Apple TV+, and even the Apple Arcade featured section. One thing that is unusually annoying to access is Google’s new AI search mode. While they have added it to their widget, you can also get a dedicated URL and create your own shortcuts to it. That is what I have done in Raycast. Last but not least, Techmeme. I have placed a bunch of my frequently visited sites inside of Quicklinks, but Techmeme is the one I go to most nowadays. I am continuing to expand my list of favorites as I discover more ways to push the boundaries of the product. Heck, I just learned you can run Apple Shortcuts you build from the app if you configure things correctly.
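For anyone who wants to try something similar, here are a few illustrative Quicklinks in the spirit of the setup above. Treat the exact URLs as assumptions to verify on your own device, since scheme support changes with app updates:

```
twitter://post?message=                        (X compose sheet, via the legacy Twitter scheme)
shortcuts://run-shortcut?name=My%20Shortcut    (run a named Apple Shortcut)
https://music.apple.com/us/playlist/...        (any Apple Music share link, pasted as-is)
https://www.techmeme.com                       (a plain web bookmark works too)
```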
What you can see now is that Raycast for iOS can open apps, run actions, pass content to AI, have conversations with you, make notes, edit notes, save important blocks of text and code, and so much more. If you use the right combination of URLs, you can create a really powerful launcher that also happens to be your AI tool. That being said, if you just use the app as an AI chat app, you can still get a lot out of it. It is limited to some degree given the restrictions that iOS places on third-party apps. But you can work around them and have a really great time.
I am so excited to see where Raycast takes this app, particularly because it is already so damn good. I can think of dozens of new things I want it to do immediately off the top of my head like integrating with the native calendar and reminders frameworks, acting as a pseudo clipboard manager, and even having proper memory that can work across multiple models. In a lot of ways, I can see Raycast for iOS being a sort of stand-in for the home screen if you customize it right. Especially given Apple’s slow and thus far haphazard approach to AI. I imagine that when/if Raycast comes to Android, they will go even further. The possibilities there are endless. But for now, we have the iOS app and I am impressed. I suspect that you will be too.
With Google's Defeat, the Time for Apple's Search Engine Has Arrived

Unless you’ve been living under a rock, you know that Google was formally ruled an advertising monopoly this week. It could prove a catastrophic blow to the company, one that completely upends the web as we know it. A significant component of this case has been Google’s deal with Apple to be the default search engine in Safari. Google pays the Cupertino company nearly $20B a year to maintain dominance on Apple platforms. That is for good reason, of course: Apple’s users are some of the most valuable, particularly when it comes to ad targeting. But it is also a flagrant way to squash any chance a competitor could have at growing into a sustainable web search or ad business. Now that the verdict is officially in, I think it is time for Apple to do what has long been rumored. That is, to build their own search engine to power Safari, Siri, and Spotlight.
An Apple search engine would be a particularly prescient offering given Apple’s rebranding of its various ad businesses to “Apple Ads,” an umbrella for the various kinds of ad products they have at the moment. It raises the question of whether Apple may get into more traditional web ads in addition to the ones in the App Store and Apple News. There is now a clear opening for them to enter the market, but I do not think they should just expand the ad offerings. They should go full-court press and introduce a search engine. As Joe Rosensteel wrote over at Six Colors, the company needs to build on its existing services business, and this is one of the ways they can go about doing that.
As for the product itself, I know there is much hesitation over whether Apple could actually produce a good search engine. This is especially the case after the Apple Intelligence debacle and Siri’s long period of neglect. But the beauty of a search engine is that you do not necessarily need to own the whole stack. It would be unlike Apple to go this route, but I think they need to learn to adapt to the new environment they are operating in. Like DuckDuckGo or Ecosia, Apple should start by using search results from Bing and Copilot while they build up their own index. Package them in a beautiful Apple-designed user interface with some clean new ad formats, and if need be, build some special tooling to organize the results better than Bing presents them. If they wanted to go really crazy, they could leverage their OpenAI partnership and reskin ChatGPT Search too. The company could even charge a subscription for ad-free results or additional features. It is not as if Apple lacks a breadth of web services comparable to Google’s, either. Apple’s search engine could link to the various iCloud apps and save your search history to your account. The opportunity is right there in front of them.
Now I fully recognize that giving up a $20B yearly payment from Google is far from ideal; in fact, it makes up a substantial part of their current services revenue. But how sustainable is that long term? I do not think it can be counted on anymore. It is contingency time. While they would have to make up the $20B difference with Apple Ads sales and Apple Search subscriptions, they would simultaneously be helping Microsoft or OpenAI grow their positions in search, thereby balancing the market a bit better. Google’s dominance in this space needs to be challenged seriously. OpenAI and Perplexity are really the only ones truly taking a crack at it, with Perplexity attempting to be particularly aggressive. But it remains to be seen just how big of a dent they can ultimately put in Google’s market share. While an astronomical number of people are using ChatGPT daily, Google is still overwhelmingly dominant in search. It is going to take an Apple (or an Apple with Microsoft or OpenAI) to truly break Google.
Jony Ive, Laurene Powell Jobs, and Sam Altman Walk into a Bar—Will They Walk Out with the Gadget of the Future?
The worst kept secret in Silicon Valley right now is that Jony Ive is helping Sam Altman build new gadgets. On the surface that may not seem like much of a big deal. After all, Jony left Apple six years ago. But his career is inextricably linked to and was fundamentally shaped by Steve Jobs and Apple. I do not say that to diminish him in any way; it is simply the truth that he would not be “Sir Jony Ive” without the iMac, the iPod, the iPhone, the iPad, and the Apple Watch. For years he has seemingly tried to separate himself from his historical identity by working on a variety of projects removed from technology, but he will forever be associated with those products whether he likes it or not. When he spends time on a technology, it carries a lot of weight. He learned from the best. As an example, his signature has been all over Airbnb’s app and branding—the company has been a major client of his firm LoveFrom. It is just one of the things that makes his teaming up with Sam Altman all the more intriguing.
At first the rumors were relatively vague: the two of them were thinking about building some kind of AI-powered gadget together, whatever it may be. It has since become clear that the two intend to build a personal AI device, likely one that rethinks the role of the smartphone or even begins to replace some of its core functions. It is not all that surprising that they have fallen back on what could be a handheld device given the flat reception of dedicated AI gadgets over the past two years, perhaps with the exception of the Meta Ray-Bans. I personally think that lots of tech aficionados have sort of written Jony off in his post-Apple years. That has made it easier for him to explore whatever the project ultimately becomes. But it is unmistakable that the man who designed the iPhone, built his entire fortune on it, and shares the credit of creating it with his late best friend believes that building competitive hardware is worth his time in 2025. That tells me that he not only believes in both Sam Altman and the power of AI, but also that he does not believe Apple is currently well positioned to do something similar. The saga of Apple Intelligence blunders thus far may just back that up. So Sam Altman and Jony Ive are working together to create the device of the future. What could that mean?
Many of the rumors seem to equate this product with some of those failed AI gadgets I mentioned. Some pundits think it could be screenless or have an unusual twist that makes it less of an iPhone competitor and more of a new product category. I think they are all overthinking it. Whatever these two prolific giants of industry are working on is not the Humane AI Pin or the Rabbit R1. It cannot be a vanity project. OpenAI has been spitting out impressive new products at a ridiculously fast pace over the past several months, and I do not see Sam Altman wasting anyone’s time. The fact that he wants to pull the project into OpenAI says as much. That suggests it might end up being close to a new kind of phone—perhaps familiar in shape, but powered by something fundamentally different. When I hear “personal AI device,” I hear “we want to replace your phone.” The way to do that, especially if you are a designer, is to make something relatively familiar.
Despite what many AI skeptics have believed, it seems to be bearing out that the chatbot is the interface of the future. At least the near future. A grid of app icons on a home screen could quickly be usurped by a text thread with a digital being that lives in your phone and can use your services for you. We are already heading in that direction. Just this week, Anthropic added connectors for Gmail and Google Calendar, making Claude infinitely more personal and useful. OpenAI continues to expand its integrations with apps like Xcode and Notion, making it easier than ever to simply code an entirely new app or write an entire story on the fly with a short prompt. Gemini can access nearly every major Google service already. Microsoft Copilot can see what is on your screen and talk with you about it. Apple was hoping to accomplish these things through a combination of an improved Siri, on-device models, and App Intents. But I am not particularly optimistic it is all going to work as well as it needs to. App Intents is built on Shortcuts, which is already a fragile house of cards. I think they need to start over from scratch, but they are already so far behind that it is unclear if they can risk it. Especially as competitors take giant leaps on an almost weekly basis; heck, OpenAI just dropped two brand-new models today in o3 and o4-mini.
A phone or phone-shaped device with hardware and software designed by Jony Ive and his team at LoveFrom (many of whom are former Apple designers, including Evans Hankey), combined with the intelligence of OpenAI, could be the first truly formidable opponent the iPhone has faced since Samsung first unveiled the Galaxy series. ChatGPT is an incredibly popular product with hundreds of millions of users. And not because they have to use it, but because they want to use it. It has very strong brand recognition and has become an essential part of people’s daily lives. It certainly has for me and many in my orbit. And that is especially the case with younger users, the trendsetters who will determine which company owns the future. My generation decided that iMessage and the iPhone were the best. The next might choose otherwise. While it is still incredibly early to say for sure what the device will ultimately be, I imagine a new generation of smartphone, for lack of a better word, that eschews apps for connectors. A device that starts with a text box and an always-listening voice mode that uses your apps and services for you, one that does not take you out of context or distract you periodically throughout the day. The OpenAI device could actually be the antidote to much of the societal damage the current generation of smartphones has done. While I would never use a current Android phone as my daily driver, I would absolutely consider using a Jony Ive-designed OpenAI device in lieu of my iPhone. Especially if it made me more present and productive.
Apple should be worried. They are more vulnerable than they have been in decades and it shows. If Jony Ive, a Steve Jobs acolyte and one of the most prolific designers of our age, sees an opening to dethrone Apple and right societal wrongs, he seems likely to take it. And this is not like Jon Rubinstein going to Palm to build the Prē; this is different. There is no Steve to single-handedly steer the ship into the future or to crush competitors with breakneck speed. And back then there was no new technology as important as ChatGPT to differentiate a challenger’s devices. Things get even more interesting when you consider that Emerson Collective, Laurene Powell Jobs’ firm, is one of the project’s backers. That means Apple could be going up against a new kind of product from the hottest tech company since Google, with the backing of the iPhone’s principal designer, Steve Jobs’ incredibly savvy wife, and the figurehead of the AI revolution. If Apple is incapable of getting their house in order with Apple Intelligence, then a personal OpenAI gadget that begins to take the place of the phone could put deeper and deeper cracks in Apple’s glass house. If this new device succeeds, whenever we may see it, it will not just challenge Apple’s grip on personal hardware—it could redefine what that hardware is and how we use it.
Apple Will Try to Fix the iPad a Fifth Time, Will it Work?
Apple has tried to fix the iPad four times. First in 2015, with the introduction of Split View, picture-in-picture, and better accessory support for keyboards and the Pencil. Then they immediately took a year off; iOS 10 on the iPad sort of phoned it in. It became abundantly clear that they either could not or would not update the iPad as aggressively as it needed. The following year, iOS 11 brought the second wave of major changes with the advanced dock, drag and drop, the Files app, and a more adjustable Slide Over. This was a big year, and it showed that there was a clear willingness inside Cupertino to push the iPad harder. But then again, another year off with iOS 12.

Then the third major fix arrived with the introduction of “iPadOS” as a dedicated platform for the first time. It brought multi-window support to apps, way better Slide Over support, new gestures, and plenty of other goodies like Home Screen widgets. As you might guess, iPadOS 14 felt like another year off despite a handful of smaller tweaks. iPadOS 15 did not go big either, despite what Apple wanted us to think. It brought new multitasking controls to make everything a bit more obvious to users, but it did not fundamentally change the way things worked. Then iPadOS 16 arrived and sort of blew everything up. Stage Manager alone was a massive addition, providing adjustable window sizing and external display support. A common theme among these updates, however, is that while they were great at the time they launched, they were simply never enough. After Stage Manager and the reaction users had to it, iPadOS stagnated for two years.
Today, Mark Gurman reports that they are going to try once again, with iPadOS 19, to make the iPad a more advanced computing platform. Color me skeptical. Is this a fifth-time’s-the-charm situation? I am trying to be optimistic, particularly after Apple nailed multitasking on visionOS. Plus, this could imply that they have been working on these improvements for three years, given the lack of massive updates since iPadOS 16. Mark says that the update will “focus on productivity, multitasking and app window management — with an eye on the device operating more like a Mac.” That sounds promising, but it is unclear what it actually means. Combined with a visual redesign, this could be huge if they do not tiptoe again. Each time the company has attempted an iPad software update to make the device more Mac-like, it has tried too hard to do something different and often ended up making something inferior as a result. That cannot happen this time. I seriously hope they know it.
The Week ChatGPT Truly Became an Assistant
Yesterday, OpenAI introduced a new feature under the radar that I think might have just changed everything.
ChatGPT has had a rudimentary “memory” system for quite some time. It lets the model look for important context in your messages, like your job or a particular spot you enjoy. It then saves things that it thinks might be relevant to your future conversations and stores them in your settings, where you can decide what is worth keeping or deleting. It is, in essence, a customizable data retrieval system on top of the model. You can tell ChatGPT to remember things or just wait for it to realize something was essential. This is still the system primarily in use by free users, but Plus and Pro members are now beginning to get access to what was internally called “Moonshine.”
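Mechanically, you can picture that original system as little more than a list of saved strings injected into each new conversation. A minimal sketch of the pattern, entirely my own simplification with hypothetical names, not OpenAI’s actual implementation:

```swift
import Foundation

// Hypothetical simplification of a "saved memories" system: facts the
// model flagged as important are stored, then prepended to each new
// conversation as hidden system context.
struct MemoryStore {
    private(set) var memories: [String] = []

    // Called when the model (or the user) decides something is worth keeping.
    mutating func remember(_ fact: String) {
        memories.append(fact)
    }

    // Builds the hidden preamble for a new chat.
    func systemPreamble() -> String {
        guard !memories.isEmpty else { return "" }
        return "Known facts about the user:\n" +
            memories.map { "- \($0)" }.joined(separator: "\n")
    }
}

var store = MemoryStore()
store.remember("Works as a writer covering Apple")
store.remember("Enjoys hiking on weekends")
print(store.systemPreamble())
```

The leap with Moonshine is that it no longer relies only on a curated list like this.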
On the surface, Moonshine sounds very simple: ChatGPT can now reference your previous conversations. But in reality, it makes the tool dramatically more intelligent — and personal. By being able to reference things you have talked about before, without you having to hope the model caught them or teach it manually, it feels more like talking with a person than ever before. Dare I say, ChatGPT actually knows me now.
My ChatGPT knows what my current goals are, what projects I am working on, and what I am generally thinking about. I was already popping in my AirPods and talking with ChatGPT voice mode about problems, or briefly messaging it for advice on the fly throughout my day. But this takes it to a whole new level: it is never starting from scratch.
It may now know more about who I am than any social media or search algorithm before it. And unlike those algorithms, the information can actually be put to good use on my behalf. It is not worried about what kind of socks I might want to buy or what celebrities I may be crushing on, and it is not trying to make me feel inadequate. It is never trying to sell or market to me. ChatGPT is trying to help me actually accomplish something, whether it is a personal or professional goal.
After just one day, I have already been having far more fruitful conversations with ChatGPT. It feels like a sneak peek at what virtual assistants can really be.
Craig Saves the Day, Gives Engineers the Green Light to Use Off-the-Shelf LLMs

According to The Information, Apple is now letting engineers build products using third-party LLMs. This is a huge change that could seriously alter the course of Apple Intelligence. I have been a proponent of Apple shifting its focus from building somewhat mediocre closed first-party models to using powerful open-source third-party ones. The company is already so far behind its competitors that I would rather they focus on making really good products than really good models. They may be able to do both simultaneously, but I suspect that Apple could ship spectacularly good products using off-the-shelf models sooner while other teams build first-party models for the future in secret.
Now that Gemini can see your screen on Android and Copilot can see your screen on Windows, these competing assistants are truly on another level. They are assistants in the truest sense of the word now that they can visually understand your context while you are using your devices. They make Android and Windows devices more compelling alternatives too. I am hopeful that Apple will be able to catch up much faster by doing what they do best: productizing advanced technology. The only difference is that this time, at least for now, they will be productizing someone else’s tech.
I imagine they will be looking largely at Mistral, Gemma, and Phi. Llama would be an obvious contender too if the company’s relationship with Meta were not so contentious. DeepSeek would be another option, though the optics of using it outside of China would likely not be ideal. We will just have to wait and see!
Microsoft is Filling in Essential Gaps on the Mac For the First Time in Decades
Microsoft has a long history of Mac software development. It is not something you often hear about anymore, but the company’s legacy is as intertwined with Apple’s as any other. I tend to be quite negative about Mac Office. I have never been a fan of PowerPoint, Excel, or Word. I was always an iWork guy personally, despite the flaws those three apps have. This is all to say that Microsoft now makes some of the nicest Mac apps there are. Yes, I cannot believe I just wrote that sentence. I am not talking about the Office apps; I am talking about Copilot and Edge. OpenAI has gotten a lot of credit for its world-class Mac ChatGPT client. It is an excellent app, both from a functionality standpoint and in terms of its user interface. But Microsoft came out of nowhere with what I have to say is a drop-dead gorgeous native Copilot app.
Copilot on the Mac does not feel like a web wrapper. Today, that alone is worth celebrating; it is the least you should be able to hope for, yet rarely what you get. It has fluid animations, a nice soothing color palette, and is blazing fast when answering prompts. It really just works. With the current Apple Intelligence situation, it is something that almost anyone who has a Mac should consider using. Not just because it is free, but because you get so much from that free version. From deep research with o1 to beautifully generated user interfaces for certain answers to a command bar you can bring up from anywhere, it fills a lot of needs. Microsoft’s decision to subsidize extensive usage also makes it one of the best free AI tools, if not the best. Given the vast array of new features unveiled by Microsoft this past week for Copilot, I imagine that it is only going to get better. By the way, you should go watch the presentation if you have not. Mustafa Suleyman is an excellent presenter and does a great job walking through where they are heading with all of this. Thankfully, they are not gatekeeping all of these things to Windows. Microsoft has really filled a major gap in macOS for the first time since before Safari and iWork.
Now I know what you are thinking: “Why would I not just use the ChatGPT app?” That is completely valid, and I am not saying you should replace it with Copilot. I think Copilot is a supplement. ChatGPT on the Mac is the best choice for working with code or text. It also happens to now be the best way to generate and edit images on the fly. But Copilot feels more personal and is faster at answering search queries. I have struggled in the past to figure out how the different large language models all fit into my workflow. I used to think that I had to use one or the other. But it is clear they are all sticking around, at least for now. I have altered my workflow to use ChatGPT while coding or designing, Claude while writing or as a backup for difficult coding issues, Gemini as a sort of basic assistant on my phone and while working with Google apps, and now Copilot for casual use on the desktop. Others occasionally make their way in there, but this is largely how I am using these tools. At the end of the day, the Copilot Mac app is nearly as good as the ChatGPT app. And I would argue that it has one advantage: it is in the Mac App Store while ChatGPT is not.
Now, I have heaped a ton of praise on the Copilot Mac app, but I mentioned that Edge has gotten really good as well. I bring this up largely in the context of changes at The Browser Company. I was a huge fan of Arc, but it is not my browser of choice anymore, as I do not think I can count on them to keep iterating on it as they once did. I have tried a variety of clones; some of them are fine, but I am not sure how reliable they are long term. I have even been using the alpha of The Browser Company’s new app, Dia. But I will not say anything about that just yet; it is too early to tell where it is going. After all of this searching, I was pleasantly surprised by the current state of Edge on the Mac. I used to use Edge in lieu of Chrome given how much less power intensive it was, but I was never particularly a fan of the user interface. Now it is arguably one of the best-looking Mac browsers. It also happens to be a true AI web browser with the Copilot sidebar, which now adopts the new design from the Mac app, making it much more palatable. It has quick access to summarization and key points, but the best part is that it supports vertical tabs. Vertical tabs were the best part of Arc, and they were implemented in such a powerful way that I have difficulty going back to horizontal ones. Thankfully, Edge combines AI with vertical tabs and has a development roadmap you can count on for the long haul. Arc users, I highly suggest trying the latest version of Edge. Port your Chromium extensions over, reconfigure your vertical tabs, and you might just find yourself in a state of browser zen.
So yes, Microsoft now makes one of the best AI apps for the Mac and one of its best browsers. They are both well worth checking out if you have not used them in a while. I would not be surprised if many of you have not even tried the new Copilot app. I continue to be delighted by the experience, and my usage of it keeps increasing. But the real story here goes back to what I said earlier: Microsoft is once again filling in gaps on the Mac. As Siri falls further behind and Safari continues to stagnate on the AI side, Microsoft has gained the upper hand. They should not have to fill these gaps, but they saw the opening and went for the jugular. They could have half-assed these tools, but they went all out, making them on par with or better than their Windows counterparts. I hope this spurs Apple to build a proper Siri app that tracks your conversations and uses the best models. I also hope this gets the Safari team thinking bigger about how they can incorporate AI and make browsing even more fun.
From a small garage in Los Altos to billions of pockets, wrists, ears, faces, desks, and living rooms. Happy 49th birthday Apple. Next year is the big one.

A Sneak Peek At My First App

It is not quite ready yet, but I wanted to share a first look at the app I have been working on over the past several weeks. I still need to squash a few bugs and complete some business-related tasks, but I expect to submit it to the App Store in the near future, after a small TestFlight group helps me make sure it is rock solid.
As you can tell, it is a task manager. Well, sort of. I would describe it as a notes app moonlighting as a todo list app. It works the way my brain does. I have tried countless task managers over the years, but none of them have ever quite felt right. There are plenty of great options, but I suspect that if your brain is as scattered and filled to the brim with thoughts as mine is, this app will work really well for you. I am still keeping some things quiet, like the name and some of its features. But I wanted to share this sneak peek at the main view to get a pulse check from you, my friends, family, and followers. As you can see, it is highly customizable with a variety of themes and unique tagging options. You can even format your notes with some basic Markdown and store links. It syncs your notes and tasks between devices using iCloud and also works natively on macOS, iPadOS, and visionOS. This is, I believe, a great SwiftUI app through and through.
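For the developers reading: I will save the details for launch, but a “notes app moonlighting as a todo list” implies a pleasantly simple data model. Here is a rough sketch of the general shape, assuming SwiftData with iCloud sync; the names are placeholders, not my actual code:

```swift
import Foundation
import SwiftData

// Placeholder model: a note and a task are the same record, and a
// completion flag plus tags decide how each one is presented.
@Model
final class Entry {
    var text: String = ""
    var isTask: Bool = false
    var isCompleted: Bool = false
    var tags: [String] = []
    var createdAt: Date = Date.now

    init(text: String, isTask: Bool = false) {
        self.text = text
        self.isTask = isTask
    }
}

// A CloudKit-backed ModelContainer is what would provide the iCloud
// sync between iPhone, Mac, iPad, and Vision Pro:
// let container = try ModelContainer(for: Entry.self)
```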
Feel free to tweet, toot, thread, and skeet your thoughts at me. I would love to hear them. I hope that once I get the business stuff ironed out and the app up on the App Store, you will give my first-ever app a try.

Apple Adds the Lumon Terminal to its Mac Lineup
Apple’s marketing campaign for Severance has been top notch. Between the Grand Central pop-up, live events with the cast, a marching band, dedicated websites that generate art, a Tim Cook commercial, and a giant balloon in London, they have kept their foot on the gas even with the second season having come to an end. And they are not finished.
Just a little while ago, they finally did the thing every Apple nerd has been quietly hoping they would do. They added the Lumon Terminal Pro to the Mac website right next to the MacBook Pro and other machines. Unfortunately, you cannot buy one or do much with the webpage. But it is fun to admire the beauty of the retro-futuristic machine. They clearly did this as a stunt to drive eyeballs to the new video about Severance being edited on the Mac.
Finally, ChatGPT Can Accurately Generate Images with Text
OpenAI dropped an all-new image generation system for ChatGPT today, and man is it good. One of the biggest problems with artificially generated images has been the inability to render accurate text within them. There have historically been issues with inaccurate characters, misspellings, and even complete graphical errors. Today’s update to image generation with GPT-4o fixes these problems. You can now generate charts, signs, logos, word marks, text graphics, pretty much anything you can think of with ease. It nails spelling, seems to set type well, and generally abides by your instructions. The inability to accurately render text inside of images really made the tool a lot less useful, but now I suspect it will be used by way more people, way more often. Outside of text in images, ChatGPT can now generate overall better images that more accurately fit your prompts. You can even upload an image, ask for edits or a new style, and boom, you will almost certainly get a great result that works within the structure of the original. It is all pretty darn great.
Fill the Severance-sized Hole in Your Heart with The Studio
The ridiculously good new Seth Rogen series, The Studio, just hit Apple TV+. A few weeks ago I got to watch the first two episodes premiere at SXSW and I walked away from the theater beyond excited for the rest of the series. Both of those episodes are now available to stream and they are as riveting as they are hilarious. The second episode, which guest stars Greta Lee, is particularly special. If you have ever acted or directed or known actors or directors, you will especially enjoy it.
Gemini, Siri, and Default Assistants on iOS
I do not want to harp on the Siri situation, but I do have one suggestion that I think Apple should listen to. Because I suspect it is going to take quite some time for the company to get the new Siri out the door properly, they should do what was previously unthinkable. That is, open up iOS to third-party assistants. I do not say this lightly. I am one of those folks who does not want iOS to be torn open like Android, but I am willing to sign on when it makes good common sense. Right now it does.
ChatGPT, Gemini, Claude, DeepSeek, Perplexity, and Grok are all incredibly popular on iOS. They are in use by millions and millions of people. But some of them are not as powerful on iOS as they are on Android. ChatGPT, Gemini, and Perplexity can all be used as the default personal assistant on Android devices. If you do not like the one that came on your device, you can replace it. Apple has been adding more default app choices to iOS as of late, and now is the time to add another. Gemini in particular has become increasingly powerful due to its deep integration with Google apps. If you use them, you can get way more out of Gemini than the alternatives. I desperately want to use Gemini as the default personal assistant on my iPhone, so much so that I have its voice mode assigned to my action button. Because of Gemini’s integration with Google Keep, Google Tasks, and Google Calendar, I can use the Gemini assistant to create reminders, generate lists, take notes, and create and manage events. Even better than that, Gemini can dive into my inbox and find things for me better than any standard search tool. Those are all productivity features, but Gemini also has deep entertainment functionality thanks to YouTube. If you want to find a specific video, Gemini can get the job done. You can even use it with Google Maps. The app is simply a fantastic assistant that is able to replace many of the things I already do with Siri, while also being an LLM-powered, ultra-smart knowledge engine.
I have not even mentioned the amazing Gemini Live yet. Despite not integrating with your Google apps, it lets you have fluid conversations about world knowledge and things happening on the web. The app is dead simple, yet more powerful than Siri by almost every metric. While it currently cannot do some basic things like set a timer, it can do far more important things Siri can only dream of. If you use Google apps, I highly recommend assigning Gemini to your action button or to the back tap gesture on your iPhone. You will be blown away by how much more powerful it is once you begin taking advantage of the app integrations. Heck, if you use Google Home devices, it even controls those as well.
I do not use Gemini as my primary LLM; I prefer ChatGPT and Claude most of the time for research, coding, and writing. But Gemini has proved to be the best assistant of them all. So while we wait for Siri to get good, give us the ability to use custom assistants at the system level. It does not have to be available to everyone. Heck, create a special intent that Google and these companies need to apply for if you want, and have requirements for privacy and design too if need be. But these apps, with proper system-level overlays, would be a massive improvement over the existing version of Siri. I do not want to have to launch the app every single time, and the basic ChatGPT integration in Siri is far from the best solution.
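To make the ask concrete, here is a rough Swift sketch of what such a hook could look like if Apple built it on the existing App Intents framework. The App Intents types below are real API, but the idea that iOS would route system assistant queries to this intent is entirely hypothetical, and AssistantClient is a stand-in for whatever backend the app calls.

```swift
import AppIntents

// Stand-in for the app's own backend (Gemini, ChatGPT, Claude, etc.).
struct AssistantClient {
    static let shared = AssistantClient()
    func respond(to prompt: String) async throws -> String {
        "You asked: \(prompt)" // a real app would call its model here
    }
}

// Standard App Intents code; the "default assistant" routing is the
// hypothetical part, since Apple offers no such extension point today.
struct AskAssistantIntent: AppIntent {
    static var title: LocalizedStringResource = "Ask Assistant"
    static var description = IntentDescription("Send a prompt to this app's assistant.")

    @Parameter(title: "Prompt")
    var prompt: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        let reply = try await AssistantClient.shared.respond(to: prompt)
        // The system could speak or display this dialog, the way Siri does.
        return .result(dialog: "\(reply)")
    }
}
```

Gating an intent like this behind an application process would let Apple keep its privacy and design requirements while still giving users a real choice.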
Google clearly knows that they have a massive opportunity with Gemini right now, and you need look no further than their decision to heavily advertise on Apple-centric podcasts. In recent weeks they’ve sponsored The Talk Show, Upgrade, Connected, and ATP. One of the best ways to go after Siri is to capture Apple fans and observers.
I know Apple would be averse to this, namely because of the potential to lose a massive number of Siri users in the meantime while they get their act together. But it would do two things: it would mitigate some of the damage to customer relations while we wait for the new Siri, and it would set a new bar for the Siri team to exceed. There would be no room for failure, and they need to be under that kind of pressure.
Very Good News: Mike Rockwell, No Fan of Siri, is Here to Save It
Mark Gurman has just reported that Apple is reshuffling the Siri team’s leadership in light of the recent Apple Intelligence controversy. They are putting Mike Rockwell, the man behind Vision Pro, in charge of the effort and taking AI chief John Giannandrea off the project.
While you can lament the well-documented array of Vision Pro shortcomings, the software, visionOS, is an impeccable achievement. It was one of the most polished 1.0s the company has ever shipped, despite missing some features. It has a gorgeous design and is chock-full of functionality, plus it was largely stable out of the gate. Putting the guy who led the development of the most advanced piece of technology Apple has ever made in charge of Siri is the right move. I trust that he will productize whatever technology comes from the LLM group really well and ensure that they do not get out over their skis again. Maybe he will even scrap what they have and fast-track an entirely new solution. Ultimately, this is good news and it means a fresh perspective. More on why I am excited about the much-needed course correction below.
This news immediately brought to mind a report from early 2023 via The Information that detailed how the Vision Pro team wanted to develop its own voice control software, instead of using the existing version of Siri. 9to5Mac highlighted a particularly important and now very relevant series of lines from the story:
Inside Apple, Siri remains widely derided for its lack of functionality and improvements since Giannandrea took over, say multiple former Siri employees. For example, the team building Apple’s mixed-reality headset, including its leader Mike Rockwell, has expressed disappointment in the demonstrations the Siri team created to showcase how the voice assistant could control the headset, according to two people familiar with the matter. At one point, Rockwell’s team considered building alternative methods for controlling the device using voice commands, the people said (the headset team ultimately ditched that idea).
The frustration with Siri has finally boiled over and I am extremely optimistic now. If anyone at Apple can turn around Siri, it is someone who recognized the problem long ago but was not given the opportunity to fix it. Until now.
It’s Time for WWDC 2025 to Return to Traditions Broken by COVID
This is a weird time. Everybody knows it, even if they do not want to acknowledge it. Apple’s trust with its customers is on the line, and today’s news from Mark Gurman about internal conversations proves that they know they are in a complicated position. The company is staying quiet outside of the statement it gave to folks like John Gruber, and the next major Apple event is a little over two months away. They have cleared the decks of early 2025 hardware releases, and I imagine we will not hear much from them for a bit outside of, you guessed it, a WWDC announcement.
As is usually the case, I imagine we will start to get details in about two weeks or so. Every single WWDC keynote since 2020 has been prerecorded, and the company has since dropped its weeklong in-person conference, opting for an entirely online approach with a select few welcomed to view the keynote video on the first day. I have been a proponent of Apple returning to its iconic live keynotes for a long time, and fuel has just been added to that fire. Tim, Craig, and Joz need to talk to us like people right there with them, not in some sort of extended ad. They cannot show us another concept video, because if they unveil something that looks too good to be true (even if it is real), there will be immediate skepticism. We need live demos, and we need Craig doing them, to show the company has confidence in its offerings.
Ryan Christoffel at 9to5Mac laid out a comprehensive argument for it, covering both the pros and cons. But I firmly believe the pros far outweigh whatever value the company has gotten over the past few years from transitioning to the current format. They need to look human, which means dropping the high production value and returning to form. It also could not hurt for them to spend more time in person with more developers. They used to invite far more people, and in-person sessions were tremendously valuable to developers. Plus, if all of the software platforms truly are getting redesigned from the ground up, this is the year to capture the same kind of live reaction that iOS 7 garnered back in 2013.
If they want other events to remain prerecorded, that is fine. But this WWDC is just too important, and the stakes for the next few are only going to get higher.
Sky Blue Really is Blue
The new sky blue MacBook Air really is blue. I know that there has been much debate about just how blue it really is. But now that I own one and have seen it in quite a few different lighting conditions, I can say for certain that it is more blue than silver to my eyes. I fully understand that there are some moments when it may look silver, but in most conditions I am looking at a delightfully light blue keyboard deck. Admiring it from afar while it is closed on my desk, it is clearly blue. I think folks who wanted more saturated colors are grappling with that disappointment by complaining about the new shade. Despite what you might have heard, it is distinctly new and fresh. At this point, I will take what Apple gives me, especially when it isn’t silver or space gray. After all, it has been 24 years since a Mac notebook was available in blue.
Hands-On With the Vision Pro Metallica Experience at SXSW
This past weekend, I spent some time in Austin with some very cool Apple folks and a Vision Pro. The company invited me to come watch the latest immersive video experience at SXSW, and my word, is it quite something. I loved Submerged and Parkour, but neither compares to the newest entrant: an extraordinary spatial Metallica concert experience.
When you first try Vision Pro, you inevitably feel shock and awe. Like with any new technology, over time you tend to forget that feeling. But it comes flooding right back within seconds of the Metallica experience starting. They really outdid themselves with this one. Not only is it incredibly fun and engaging, it might have finally cracked one of the main issues with immersive video creation. It has been argued at length how filmmakers should cut and stitch their spatial movie experiences together. Earlier immersive videos were not quite perfect, which is of course expected given how early it is in the life of Vision Pro. But the Metallica concert is edited together beautifully. Between black-and-white shots, vibrant colorful ones, up-close clips, and zoomed-out ones, they may have finally figured it out. It was not jarring at all when it would quickly cut from an up-close view of one of the band members to an overhead pan of the crowd. Maybe that is just the nature of a Metallica concert; so many things are happening all at once. It is pure sensory overload, so you can argue that this form of editing is ideal specifically for a video about this particular subject. But I can easily see the format translating to something narrative.

There were particularly special moments, like when the camera followed directly behind band members as they walked through the crowd to the stage. As someone who loves shows like The West Wing, which is well-known for its walk-and-talks, I want this applied to comedies, dramas, shows, and movies of all kinds. It felt like I was right there with the band. I am not a Metallica fan per se, but this was a blast. So do not be intimidated by the heavy metal of it all.
I fully expect nearly everyone who has a Vision Pro to be impressed by this experience. It is very different from previous immersive videos, which is arguably a good thing. It sets a new bar, a higher one. At around 20 minutes, it is the perfect length, which helps particularly if you have difficulty with the headset’s weight or any sort of eye strain. Anyone who follows me is probably already aware of my complicated relationship with the hardware. But you do not have to be in Vision Pro for too long to experience the magic. Getting right in the faces of band members and absolutely wild fans in the audience will stick in my head for a while, and not in a bad way. I crave even more of this type of content. Believe me when I say I reminded them over and over that we need more immersive Vision Pro content. This new video proves just how amazing the format can continue to get. I think they are still just scratching the surface.
When Apple invited me, I did not know what the video would be about, and boy am I glad that there was no sort of hint. The surprise of it all made it that much more fun.
The Metallica immersive experience for Vision Pro premieres tomorrow, Friday, March 14th, in the TV app. If you do not have a Vision Pro of your own, I highly recommend booking a demo at your closest Apple Store to check it out.
Side note: There is a moment in the video where they stitched together a bunch of still spatial photos. I found this to be one of my favorite parts. I think you might find that to be the case too. I want to see more of that.