With Google's Defeat, the Time for Apple's Search Engine Has Arrived

Unless you’ve been living under a rock, you know that Google was formally ruled an advertising monopoly this week. It could be a catastrophic blow to the company, one that completely upends the web as we know it. A significant component of this case has been Google’s deal with Apple to be the default search engine in Safari. Google pays the Cupertino company nearly $20B a year to maintain dominance on Apple platforms. That is for good reason, of course: Apple’s users are some of the most valuable, particularly when it comes to ad targeting. But it is also a flagrant way to squash any chance a competitor could have at growing into a sustainable web search or ad business. Now that the verdict is officially in, I think it is time for Apple to do what has long been rumored. That is, to build their own search engine to power Safari, Siri, and Spotlight.

An Apple search engine would be a particularly timely offering given Apple’s rebranding of its various ad businesses as “Apple Ads,” making it an umbrella for the various kinds of ad products they have at the moment. It raises the question of whether Apple may get into more traditional web ads in addition to the ones in the App Store and in Apple News. There is now a clear opening for them to enter the market, but I do not think they should just expand the ad offerings. They should go full-court press and introduce a search engine. As Joe Rosensteel wrote over at Six Colors, the company needs to build on its existing services business, and this is one of the ways they can go about doing that.

As for the product itself, I know there is much hesitation over whether Apple could actually produce a good search engine, especially after the Apple Intelligence debacle and Siri’s long period of neglect. But the beauty of a search engine is that you do not necessarily need to own the whole stack. It would be unlike Apple to go this route, but I think they need to learn to adapt to the new environment they are now operating in. Like DuckDuckGo or Ecosia, Apple should start by using search results from Bing and Copilot while they build up their own index. Package them in a beautiful Apple-designed user interface with some clean new ad formats, and, if need be, build special tooling to organize the search results better than Bing does. If they wanted to go really crazy, they could leverage their OpenAI partnership and reskin ChatGPT Search too. The company could even charge a subscription for ad-free results or additional features. And Apple has a breadth of web services comparable to Google’s: an Apple search engine could link to the various iCloud apps and save your search history to your account. The opportunity is right there in front of them.

Now I fully recognize that giving up a $20B yearly payment from Google is far from ideal; it makes up a substantial part of their current services revenue. But how sustainable is that long term? I do not think it can be counted on anymore. It is contingency time. While they would have to make up the $20B difference with Apple Ads sales and Apple Search subscriptions, they would simultaneously be helping Microsoft or OpenAI grow their positions in search, balancing the market a bit better. Google’s dominance in this space needs to be challenged seriously. OpenAI and Perplexity are really the only ones truly taking a crack at it, with Perplexity being particularly aggressive. But it remains to be seen just how big a dent they can ultimately put in Google’s market share. While an astronomical number of people use ChatGPT daily, Google is still overwhelmingly dominant in search. It is going to take an Apple (or an Apple with Microsoft or OpenAI) to truly break Google.

Jony Ive, Laurene Powell Jobs, and Sam Altman Walk into a Bar—Will They Walk Out with the Gadget of the Future?

The worst-kept secret in Silicon Valley right now is that Jony Ive is helping Sam Altman build new gadgets. On the surface that may not seem like that big of a deal. After all, Jony left Apple six years ago. But his career is inextricably linked to and was fundamentally shaped by Steve Jobs and Apple. I do not say that to diminish him in any way; it is simply the truth that he would not be “Sir Jony Ive” without the iMac, the iPod, the iPhone, the iPad, and the Apple Watch. For years he has seemingly tried to separate himself from his historical identity by working on a variety of projects removed from technology, but he will forever be associated with those products whether he likes it or not. When he spends time on a technology, it carries a lot of weight. He learned from the best. As an example, his signature has been all over Airbnb’s app and branding; the company has been a major client of his firm LoveFrom. It is just one of the things that makes his teaming up with Sam Altman all the more intriguing.

At first the rumors were relatively vague. The two of them were thinking about building some kind of AI-powered gadget together, whatever it may be. It has since become clear that the two intend to build a personal AI device, likely one that rethinks the role of the smartphone or even begins to replace some core functions. It is not all that surprising that they have fallen back on what could be a handheld device given the flat reaction to dedicated AI gadgets over the past two years, perhaps with the exception of the Meta Ray-Bans. I personally think that lots of tech aficionados have sort of written Jony off in his post-Apple years. That has made it easier for him to explore whatever the project ultimately becomes. But it is unmistakable that the man who designed the iPhone, built his entire fortune on it, and shares the credit of creating it with his late best friend, believes that building competitive hardware is worth his time in 2025. That tells me that he not only believes in both Sam Altman and the power of AI, but also that he does not believe Apple is currently well-positioned to do something similar. The saga of Apple Intelligence blunders thus far may just back that up. So Sam Altman and Jony Ive are working together to create the device of the future. What could that mean?

Many of the rumors seem to equate this product with some of those failed AI gadgets I mentioned. Some pundits think it could be screen-less or have an unusual twist that makes it less of an iPhone competitor and more of a new product category. I think they are all overthinking it. Whatever these two prolific giants of industry are working on is not the Humane AI Pin or the Rabbit R1. It cannot be a vanity project. OpenAI has been spitting out incredible new products at a ridiculously fast pace over the past several months, and I do not see Sam Altman wasting anyone’s time. The fact that he wants to pull the project into OpenAI says as much. That suggests it might end up being close to a new kind of phone—perhaps familiar in shape, but powered by something fundamentally different. When I hear “personal AI device” I hear “we want to replace your phone.” The way to do that, especially if you are a designer, is to make something relatively familiar.

Despite what many AI skeptics have believed, it seems to be bearing out that the chatbot is the interface of the future. At least the near future. A grid of app icons on a home screen could quickly be usurped by a text thread with a digital being that lives in your phone and can use your services for you. We are already heading in that direction. Just this week, Anthropic added connectors for Gmail and Google Calendar, making Claude infinitely more personal and useful. OpenAI continues to expand its integrations with apps like Xcode and Notion, making it easier than ever to simply code an entirely new app or write an entire story on the fly with a short prompt. Gemini can access nearly every major Google service already. Microsoft Copilot can see what is on your screen and talk with you about it. Apple was hoping to accomplish these things through a combination of an improved Siri, on-device models, and App Intents. But I am not particularly optimistic it is all going to work as well as it needs to. App Intents are built on Shortcuts, which is already a fragile house of cards. I think they need to start over from scratch, but they are already so far behind that it is unclear if they can risk it. Especially as competitors take giant leaps on an almost weekly basis. Heck, OpenAI just dropped two brand-new models today: o3 and o4-mini.

A phone or phone-shaped device with hardware and software designed by Jony Ive and his team at LoveFrom (many of whom are former Apple designers, including Evans Hankey) combined with the intelligence of OpenAI could be the first truly formidable opponent the iPhone has had to go up against since Samsung first unveiled the Galaxy series. ChatGPT is an incredibly popular product with hundreds of millions of users. And not because they have to use it, but because they want to use it. It has very strong brand recognition and has become an essential part of people’s daily lives. It certainly has for me and many in my orbit. And it is especially the case with younger users, the trendsetters who will determine which company owns the future. My generation decided that iMessage and the iPhone were the best. The next might choose otherwise. While it is still incredibly early to say for sure what the device ultimately will be, I imagine a new generation of smartphone, for lack of a better word, that eschews apps for connectors. A device that starts with a text box and an always-listening voice mode that uses your apps and services for you, that does not take you out of context or distract you periodically throughout the day. The OpenAI device could actually be the antidote to much of the societal damage the current generation of smartphones has done. While I would never use a current Android phone as my daily driver, I would absolutely consider using a Jony Ive-designed OpenAI device in lieu of my iPhone. Especially if it made me more present and productive.

Apple should be worried. They are more vulnerable than they have been in decades and it shows. If Jony Ive, a Steve Jobs acolyte and one of the most prolific designers of our age, sees an opening to dethrone Apple and right societal wrongs, he seems likely to take it. And this is not like Jon Rubinstein going to Palm to build the Pre; this is different. There is no Steve to single-handedly steer the ship into the future or to crush competitors with breakneck speed. Nor was there a new technology nearly as important as a tool like ChatGPT to differentiate those devices. Things get even more interesting when you consider that Emerson Collective is one of the project’s backers. Emerson Collective is none other than Laurene Powell Jobs’ firm. That means Apple could be going up against a new kind of product from the hottest tech company since Google, with the backing of the iPhone’s principal designer, Steve Jobs’ incredibly savvy wife, and the figurehead of the AI revolution. If Apple is incapable of getting its house in order with Intelligence, then a personal OpenAI gadget that begins to take the place of the phone could truly put deeper and deeper cracks in Apple’s glass house. If this new device succeeds, whenever we may see it, it will not just challenge Apple’s grip on personal hardware—it could redefine what that hardware is and how we use it.

Apple Will Try to Fix the iPad a Fifth Time. Will It Work?

Apple has tried to fix the iPad four times. First in 2015, with the introduction of Split View, picture-in-picture, and better accessory support for keyboards and the Pencil. Then they immediately took a year off; iOS 10 on the iPad sort of phoned it in. It became abundantly clear that they either could not or would not update the iPad as aggressively as it needed. The following year, iOS 11 brought the second wave of major changes with the advanced Dock, drag and drop, the Files app, and a more adjustable Slide Over. This was a big year, and it showed that there was a clear willingness inside Cupertino to push the iPad harder. But then again, another year off with iOS 12. The third major fix year arrived with the introduction of “iPadOS” as a dedicated platform for the first time. It brought multi-window support to apps, far better Slide Over, new gestures, and plenty of other goodies like Home Screen widgets. As you might guess, iPadOS 14 felt like another year off despite a handful of smaller tweaks. iPadOS 15 did not go big either, despite what Apple wanted us to think. It brought new multitasking controls to make everything a bit more obvious to users, but it did not fundamentally change the way things worked. Then iPadOS 16 arrived and sort of blew everything up. Stage Manager alone was a massive addition, providing adjustable window sizing and external display support. A common theme among these updates, however, is that while they were great at launch, they were simply never enough. After Stage Manager and the reaction users had to it, iPadOS stagnated for two years.

Today, Mark Gurman reports that they are going to try once again with iPadOS 19 to make the iPad a more advanced computing platform. Color me skeptical. Is this a fifth-time’s-the-charm situation? I am trying to be optimistic, particularly after Apple nailed multitasking on visionOS. Plus, this could imply that they have been working on these improvements for three years, given the lack of massive updates since iPadOS 16. Mark says that the update will “focus on productivity, multitasking and app window management — with an eye on the device operating more like a Mac.” That sounds promising, but it is unclear what it actually means. Combined with a visual redesign, this could be huge if they do not tiptoe again. Each time the company has attempted a software update to make the iPad more Mac-like, it has tried too hard to do something different and often ended up with something inferior as a result. That cannot happen this time. I seriously hope they know it.

The Week ChatGPT Truly Became an Assistant

Yesterday, OpenAI quietly introduced a new feature that I think might have just changed everything.

ChatGPT has had a rudimentary “memory” system for quite some time. It lets the model look for important context in your messages, like your job or a particular spot you enjoy. It then saves things it thinks might be relevant to your future conversations and stores them in your settings, where you can decide what is worth keeping or deleting. It is, in essence, a customizable data-retrieval system on top of the model. You can tell ChatGPT to remember things or just wait for it to realize something is essential. This is still the system primarily in use by free users, but Plus and Pro members are now beginning to get access to what was internally called “Moonshine.”
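To make the mechanism concrete, here is a toy sketch of a memory layer like the one described above: facts get saved, the user can review and delete them, and saved facts are prepended as context for every new conversation. This is purely illustrative (the class and method names are my own invention), not OpenAI's actual implementation.

```python
class MemoryStore:
    """Toy model of a chat assistant's user-editable memory."""

    def __init__(self):
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        # Save a fact the user (or the model) deems worth keeping.
        if fact not in self.facts:
            self.facts.append(fact)

    def forget(self, fact: str) -> None:
        # The user can review stored facts and delete any of them.
        if fact in self.facts:
            self.facts.remove(fact)

    def build_system_prompt(self) -> str:
        # Every new conversation starts with the saved facts injected
        # as context, so the model is "never starting from scratch."
        if not self.facts:
            return "You are a helpful assistant."
        joined = "\n".join(f"- {f}" for f in self.facts)
        return f"You are a helpful assistant. Known user context:\n{joined}"


memory = MemoryStore()
memory.remember("Works as a freelance designer")
memory.remember("Prefers concise answers")
prompt = memory.build_system_prompt()
```

The newer “reference previous conversations” feature would, in this framing, draw on full chat history rather than a hand-curated list, but the basic idea of injecting retrieved context ahead of the model's reply is the same.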

On the surface, Moonshine sounds very simple: ChatGPT can now reference your previous conversations. But in reality, it makes the tool dramatically more intelligent — and personal. Because it can reference things you have talked about before, without you hoping the model caught them or manually teaching it, it feels more like talking with a person than ever before. Dare I say, ChatGPT actually knows me now.

My ChatGPT knows what my current goals are, what projects I am working on, and what I am generally thinking about. I was already popping in my AirPods and talking with ChatGPT voice mode about problems or briefly messaging it for advice on the fly throughout my day. But this takes it to a whole new level: it is never starting from scratch.

It may fundamentally now know more about who I am than any social media or search algorithm before it. And unlike those algorithms, it is actually information that can be put to good use for my sake. It is not worried about what kind of socks I might want to buy, what celebrities I may be crushing on, or trying to make me feel inadequate. It is never trying to sell or market to me. ChatGPT is trying to help me actually accomplish something, whether it is a personal or professional goal.

After just one day, I have already been having far more fruitful conversations with ChatGPT. It feels like a sneak peek at what virtual assistants can really be.

Craig Saves the Day, Gives Engineers the Green Light to Use Off-the-Shelf LLMs

According to The Information, Apple is now letting engineers build products using third-party LLMs. This is a huge change that could seriously alter the course of Apple Intelligence. I have been a proponent of Apple shifting its focus from building somewhat mediocre closed first-party models to using powerful open-source third-party ones. The company is already so far behind its competitors that I would rather they focus on making really good products than really good models. They may be able to do both simultaneously, but I suspect that Apple could ship spectacularly good products using off-the-shelf models sooner, while other teams build first-party models for the future in secret.

Now that Gemini can see your screen on Android and Copilot can see your screen on Windows, these competing assistants are truly on another level. They are assistants in the truest sense of the word now that they can visually understand your context while you are using your devices. They make Android and Windows devices more compelling alternatives too. I am hopeful that Apple will be able to catch up much faster by doing what they do best: productizing advanced technology. The only difference is that this time, at least for now, they will be productizing someone else’s tech.

I imagine they will be largely looking at Mistral, Gemma, and Phi. Llama would be an obvious contender too if the company’s relationship with Meta was not so contentious. DeepSeek would be another option, though the optics of using it outside of China would likely not be ideal. We will just have to wait and see!

Microsoft Is Filling in Essential Gaps on the Mac for the First Time in Decades

Microsoft has a long history of Mac software development. It is not something you often hear about anymore, but the company’s legacy is as intertwined with Apple’s as any other. I tend to be quite negative about Mac Office. I have never been a fan of PowerPoint, Excel, or Word. I was always an iWork guy personally, despite the flaws those three apps have. This is all to say that Microsoft now makes some of the nicest Mac apps there are. Yes, I cannot believe I just wrote that sentence. I am not talking about Office apps; I am talking about Copilot and Edge. OpenAI has gotten a lot of credit for its world-class Mac ChatGPT client. It is an excellent app, both from a functionality standpoint and in terms of its user interface. But Microsoft came out of nowhere with what I have to say is a drop-dead gorgeous native Copilot app.

Copilot on the Mac does not feel like a web wrapper, which these days is the best-case scenario, the most you can hope for. It has fluid animations, a nice soothing color palette, and is blazing fast when answering prompts. It really just works. Given the current Apple Intelligence situation, it is something almost anyone with a Mac should consider using. Not just because it is free, but because you get so much from that free version. From deep research with o1, to beautifully generated user interfaces for certain answers, to a command bar you can bring up from anywhere, it fills a lot of needs. Microsoft’s decision to subsidize extensive usage also makes it what I think is one of, if not the, best free AI tools. Given the vast array of new Copilot features Microsoft unveiled this past week, I imagine it is only going to get better. By the way, you should go watch the presentation if you have not. Mustafa Suleyman is an excellent presenter and does a great job walking through where they are heading with all of this. Thankfully, they are not gatekeeping all of these things to Windows. Microsoft has really filled a major gap in macOS for the first time since before Safari and iWork.

Now I know what you are thinking: “Why would I not just use the ChatGPT app?” That is completely valid, and I am not saying one should replace it with Copilot. I think Copilot is a supplement. ChatGPT on the Mac is the best choice for working with code or text. It also happens to now be the best way to generate and edit images on the fly. But Copilot feels more personal and is faster at answering search queries. I have struggled in the past to figure out how the different large language models all fit into my workflow. I used to think that I had to use one or the other. But it is clear they are all sticking around, at least for now. I have altered my workflow to use ChatGPT while coding or designing, Claude while writing or as a backup for difficult coding issues, Gemini while working with Google apps and as a sort of basic assistant on my phone, and now Copilot for casual use on the desktop. Others occasionally make their way in there, but this is largely how I am using these tools. At the end of the day, the Copilot Mac app is nearly as good as the ChatGPT app. And I would argue that it has one advantage: it is in the Mac App Store while ChatGPT is not.

Now I have heaped a ton of praise on the Copilot Mac app, but I mentioned that Edge has gotten really good as well. I bring this up largely in the context of changes at The Browser Company. I was a huge fan of Arc, but it is not my browser of choice anymore, as I do not think I can count on them to keep iterating on it as they once did. I have tried a variety of clones; some of them are fine, but I am not sure how reliable they are long term. I have even been using the alpha of The Browser Company’s new app, Dia. But I will not say anything about that just yet; it is too early to tell where it is going. After all of this searching, I was pleasantly surprised by the current state of Edge on the Mac. I used to use Edge in lieu of Chrome given how much less power-intensive it was, but I was never particularly a fan of the user interface. Now it is arguably one of the best-looking Mac browsers. It also happens to be a true AI web browser with the Copilot sidebar, which now adopts the new design from the Mac app, making it much more palatable. It has quick access to summarization and key points, but the best part is that it supports vertical tabs. Vertical tabs were the best part of Arc, and they were implemented in such a powerful way that I have difficulty going back to horizontal ones. Thankfully, Edge combines AI with vertical tabs and has a development roadmap that you can count on for the long haul. Arc users, I highly suggest trying the latest version of Edge. Port your Chromium extensions over, reconfigure your vertical tabs, and you might just find yourself in a state of browser zen.

So yes, Microsoft now makes one of the best AI apps for the Mac and one of its best browsers. They are both well worth checking out if you have not used them in a while. I would not be surprised if many of you have not even tried the new Copilot app. I continue to be delighted by the experience, and my usage of it keeps increasing. But the real story here goes back to what I said earlier: Microsoft is once again filling in gaps on the Mac. As Siri falls further behind and Safari continues to stagnate on the AI side, they have gained the upper hand. They should not have to fill these gaps, but they saw the opening and they went for the jugular. They could have half-assed these tools, but they went all out, making them on par with or better than their Windows counterparts. I hope this spurs Apple to build a proper Siri app that tracks your conversations and uses the best models. I also hope this gets the Safari team thinking bigger about how they can incorporate AI and make browsing even more fun.

From a small garage in Los Altos to billions of pockets, wrists, ears, faces, desks, and living rooms. Happy 49th birthday Apple. Next year is the big one.

Based on a Garage in Los Altos frame from Apple’s March 2019 event intro video

A Sneak Peek At My First App

Auto-generated description: Five smartphone screens displaying colorful productivity app interfaces with various lists and tasks.

It is not quite ready yet, but I wanted to share a first look at the app I have been working on over the past several weeks. I still need to squash a few bugs and complete some business-related tasks, but I expect to submit it to the App Store in the near future, after a small TestFlight group helps me make sure it is rock solid.

As you can tell, it is a task manager. Well, sort of. I would describe it as a notes app moonlighting as a to-do list app. It works the way my brain does. I have tried countless task managers over the years, but none of them have ever quite felt right. There are plenty of great options, but I suspect that if your brain is as scattered and filled to the brim with thoughts as mine is, this app will work really well for you. I am still keeping some things quiet, like the name and some of its features. But I wanted to share this sneak peek at the main view to get a pulse check from you, my friends, family, and followers. As you can see, it is highly customizable with a variety of themes and unique tagging options. You can even format your notes with some basic Markdown and store links. It syncs your notes and tasks between devices using iCloud and also works natively on macOS, iPadOS, and visionOS. It is, I believe, a great SwiftUI app through and through.

Feel free to tweet, toot, thread, and skeet your thoughts at me. I would love to hear them. I hope that once I get the business stuff ironed out and the app up on the App Store, you will give my first-ever app a try.

Auto-generated description: Three colorful pin-themed logos with scalloped borders are displayed against a textured white background.

Apple Adds the Lumon Terminal to Its Mac Lineup

Apple’s marketing campaign for Severance has been top notch. Between the Grand Central pop-up, live events with the cast, a marching band, dedicated websites to generate art, a Tim Cook commercial, and a giant balloon in London, they have kept their foot on the gas even with the second season having come to an end. And they are not finished.

Just a little while ago, they finally did the thing every Apple nerd has been quietly hoping they would do. They added the Lumon Terminal Pro to the Mac website right next to the MacBook Pro and other machines. Unfortunately, you cannot buy one or do much with the webpage. But it is fun to admire the beauty of the retro-futuristic machine. They clearly did this as a stunt to drive eyeballs to the new video about Severance being edited on the Mac.

Finally, ChatGPT Can Accurately Generate Images with Text

OpenAI dropped an all-new image generation system for ChatGPT today, and man is it good. One of the biggest problems with artificially generated images has been the inability to render accurate text within them. There have historically been problems with inaccurate characters, misspellings, or even complete graphical errors. Today’s update to image generation with GPT-4o fixes these. You can now generate charts, signs, logos, word marks, text graphics, pretty much anything you can think of, with ease. It nails spelling, seems to set type well, and generally abides by your instructions. The inability to accurately generate text inside of images really made the tool a lot less useful, but now I suspect it will be used by way more people, way more often. Outside of text in images, ChatGPT can now generate overall better images that more accurately fit your prompts. You can even upload an image, ask for edits or a new style, and boom, you will almost certainly get a great result that works within the structure of the original. It is all pretty darn great.

Fill the Severance-sized Hole in Your Heart with The Studio

The ridiculously good new Seth Rogen series, The Studio, just hit Apple TV+. A few weeks ago I got to watch the first two episodes premiere at SXSW and I walked away from the theater beyond excited for the rest of the series. Both of those episodes are now available to stream and they are as riveting as they are hilarious. The second episode, which guest stars Greta Lee, is particularly special. If you have ever acted or directed or known actors or directors, you will especially enjoy it.

Gemini, Siri, and Default Assistants on iOS

I do not want to harp on the Siri situation, but I do have one suggestion that I think Apple should listen to. Because I suspect it is going to take quite some time for the company to get the new Siri out the door properly, they should do what was previously unthinkable. That is, open up iOS to third-party assistants. I do not say this lightly. I am one of those folks who does not want iOS to be torn open like Android, but I am willing to sign on when it makes good common sense. Right now it does.

ChatGPT, Gemini, Claude, DeepSeek, Perplexity, and Grok are all incredibly popular on iOS. They are in use by millions and millions of people. But some of them are not as powerful on iOS as they are on Android. ChatGPT, Gemini, and Perplexity can all be used as the default personal assistant on Android devices. If you do not like the one that came on your device, you can replace it. Apple has been adding more default app choices on iOS as of late, and now is the time to add another one. Gemini in particular has become increasingly powerful due to its deep integration with Google apps. If you use them, you can get way more out of Gemini than the alternatives. I desperately want to use Gemini as my default personal assistant on my iPhone, so much so that I have its voice mode assigned to my Action button. Because of Gemini’s integration with Google Keep, Google Tasks, and Google Calendar, I can use the Gemini assistant to create reminders, generate lists, take notes, and create and manage events. Even better, Gemini can dive into my inbox and find things for me better than any standard search tool. Those are all productivity features, but Gemini also has deep entertainment functionality thanks to YouTube. If you want to find a specific video, Gemini can get the job done. You can even use it with Google Maps. The app is simply a fantastic assistant that is able to replace many of the things I already do with Siri, while also being an LLM-powered, ultra-smart knowledge engine.

I have not even mentioned the amazing Gemini Live yet. While it does not integrate with your Google apps yet, it does let you have fluid conversations about world knowledge and things happening on the web. The app is dead simple, yet more powerful than Siri by almost every metric. While it currently cannot do some things, like set a timer, it can do far more important things Siri can only dream of being capable of. If you use Google apps, I highly recommend assigning Gemini to your Action button or to the back tap gesture on your iPhone. You will be blown away by how much more powerful it is once you begin taking advantage of the app integrations. Heck, if you use Google Home devices, it even controls those as well.

I do not use Gemini as my primary LLM generally; I prefer ChatGPT and Claude most of the time for research, coding, and writing. But Gemini has proved to be the best assistant of them all. So while we wait for Siri to get good, give us the ability to use custom assistants at the system level. It does not have to be available to everyone; heck, create a special entitlement that Google and these companies need to apply for if you want. Have requirements for privacy and design too, if need be. But these apps, with proper system-level overlays, would be a massive improvement over the existing version of Siri. I do not want to have to launch the app every single time, and the basic ChatGPT integration in Siri is far from the best solution.

Google clearly knows that they have a massive opportunity with Gemini right now, and you need look no further than their decision to heavily advertise on Apple-centric podcasts. In recent weeks they’ve sponsored The Talk Show, Upgrade, Connected, and ATP. One of the best ways to go after Siri is to capture Apple fans and observers.

I know Apple would be averse to this, namely because of the potential for losing a massive number of Siri users in the meantime while they get their act together. But it would do two things: it would mitigate some of the damage to customer relations while they wait for the new Siri, and it would set a new bar for the Siri team to exceed. There would not be any room for failure, and they need to be under that kind of pressure.

Very Good News: Mike Rockwell, No Fan of Siri, is Here to Save It

Mark Gurman has just reported that Apple is reshuffling the Siri team’s leadership in light of the recent Apple Intelligence controversy. They are putting Mike Rockwell, the man behind Vision Pro, in charge of the effort and taking AI chief John Giannandrea off the project.

While you can lament the well-documented array of Vision Pro shortcomings, the software, visionOS, is an impeccable achievement. It was one of the most polished 1.0s the company has ever shipped, despite missing some features. It has a gorgeous design and is chock-full of functionality, plus it was largely stable out of the gate. Putting the guy who led the development of the most advanced piece of technology Apple has ever made in charge of Siri is the right move. I trust that he will productize whatever technology comes from the LLM group really well and ensure that they do not get out over their skis again. Maybe he will even scrap what they have and fast-track an entirely new solution. Ultimately, this is good news, and it means a fresh perspective. More on why I am excited about the much-needed course correction below.

This news immediately brought to mind a report from early 2023 via The Information that detailed how the Vision Pro team wanted to develop its own voice control software, instead of using the existing version of Siri. 9to5Mac highlighted a particularly important and now very relevant series of lines from the story:

Inside Apple, Siri remains widely derided for its lack of functionality and improvements since Giannandrea took over, say multiple former Siri employees. For example, the team building Apple’s mixed-reality headset, including its leader Mike Rockwell, has expressed disappointment in the demonstrations the Siri team created to showcase how the voice assistant could control the headset, according to two people familiar with the matter. At one point, Rockwell’s team considered building alternative methods for controlling the device using voice commands, the people said (the headset team ultimately ditched that idea).

The frustration with Siri has finally boiled over and I am extremely optimistic now. If anyone at Apple can turn around Siri, it is someone who recognized the problem long ago but was not given the opportunity to fix it. Until now.

It's Time for WWDC 2025 to Return to Traditions Broken by COVID

This is a weird time. Everybody knows it, even if they do not want to acknowledge it. Apple’s trust with its customers is on the line and today’s news from Mark Gurman about internal conversations proves that they know they are in a complicated position. The company is staying quiet outside of the statement that it gave to folks like John Gruber and the next major Apple event is a little over two months away. They have cleared the deck of early 2025 hardware releases and I imagine we will not hear much from them for a bit outside of, you guessed it, a WWDC announcement.

As is usually the case, I imagine that we will start to get details in about two weeks or so. Every WWDC keynote since 2020 has been prerecorded, and the company has since dropped its weeklong in-person conference, opting for an entirely online approach with a select few welcomed to view the keynote video on the first day. I have been a proponent of Apple returning to its iconic live keynotes for a long time, but fuel has just been added to that fire. Tim, Craig, and Joz need to talk to us like people right there with them, not in some sort of extended ad. They cannot show us another concept video, because if they do unveil something that looks too good to be true (even if it is real), there will be immediate skepticism. We need live demos, and we need Craig to be doing them to show the company has confidence in the offerings.

Ryan Christoffel at 9to5Mac laid out a comprehensive argument for it, with both the pros and cons. But I firmly believe the pros far outweigh any value the company has gotten the past few years from transitioning to the current format. They need to look human, which means dropping the high production value and returning to form. It also could not hurt for them to spend more time in person with more developers. They used to invite far more people, and in-person sessions were tremendously valuable to developers. Plus, if all of the software platforms truly are getting redesigned from the ground up, this is the year to capture the same kind of live reaction that iOS 7 garnered back in 2013.

If they want other events to remain prerecorded, that is fine. But this WWDC is just too important. The stakes for the next few are likely only going to continue to get higher.

Sky Blue Really is Blue

The new sky blue MacBook Air really is blue. I know there has been much debate about just how blue it really is. But now that I own one and have seen it in quite a few different lighting conditions, I can say for certain that it is more blue than silver to my eyes. I fully understand that there are moments when it may look silver, but in most conditions I am looking at a delightfully light blue keyboard deck. Admiring it from afar while it is closed on my desk, it is clearly blue. I think folks who have long wanted more saturated colors are taking that frustration out on the new shade. Despite what you might have heard, it is distinctly new and fresh. At this point, I will take what Apple gives me, especially when it isn’t silver or space gray. After all, it has been 24 years since a Mac notebook was available in blue.

Hands-On With the Vision Pro Metallica Experience at SXSW

This past weekend, I spent some time in Austin with some very cool Apple folks and a Vision Pro. The company invited me to come and watch the latest immersive video experience at SXSW, and my word, is it quite something. I loved Submerged and Parkour, but neither compares to the newest entrant: an extraordinary spatial Metallica concert experience.

When you first try Vision Pro, you inevitably feel shock and awe. Like with any new technology, over time you tend to forget that feeling. But it comes flooding right back within seconds of the Metallica experience starting. They really outdid themselves with this one. Not only is it incredibly fun and engaging, it might have finally nailed one of the main issues with immersive video creation. There has been plenty of debate about how filmmakers should cut and stitch their spatial movie experiences together. Earlier immersive videos were not quite perfect, which is of course expected given how early it is in the life of Vision Pro. But the Metallica concert is edited together beautifully. Between black-and-white shots, vibrant colorful ones, up-close clips, and zoomed-out ones, they may have finally figured it out. It was not jarring at all when it would quickly cut from an up-close view of one of the band members to an overhead pan of the crowd. Maybe that is just the nature of a Metallica concert: so many things are happening all at once. It is pure sensory overload, so you can argue that this form of editing is ideal specifically for a video about this particular subject. But I can easily see the format translating to something narrative. There were particularly special moments, like following directly behind band members as they walked through the crowds to the stage. As someone who loves shows like The West Wing, which is well-known for its walk-and-talks, I want this applied to comedies, dramas, shows, and movies of all kinds. It felt like I was right there with the band. I am not a Metallica fan per se, but this was a blast. So do not be intimidated by the heavy metal of it all.

I fully expect nearly everyone who has a Vision Pro to be impressed by this experience. It is very different from previous immersive videos, which is arguably a good thing. It sets a new bar, a higher one. At around 20 minutes, it is the perfect length, which helps particularly if you have difficulty with the headset’s weight or any sort of eye strain. Anyone who follows me is probably already aware of my complicated relationship with the hardware. But you do not have to be in Vision Pro for too long to experience the magic. Getting right in the faces of band members and absolutely wild fans in the audience will stick in my head for a while, and not in a bad way. I crave even more of this type of content. Believe me when I say I reminded them over and over that we need more immersive Vision Pro content. This new video proves just how amazing the format can continue to get. I think they are still just scratching the surface.

When Apple invited me, I did not know what the video would be about, and boy am I glad that there was no sort of hint. The surprise of it all made it that much more fun.

The Metallica immersive experience for Vision Pro premieres tomorrow, Friday, March 14th in the TV app. If you do not have a Vision Pro of your own, I highly recommend booking a demo at your closest Apple Store to check it out.

Side note: There is a moment in the video where they stitched together a bunch of still spatial photos. I found this to be one of my favorite parts. I think you might find that to be the case too. I want to see more of that.

‘The Studio’ is Probably Apple TV+’s Next Big Hit

Apple TV+ has had quite the ride since it launched in 2019. It took a few months to hit its stride with the first season of Ted Lasso, which helped keep us all feeling just a little sane during COVID. More recently, Severance has been a huge hit. It is one of the best shows I have ever seen, and virtually everyone I know is as captivated by it as I am. These two shows served as tentpoles for the service, keeping existing subscribers engaged and helping grow the base with new ones. Last night I saw the first two episodes of The Studio here at SXSW, and let me tell you: this is Apple’s third major show.

I do not want to undervalue shows like Slow Horses or Shrinking, but they have not had the impact of the two I mentioned before. The entire audience was howling throughout the first two episodes of The Studio; people were absolutely glued to the screen. It feels so real, like you are actually watching the day-to-day of a movie studio team running a business and messing things up. The first two episodes were hilarious, but they were also beautifully shot, and the cast is absolutely stacked. I cannot wait until I can stream more of the show, and I cannot wait to get all my friends and family members to watch it. I suspect they are going to love it.

The Studio premieres on TV+ March 26th.

Can Apple Even Pull This Off?

I guess I should not be surprised, but I am still furious. Apple shared with John Gruber earlier today that they are delaying the launch of the flagship new personalized Siri features for Apple Intelligence that were unveiled last summer at WWDC. We have been waiting what already feels like forever for these new features, and it sounds like we are in for an even longer wait. Who knows if we will see them in iOS 19.0 or if we will have to wait until early 2026 for a point release, but regardless, this is a huge blow to the company’s efforts and reputation in the space. Their rollout of Apple Intelligence has already been haphazard, from marketing it in Apple Stores and in ads before it was ready to ship, to dropping features in small separate releases over the past few months, to a mostly “meh” reaction.

This feels like a gut punch, namely because we were already not confident that these delayed features could actually rival other AI tools. Apple’s Siri execution is not something that fills anyone with confidence. If these features arrive as they were announced months and months from now, ChatGPT, Gemini, Alexa+, etc. could already have made even further leaps ahead. I cannot help but think that this is sort of a combination of AirPower and MobileMe. It felt almost vaporware-like when revealed at WWDC and it certainly seems like they are having a great deal of difficulty making it a reality. Sounds a lot like AirPower, right? If it does not arrive fully fleshed out and reliable, then they will have to start over like they did with MobileMe and iCloud. In fact, they might have to start over anyway before they even ship anything.

I firmly believe they need to shake things up internally and consider a major acquisition in this new friendlier corporate environment. Get rid of Siri entirely and create a whole new brand built on a new foundation. Who they should buy (or partner with) is an entirely different issue.

There Was Something in the Air After All

For the first time in 24 years, since the introduction of the first white iBook, Apple has a blue laptop again. While the new MacBook Airs are most certainly a “spec bump,” they make for a pretty good one. The new Air features the M4 (as expected), but also gets a massively upgraded FaceTime camera and a $100 price drop. I’m glad my diatribe about the boring iPad announcements leading to more boring updates was wrong. $999 for this new machine is a steal.