AI
With Google's Defeat, the Time for Apple's Search Engine Has Arrived

Unless you've been living under a rock, you know that Google was formally ruled an advertising monopoly this week. It could potentially be a catastrophic blow to the company, one that completely upends the web as we know it. A significant component of this case has been Google's deal with Apple to be the default search engine in Safari. Google pays the Cupertino company nearly $20B a year to maintain dominance on Apple platforms. That is for good reason, of course: Apple's users are some of the most valuable, particularly when it comes to ad targeting. But it is also a flagrant way to squash any chance a competitor could have at growing into a sustainable web search or ad business. Now that the verdict is officially in, I think it is time for Apple to do what has long been rumored. That is, to build their own search engine to power Safari, Siri, and Spotlight.
An Apple search engine would be a particularly timely offering given Apple's rebranding of its various ad businesses as "Apple Ads," an umbrella for the different kinds of ad products the company currently offers. It raises the question of whether Apple may get into more traditional web ads in addition to the ones in the App Store and Apple News. There is now a clear opening for them to enter the market, but I do not think they should just expand the ad offerings. They should go full court press and introduce a search engine. As Joe Rosensteel wrote over at Six Colors, the company needs to build on its existing services business, and this is one of the ways they can do that.
As for the product itself, I know there is much hesitation over whether Apple could actually produce a good search engine. This is especially the case after the Apple Intelligence debacle and Siri's long period of neglect. But the beauty of a search engine is that you do not necessarily need to own the whole stack. It would be unlike Apple to go this route, but I think they need to learn to adapt to the new environment they are now operating in. Like DuckDuckGo or Ecosia, Apple should start by using search results from Bing and Copilot while they build up their own database. Package them up in a beautiful Apple-designed user interface with some clean new ad formats, and, if need be, build some special tooling to carefully organize the search results in a better way than they would otherwise appear on Bing. If they wanted to go really crazy, they could leverage their OpenAI partnership and reskin ChatGPT Search too. The company could even charge a subscription for ad-free results or additional features. It also is not as if Apple lacks a similar breadth of web services to Google. Apple's search engine could link to various iCloud apps and save your search history to your account. The opportunity is right there in front of them.
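To make the "repackage someone else's index" idea concrete, here is a minimal sketch of how a front end could reorder licensed upstream results without owning a crawler. The upstream fetch is stubbed out (a real version would call a licensed search API), and the domain-boost rule is entirely hypothetical, just to illustrate the kind of "special tooling" described above:

```python
# Hypothetical sketch: a thin search front end over a licensed upstream index.
# The upstream results are hard-coded stand-ins for an API response.

def rerank(results, preferred_domains=("apple.com", "icloud.com")):
    """Reorder upstream results, nudging first-party services up
    without dropping anything (a stand-in for custom result tooling)."""
    def score(r):
        boosted = any(d in r["url"] for d in preferred_domains)
        # Boosted results sort first; ties keep the upstream ranking.
        return (0 if boosted else 1, r["rank"])
    return sorted(results, key=score)

upstream = [
    {"rank": 1, "url": "https://example.com/a", "title": "A"},
    {"rank": 2, "url": "https://www.icloud.com/notes", "title": "iCloud Notes"},
    {"rank": 3, "url": "https://example.com/b", "title": "B"},
]

reordered = [r["title"] for r in rerank(upstream)]
```

The point of the sketch is only that the differentiation lives in the presentation layer: the index is rented, the ranking polish is yours.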
Now I fully recognize that giving up a $20B yearly payment from Google is far from ideal; in fact, it makes up a substantial part of their current services revenue. But how sustainable is that long term? I do not think it can be counted on anymore. It is contingency time. While they would have to make up the $20B difference on their balance sheet with Apple Ads sales and Apple Search subscriptions, they would simultaneously be helping Microsoft or OpenAI grow their positions in search, thereby balancing the market a bit better. Google's dominance in this space needs to be challenged seriously. OpenAI and Perplexity are really the only ones truly taking a crack at it, with Perplexity being particularly aggressive. But it remains to be seen just how big of a dent they can ultimately put in Google's market share. While an astronomical number of people use ChatGPT daily, Google is still overwhelmingly dominant in search. It is going to take an Apple (or an Apple allied with Microsoft or OpenAI) to truly break Google.
Jony Ive, Laurene Powell Jobs, and Sam Altman Walk into a Bar: Will They Walk Out with the Gadget of the Future?
The worst-kept secret in Silicon Valley right now is that Jony Ive is helping Sam Altman build new gadgets. On the surface that may not seem like that big of a deal. After all, Jony left Apple six years ago. But his career is inextricably linked to, and was fundamentally shaped by, Steve Jobs and Apple. I do not say that to diminish him in any way; it is simply the truth that he would not be "Sir Jony Ive" without the iMac, the iPod, the iPhone, the iPad, and the Apple Watch. For years he has seemingly tried to separate himself from his historical identity by working on a variety of projects removed from technology, but he will forever be associated with those products whether he likes it or not. When he spends time on a technology, it carries a lot of weight. He learned from the best. As an example, his signature has been all over Airbnb's app and branding; the company has been a major client of his firm LoveFrom. It is just one of the things that makes his teaming up with Sam Altman all the more intriguing.
At first the rumors were relatively vague. The two of them were thinking about building some kind of AI-powered gadget together, whatever it may be. It has since become clear that the two intend to build a personal AI device, likely one that rethinks the role of the smartphone or even begins to replace some of its core functions. It is not all that surprising that they have fallen back on what could be a handheld device, given the flat reaction to dedicated AI gadgets over the past two years, perhaps with the exception of the Meta Ray-Bans. I personally think that lots of tech aficionados have sort of written Jony off in his post-Apple years. That has made it easier for him to explore whatever the project ultimately becomes. But it is unmistakable that the man who designed the iPhone, built his entire fortune on it, and shares the credit of creating it with his late best friend, believes that building competitive hardware is worth his time in 2025. That tells me not only that he believes in both Sam Altman and the power of AI, but also that he does not believe Apple is currently well-positioned to do something similar. The saga of Apple Intelligence blunders thus far may just back that up. So Sam Altman and Jony Ive are working together to create the device of the future. What could that mean?
Many of the rumors seem to equate this product with some of those failed AI gadgets I mentioned. Some pundits think it could be screen-less or have an unusual twist that makes it less of an iPhone competitor and more of a new product category. I think they are all overthinking it. Whatever these two prolific giants of industry are working on is not the Humane AI Pin or the Rabbit R1. It cannot be a vanity project. OpenAI has been shipping incredible new products at a ridiculously fast pace over the past several months, and I do not see Sam Altman wasting anyone's time. The fact that he wants to pull the project into OpenAI says as much. That suggests it might end up being close to a new kind of phone: perhaps familiar in shape, but powered by something fundamentally different. When I hear "personal AI device," I hear "we want to replace your phone." The way to do that, especially if you are a designer, is to make something relatively familiar.
Despite what many AI skeptics have believed, it seems to be bearing out that the chatbot is the interface of the future, at least the near future. A grid of app icons on a home screen could quickly be usurped by a text thread with a digital being that lives in your phone and can use your services for you. We are already heading in that direction. Just this week, Anthropic added connectors for Gmail and Google Calendar, making Claude infinitely more personal and useful. OpenAI continues to expand its integrations with apps like Xcode and Notion, making it easier than ever to code an entirely new app or write an entire story on the fly with a short prompt. Gemini can already access nearly every major Google service. Microsoft Copilot can see what is on your screen and talk with you about it. Apple was hoping to accomplish these things through a combination of an improved Siri, on-device models, and App Intents. But I am not particularly optimistic it is all going to work as well as it needs to. App Intents are built on Shortcuts, which is already a fragile house of cards. I think they need to start over from scratch, but they are already so far behind that it is unclear if they can risk it. Especially as competitors take giant leaps on a seemingly weekly basis; heck, OpenAI just dropped two brand-new models today in o3 and o4-mini.
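Mechanically, the "connector" pattern these assistants share is tool calling: the model is shown a list of tool schemas and replies with a structured call instead of prose, which the client then executes. The sketch below is illustrative only; the schema shape loosely follows common JSON-schema-style tool definitions, and the names (`create_event`, the handler, the arguments) are invented, not any vendor's actual API:

```python
# Illustrative sketch of the connector/tool-calling pattern.
# The model would be shown schemas like this one and emit a structured call.

calendar_tool = {
    "name": "create_event",
    "description": "Add an event to the user's calendar",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "start": {"type": "string", "description": "ISO 8601 datetime"},
        },
        "required": ["title", "start"],
    },
}

def dispatch(tool_call, handlers):
    """Route a model-emitted tool call to the matching local handler."""
    return handlers[tool_call["name"]](**tool_call["arguments"])

events = []
handlers = {
    # A toy handler: record the event locally and report success.
    "create_event": lambda title, start: events.append((title, start)) or "ok",
}

# Pretend the model returned this structured call for "add lunch at noon":
result = dispatch(
    {"name": "create_event",
     "arguments": {"title": "Lunch", "start": "2025-04-17T12:00"}},
    handlers,
)
```

The significance for the argument above is that each new connector is just another schema in that list, which is why assistants can absorb services so quickly once the dispatch loop exists.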
A phone or phone-shaped device with hardware and software designed by Jony Ive and his team at LoveFrom (many of whom are former Apple designers, including Evans Hankey) combined with the intelligence of OpenAI could be the first truly formidable opponent the iPhone has faced since Samsung first unveiled the Galaxy series. ChatGPT is an incredibly popular product with hundreds of millions of users. And not because they have to use it, but because they want to use it. It has very strong brand recognition and has become an essential part of people's daily lives. It certainly has for me and many in my orbit. And that is especially the case with younger users, the trendsetters who will determine which company owns the future. My generation decided that iMessage and the iPhone were the best. The next might choose otherwise. While it is still incredibly early to say for sure what the device ultimately will be, I imagine a new generation of smartphone, for lack of a better word, that eschews apps for connectors. A device that starts with a text box and an always-listening voice mode that uses your apps and services for you, that does not take you out of context or distract you throughout the day. The OpenAI device could actually be the antidote to much of the societal damage the current generation of smartphones has done. While I would never use a current Android phone as my daily driver, I would absolutely consider using a Jony Ive-designed OpenAI device in lieu of my iPhone. Especially if it made me more present and productive.
Apple should be worried. They are more vulnerable than they have been in decades and it shows. If Jony Ive, a Steve Jobs acolyte and one of the most prolific designers of our age, sees an opening to dethrone Apple and right societal wrongs, he seems likely to take it. And this is not like Jon Rubinstein going to Palm to build the Prē; this is different. There is no Steve to single-handedly steer the ship into the future or to crush competitors with breakneck speed. Nor was there a technology back then as important as ChatGPT to differentiate a rival device. Things get even more interesting when you consider that Emerson Collective is one of the project's backers. Emerson Collective is none other than Laurene Powell Jobs' firm. That means Apple could be going up against a new kind of product from the hottest tech company since Google, with the backing of the iPhone's principal designer, Steve Jobs' incredibly savvy wife, and the figurehead of the AI revolution. If Apple is incapable of getting their house in order with Apple Intelligence, then a personal OpenAI gadget that begins to take the place of the phone could put deeper and deeper cracks in Apple's glass house. If this new device succeeds, whenever we may see it, it will not just challenge Apple's grip on personal hardware; it could redefine what that hardware is and how we use it.
The Week ChatGPT Truly Became an Assistant
Yesterday, OpenAI quietly introduced a new feature that I think might have just changed everything.
ChatGPT has had a rudimentary "memory" system for quite some time. It lets the model look for important context in your messages, like your job or a particular spot you enjoy. It then saves things it thinks might be relevant to your future conversations and stores them in your settings, where you decide what is worth keeping or deleting. It is, in essence, a customizable data retrieval system on top of the model. You can tell ChatGPT to remember things or just wait for it to realize something was essential. This is still the system primarily in use by free users, but Plus and Pro members are now beginning to get access to what was internally called "Moonshine."
On the surface, Moonshine sounds very simple: ChatGPT can now reference your previous conversations. But in reality, it makes the tool dramatically more intelligent, and more personal. Because it can reference things you have talked about before, without you having to hope the model catches them or teach it manually, it feels more like talking with a person than ever before. Dare I say, ChatGPT actually knows me now.
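OpenAI has not published how Moonshine works under the hood, but mechanically, "referencing previous conversations" could look something like retrieval: score past snippets against the new message and prepend the most relevant ones as context. The toy keyword-overlap retriever below is purely illustrative (real systems would use embeddings, and the user facts are made up):

```python
# Toy sketch of conversation-memory retrieval: pick the stored snippets
# that share the most words with the new message, then build the prompt.

def retrieve(history, message, k=2):
    """Return the k snippets with the largest word overlap with message."""
    words = set(message.lower().split())
    scored = sorted(history, key=lambda h: -len(words & set(h.lower().split())))
    return scored[:k]

history = [
    "User is training for a marathon in October.",
    "User prefers Python over JavaScript.",
    "User is renovating their kitchen.",
]

message = "How should I adjust my marathon training this week?"
context = retrieve(history, message)

# The retrieved facts ride along with every request, so the model
# is "never starting from scratch."
prompt = "Known about the user:\n" + "\n".join(context) + "\n\nUser: " + message
```

The design point is that the model itself stays stateless; the sense of being known comes entirely from what the retrieval layer chooses to inject.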
My ChatGPT knows what my current goals are, what projects I am working on, and what I am generally thinking about. I was already popping in my AirPods and talking with ChatGPT voice mode about problems or briefly messaging it for advice on the fly throughout my day. But this takes it to a whole new level: it is never starting from scratch.
It may now fundamentally know more about who I am than any social media or search algorithm before it. And unlike those algorithms, it is information that can actually be put to good use for my sake. It is not worried about what kind of socks I might want to buy or what celebrities I may be crushing on, and it is not trying to make me feel inadequate. It is never trying to sell or market to me. ChatGPT is trying to help me actually accomplish something, whether it is a personal or professional goal.
After just one day, I have already been having far more fruitful conversations with ChatGPT. It feels like a sneak peek at what virtual assistants can really be.
Craig Saves the Day, Gives Engineers the Green Light to Use Off-the-Shelf LLMs

According to The Information, Apple is now letting engineers build products using third-party LLMs. This is a huge change that could seriously alter the course of Apple Intelligence. I have been a proponent of Apple shifting its focus from building somewhat mediocre closed first-party models to using powerful open-source third-party ones. The company is already so far behind its competitors that I would rather they focus on making really good products than really good models. They may be able to do both simultaneously, but I suspect Apple could ship spectacularly good products using off-the-shelf models sooner, while other teams build first-party models for the future in secret.
Now that Gemini can see your screen on Android and Copilot can see your screen on Windows, these competing assistants are truly on another level. They are assistants in the truest sense of the word now that they can visually understand your context while you are using your devices. They make Android and Windows devices more compelling alternatives too. I am hopeful that Apple will be able to catch up much faster by doing what they do best, which is productizing advanced technology. The only difference is that this time, at least for now, they will be productizing someone else's tech.
I imagine they will be largely looking at Mistral, Gemma, and Phi. Llama would be an obvious contender too if the company’s relationship with Meta was not so contentious. DeepSeek would be another option, though the optics of using it outside of China would likely not be ideal. We will just have to wait and see!
Finally, ChatGPT Can Accurately Generate Images with Text
OpenAI dropped an all-new image generation system for ChatGPT today and man, is it good. One of the biggest problems with artificially generated images has been the inability to render accurate text within them. There have historically been problems with inaccurate characters, misspellings, or even complete graphical errors. Today's update to image generation with GPT-4o fixes these. You can now generate charts, signs, logos, word marks, text graphics, pretty much anything you can think of with ease. It nails spelling, seems to set type well, and generally abides by your instructions. The inability to accurately generate text inside of images really made the tool a lot less useful, but now I suspect it will be used by way more people, way more often. Outside of text in images, ChatGPT can now generate overall better images that more accurately fit your prompts. You can even upload an image, ask for edits or a new style, and, boom, you will almost certainly get a great result that works within the structure of the original. It is all pretty darn great.
Gemini, Siri, and Default Assistants on iOS
I do not want to harp on the Siri situation, but I do have one suggestion that I think Apple should listen to. Because I suspect it is going to take quite some time for the company to get the new Siri out the door properly, they should do what was previously unthinkable. That is, open up iOS to third-party assistants. I do not say this lightly. I am one of those folks who does not want iOS to be torn open like Android, but I am willing to sign on when it makes good common sense. Right now it does.
ChatGPT, Gemini, Claude, DeepSeek, Perplexity, and Grok are all incredibly popular on iOS. They are in use by millions and millions of people. But some of them are not as powerful on iOS as they are on Android. ChatGPT, Gemini, and Perplexity can all be used as the default personal assistant on Android devices. If you do not like the one that came on your device, you can replace it. Apple has been adding more default app choices on iOS as of late, and now is the time to add another one. Gemini in particular has become increasingly powerful due to its deep integration with Google apps. If you use them, you can get way more out of Gemini than the alternatives. I desperately want to use Gemini as my default personal assistant on my iPhone, so much so that I have its voice mode assigned to my action button. Because of Gemini's integration with Google Keep, Google Tasks, and Google Calendar, I can use the Gemini assistant to create reminders, generate lists, take notes, and create and manage events. Even better than that, Gemini can dive into my inbox and find things for me better than any standard search tool. Those are all productivity features, but Gemini also has deep entertainment functionality thanks to YouTube. If you want to find a specific video, Gemini can get the job done. You can even use it with Google Maps. The app is simply a fantastic assistant that can replace many of the things I already do with Siri, while also being an LLM-powered, ultra-smart knowledge engine.
I have not even mentioned the amazing Gemini Live yet. Despite not integrating with your Google apps, it does let you have fluid conversations about world knowledge and things happening on the web. The app is dead simple, yet more powerful than Siri by almost every metric. While it currently cannot do some things like set a timer, it can do far more important things Siri can only dream of being capable of. If you use Google apps, I highly recommend assigning Gemini to your action button or to the back tap gesture on your iPhone. You will be blown away by how much more powerful it is once you begin taking advantage of the app integrations. Heck, if you use Google Home devices, it even controls those as well.
I do not use Gemini as my primary LLM generally; I prefer ChatGPT and Claude most of the time for research, coding, and writing. But Gemini has proved to be the best assistant of them all. So while we wait for Siri to get good, give us the ability to use custom assistants at the system level. It does not have to be available to everyone; heck, create a special intent that Google and these companies need to apply for if you want. Have requirements for privacy and design too if need be. But these apps with proper system-level overlays would be a massive improvement over the existing version of Siri. I do not want to have to launch the app every single time, and the basic ChatGPT integration in Siri is far from the best solution.
Google clearly knows they have a massive hit with Gemini right now, and you can look no further than their decision to heavily advertise on Apple-centric podcasts. In recent weeks they've sponsored The Talk Show, Upgrade, Connected, and ATP. One of the best ways to go after Siri is to capture Apple fans and observers.
I know Apple would be averse to this, namely because of the potential to lose a massive number of Siri users in the meantime while they get their act together. But it would do two things: it would mitigate some of the damage to customer relations while we wait for the new Siri, and it would set a new bar for the Siri team to exceed. There would not be any room for failure, and they need to be under that kind of pressure.
Very Good News: Mike Rockwell, No Fan of Siri, is Here to Save It
Mark Gurman has just reported that Apple is reshuffling the Siri team's leadership in light of the recent Apple Intelligence controversy. They are putting Mike Rockwell, the man behind Vision Pro, in charge of the effort and taking AI chief John Giannandrea off the project.
While you can lament the well-documented array of Vision Pro shortcomings, the software, visionOS, is an impeccable achievement. It was one of the most polished 1.0s the company has ever shipped, despite missing some features. It has a gorgeous design and is chock-full of functionality, plus it was largely stable out of the gate. Putting the guy who led the development of the most advanced piece of technology Apple has ever made in charge of Siri is the right move. I trust that he will productize whatever technology comes from the LLM group really well and ensure that they do not get out over their skis again. Maybe he will even scrap what they have and fast-track an entirely new solution. Ultimately, this is good news and it means a fresh perspective. More on why I am excited about the much-needed course correction below.
This news immediately brought to mind a report from early 2023 via The Information that detailed how the Vision Pro team wanted to develop its own voice control software, instead of using the existing version of Siri. 9to5Mac highlighted a particularly important and now very relevant series of lines from the story:
Inside Apple, Siri remains widely derided for its lack of functionality and improvements since Giannandrea took over, say multiple former Siri employees. For example, the team building Apple's mixed-reality headset, including its leader Mike Rockwell, has expressed disappointment in the demonstrations the Siri team created to showcase how the voice assistant could control the headset, according to two people familiar with the matter. At one point, Rockwell's team considered building alternative methods for controlling the device using voice commands, the people said (the headset team ultimately ditched that idea).
The frustration with Siri has finally boiled over and I am extremely optimistic now. If anyone at Apple can turn around Siri, it is someone who recognized the problem long ago but was not given the opportunity to fix it. Until now.
Can Apple Even Pull This Off?
I guess I should not be surprised, but I am still furious. Apple shared with John Gruber earlier today that they are delaying the launch of the flagship new personalized Siri features for Apple Intelligence that were unveiled last summer at WWDC. We have been waiting what already feels like forever for these new features, and it sounds like we are in for an even longer wait. Who knows if we will see them in iOS 19.0 or if we will have to wait until early 2026 for a point release, but regardless, this is a huge blow to the company's efforts and reputation in the space. Their rollout of Apple Intelligence has already been haphazard, from marketing it in Apple Stores and in ads before it was ready to ship, to dropping features in small separate releases over the past few months, to a mostly "meh" reaction.
This feels like a gut punch, namely because we were already not confident that these delayed features could actually rival other AI tools. Apple's Siri execution is not something that fills anyone with confidence. If these features arrive as announced months and months from now, ChatGPT, Gemini, Alexa+, and others could already have made even further leaps ahead. I cannot help but think that this is a combination of AirPower and MobileMe. It felt almost vaporware-like when revealed at WWDC, and it certainly seems like they are having a great deal of difficulty making it a reality. Sounds a lot like AirPower, right? If it does not arrive fully fleshed out and reliable, then they will have to start over like they did with MobileMe and iCloud. In fact, they might have to start over anyway before they even ship anything.
I firmly believe they need to shake things up internally and consider a major acquisition in this new friendlier corporate environment. Get rid of Siri entirely and create a whole new brand built on a new foundation. Who they should buy (or partner with) is an entirely different issue.
A Comet Approaches Planet Browser Company
If you have followed me for a bit, you know that I really love the work The Browser Company has done with Arc. They built a browser that is ridiculously powerful and infinitely customizable, and even extended their ambitions to mobile with one of the nicest AI search tools. But something happened last fall that began to make me question my commitment to Arc. The company decided to pivot and refocus the vast majority of their efforts on a new browser called Dia. Dia looks very different from Arc, the product that brought them recognition and a loyal user base. The new browser is all about AI and agentic use of your web tools. To be clear, it is a great idea and I think they'll make a great product. But it continues to baffle me and others that they have chosen to branch off instead of building upon their established product. The Browser Company is much smaller than many other AI startups, but it is scrappy. Really cool folks work there and they are awfully creative. That being said, the shift in focus clearly came with a massive risk, one that finally reared its head this week when Perplexity announced they too were building an AI web browser.
Perplexity is one of my favorite AI products. The search engine is tremendously useful for research. The company is growing, has raised tons of money from the biggest names, and moves at breakneck speed. They are not a competitor that is going to go easy on anyone. The new browser Perplexity is building is called Comet. One can surmise from their marketing materials that Comet is going to be very similar to Dia from a functionality standpoint. Agentic browsers are obviously going to be a big thing, but Perplexity has so many advantages in this fight that I struggle to see how The Browser Company succeeds with Dia, particularly after having abandoned their original claim to fame and burned some loyal users who wanted to see Arc continue to flourish. The Browser Company's biggest advantages were their taste and their willingness to be quirky. Those have sort of been thrown out the window in this scenario. They may be an advantage over Google or Mozilla, but Perplexity also has amazing design chops and has been trying all sorts of things. I would be remiss not to mention that The Browser Company's founding designer Nate Parrott, who seemed to be a major part of Arc's success, left to join Anthropic. We learned of this news on the same day Perplexity announced Comet. Together, that is a devastating combination.
One thing I have not mentioned is that there is every reason to believe OpenAI or Anthropic could launch their own web browsers at some point. They both offer tools that can control a web browser, in Operator and computer use respectively. Why not just own the whole stack if you can? While you can make the case that these two entering the space would also be major challenges for Comet, Perplexity is already big and much more established in the marketplace than The Browser Company. When Jensen Huang is specifically shouting you out on stage, you're in a winning position. If Perplexity can carve out space alongside ChatGPT, Claude, Gemini, Grok, and others as a dedicated search tool, they can probably do the same with web browsers. Another advantage Perplexity has is their own series of AI models custom-tailored to searching the web, and at the moment, they seem to be the best at it.
Google is going to eventually integrate these types of features into Chrome, which is of course the biggest browser. But there are niches to be won. I suspect that the niche The Browser Company was gunning for with Dia is going to be won by Comet. I would love for them to focus on Arc again and own the productivity browser space, but Silicon Valley's occasionally toxic growth mindset calls, I suppose.
A Pretty Siri Can't Hide the Ugly Truth
We’ve been waiting for Siri to get good for quite a long time. Before the emergence of ChatGPT, Siri sort of languished. Outside of the addition of Shortcuts in 2018, it has generally remained at the same level of intelligence since 2015, when proactive functionality was added. Last summer, Apple showed off an all-new Siri that is supposed to understand far more context, access content from your apps, and actually take action for you. We are still waiting for that Siri, unfortunately, and it is unclear when we are going to get it. At the moment, Siri may have a pretty new animation, but it is largely the same as it was before iOS 18, with ChatGPT tacked on as a fallback.
We know that Google, Microsoft, and OpenAI have taken things to the next level over the past few years while others have fallen behind. Note that I said “others,” because it hadn’t just been Apple. Much like Siri, Amazon’s Alexa hadn’t really advanced in a while. It was still useful for simple tasks, but it was not a proper AI assistant in the modern sense. Today, they caught up. Alexa+ is the next generation of the assistant and it looks fantastic. It’s powered by a variety of LLMs, runs on existing hardware, integrates with tons and tons of services, and is free for Prime subscribers. That is a potentially devastating combination for Alexa’s competitors. Assuming it works as well as it looks like it does, and I have no reason to doubt the poetic waxing of Panos Panay, it is going to be a formidable opponent to ChatGPT, Gemini, and others. So many people already pay for Prime and with this being included, it is going to be tough to swallow a separate AI purchase. In case you missed it, Alexa+ is not just for Echo devices, it will be an app on phones and a website on computers. It is going to be everywhere. Alexa+ also has an unusually beautiful design for an Amazon product, I imagine we have Panos to thank for that.
So here we are. Everyone is caught up, except for Apple. Siri may have a pretty glowing animation, but it is not even remotely the same kind of personal assistant that these others are. Even the version of Siri shown at WWDC last year doesn't appear to be quite as powerful as Alexa+. Who knows how good the App Intents-powered Siri will even be when it ships; after all, according to reports it has been pushed back and looks like an increasingly difficult endeavor. I obviously want Siri to be great. It desperately needs improvement, not just to compete but to make using an iPhone an even better experience. Other Apple Intelligence features like Genmoji and summarized notifications can be nice, but they're small potatoes in the grand scheme of things. It is no secret that Apple is not the best at services, and their historical ethos makes developing these new kinds of products increasingly difficult. This entirely new landscape presents challenges that are almost certainly already forcing Apple to reconsider the way they do things. I hope to see the first glimpse of the truly new Siri in the iOS 18.4 beta by April, but we'll just have to wait and see what happens. For now, the real takeaway is this: Amazon seems to have fixed Alexa before Apple could fix Siri.
Gemini Integration Could Be a Huge Upgrade Over ChatGPT in Siri
A few weeks ago Federico Viticci wrote a great piece on MacStories about how one of Gemini’s greatest strengths in the LLM competition is that it’s the only one with Google app extensions. On Friday, Apple released the first beta of iOS 18.4, and while many of the AI updates that we were expecting weren’t there, references to one new feature we weren’t quite anticipating yet surfaced in a review of the update’s code. As spotted by Aaron from MacRumors, Apple is actively working on bringing Gemini to iOS alongside the current ChatGPT integration. This is super interesting, not because there will be another frontier model available to use with Siri for world knowledge, but because if this version of Gemini includes extensions, it will be leaps and bounds more useful than the existing ChatGPT integration.
As Federico points out in his story, Gemini 2.0 combined with Google Maps, Search, Workspace, and YouTube is a formidable offering. The current ChatGPT integration in iOS allows you to sign in with your personal account to get even more out of Siri. While it doesn’t allow you to use custom GPTs or change the OpenAI model, it does save your conversations, lets you generate images with DALL-E, and gives you extended usage limits if you have a paid subscription. With all of that being the case for ChatGPT, one would have to imagine that Gemini in iOS would have some sort of additional integration with Google services. I hope that it’s the extensions that communicate with Maps, Search, Workspace, and YouTube.
I think it’s safe to say that Gemini in Siri will almost certainly be able to talk to Search and YouTube, since neither of those requires any sort of personal data. The question I have is whether being signed in with your Google account will give Siri with Gemini the ability to access your Gmail, Calendar, and Drive. Given that you can already connect your Google account to iOS for mail, contacts, calendars, and notes, I can’t see why they wouldn’t be able to integrate the other services as well. I’m not sure if Apple would prefer Google not offer this and instead allow their future Siri upgrade to directly surface information from your account once signed in, rather than needing to pass off requests to Gemini. But like I said, I think it’s safe to imagine that Google Search and YouTube should at least be accessible through Gemini in Siri. Maps is the outlier here, particularly given the touchy history between the two companies. You could argue the same about YouTube, but I think that’s too fundamental to Search, and Apple doesn’t have a competitor to it. While I want to lean towards Google Maps not being integrated, the data it offers is crucial to so many potential requests. Maybe Apple just defaults to standard Siri mapping results unless you specifically ask Gemini. We’ll have to wait and see.
At the end of the day, this is good news whether or not all of the extensions are fully available through Siri. Another model to choose from is already a big step for Apple to take, and Gemini is getting better and better. It will give ChatGPT a real run for its money when users decide which service to use. That is, of course, unless Apple allows Siri to work with both models at any time rather than forcing you to pick one in Settings. In which case, things get even more interesting. There are so many possibilities for how this could go down.
Pebble Was More Than a Smartwatch Pioneer: Exploring Core, the Original AI Gadget

With Pebble returning to the smartwatch fray, I thought it would be fun to revisit their only non-watch product: Pebble Core. While Core never ended up shipping, looking back it gave us a glimpse of something that’s been happening over the past year or two. AI-centric gadgets like the Humane AI Pin, Rabbit R1, Limitless Pendant, and others have become a major topic of conversation amongst the tech crowd. So has their usefulness, but that’s a separate debate. They’re all trying to be the next big thing, though none of them have quite stuck the landing. Each one has plenty of shortcomings, and you’ve no doubt heard about them already. While they generally position themselves as pioneers, all the way back in 2016 Pebble gave us the first piece of AI hardware in the form of Core. They may not have thought about it that way at the time, but if you go back and re-examine their marketing materials it’s clear that they would’ve ended up going down that path had they not been gobbled up by Fitbit.
Pebble Core was largely meant to be a Pebble without a display, targeted at runners and hackers. It was a small square that clipped to your clothes. It played music through Spotify, used GPS to track your runs, could take voice notes, and had a 3G radio for SOS calls. That doesn’t sound particularly crazy, but I left out one key feature: Alexa. It’s no secret that Alexa never reached its full potential and sort of languished over the years. That’s hopefully going to change later this month. Alexa’s shortcomings aside, the idea was that you could just talk to a screen-less device and ask it to do things. It sounds a lot like the Humane AI Pin without the laser. Pebble suggested that hackers could expand the device’s capabilities, with examples like ordering an Uber, turning lights on and off, tracking your pet, opening your garage door, unlocking your car, and so on. These are the same examples that companies generally use today. It was the “leave your phone at home and still be able to do essential tasks” device that companies are still chasing to this day.
I highly recommend going back and revisiting Pebble’s very last product announcement, both because it’s probably a bit of a glimpse of what could be in store for Pebble’s return, but also because it’s the earliest example I can think of for a modern AI gadget.
AI Searches Just Got Way Faster
If you’ve been using Perplexity or any other AI-based search engine, you know that it typically takes a few seconds to get the answer to your questions. That’s why I was so psyched about Perplexity’s announcement today that they had shipped an updated version of their in-house Sonar model, built on Llama and powered by new Cerebras infrastructure. One of the reasons that Perplexity is so great is that they use specialized AI models tuned specifically for search. Sonar is now much faster, while still providing great search results. I highly recommend you check out their blog post about the changes that they’ve made and give the updated search tool a try. I was pretty blown away, particularly compared to ChatGPT Search. In my tests, Perplexity with the new Sonar model was three to five times faster than ChatGPT Search. Typically a search through Perplexity would surface links instantly, with the output generated in about two seconds. When I would run the same searches through ChatGPT Search, it would typically take six to ten seconds to get the entire result. I also vastly prefer the output from Perplexity; the tuning they’ve done seems to make a big difference both in the substance and the sources used. What they’ve accomplished is really astonishing.
Operator May Be the Future, But It’s Still Too Early
A little over a week ago, I reluctantly paid $200 to try out OpenAI’s new Operator web app, designed to allow you to automate complex tasks. The tool, a new website included in the ChatGPT Pro subscription, lets you command a virtual machine to go out and complete requests via a browser. It’s similar to what Rabbit created with the LAM feature for their R1 gadget, if you’re familiar with that.
Upon opening Operator, you’ll see a range of suggestions like trip planning, food delivery, and restaurant booking. Those are all compelling things to automate, but they’re not necessarily things that average people do constantly. The fact that OpenAI sees these as Operator’s strengths at the moment might in part explain why it’s gated to the $200/month subscription. While I can’t exactly get myself to aggressively integrate Operator into my workflow, particularly because I am not comfortable signing into accounts on remote virtual machines, I can see how it is the future. I would love to send my Operator out to clean up my inbox, beautify presentations, organize paywalled stories across sites, post to all my social media accounts at once, clean up storage space on my machines, create watchlists on streaming platforms, and so on. The possibilities are endless, but the reality is that we need Operator to run locally, or a formal API that securely interfaces with these services. I do not want a remote browser controlled by someone else to have carte blanche access to my most important accounts. Without granting Operator those credentials, it isn’t particularly useful.
There’s another piece of the puzzle worth touching on, which is that Operator is just not quite necessary yet for me personally. I quickly realized that I just don’t need it, despite how amazing it is to play with. Even some of the tasks I described above as use cases that make sense for me personally are things I often enjoy doing on my own. It also happens to just not be quite ready. While playing with it, Operator would occasionally get lost or confused when browsing certain sites. OpenAI has acknowledged this is the case, but I still fear it doing something it shouldn’t. We’ll see how it evolves over the next few months. Ultimately, once Operator is part of a standard ChatGPT Plus subscription, it may be more palatable. But for now, you really don’t need to spend the money to try it.
Perplexity Pro Now Has R1, o3-mini, and Just About Everything Else
I’ve used nearly all of the premium AI subscription services, but none of them come close to offering the value that Perplexity Pro does. Despite the company’s contentious relationship with publishers, I’ve continued to believe it is one of the best-positioned startups in the space. After the emergence of DeepSeek, my lack of usage of Sora or Operator, and the realization that my primary use case is almost always search, I moved on from services offered directly by the LLM providers.
Perplexity is particularly appealing because of how quickly the company moves to integrate new models and features. I was floored by how quickly they had DeepSeek R1 up and running both on the site and in their apps. No one capitalized better on the DeepSeek moment in the US than they did. Fortunately, this wasn’t a one-off. When OpenAI surprised us all with o3-mini on Friday, I was anxious to play with it. Of course, I could do so because my ChatGPT subscription was still active. But it took only a day for Perplexity to have o3-mini integrated into their reasoning system right alongside DeepSeek R1. That means that Perplexity now has every major model (sans Llama, though that might be a benefit given how inescapable it is inside of Meta apps already) built into their Pro subscription. Users get 500 reasoning queries a day using R1 and o3-mini, safely hosted in the US, coupled with the best web search offering there is.
That would be enough, but recall that Perplexity Pro already had several advanced models built in. Your subscription gets you access to Claude 3.5 Sonnet, GPT-4o, and Grok 2. Plus, you get access to Perplexity’s own models. Instead of having to subscribe to Claude or ChatGPT, you can get them both in a single subscription. And it saves you from having to use the official Grok or DeepSeek apps. Again, all of this would be enough if you didn’t also get Playground v3, DALL-E 3, and FLUX.1 for image generation. It’s a mind-blowing amount of value for a single $20/month subscription, which is what it would otherwise cost you to subscribe to just one of these services.
Perplexity ties it all up and adds a nice little bow on top. The UI of the app and the website are great and continue to get better. They keep adding data widgets, switching to a writing or math focus is as easy as can be, and you can even build out dedicated pages for topics and edit them as you wish. It’s a ridiculously good offering, and if you have an Android device, the assistant makes it all that much better. I can’t recommend it enough. If you haven’t used Perplexity, try the free version and I think you’ll immediately be blown away by how good it is as a search engine. But if you want to get more out of these tools, it’s well worth trying the Pro subscription.
(Perplexity now has Gemini too as of 2/5!)
Perplexity Assistant is the Best Example of Apple Needing to Expand Extensibility on iOS
Last week Perplexity launched their latest product: Perplexity Assistant for Android. I’ve been a big fan of Perplexity for a long time. Despite its often contentious relationship with publishers, they make a spectacularly good tool. I love them, if only for the fact that they’re the first search engine to give Google a run for its money in years. But this isn’t about the search engine game or the AI search race. Perplexity Assistant is a replacement for Google Assistant and Gemini on Android phones. It can be summoned from anywhere, understand screen context, answer complex questions, and do normal personal assistant things like set reminders. I’ve been using it on my Pixel 9 and have been extremely impressed. It’s created an even starker contrast with the experience I have on my iPhone. Just the other day, John Gruber wrote about how the latest incarnation of Siri can often be worse than the previous non-AI-infused version. Despite Siri’s new AI features, I still mostly use it for reminders, alarms, and calendar events. I almost never use it for complex questions or even for the ChatGPT integration; I prefer to just use the native app. Perplexity Assistant on Android is so good, both functionally and visually. It’s often better than Gemini, and it is leaps and bounds ahead of Siri. It is, in my opinion, the first example of how much we could benefit from Apple opening up more parts of iOS. I continue to believe that Apple should not wholesale open up the operating system to work like Android. But in this new day and age of artificial intelligence, it would be a great reward to users to allow us to use other assistants. And I know what you’re thinking: wouldn’t this just highlight how bad Siri can be? Sure, but I think it would also further light a fire under Apple’s butt to fix Siri. Real competition to drive improvements. They can’t coast.
DeepSeek Arrives at an Awkward Time, But It’s Still Amazing
If you haven’t tried DeepSeek yet, go do it right now. The app, which is a Chinese-owned AI tool designed to compete with the likes of ChatGPT, Gemini, Llama, and Grok, has rocketed to the top of the App Store charts recently, and for good reason. DeepSeek is special, not for its user interface or general experience, but because it shows you how it “thinks.” The company trained its model differently; frankly, I don’t fully understand how, but I trust that they did so in a way that makes it better than many other open-source models. While DeepSeek is rising to prominence at an awkward time (look no further than the TikTok divestment situation), it is well worth a try. At this point, I am still using ChatGPT as my primary AI tool. But being able to watch DeepSeek think through a request and see exactly how it got to its output is extraordinary. Oh, and the best part? It’s completely free, yet it rivals the best premium models from other companies. If you don’t want to use DeepSeek directly for privacy or censorship-related reasons, you can also run it locally on your Mac using Ollama or on iOS using the latest TestFlight beta of Fullmoon from Mainframe.
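For the local route, a minimal sketch of what the Ollama workflow looks like on a Mac, assuming you want one of the smaller distilled R1 variants (the exact model tag and size here are examples; check the Ollama model library for what your machine can handle):

```shell
# Install Ollama via Homebrew (or grab the installer from ollama.com).
brew install ollama

# Download a distilled DeepSeek R1 model. The 7b tag is an example;
# larger variants exist if your Mac has enough memory.
ollama pull deepseek-r1:7b

# Chat with it; the model's reasoning appears in <think> tags
# before the final answer, mirroring the "watch it think" experience.
ollama run deepseek-r1:7b "Why is the sky blue?"
```

Everything runs on-device, so nothing you type leaves your Mac.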