AI
Can Apple Even Pull This Off?
I guess I should not be surprised, but I am still furious. Apple shared with John Gruber earlier today that they are delaying the launch of the flagship new personalized Siri features for Apple Intelligence that were unveiled last summer at WWDC. We have been waiting what already feels like forever for these new features, and it sounds like we are in for an even longer wait. Who knows if we will see these features in iOS 19.0 or if we will have to wait until early 2026 for a point release, but regardless, this is a huge blow to the company's efforts and reputation in the space. Their rollout of Apple Intelligence has already been haphazard, from marketing it in Apple Stores and in ads before it was ready to ship, to dropping features in small separate releases over the past few months, to a mostly "meh" reaction.
This feels like a gut punch, namely because we were already not confident that these delayed features could actually rival other AI tools. Apple's execution with Siri is not something that fills anyone with confidence. If these features arrive as they were announced, months and months from now, ChatGPT, Gemini, Alexa+, and the rest could already have made even further leaps ahead. I cannot help but think that this is a combination of AirPower and MobileMe. It felt almost vaporware-like when revealed at WWDC, and it certainly seems like they are having a great deal of difficulty making it a reality. Sounds a lot like AirPower, right? If it does not arrive fully fleshed out and reliable, then they will have to start over, like they did when MobileMe gave way to iCloud. In fact, they might have to start over anyway before they even ship anything.
I firmly believe they need to shake things up internally and consider a major acquisition in this new friendlier corporate environment. Get rid of Siri entirely and create a whole new brand built on a new foundation. Who they should buy (or partner with) is an entirely different issue.
A Comet Approaches Planet Browser Company
If you have followed me for a bit, you know that I really love the work that The Browser Company has done with Arc. They built a browser that is ridiculously powerful, infinitely customizable, and even extended their ambitions to mobile with one of the nicest AI search tools. But something happened last fall that began to make me question my commitment to Arc. The company decided to pivot and refocus the vast majority of their efforts on a new browser called Dia. Dia looks very different from Arc, the product that has brought them recognition and a loyal user base. The new browser is all about AI and agentic use of your web tools. To be clear, it is a great idea and I think they’ll make a great product. But it continues to baffle me and others that they have chosen to branch off instead of building upon their established product. The Browser Company is much smaller than many other AI startups, but it is scrappy. Really cool folks work there and they are awfully creative. That being said, the shift in focus clearly came with a massive risk. One that finally reared its head this week when Perplexity announced they too were building an AI web browser.
Perplexity is one of my favorite AI products. The search engine is tremendously useful for research. The company is growing, has raised tons of money from the biggest names, and moves at breakneck speed. They are not a competitor that is going to go easy on anyone. The new browser that Perplexity is building is called Comet. One can surmise from their marketing materials that Comet is going to be very similar to Dia from a functionality standpoint. Agentic browsers are obviously going to be a big thing, but Perplexity has so many advantages in this fight that I struggle to see how The Browser Company succeeds with Dia, particularly after having abandoned their original claim to fame and burned some loyal users who wanted to see Arc continue to flourish. The Browser Company's biggest advantages were their taste and their willingness to be quirky, and those have sort of been thrown out the window in this scenario. They may still be an advantage over Google or Mozilla, but Perplexity also has amazing design chops and has been trying all sorts of things. I can't forget to mention that The Browser Company's founding designer Nate Parrott, who has seemed to be a major part of Arc's success, left to join Anthropic. We learned of that news on the same day that Perplexity announced Comet. Together, those two developments are a devastating combination.
One thing I have not mentioned is that there is every reason to believe that OpenAI or Anthropic could launch their own web browsers at some point. They both already offer tools that can control a web browser, in Operator and Computer Use respectively. Why not just own the whole stack if you can? While you can make the case that these two entering the space would also be a major challenge for Comet, Perplexity is already big and much more established in the marketplace than The Browser Company. When Jensen Huang is specifically shouting you out on stage, you're in a winning position. If Perplexity can carve out space alongside ChatGPT, Claude, Gemini, Grok, and others as a dedicated search tool, they can probably do the same with web browsers. Another advantage Perplexity has is their own series of AI models that are custom-tailored to searching the web, and at the moment they seem to be the best at it.
Google is going to eventually integrate these types of features into Chrome, which is of course the biggest browser. But there are niches to be won. I suspect that the niche The Browser Company was gunning for with Dia is going to be won by Comet. I would love for them to focus on Arc again and own the productivity browser space, but Silicon Valley's occasionally toxic growth mindset calls, I suppose.
A Pretty Siri Can't Hide the Ugly Truth
We've been waiting for Siri to get good for quite a long time. Before the emergence of ChatGPT, Siri sort of languished. Outside of the addition of Shortcuts in 2018, it has generally remained at the same level of intelligence since 2015, when they added proactive functionality. Last summer, Apple showed off an all-new Siri that is supposed to understand far more context, be able to access content from your apps, and actually take action for you. We are still waiting for that Siri, unfortunately, and it is unclear when we are going to get it. At the moment, Siri may have a pretty new animation, but it is largely the same as it was before iOS 18, with ChatGPT tacked on as a fallback.
We know that Google, Microsoft, and OpenAI have taken things to the next level over the past few years while others have fallen behind. Note that I said "others," because it hasn't just been Apple. Much like Siri, Amazon's Alexa hadn't really advanced in a while. It was still useful for simple tasks, but it was not a proper AI assistant in the modern sense. Today, they caught up. Alexa+ is the next generation of the assistant and it looks fantastic. It's powered by a variety of LLMs, runs on existing hardware, integrates with tons and tons of services, and is free for Prime subscribers. That is a potentially devastating combination for Alexa's competitors. Assuming it works as well as it looks like it does, and I have no reason to doubt Panos Panay's poetic waxing, it is going to be a formidable opponent to ChatGPT, Gemini, and others. So many people already pay for Prime, and with Alexa+ included, a separate AI purchase is going to be a tough sell. In case you missed it, Alexa+ is not just for Echo devices; it will be an app on phones and a website on computers. It is going to be everywhere. Alexa+ also has an unusually beautiful design for an Amazon product; I imagine we have Panos to thank for that.
So here we are. Everyone is caught up, except for Apple. Siri may have a pretty glowing animation, but it is not even remotely the same kind of personal assistant that these others are. Even the version of Siri shown at WWDC last year doesn't appear to be quite as powerful as Alexa+. Who knows how good the App Intents-powered Siri will even be when it finally ships; after all, according to reports, it has been pushed back and looks like an increasingly difficult endeavor. I obviously want Siri to be great. It desperately needs improvement, not just to compete but to make using an iPhone an even better experience. Other Apple Intelligence features like Genmoji and summarized notifications can be nice, but they're small potatoes in the grand scheme of things. It is no secret that Apple is not the best at services, and their historical ethos makes developing these new kinds of products increasingly difficult. This entirely new landscape presents challenges that are almost certainly already forcing Apple to reconsider the way that they do things. I hope to see the first glimpse of the truly new Siri in the iOS 18.4 beta by April, but we'll just have to wait and see what happens. For now, the real takeaway here is this: Amazon seems to have fixed Alexa before Apple could fix Siri.
Gemini Integration Could Be a Huge Upgrade Over ChatGPT in Siri
A few weeks ago, Federico Viticci wrote a great piece on MacStories about how one of Gemini's greatest strengths in the LLM competition is that it's the only one with Google app extensions. On Friday, Apple released the first beta of iOS 18.4, and while many of the AI updates that we were expecting weren't there, references to one new feature we weren't quite anticipating yet surfaced in a review of the update's code. As spotted by Aaron from MacRumors, Apple is actively working on bringing Gemini to iOS alongside the current ChatGPT integration. This is super interesting, not because there will be another frontier model available to use with Siri for world knowledge, but because, if this version of Gemini includes extensions, it will be leaps and bounds more useful than the existing ChatGPT integration.
As Federico points out in his story, Gemini 2.0 combined with Google Maps, Search, Workspace, and YouTube is a formidable offering. The current ChatGPT integration in iOS allows you to sign in with your personal account to get even more out of Siri. While it doesn't let you use custom GPTs or change the OpenAI model, it does save your conversations, lets you generate images with DALL-E, and gives you extended usage limits if you have a paid subscription. With all of that being the case for ChatGPT, one would have to imagine that Gemini on iOS would come with some sort of additional integration with Google services. I hope that it's the extensions that communicate with Maps, Search, Workspace, and YouTube.
I think it's safe to say that Gemini in Siri will almost certainly be able to talk to Search and YouTube, since neither of those requires any sort of personal data. The question I have is whether being signed in with your Google account will give Siri with Gemini the ability to access your Gmail, Calendar, and Drive. Given that you can already connect your Google account to iOS for mail, contacts, calendars, and notes, I can't see why they wouldn't be able to integrate the other services as well. I'm not sure if Apple would prefer Google not offer this and instead allow their future Siri upgrade to directly surface information from your account once signed in, rather than needing to pass off requests to Gemini. But like I said, I think it's safe to imagine that Google Search and YouTube should at least be accessible through Gemini in Siri. Maps is the outlier here, particularly given the touchy history between the two companies. You could argue the same about YouTube, but I think it's too fundamental to search, and Apple doesn't have a competitor to it. While I want to lean towards Google Maps not being integrated, the data it offers is crucial to so many potential requests. Maybe Apple just defaults to standard Siri mapping results unless you specifically ask Gemini. We'll have to wait and see.
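To make the handoff idea a bit more concrete, here is a minimal sketch of what routing a world-knowledge question to Gemini looks like from a developer's perspective, using Google's google-generativeai Python SDK. To be clear, this is purely illustrative: the fallback function, the model name, and the routing idea are my own assumptions, not anything Apple or Google has demonstrated, and it makes no attempt to reproduce the extensions layer that would make the integration truly compelling.

```python
# Purely illustrative: a hypothetical "world knowledge" handoff to Gemini.
# This is not Apple's integration and not Google's extensions; just the plain API.
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # assumption: you have an API key

# Model name is an assumption; use whichever Gemini model you have access to.
model = genai.GenerativeModel("gemini-1.5-flash")

def answer_world_knowledge(question: str) -> str:
    """Hypothetical fallback: send a question the assistant can't answer itself."""
    response = model.generate_content(question)
    return response.text

if __name__ == "__main__":
    print(answer_world_knowledge("Who won the most recent Formula 1 championship?"))
```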
At the end of the day, this is good news whether or not all of the extensions are fully available through Siri. Another model to choose from is already a big step for Apple to take, and Gemini is getting better and better. It will give ChatGPT a real run for its money when users decide which service to use. That is, of course, unless Apple allows Siri to work with both models at any time rather than forcing you to pick one in Settings. In which case, things get even more interesting. There are so many possibilities for how this could go down.
Pebble Was More Than a Smartwatch Pioneer: Exploring Core, the Original AI Gadget

With Pebble returning to the smartwatch fray, I thought it would be fun to revisit their only non-watch product: Pebble Core. While Core never ended up shipping, looking back it gave us a glimpse of something that's been happening over the past year or two. AI-centric gadgets like the Humane AI Pin, Rabbit R1, Limitless Pendant, and others have become a major topic of conversation amongst the tech crowd. So has their usefulness, but that's a separate debate. They're all trying to be the next big thing, though none of them have quite stuck the landing. Each one has plenty of shortcomings and you've no doubt heard about them already. While they generally position themselves as pioneers, all the way back in 2016 Pebble gave us the first piece of AI hardware in the form of Core. They may not have thought about it that way at the time, but if you go back and re-examine their marketing materials it's clear that they would've ended up going down that path had they not been gobbled up by Fitbit.
Pebble Core was largely meant to be a Pebble without a display, targeted at runners and hackers. It was a little square that clipped to your clothes. It played music through Spotify, used GPS to track your runs, could take voice notes, and had a 3G radio in it for SOS calls. That doesn't sound particularly crazy, but I left out one key feature: Alexa. It's no secret that Alexa never reached its full potential and sort of languished over the years. That's hopefully going to change later this month. Alexa's shortcomings aside, the idea was that you could just talk to a screen-less device and ask it to do things. It sounds a lot like the Humane AI Pin without the laser. Pebble suggested that hackers could expand the device's capabilities with examples like ordering an Uber, turning lights on and off, tracking your pet, opening your garage door, unlocking your car, and so on. These are the same examples that companies generally use today. It was the "leave your phone at home and still be able to do essential tasks" device that companies are still chasing to this day.
I highly recommend going back and revisiting Pebble's very last product announcement, both because it's probably a bit of a glimpse of what could be in store for Pebble's return and because it's the earliest example I can think of for a modern AI gadget.
AI Searches Just Got Way Faster
If you've been using Perplexity or any other AI-based search engine, you know that it typically takes a few seconds to get an answer to your questions. That's why I was so psyched about Perplexity's announcement today that they had shipped an updated version of their in-house Sonar model, built on Llama and powered by new Cerebras infrastructure. One of the reasons that Perplexity is so great is that they use specialized AI models tuned specifically for search. Sonar is much faster while still providing great search results. I highly recommend you check out their blog post about the changes they've made and give the updated search tool a try. I was pretty blown away, particularly compared to ChatGPT Search. In my tests, Perplexity with the new Sonar model was three to five times faster than ChatGPT Search. Typically, a search through Perplexity would surface links instantly, with the full output generated in about two seconds. When I ran the same searches through ChatGPT Search, it would typically take six to ten seconds to get the entire result. I also vastly prefer the output from Perplexity; the tuning they've done seems to make a big difference both in the substance and the sources used. What they've accomplished is really astonishing.
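If you want a rough feel for Sonar's speed yourself, outside of the Perplexity apps, the public API is the easiest way. Here is a minimal sketch assuming Perplexity's OpenAI-compatible endpoint at api.perplexity.ai and a current Sonar model name; both are worth verifying against their API docs, since model names have changed over time.

```python
# Rough, unscientific latency check against Perplexity's API.
# Assumes the OpenAI-compatible endpoint and a Sonar model name from their docs;
# double-check both, since naming has changed over time.
# pip install openai
import time

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",
    base_url="https://api.perplexity.ai",
)

start = time.perf_counter()
response = client.chat.completions.create(
    model="sonar",  # assumption: the current default search model name
    messages=[{"role": "user", "content": "Summarize today's top AI news with sources."}],
)
elapsed = time.perf_counter() - start

print(f"Answer in {elapsed:.1f}s")
print(response.choices[0].message.content)
```

Running the same prompt against another search-enabled endpoint is about as close as you can get to the side-by-side comparison described above.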
Operator May Be the Future, But It's Still Too Early
A little over a week ago, I reluctantly paid $200 to try out OpenAI's new Operator web app, which is designed to let you automate complex tasks. The tool, a new website included in the ChatGPT Pro subscription offering, lets you command a virtual machine to go out and complete requests via a browser. It's similar to what Rabbit created with their LAM feature for the R1 gadget, if you're familiar with that.
Upon opening Operator, you'll see a range of suggestions like trip planning, food delivery, and restaurant booking. Those are all compelling things to automate, but they're not necessarily things that average people do constantly. The fact that OpenAI sees these as Operator's strengths at the moment might help explain, in part, why it's gated to the $200/month subscription. While I can't exactly get myself to aggressively integrate Operator into my workflow, particularly because I am not comfortable signing into accounts on remote virtual machines, I can see how it is the future. I would love to send my Operator out to clean up my inbox, to beautify presentations, to organize paywalled stories across sites, to post on all social media sites for me at once, to clean up storage space on my machines, to create watchlists on streaming platforms, and so on. The possibilities are endless, but the reality is that we need Operator to run locally, or a formal API that securely interfaces with these services. I do not want a remote browser controlled by someone else to have carte blanche access to my most important accounts. Without granting Operator those credentials, it isn't particularly useful.
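To give a sense of what "commanding a browser" actually involves, here is a deliberately simplified, local sketch using Playwright. It is not Operator and does not call OpenAI at all; the step where a model would decide what to click is replaced by a hard-coded action, and it runs on your own machine rather than a remote virtual machine, which is exactly the property I wish Operator had.

```python
# A simplified, local stand-in for the agentic-browser loop: open a page,
# capture what an "agent" would observe, then perform one action.
# This is not Operator; the model-driven decision step is hard-coded here.
# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright


def run_one_step(url: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)

        # A real agent would screenshot the page, send it to a model, and get
        # back the next click or keystroke. We skip the model and hard-code it.
        page.screenshot(path="observation.png")
        page.click("a")  # example.com exposes a single "More information..." link
        page.wait_for_load_state("load")

        print("Landed on:", page.title())
        browser.close()


if __name__ == "__main__":
    run_one_step("https://example.com")
```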
There's another piece of the puzzle worth touching on, which is that Operator is just not quite necessary yet for me personally. I quickly realized that I just don't need it, despite how amazing it is to play with. Even some of the tasks I described above as use cases that make sense for me are things I often enjoy doing on my own. It also happens to just not be quite ready. While playing with it, Operator would occasionally get lost or confused when browsing certain sites. OpenAI has acknowledged this is the case, but I still fear it doing something it shouldn't. We'll see how it evolves over the next few months. Ultimately, once Operator is part of a standard ChatGPT Plus subscription, it may be more palatable. But for now, you really don't need to spend the money to try it.
Perplexity Pro Now Has R1, o3-mini, and Just About Everything Else
I've used nearly all of the premium AI subscription services, but none of them come close to offering the value that Perplexity Pro does. Despite the company's contentious relationship with publishers, I've continued to believe it is one of the best-positioned startups in the space. After the emergence of DeepSeek, my lack of use of Sora and Operator, and the realization that my primary use case is almost always search, I moved on from the services offered directly by the LLM providers.
Perplexity is particularly appealing because of how quickly the company moves to integrate new models and features. I was floored by how quickly they had DeepSeek R1 up and running, both on the site and in their apps. No one capitalized better on the DeepSeek moment in the US than they did. Fortunately, this wasn't a one-off. When OpenAI surprised us all with o3-mini on Friday, I was anxious to play with it. Of course, I could do so because my ChatGPT subscription was still active. But it took Perplexity only a day to have o3-mini integrated into their reasoning system, right alongside DeepSeek R1. That means Perplexity now has every major model (sans Llama, though that might be a benefit given how inescapable it already is inside of Meta's apps) built into their Pro subscription. Users get 500 reasoning queries a day using R1 and o3-mini, safely hosted in the US, coupled with the best web search offering there is.
That would be enough, but recall that Perplexity Pro already had several advanced models built in. Your subscription gets you access to Claude 3.5 Sonnet, GPT-4o, and Grok 2. Plus, you get access to Perplexity's own models. Instead of having to subscribe to Claude or ChatGPT, you can get them both in a single subscription. And it spares you from having to use the official Grok or DeepSeek offerings. Again, all of this would be enough if you didn't also get Playground v3, DALL-E 3, and FLUX.1 for image generation. It's a mind-blowing amount of value for a single $20/month subscription, which is roughly what just one of these services would cost on its own.
Perplexity ties it all up and adds a nice little bow on top. The UI of the app and the website are great and continue to get better. They keep adding data widgets, switching to a writing or math focus is as easy as can be, and you can even build out dedicated pages for topics and edit them as you wish. It's a ridiculously good offering, and if you have an Android device, the assistant just makes it all that much better. I can't recommend it enough. If you haven't used Perplexity, try the free version and I think you'll immediately be blown away by how good it is as a search engine. But if you want to get more out of these tools, it's well worth trying the Pro subscription.
(Perplexity now has Gemini too as of 2/5!)
Perplexity Assistant is the Best Example of Apple Needing to Expand Extensibility on iOS
Last week Perplexity launched their latest product: Perplexity Assistant for Android. I've been a big fan of Perplexity for a long time. Despite its often contentious relationship with publishers, they make a spectacularly good tool. I love them, if only for the fact that they're the first search engine to give Google a run for its money in years. But this isn't about the search engine game or the AI search race. Perplexity Assistant is a replacement for Google Assistant and Gemini on Android phones. It can be summoned from anywhere, understand screen context, answer complex questions, and do normal personal assistant things like set reminders. I've been using it on my Pixel 9 and have been extremely impressed. It's created an even starker contrast to the experience I have on my iPhone. Just the other day, John Gruber wrote about how the latest incarnation of Siri can often be worse than the previous non-AI-infused version. Despite Siri's new AI features, I still mostly use it for reminders, alarms, and calendar events. I almost never use it for complex questions or even for the ChatGPT integration; I prefer to just use the native app. Perplexity Assistant on Android is so good, both functionally and visually. It's often better than Gemini and it is leaps and bounds ahead of Siri. It is the first example, in my opinion, of how much we could benefit from Apple opening up more parts of iOS. I continue to believe that Apple should not wholesale open up the operating system to work like Android. But in this new day and age of artificial intelligence, it would be a great reward to users to allow us to use other assistants. And I know what you're thinking: wouldn't this just highlight how bad Siri can be? Sure, but I think it would also further light a fire under Apple's butt to fix Siri. Real competition to drive improvements. They can't coast.
DeepSeek Arrives at an Awkward Time, But It's Still Amazing
If you haven't tried DeepSeek yet, go do it right now. The app, which is a Chinese-owned AI tool designed to compete with the likes of ChatGPT, Gemini, Llama, and Grok, has rocketed to the top of the App Store charts recently, and for good reason. DeepSeek is special, not for its user interface or general experience, but because it shows you how it "thinks." The company trained its model differently; frankly, I don't fully understand how, but I trust that they did so in a way that makes it better than many other open-source models. While DeepSeek is rising to prominence at an awkward time (look no further than the TikTok divestment situation), it is well worth a try. At this point, I am still using ChatGPT as my primary AI tool. But being able to watch DeepSeek think through a request and see exactly how it got to its output is extraordinary. Oh, and the best part? It's completely free, yet it rivals the best premium models from other companies. If you don't want to use DeepSeek directly for privacy or censorship-related reasons, you can also run it locally on your Mac using Ollama or on iOS using the latest TestFlight beta of Fullmoon from Mainframe.
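For the local route, the Ollama path is only a few lines once the app is installed. Here is a minimal sketch, assuming you have Ollama running and have already pulled one of the distilled DeepSeek R1 models; the exact model tag is worth confirming with `ollama list`.

```python
# Minimal sketch of chatting with a distilled DeepSeek R1 model locally via Ollama.
# Assumes the Ollama app is running and the model has already been pulled, e.g.:
#   ollama pull deepseek-r1
# pip install ollama
import ollama

response = ollama.chat(
    model="deepseek-r1",  # assumption: the tag of whichever distilled R1 you pulled
    messages=[{"role": "user", "content": "Walk me through how you reason about a question."}],
)

# R1-style models typically emit their visible "thinking" inside <think> tags
# before the final answer, so you can see the reasoning described above.
print(response["message"]["content"])
```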