If you’ve been using Perplexity or any other AI-based search engine, you know that it typically takes a few seconds to get an answer to your question. That’s why I was so psyched about Perplexity’s announcement today that they’ve shipped an updated version of their in-house Sonar model, built on Llama and powered by new Cerebras infrastructure. One of the reasons Perplexity is so great is that they use specialized AI models tuned specifically for search, and Sonar is much faster while still providing great search results. I highly recommend checking out their blog post about the changes and giving the updated search tool a try.

I was pretty blown away, particularly compared to ChatGPT Search. In my tests, Perplexity with the new Sonar model was three to five times faster than ChatGPT Search: a Perplexity search would typically surface links instantly and finish generating its answer in about two seconds, while the same searches through ChatGPT Search typically took six to ten seconds to produce the full result. I also vastly prefer Perplexity’s output; the tuning they’ve done seems to make a big difference in both the substance and the sources used. What they’ve accomplished is really astonishing.