At Mx3AI in Hoxton on Thursday, December 7th, select industry decision-makers will come together to discuss the latest AI developments. One of the speakers is Lucky Gunasekara, Co-founder and CEO of Miso, whose company is making serious inroads into publisher search. The goal? To keep readers highly engaged within a publisher’s ecosystem.
Like many leading figures in AI, Miso's Lucky Gunasekara comes armed with a formidable academic background – he met Miso's CTO and Co-Founder Andy Hsieh when they were both working at Cornell Tech in Manhattan, studying small-data algorithms alongside Professor Deborah Estrin.
Whilst it's no coincidence that Miso is now headquartered in Cambridge, Mass., next to MIT, the company's mission is firmly commercial: in essence, making personalisation, search, recommendations and predictions work effectively for single websites at small scale. As Gunasekara points out, Google's approach to personalised search "doesn't scale down well to the real world of most normal websites".
In an interview prior to Mx3AI, where Gunasekara is speaking on 'Why the future of AI in media will be small, decentralised, open source and profitable', we asked him to explain more about his company's goal of helping publishers harness search and recommendations for themselves, without the aid of Silicon Valley.
Can you give us a little more about your company’s background?
We were founded in 2017 but didn’t start building Miso until 2019. Whilst I’m based in Boston, we’re very much a distributed team with colleagues in Los Angeles, Cupertino, Omaha, Dallas, and most of all, Taiwan, where the bulk of our engineering team is based. We’ve been really fortunate to be funded and advised by Bloomberg’s venture arm, and a set of really incredible operators, including the first investor in Pinterest and a Co-Founder of Meta.
What are your key solutions and product suite?
Our flagship product is Answers, a low/no-hallucination LLM-based search engine that provides what readers really want: answers and insights to the questions that are top of mind, sourced directly from the content on a publisher's site and nothing else.
We've built Answers so that it runs a fully private LLM, and we've recently worked out how to serve contextual ads within it. So if you're searching for the best Apple Watch for scuba diving, you'll get an affiliate link for the Apple Watch Ultra 2 alongside a description of its strengths. Or if you're researching kid-friendly vacation options in Oahu, the Expedia link for the Disney Aulani resort shows up next to commentary from a review.
Engagement with Answers has been 10-50x higher than traditional keyword search, which has been great validation of our early hunch that readers really do just want answers when they hit the search bar.
Answers is also generating a lot of real-time first-party data and insights that we're now integrating into data management platforms (DMPs) and customer data platforms (CDPs). With that interest-graph data and set of predictions, we're quickly powering content discovery and recommendations across articles, homepages and section pages.
We're achieving important ML and AI capabilities without needing a publisher to start with intensely well-organised data. We start with just their content, and log how readers search, browse and click on that content. That's it. That means we can democratise a genuinely interesting set of capabilities for any publisher or newsroom.
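The grounding approach described above – answering only from a publisher's own content – can be sketched as a simple retrieval-plus-prompt pipeline. Everything below is illustrative: the sample articles, the bag-of-words retriever and the prompt format are assumptions for the sketch, not Miso's actual (proprietary) implementation.

```python
# Minimal sketch of retrieval-grounded answering: rank a publisher's
# articles against the query, then build an LLM prompt restricted to them.
from collections import Counter

# Hypothetical publisher content, keyed by article id.
ARTICLES = {
    "watch-review": "The Apple Watch Ultra 2 is rated for scuba diving to 40m.",
    "oahu-guide": "The Aulani resort on Oahu is popular with families.",
}

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def retrieve(query, articles, k=1):
    """Rank articles by word overlap with the query; return the top k."""
    q = Counter(tokenize(query))
    scored = sorted(
        articles.items(),
        key=lambda kv: sum((q & Counter(tokenize(kv[1]))).values()),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query, articles):
    """Build a prompt that confines the LLM to retrieved publisher content."""
    sources = retrieve(query, articles)
    context = "\n".join(f"[{sid}] {text}" for sid, text in sources)
    return (
        "Answer using ONLY the sources below; cite the source id.\n"
        f"Sources:\n{context}\nQuestion: {query}"
    )

prompt = grounded_prompt("best watch for scuba diving", ARTICLES)
```

Restricting the prompt's context to retrieved site content is what keeps the answer citation-backed and low-hallucination: the model has nothing else to draw on.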
Our Discovery solution is also doing very well, with the homepage feeds and article recommendations we power with partners seeing almost 2x higher CTR. The AI engine I'm probably most excited about, though, is our contextual ads. Using an LLM to understand a reader's intent and context, and then showing a relevant commerce link to buy the product, flight or hotel of interest – it's what ads really should be. We're seeing 10-15x higher CTR with these contextual ads.
Can you provide an example of your tech at work?
Our Answers engine is being used on Macworld.com, where it has been dubbed Smart Answers. Macworld's publisher, Foundry, have been great to work with, and took a practical approach to experimenting with AI quickly and safely with us. The engine allows any reader to quickly ask questions about products, tech trends and rumours, and get verified, citation-backed Answers built not just on Macworld content but also on articles from its sibling sites Techadvisor, Techhive and PCWorld.
Macworld's first week of Answers drove more traffic than the previous year of keyword search. We've also been embedding questions in articles as a form of discovery, and these are seeing very high clickthrough rates, sometimes driving almost 80% of Answers traffic and early adoption.
We are now live on around 75 websites, including Macworld, America's Test Kitchen, O'Reilly Media, Outside Magazine, The News Lens, Edie.net, Oxygen, Utilityweek, Backpacker and Trailrunner.
What is your product roadmap for 2024?
We’re excited to upgrade to Answers 2.0, with an even more advanced reasoning engine and native affiliate ads that we’ll be directly sourcing and delivering via several new partners.
Another new product is Alerts, a service that lets readers subscribe to topics and deals surfaced by Answers. Think of it as an AI-powered briefings and alerts service, one that's interactive for the reader, who can even email it back with follow-up questions and requests.
The other is News Radio, where we're excited to see if text-to-speech can read articles back to readers through their headphones, turning a publisher into a sort of radio station you can switch on anytime – or even a daily personal pod every morning.
Beyond this, we're also going to expand our recommendations library so that there are more in-line contextual and even interactive content-discovery points in articles – ones that appear contextually at the points where readers are most at risk of tailing off in their reading.
To date, which applications of AI have inspired you the most within media?
For all the text-search work I do, I mainly play with Midjourney and Runway for fun. If I could go back and be 12 again, I'd be shooting my own Star Wars film in my backyard with these tools. But honestly, these tools also raise thought-provoking questions of ethics and legality, especially in how easy it now is to duplicate an artist's life's work. We're going to need to reconcile those two sides of the coin soon.
What are the chief concerns/fears media groups have with AI in general that you are seeing?
This is a Napster moment for the world of media. Generative AI is in many ways an existential threat on several levels. First, many information needs can now be served by ChatGPT and LLMs directly – and they've scraped that information from publishers without permission. The way LLMs are being made and delivered, regardless of what a tech company's lawyers say, is largely an act of theft.
The other concerning aspect of LLMs is that the economics of the entire open web could soon go through a hard reset. If Apple's new AI voice assistant gives you an answer directly, one sourced from a publisher, but with no citation or credit given, that's bad. Suddenly publishers could be stripped of the core traffic they use to drive ads, subscriptions and readership acquisition in general. This basically flips the table on the original search deal Google made with the web in its earliest days: I send you traffic, you monetise that traffic.
I think the media industry has a lot of power, but it's also moving too slowly. OpenAI, Anthropic, Microsoft and Google are looking to boil a frog here, and the industry is the frog. Leaders are acknowledging this existential risk, but they're taking their time over legal or collective action. It's concerning.
What are the key challenges media groups face when trying to implement AI across their operations?
Personally, I think the barriers to adopting AI couldn't be any lower at this point. Aimee Rinehart at the AP has put this well: if you can bake a cake, you can adopt generative AI.
With Answers, we really just need content, and now that we've launched the ability for any group to self-serve into their own Answers sandbox for free, I don't think there's any real friction to adoption. You literally just put in your URL, we do a snapshot crawl of your site for 1,000 or so articles, and voilà – here's your own private Answers sandbox. And now that we're integrating directly with WordPress, you don't even need an engineer to get going on the backend.
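A snapshot crawl of the kind described – start from a URL, follow same-site links, stop at roughly 1,000 pages – can be sketched as below. The breadth-first strategy, the page cap and the injectable `fetch` callable are assumptions for illustration; Miso's real crawler is not public.

```python
# Minimal sketch of a same-site snapshot crawl with a page limit.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collect href values from <a> tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def snapshot_crawl(start_url, fetch, limit=1000):
    """Breadth-first crawl of same-site pages, stopping at `limit`.

    `fetch(url)` returns the page HTML, or None on failure; injecting it
    keeps the sketch testable without network access.
    """
    site = urlparse(start_url).netloc
    queue, seen, pages = [start_url], {start_url}, {}
    while queue and len(pages) < limit:
        url = queue.pop(0)
        html = fetch(url)
        if html is None:
            continue
        pages[url] = html
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href)
            # Stay on the publisher's own site; skip external links.
            if urlparse(link).netloc == site and link not in seen:
                seen.add(link)
                queue.append(link)
    return pages

# Usage with a fake in-memory site standing in for a real fetcher.
SITE = {
    "https://example.com/": '<a href="/a">a</a><a href="/b">b</a>',
    "https://example.com/a": "<p>Article A</p>",
    "https://example.com/b": '<a href="https://other.com/x">x</a>',
}
pages = snapshot_crawl("https://example.com/", SITE.get, limit=1000)
```

Capping the crawl keeps the sandbox build fast and bounded, which is what makes a free self-serve trial practical.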
That said, experimenting and running A/B tests really needs to be part of a media group's culture in the first place. That's the biggest driver of AI adoption in my experience.
AI is like a gigantic wave right now: you want to surf it, not drown. And you really need dedicated product leaders and one or two developers to do that.
I think we're in for a very big and very exciting change. Digital AI assistants, at least something like the computer on Star Trek, are going to get here within the decade, and they're going to change a lot about how we live and work. Publishers are vital to that future, and should seize this opportunity quickly to steer and shape what AI is really going to be. They'll always be the key source of up-to-date, fact-checked information, so collectively they're in many ways more valuable than the AI and the models themselves.
Join us for Mx3AI in Hoxton on Thursday, December 7th where select industry decision-makers will come together to discuss the latest AI developments. Topics cover the big picture, magazine, B2B and news media, technology and monetisation, policy and regulation, the workplace, content management and journalism, and practical implementation.