How to build your own Perplexity for any dataset

Learnings from building “Ask Hacker Search”

Jonathan Unikowski (@jnnnthnn) • May 23rd, 2024

Last week, I released “Ask Hacker Search”, a question-answering engine that leverages Hacker News' comments. You ask it a question, and it provides an answer based on opinions expressed on Hacker News. As one of my friends puts it, it's “a distillation of the wisdom of the hacker news crowd”.

I was inspired to build Ask Hacker Search by Perplexity, which brands itself as an “answer engine” covering the entire web. While Perplexity offers the ability to focus your search on certain segments of the web, it lacks the ability to focus on a specific site, hence the impetus for Ask Hacker Search.

A demo of Ask Hacker Search

A screenshot of Perplexity and its focus feature

Perplexity-style experiences are becoming increasingly prevalent: Google recently expanded its Generative AI features in Search, OpenAI is actively iterating on their existing “web browsing” offering and is rumored to have plans to expand it into a search engine, and companies like Glean aim to provide similar capabilities for internal company data.

All this made me curious: What does it take to build something like this? How difficult would it be to create a similar tool for a specific dataset I care about?

Table of contents (estimated reading time: 12 minutes)

- How Perplexity works
- How I built Ask Hacker Search
- What parts were most interesting
- What’s next

How Perplexity works

First, let's understand what happens when a user makes a query on Perplexity [1]:

A conceptual overview of how Perplexity works

The links that feed into the generated summary are what allow the LLM to leverage data it might not have seen or retained from training, and they help avoid hallucinations. This makes for a really great user experience! If I have a factual question that Google doesn't immediately surface in a card, I might need to go to multiple pages, sometimes full of ads, each with distinct layouts, just to piece together the various components of available information. Through Perplexity, that work is done for me almost instantly by an LLM. Magic!

Where Perplexity tends to let me down more is when I want to understand the opinions of a specific group of people. Because Perplexity relies on Google's and Bing's indexes, it often picks SEO'd websites that either share banalities or just aim to sell you something. That is a natural side-effect of using mainstream search engines' indexes: these are not tailored to any specific use case. In turn, this provides a motivation for leveraging a dataset of your choice with a Perplexity-style UX: depending on your dataset, you might get far more useful results for certain queries!

Comparing answers about Supabase: Ask Hacker Search's answers for opinions on technical topics (left) are often more actionable than Perplexity's (right)

How I built Ask Hacker Search

Now that we know the main ingredients, how do we put them together? What follows is what I did for Hacker Search, and what you might consider doing for your own Perplexity for your dataset.

I should note that my objective was to create something useful, not something perfect. I gave myself a few days to build it, and therefore generally chose technologies and products I'm familiar with irrespective of whether they were the best tool for the task.

The index and content

The simplest way to build an index of content for Ask Hacker Search would have been to use SerpAPI to scrape Google's index in response to users' queries (using the site:news.ycombinator.com suffix). Armed with links, one could then visit each Hacker News page that comes up, scrape its contents, and provide those to an LLM for ranking.

However, I didn't want to take that approach. While in theory it might yield good results by leveraging Google's index, it would have involved a fair number of intermediary steps (searching Google via an API, getting URLs back as results, scraping those using something like ScrapingBee or Browserbase, parsing the results, etc.), and the steps involving HN would have required me to devise an approach to avoid getting rate limited (leaving aside the complexities inherent to the legal status of scraping those sites, and the potentially substantial load I could have put on a website I love). Luckily, there was another way: Hacker News offers an API that allows you to download raw data.

The Hacker News API repository on GitHub

Ingesting comments

Hacker News has more than 40 million posts. I opted to focus only on the past 3 years of content, which boiled down to 10 million entries. The HN API makes it possible to fetch items individually by querying incremental IDs. Ingesting those can be done easily with a task queue like GCP's Cloud Tasks and flexible compute such as Cloud Run. The total cost of task queueing and compute for 10 million comments was under $50, well within GCP's $300 trial credits.
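To make that concrete, here is a minimal sketch of an ingestion loop against the real HN Firebase endpoints (item/{id}.json and maxitem.json). The cutoff, the sequential loop, and the saveComment helper are illustrative stand-ins for the task-queue setup described above, not my production code.

```typescript
// Minimal sketch: walk Hacker News items backwards from the newest ID and keep
// recent comments. In practice you would fan these fetches out through a task
// queue (e.g. Cloud Tasks) rather than a single loop.
type HNItem = {
  id: number;
  type?: string;   // "comment", "story", ...
  text?: string;   // HTML body for comments
  time?: number;   // Unix timestamp
  deleted?: boolean;
  dead?: boolean;
};

const BASE = "https://hacker-news.firebaseio.com/v0";

async function fetchItem(id: number): Promise<HNItem | null> {
  const res = await fetch(`${BASE}/item/${id}.json`);
  return res.ok ? ((await res.json()) as HNItem | null) : null;
}

async function ingestRecentComments(sinceUnixSeconds: number) {
  const maxId: number = await (await fetch(`${BASE}/maxitem.json`)).json();
  for (let id = maxId; id > 0; id--) {
    const item = await fetchItem(id);
    if (!item || item.deleted || item.dead) continue;
    if (item.time && item.time < sinceUnixSeconds) break; // older than our cutoff
    if (item.type === "comment" && item.text) {
      await saveComment(item); // persist to your datastore of choice
    }
  }
}

// Hypothetical helper: wire this up to whatever storage you use.
async function saveComment(item: HNItem): Promise<void> {
  console.log(`would save comment ${item.id}`);
}
```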

With the content on hand, I now needed a way to search it through an index. Creating an index can get very involved, because you're not only trying to identify which segments of your content might be relevant to a user query, but also which specific ones are most relevant.

The dominant approach is a mix of lexical search and semantic search (see Ask Hacker Search answer on the topic). Given my time constraints, I decided to only use semantic search. Semantic search is particularly effective when trying to answer users' questions, because it enables retrieval based on the user's intent rather than the exact keywords they used. So, again in the spirit of picking the most straightforward path, I used OpenAI's embedding APIs.

This approach works pretty well! While testing manually, I found that it often sufficed to surface the most relevant results, and that adding a step where an LLM reranks the results by relevance was generally good enough. Since HN comments are typically quite short, embedding 10 million of them doesn't actually involve processing that many tokens, and the total cost was a few dozen dollars on the OpenAI front. I used Turbopuffer for storing and querying embeddings, as it is cheap, fast, and very scalable.

An overview of a simple ingestion service, also known as rocket science
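Sketched out, the embedding step looks roughly like the snippet below. The OpenAI embeddings call is real; the model choice is just one reasonable option, and upsertVectors is a hypothetical stand-in for whatever vector store you use (Turbopuffer in my case), since the exact client API varies by product.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Comments as produced by the ingestion step.
type CommentRow = { id: number; text: string };

async function embedBatch(comments: CommentRow[]): Promise<void> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small", // illustrative; pick the model that fits your budget
    input: comments.map((c) => c.text),
  });
  const vectors = res.data.map((d, i) => ({
    id: comments[i].id,
    values: d.embedding,
    text: comments[i].text,
  }));
  await upsertVectors(vectors);
}

// Hypothetical helper: wire this up to Turbopuffer, pgvector, or any other store.
async function upsertVectors(
  vectors: { id: number; values: number[]; text: string }[]
): Promise<void> {
  // e.g. POST to your vector store's upsert endpoint
}
```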

From here, we have an approximation of a reasonably good index: we can now go from a query to a sorted list of comments to compose an answer. The next step is to have an LLM do the heavy lifting for us.
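Roughly, the retrieval path looks like this: embed the question, pull the nearest comments, and optionally have an LLM rerank them. queryVectors is again a placeholder for the vector store query, and the model and prompt choices are illustrative.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

async function retrieveComments(question: string, k = 50) {
  const emb = await openai.embeddings.create({
    model: "text-embedding-3-small", // illustrative model choice
    input: question,
  });
  const candidates = await queryVectors(emb.data[0].embedding, k);

  // Optional LLM rerank: ask the model to pick the most relevant comments.
  const rerank = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You will receive a question and numbered comments. " +
          "Reply with the numbers of the comments most relevant to the question, best first, comma-separated.",
      },
      {
        role: "user",
        content:
          `Question: ${question}\n\n` +
          candidates.map((c, i) => `[${i}] ${c.text}`).join("\n\n"),
      },
    ],
  });
  const order = (rerank.choices[0].message.content ?? "")
    .match(/\d+/g)
    ?.map(Number) ?? [];
  return order.flatMap((i) => (candidates[i] ? [candidates[i]] : []));
}

// Hypothetical vector store query.
async function queryVectors(
  vector: number[],
  topK: number
): Promise<{ id: number; text: string }[]> {
  return []; // query Turbopuffer (or your store of choice) here
}
```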

LLM magic

I needed an LLM to take the raw comments as retrieved from our index and turn them into a response to the user's query. When picking an LLM, you nowadays have many options allowing you to trade off quality, price, and speed for the needs of your application. Since I wanted to move fast, I opted to use OpenAI's LLMs, largely because I'm familiar with the developer experience of working with them. I experimented with GPT-3.5, GPT-4, and, upon its release, GPT-4o. I ultimately used GPT-4o given its relatively low price, high speed, and good quality for this type of task.
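The generation step itself is essentially a single chat completion with the retrieved comments stuffed into the context window. The prompt below is illustrative rather than the exact one Ask Hacker Search uses.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Sketch of the answer-generation step: give the model the top-ranked comments
// and ask it to answer from them.
async function answerFromComments(
  question: string,
  comments: { id: number; text: string }[]
): Promise<string> {
  const context = comments
    .map((c) => `Comment ${c.id}:\n${c.text}`)
    .join("\n\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    temperature: 0.2,
    messages: [
      {
        role: "system",
        content:
          "You answer questions using only the Hacker News comments provided. " +
          "Summarize the range of opinions, and say so if the comments don't address the question.",
      },
      { role: "user", content: `Question: ${question}\n\n${context}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```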

One of my key takeaways from this experience is that LLMs at GPT-4's level of capability are incredibly forgiving. For some queries, more than half of the retrieved comments are uninteresting or irrelevant. And yet, the LLM could seamlessly extract the high-entropy bits and conjure a coherent answer (example of a query with very little relevant content in the HN dataset). I suspected this would work (as I'd seen how Perplexity can boil down a long SEO-puffed-up piece of information back into its original essence), but I was shocked by how well it did. It's outright magical.

Building the end-user UI

I built a simple end-user UI using React, Next.js, Tailwind, and tRPC (the T3 stack is a good scaffolding for such apps). The UI is inspired by Hacker News and lets the user review both the LLM-generated summaries and the source comments. It's fast and easy to use.

Because the thesis for Ask Hacker Search is that users will be interested in the actual comments rather than an entirely LLM-abstracted summary, I wanted to include quotes from the actual comments into the LLM-generated responses.

My approach to that is fairly simple: instruct the LLM to quote comments verbatim in its response, then match each quote back to the retrieved comments so it can be linked to its source.

Because GPT-4o is pretty good at not hallucinating or accidentally misquoting, this works more often than not, and makes for a really cool UX!

The quotations feature at work
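A simplified sketch of that matching step is below. The quote-detection regex and the normalization are illustrative assumptions; the real thing needs to cope with markup, ellipses, and similar edge cases.

```typescript
// Scan the LLM's answer for quoted spans and look for an exact (normalized)
// match among the retrieved comments, so each quote can link to its source.
function linkQuotes(
  answer: string,
  comments: { id: number; text: string }[]
): { quote: string; commentId: number | null }[] {
  const normalize = (s: string) => s.replace(/\s+/g, " ").trim().toLowerCase();
  // Capture reasonably long spans wrapped in straight or curly double quotes.
  const quotes = [...answer.matchAll(/[“"]([^“”"]{20,})[”"]/g)].map((m) => m[1]);

  return quotes.map((quote) => {
    const source = comments.find((c) =>
      normalize(c.text).includes(normalize(quote))
    );
    return { quote, commentId: source ? source.id : null };
  });
}
```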

What parts were most interesting

Superficially, it might seem like all I did here was tie together a few APIs and datastores. Indeed, leveraging a highly curated dataset like Hacker News and off-the-shelf tools made many things easier. In that respect, we've come a long way: I built my first semantic search engine in 2016, and at the time I had to write my own doc2vec implementation, deal with raw matrix multiplications by hand, use all kinds of hacks to deal with the shortcomings of doc2vec, and figure out how to serve it in production.

That said, even though it is much easier to build these things today, one thing I found out in practice is that small details like how you clean up your data or prompt an LLM compound and have a massive impact on the final product. What follows are a few interesting learnings in that vein.

1. Getting quotations right

While the citation approach generally works well, it sometimes fails to link back to the original comment, often because the LLM truncated or rephrased parts of it. A much better approach would be to ask the LLM to generate unique identifiers for the quotations and work with those, rather than operating from raw citation tokens (see e.g. “Deterministic Quoting: Making LLMs Safer for Healthcare”, kindly pointed out to me by gburtini, or ContextCite for alternative options). This would also enable one to directly insert the original quotation as an injected block, which seems like an interesting alternative UX.
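A sketch of what that could look like: have the model cite comments by a token rather than re-typing the quote, then splice the original text back in afterwards. The [[c:id]] token format here is purely an assumption for illustration, not an established convention.

```typescript
// Expand identifier-based citations like [[c:123]] into the original comment
// text, so quotes are deterministic rather than re-generated by the model.
function expandCitations(
  answer: string,
  comments: Map<number, string>
): string {
  return answer.replace(/\[\[c:(\d+)\]\]/g, (match, id) => {
    const original = comments.get(Number(id));
    return original ? `"${original}"` : match; // leave unknown ids untouched
  });
}
```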

2. The quality of your retriever conditions the quality of your results

Unsurprisingly, the underlying dataset and how you're drawing from it has a massive impact on the quality of results. As an example, in one instance, a user asked a very specific question about a fairly obscure book. Nothing really relevant came up in the dataset, and because I hadn't implemented an "I don't know" escape hatch, the LLM would just attempt to craft a narrative that made very little sense.

Additionally, my semantic search approach is pretty naive. For example, the shape of the user's query (e.g. "what is the best programming language") might not closely match the target items (e.g. "the best programming language is..."). Better approaches would be to precompute questions matching potential answers, or to use an LLM to transform the query so that it matches the general shape of the candidate answer (a technique also known as HyDE). I got away without doing that for this project, given that the audience for Ask Hacker Search is fairly technical and understands the need for prompting (and in fact, might prefer having the ability to prompt), but that would certainly be a big barrier for a broader consumer product.
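A HyDE-style transformation is only a few lines: generate a hypothetical comment-shaped answer and embed that instead of the raw question. The prompt and model choices below are illustrative.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Sketch of HyDE: the query vector ends up looking more like the comments we
// want to retrieve, rather than like a question.
async function hydeEmbedding(question: string): Promise<number[]> {
  const hypothetical = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "Write a short, opinionated Hacker News-style comment that plausibly answers the question.",
      },
      { role: "user", content: question },
    ],
  });
  const text = hypothetical.choices[0].message.content ?? question;

  const emb = await openai.embeddings.create({
    model: "text-embedding-3-small", // illustrative model choice
    input: text,
  });
  return emb.data[0].embedding;
}
```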

Lastly, because I retrieve individual comments rather than full threads, the context surrounding comments is often lost. As an example, one comment might ask "Should I use X?" and 10 responses saying "YES!" might follow, yet those responses might never get fetched in the right context. Fetching parent and child comments, rather than taking comments out of context, would undoubtedly improve the quality of results.
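Since HN item payloads expose parent and kids ids, pulling in one level of surrounding thread is straightforward; the depth and fan-out caps below are arbitrary choices for the sketch.

```typescript
// Fetch a comment plus its parent and a few children to give the LLM more
// conversational context around each retrieved comment.
type HNItem = { id: number; text?: string; parent?: number; kids?: number[] };

const BASE = "https://hacker-news.firebaseio.com/v0";

async function fetchItem(id: number): Promise<HNItem | null> {
  const res = await fetch(`${BASE}/item/${id}.json`);
  return res.ok ? ((await res.json()) as HNItem | null) : null;
}

async function withThreadContext(commentId: number): Promise<HNItem[]> {
  const comment = await fetchItem(commentId);
  if (!comment) return [];

  const parent = comment.parent ? await fetchItem(comment.parent) : null;
  const children = await Promise.all(
    (comment.kids ?? []).slice(0, 5).map(fetchItem) // cap fan-out for cost
  );

  return [parent, comment, ...children].filter(
    (item): item is HNItem => item !== null
  );
}
```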

3. Date relevance is hard

Related to the above, I didn't put much care into the timeliness of the data fed to the LLM. As an example, the dataset might contain contradictory statements that are purely a result of events changing over time:

An illustration of date sensitivity

Sam Altman as Schrödinger's CEO

A better approach here would be similar to Perplexity's: bias towards recent news sources, provide the LLM with an exact present date, and instruct it to give more credence to more recent statements. My intuition is that one could also do interesting things by letting the LLM retrieve data iteratively for date ranges of its choosing through function calling.
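A simple version of that bias could be a recency decay on retrieval scores plus an explicit date in the system prompt. The half-life below is an arbitrary illustration, not a tuned value.

```typescript
// Decay each retrieved comment's similarity score by age, then sort.
type Scored = { id: number; text: string; time: number; score: number };

function applyRecencyBias(results: Scored[], halfLifeDays = 365): Scored[] {
  const now = Date.now() / 1000;
  return results
    .map((r) => {
      const ageDays = (now - r.time) / 86400;
      const decay = Math.pow(0.5, ageDays / halfLifeDays);
      return { ...r, score: r.score * decay };
    })
    .sort((a, b) => b.score - a.score);
}

// Anchor the model in time so it can weigh conflicting claims.
const systemPreamble =
  `Today's date is ${new Date().toISOString().slice(0, 10)}. ` +
  "When comments contradict each other, prefer the more recent ones.";
```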

4. LLMs are expensive

Duh. But yeah, really. While computing embeddings wasn't very expensive, I did spend a fair amount of time and effort pre-launch ensuring that on-the-fly summaries wouldn't break the bank:

An illustration of the inference caps

Failing open for curious users

All of this overhead substantially slowed me down, adding a few hours of engineering work that wouldn't have been needed for a non-LLM product. When I was building websites as a teenager, I made enough money from that work to afford renting a web server, and I could launch new projects every week without worrying about ballooning costs. I wonder what interesting experimentation we're missing out on now that that is less straightforward.

Speaking of teenagers, a disgruntled user shares their opinion.

5. Vibes rule until they don't

An interesting property of LLM-powered products is that you can get very far very fast just by assessing how the product works at an anecdotal level, tweaking your prompts and what goes into the context window, and re-assessing the quality of the outcome.

However, one inevitably hits diminishing returns past a certain point. Changing the prompt to make the LLM less likely to hallucinate might make it more likely to refuse to answer questions it should otherwise answer. Telling it to make sure to respond to the user's query might cause it to contrive its response into a plausible but completely incorrect shape. Building evals would be my first next step to continue iterating quickly without silent regressions, and I encourage anyone building products with LLMs past the prototyping stage to do the same.

What’s next

I enjoyed working on this project and have been using Ask Hacker Search multiple times a week. Although I think it could be improved further, I'm likely going to wind down my efforts to maintenance levels, as it was merely a side project. I am however working on something much bigger, and somewhat related to Hacker Search. Follow me on X to hear more about it when it goes live!

Lastly, if you build something using this blueprint, please reach out! I'd love to see what you come up with.

[1] What I've outlined here is my experience building this side project on a strict time budget. I have no insider information regarding how Perplexity actually works, and some of my assessment might be incorrect. Additionally, this outline does not cover the substantial gap between a simple product like mine and a scaled-up version of it: making the UX consumer-grade, distributing the product, and scaling it to millions of users in a short period of time are all non-trivial endeavors, and Perplexity certainly deserves a lot of credit for that.