Explained in layman’s terms, no DS knowledge is required.
If you have ever worked on a data science project or wanted to add smarter search (like vector search) to your system, you may already have been using this algorithm: HNSW.
This post is not going to walk through the papers in detail (I am not able to do that); it simply extracts the core idea behind the algorithm to share and discuss.
As usual, let’s start with a simple analogy.
NSW (Navigable Small World)
So let's say you want to catch up with your friend this weekend. You live in City 1, and your friend lives in City 2. Together, City 1 and City 2 form a "Navigable Small World". In this world, there are 2 rules:
- you can only use public transport
- There are only buses
To achieve the shortest travel time, the bus system is built in this way:
- Each time, add a new station (randomly picked) to the map
- When adding a new station, connect it to the nearest station(s) already on the map (greedy)
Let’s build it.
So here, as you can see, there are 2 cities. Now, if you are the builder, how would you link these 7 bus stops together to guarantee minimum travel time (starting from any station)?
Remember how a minimum spanning tree is built up step by step?
The NSW algorithm is similar to Prim's algorithm. The difference is that instead of using a heap to track the minimum distance to the starting point, we find the k closest points already on the map and connect the new point to them.
Let's say you only have 1 friend to meet in City 2, so there is only 1 "friend house" to search for, so k=1. Does that mean if you want to meet 3 friends in City 2 today, the same process happens as below but with k=3 (each new stop connects to its 3 closest bus stops)? Yes.
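The build process above (add stations in random order, connect each new one to its k nearest existing stations) can be sketched in a few lines of Python. This is a toy illustration, not the real HNSW implementation: the station coordinates and the brute-force nearest-neighbour lookup are my own simplifications.

```python
import math
import random

def build_nsw(points, k=1):
    """Toy NSW build: insert points one at a time and connect
    each new point to its k nearest points already on the map."""
    graph = {}  # point index -> set of neighbour indices
    order = list(range(len(points)))
    random.shuffle(order)  # stations are added in random order
    for idx in order:
        if not graph:          # first station has nothing to connect to yet
            graph[idx] = set()
            continue
        # brute-force: distance from the new station to every existing one
        nearest = sorted(graph, key=lambda j: math.dist(points[idx], points[j]))[:k]
        graph[idx] = set(nearest)
        for j in nearest:      # edges are bidirectional, like bus routes
            graph[j].add(idx)
    return graph

# 7 hypothetical bus stops: 4 in "City 1", 3 in "City 2"
stops = [(0, 0), (1, 1), (0, 2), (1, 3), (10, 0), (11, 1), (10, 2)]
g = build_nsw(stops, k=1)  # k=1: each new stop links to its single nearest stop
```

With k=1 every new stop adds exactly one edge, so the result is a tree; raising k (e.g. k=3 for the 3-friends case) adds more shortcut edges, which is what makes the small world easier to navigate.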
Step 1: randomly pick a bus stop, number 2.