You type in a destination, hit navigate, and before you even put your phone down, Google Maps already knows the fastest way to get there. Across an entire city. With live traffic. Through road networks with hundreds of thousands of intersections.
We've used it so many times it stopped feeling impressive. It really shouldn't have.
Before getting into how the routing actually works, there's one conceptual shift worth making: stop thinking about roads as roads, and start thinking about them as a graph.
In graph theory, a graph is just a set of nodes connected by edges. Nothing fancy. But that simple structure turns out to describe a surprisingly large number of real-world systems, and road networks are one of the most natural fits.
Every intersection is a node. Every road segment between two intersections is an edge. And every edge carries a weight representing the cost of traveling it, usually time or distance. Once you frame it this way, navigation becomes a well-defined mathematical problem: find the path from node A to node B where the total weight of edges is as small as possible. Computer scientists call this the shortest path problem, and it's been studied for decades.
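As a sketch, that graph is nothing more than a mapping from each node to its neighbors and the cost of reaching them. The intersection names and weights below are invented for illustration:

```python
# A tiny road graph as an adjacency list: each intersection (node) maps to
# its neighboring intersections (edges), annotated with travel time in minutes.
graph = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "D": 5},
    "C": {"A": 2, "D": 8, "E": 10},
    "D": {"B": 5, "C": 8, "E": 2},
    "E": {"C": 10, "D": 2},
}

# The shortest-path question: what is the cheapest way from "A" to "E"?
# A -> C -> E costs 2 + 10 = 12; A -> B -> D -> E costs 4 + 5 + 2 = 11.
```

Even on five nodes you can see the problem taking shape: the "obvious" first hop (the cheap edge to C) doesn't lead to the best overall route.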
The most intuitive way to solve it is to check every possible route and return whichever one is cheapest. Simple enough on paper.
The problem is that a city like Jakarta has roughly half a million road segments. The number of possible paths between any two points in a network that size isn't just large; it grows combinatorially, beyond anything a machine could enumerate. Checking all of them isn't slow. It's impossible.
So the interesting question isn't "how do we find the shortest path." It's "how do we find the shortest path without checking most of the graph."
In 1956, a Dutch computer scientist named Edsger Dijkstra came up with an approach that's still foundational to routing today. The idea is disarmingly simple: always expand the node you can currently reach at the lowest cost.
Rather than wandering the graph randomly, Dijkstra's algorithm works like this. Start at the origin with a cost of zero, and mark everything else as unknown. From there, look at all directly reachable neighbors and calculate the cost of reaching them through the current position. If that cost beats whatever was previously recorded, update it. Then move to the next node with the lowest known cost and repeat.
The reason this works so well is that once a node is "settled," you've provably found the cheapest way to reach it. You never have to revisit it. No wasted effort, no second-guessing.
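The loop described above can be sketched in a few lines. This is a minimal version using a priority queue, not production routing code:

```python
import heapq

def dijkstra(graph, start):
    """Cheapest known cost from start to every reachable node.

    graph: dict mapping node -> dict of neighbor -> non-negative edge weight.
    """
    dist = {start: 0}
    settled = set()
    frontier = [(0, start)]          # min-heap of (cost so far, node)
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in settled:
            continue                 # stale entry; a cheaper path was already found
        settled.add(node)            # the cost to this node is now provably optimal
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return dist

toy = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "D": 5},
    "C": {"A": 2, "D": 8, "E": 10},
    "D": {"B": 5, "C": 8, "E": 2},
    "E": {"C": 10, "D": 2},
}
print(dijkstra(toy, "A"))  # {'A': 0, 'B': 4, 'C': 2, 'D': 9, 'E': 11}
```

Note how D is first reached at cost 10 (through C) and then improved to 9 (through B) before it's settled. Once popped from the heap, it's final.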
For a road network with millions of nodes, that's already a massive leap over brute force. But for something operating at Google's scale, it still has a notable flaw.
Dijkstra's algorithm expands outward from the start point in every direction at once, like a ripple spreading across water. It has no concept of where the destination actually is. Routing from South Jakarta to North Jakarta, it'll spend real effort exploring roads heading south and east, roads that will almost certainly never appear in the final route.
For a small city, you can live with that. For a continental road network with tens of millions of nodes, it becomes a genuine bottleneck.
The A* algorithm fixes this with one addition: a heuristic, basically an educated guess of how far any given node is from the destination.
Instead of expanding the node with the lowest cost from the origin, A* expands the node with the lowest value of cost from origin plus estimated cost to destination. The estimate is usually straight-line distance, which never overestimates the true cost: no road can be shorter than the straight line between its endpoints. That property, called admissibility, keeps the algorithm honest. It still finds the optimal route; it just stops wasting time on directions that are obviously wrong.
The practical difference is significant. A* concentrates its search toward the destination rather than radiating outward blindly. On real city graphs, it can explore a fraction of the nodes Dijkstra's would touch and still return the exact same answer.
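The change from Dijkstra's algorithm is genuinely just one term in the priority. A minimal sketch, with invented coordinates on a plane so straight-line distance is available as the heuristic:

```python
import heapq
import math

def a_star(roads, coords, start, goal):
    """A* shortest path. The heuristic is straight-line distance to the goal,
    which never overestimates the remaining road cost (admissible)."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    dist = {start: 0}
    settled = set()
    frontier = [(h(start), start)]   # min-heap of (cost so far + estimate, node)
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return dist[goal]        # safe: heuristic never overestimates
        if node in settled:
            continue
        settled.add(node)
        for neighbor, weight in roads[node].items():
            new_cost = dist[node] + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost + h(neighbor), neighbor))
    return None  # goal unreachable

# Toy network: every edge weight is at least the straight-line distance
# between its endpoints, so the heuristic stays admissible.
coords = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1), "E": (2, 1)}
roads = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 1, "E": 2.5},
    "D": {"B": 1, "C": 1, "E": 1},
    "E": {"C": 2.5, "D": 1},
}
print(a_star(roads, coords, "A", "E"))  # 3
```

With the heuristic set to zero everywhere, this degrades to exactly Dijkstra's behavior; the estimate is what pulls the search toward the destination.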
Here's what makes modern navigation genuinely hard: the graph isn't static.
Traffic conditions change the weight of edges constantly. A road that normally takes 3 minutes might take 18 during an incident. Temporary closures remove edges entirely. Routing preferences like avoiding tolls or sticking to highways change which edges are even eligible.
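In the graph model, each of those events is just a mutation of the edge set before the search runs. A hypothetical sketch, with all names and numbers invented:

```python
# Live conditions are mutations of the graph's edge weights.
graph = {
    "Sudirman": {"Thamrin": 3},
    "Thamrin":  {"Sudirman": 3},
}

# A traffic feed reports an incident: the 3-minute segment now takes 18.
graph["Sudirman"]["Thamrin"] = 18

# A temporary closure removes an edge entirely.
del graph["Thamrin"]["Sudirman"]

# Preferences change which edges are even eligible: "avoid tolls"
# filters the edge set before any shortest-path search touches it.
edges = {
    ("A", "B"): {"minutes": 12, "toll": True},
    ("A", "C"): {"minutes": 20, "toll": False},
}
eligible = {e: attrs for e, attrs in edges.items() if not attrs["toll"]}
```

The algorithms don't change; the graph they run on does, query by query.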
All of that is happening live, across billions of queries per day, and every response still needs to land in under a second.
The way Google handles this is through precomputation. Road networks are analyzed offline to identify a natural hierarchy: local streets feed into arterial roads, which feed into highways. For a long trip, the algorithm can quickly jump to the highway level for the bulk of the journey and only deal with local street detail near the start and end. Techniques built on this idea, the best known of which is Contraction Hierarchies (which derives the hierarchy from the graph's own structure rather than from road classes), are a big reason why routing from one city to another feels just as instant as navigating around the block.
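The core move in Contraction Hierarchies can be shown in isolation: remove an unimportant node and, where needed, add a "shortcut" edge that preserves shortest-path distances through it. A toy sketch, assuming a symmetric graph (edge weights equal in both directions) and invented names:

```python
def contract(graph, node):
    """Remove `node`, adding shortcut edges so that shortest-path distances
    between the remaining nodes are unchanged.

    Assumes symmetric weights, so neighbors[u] is both the cost u -> node
    and node -> u. A shortcut u -> v is only added if going through `node`
    beats any existing direct edge u -> v."""
    neighbors = graph.pop(node)
    for u, w_in in neighbors.items():
        for v, w_out in neighbors.items():
            if u == v:
                continue
            via = w_in + w_out
            if via < graph[u].get(v, float("inf")):
                graph[u][v] = via        # shortcut replacing u -> node -> v
        graph[u].pop(node, None)         # drop edges into the contracted node
    return graph

# "Mid" is a minor intersection on the way between two bigger roads.
g = {"West": {"Mid": 3}, "Mid": {"West": 3, "East": 4}, "East": {"Mid": 4}}
contract(g, "Mid")
print(g)  # {'West': {'East': 7}, 'East': {'West': 7}}
```

Repeat this for every node in order of importance and you get a layered graph where long-distance queries can skip over all the contracted detail; the real preprocessing also records the shortcuts so full paths can be unpacked afterwards.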
Looking at the progression from brute force to Dijkstra's to A* to Contraction Hierarchies, the interesting thing is that none of these are fundamentally new math. They're the same shortest path problem, solved by increasingly exploiting what we already know about the structure of road networks.
Dijkstra's works because road costs are non-negative. A* works because physical space gives you a free admissible heuristic. Contraction Hierarchies work because humans built road networks with a natural hierarchy baked in. Each optimization is just the algorithm paying closer attention to the shape of the problem.
That's the real lesson here. The jump from minutes to milliseconds didn't come from faster computers or more memory. It came from asking, at every step, what do we already know about this problem that we're not using yet?
Graph theory shows up in more places than most people realize. Social networks, package manager dependency trees, logistics pipelines, fraud detection systems, content recommendation engines: all of these are graphs at their core, and all of them run into the same fundamental tradeoff between exhaustive correctness and practical speed.
The navigation problem is just an unusually clean example of how that tradeoff gets resolved when the stakes are high enough and the scale is large enough to force real engineering.
Google Maps finding your route in milliseconds is not a magic trick. It's what graph theory looks like when it's taken seriously.