Unraveling the Hype Behind Shortest Path Algorithms

Welcome to the first in a series of articles on a classic computer science problem: finding the shortest path. This challenge is the engine behind GPS navigation, network routing, and even finding connections in social networks.
Recently, there’s been a buzz about a groundbreaking solution by Ran Duan, which promises near-linear time performance, a huge leap forward.

But to truly appreciate why this is so significant, we need to build our understanding from the ground up. This series will guide you from foundational algorithms to the advanced concepts that make modern solutions blazing fast (tm).

What Exactly is the “Shortest Path Problem”?

Before we dive in, let’s clarify what we’re solving. We’re working with graphs, which are collections of nodes (points) connected by edges (lines). Each edge has a weight or cost—think of it as the distance, time, or expense to get from one node to another. The goal is to find a path through this graph with the minimum possible total weight.

This single problem actually breaks down into a few distinct categories, each with its own set of algorithms and use cases.

Single-Source Shortest Path (SSSP)

This is the classic version and will be our main focus. The question it answers is:

"Given a single starting node, what's the lowest-cost path from it to every other node in the graph?"

Think of using Google Maps to see the travel time from your home to all other locations in a city. You have one source (your home), and you’re calculating the best route to many possible destinations. Dijkstra’s algorithm is the most famous SSSP solver.

All-Pairs Shortest Path (APSP)

This is a more comprehensive and computationally intensive problem. It answers the question:

"What's the lowest-cost path from any node to any other node in the entire graph?"

Instead of a single list of routes from one starting point, the result of an APSP algorithm is typically a lookup table. This table lets you instantly find the shortest path cost between any two nodes you choose. This is incredibly useful for applications that need to quickly query arbitrary routes, like calculating the distances in a logistics network or identifying degrees of separation in a social network.

For example, a travel website might pre-calculate an APSP table for all major airports. When you search for a flight, it can then quickly use this table to find and combine routes without running a complex search from scratch every single time.
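As a rough sketch (an illustration of the idea, not a production approach), you can build such a table by simply running an SSSP solver from every node. The dijkstra function we’ll implement later in this article works fine for this; allPairsTable is just an illustrative name.

// Sketch: build an all-pairs lookup table by running an SSSP solver from
// every node. `dijkstra` is the function implemented later in this article;
// `allPairsTable` is an illustrative name, not a standard API.
function allPairsTable(graph) {
    const table = {};
    for (const node in graph) {
        // All shortest-path costs from `node` to every other node.
        table[node] = dijkstra(graph, node);
    }
    return table;
}

// Querying is then a constant-time lookup, e.g. table['B']['D'] is the
// shortest-path cost from B to D.

This brute-force approach repeats a full SSSP run per node, which is exactly why dedicated APSP algorithms such as Floyd-Warshall exist.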

K-Shortest Path

Sometimes, the absolute shortest path isn’t the only one you care about. What if the fastest route has an undesirable transfer or a high cost? The K-shortest path problem addresses this by finding not just the best path, but the k-best paths. It answers:

"What are the top 3 (or 5, or 'k') shortest paths from a starting node to a destination node?"

This is exactly what happens when a flight search engine gives you multiple options: the fastest, the cheapest, and a “recommended” balance. It provides alternatives, giving the user more flexibility than a single SSSP result would.

For this series, we’re sticking with the SSSP problem for now, as it forms the foundation for understanding all the others.

The Benchmark – Dijkstra’s Algorithm

Our journey begins with the cornerstone of pathfinding: Dijkstra’s Algorithm. Created by Edsger W. Dijkstra in 1956, it’s a brilliant and intuitive method for solving the SSSP problem.
At its core, Dijkstra’s is a greedy algorithm. It’s “greedy” because at every step, it makes the choice that looks best at that moment without worrying about the future. It always asks the same simple question: “What is the absolute closest, unvisited node I can go to next?”
Think of it like a ripple spreading across a lake. The algorithm starts at a source node and expands outwards, one step at a time, always moving to the next nearest node. It maintains a list of tentative distances to every other node and systematically declares them “final” as it explores.

How It Works: A Step-by-Step Analogy

Imagine you’re at home (the start node) and want to find the shortest driving time to every other location in your city.

  1. Initialization: You start with a list of all locations. The time to get “home” is 0. For every other location, you assume the time is infinite because you haven’t found a path there yet. You also have a list of places you need to visit, which currently includes all locations.
  2. The Step: Look at your list of unvisited locations and travel to the one with the shortest known time from home. Right now, that’s just immediate neighbors.
  3. Update and Repeat: From your new location, look at its neighbors. For each neighbor, calculate the total time from home. If this new path is shorter than any path you’ve found to that neighbor before, you update your list with the new, shorter time.
  4. Mark as Visited: Once you have checked all the neighbors of your current location, you mark it as “visited.” You will never need to check it again, as the algorithm guarantees you’ve found the shortest path to it.
  5. Continue: Go back to step 2: pick the unvisited location with the shortest (and newly updated) time from home and repeat the process.

You continue this until you have visited every location. By the end, you’ll have a complete map of the shortest paths from your home to everywhere else.
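As a concrete example, here is that process traced on the small four-node graph we’ll reuse in the code example below (A-B costs 4, A-C costs 2, B-C costs 5, B-D costs 10, C-D costs 3), starting from A:

  • Initialization: A = 0; B, C, D = infinity.
  • Visit A (0): tentative B = 0 + 4 = 4, tentative C = 0 + 2 = 2.
  • Visit C (2), the closest unvisited node: B via C = 2 + 5 = 7 (worse than 4, keep 4), D via C = 2 + 3 = 5.
  • Visit B (4): D via B = 4 + 10 = 14 (worse than 5, keep 5).
  • Visit D (5): nothing left to improve.
  • Final distances from A: A = 0, B = 4, C = 2, D = 5.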

Visualizing Dijkstra’s Algorithm

Visuals make this process much clearer: picture the ripple from the analogy above spreading node by node, with each location’s tentative time shrinking until it is locked in as final.

A Simple JavaScript Implementation

For those who prefer code, here’s a simplified implementation of Dijkstra’s algorithm. It uses a naive, array-backed priority queue to find the next node to visit; in a real-world scenario, a proper Min-Heap would make that step much faster.


/**
 * A naive implementation of a Priority Queue.
 * In a production environment, you would use a more efficient implementation like a Min-Heap. 
 * Feel free to skip this part if you're not interested in the code.
 */
class PriorityQueue {
    constructor() {
        this.elements = [];
    }

    // Add an element to the queue with a given priority
    enqueue(element, priority) {
        this.elements.push({ element, priority });
    }

    // Remove and return the element with the highest priority (lowest priority value)
    dequeue() {
        if (this.isEmpty()) {
            return null;
        }
        // This is the naive part: searching the whole array every time.
        let highestPriorityIndex = 0;
        for (let i = 1; i < this.elements.length; i++) {
            if (this.elements[i].priority < this.elements[highestPriorityIndex].priority) {
                highestPriorityIndex = i;
            }
        }
        // .splice() removes the element and returns it as an array
        return this.elements.splice(highestPriorityIndex, 1)[0].element;
    }

    isEmpty() {
        return this.elements.length === 0;
    }
}


/**
 * Finds the shortest paths from a start node to all other nodes in the graph.
 * @param {object} graph - The graph represented as an adjacency list.
 * @param {string} startNode - The starting node.
 * @returns {object} - An object containing the shortest distances to all nodes.
 */
function dijkstra(graph, startNode) {
    // === Step 1: Initialization ===
    // This is where we set up our data structures.
    const distances = {};
    const priorityQueue = new PriorityQueue();

    // Assume all distances are infinite to start.
    for (const node in graph) {
        distances[node] = Infinity;
    }
    // The distance to the starting node is always 0.
    distances[startNode] = 0;

    // We begin our journey from the start node.
    priorityQueue.enqueue(startNode, 0);


    // === Step 2: The Main Loop (Continue until all reachable nodes are visited) ===
    // This loop will continue as long as there are nodes to be explored.
    while (!priorityQueue.isEmpty()) {

        // === Step 3: Select the Unvisited Node with the Smallest Distance ===
        // This is the "greedy" part of the algorithm. We always choose the
        // closest node to visit next. With non-negative edge weights, the first
        // time a node is dequeued its distance is final, which is what the
        // analogy above called "marking it as visited."
        const currentNode = priorityQueue.dequeue();

        if (currentNode === null) {
            break;
        }
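        // Note: because we re-enqueue a node whenever we find a shorter path to
        // it, stale (longer) entries for already-finalized nodes can still come
        // off the queue. They are harmless: the relaxation check below never
        // overwrites a distance that is already optimal, so a stale entry
        // simply does nothing.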

        // === Step 4: Update Distances and Repeat for Neighbors ===
        // Now we look at all the nodes connected to our current one.
        for (const neighborNode in graph[currentNode]) {
            const weightOfEdge = graph[currentNode][neighborNode];
            const distanceToNeighbor = distances[currentNode] + weightOfEdge;

            // Have we found a new, shorter path to this neighbor?
            if (distanceToNeighbor < distances[neighborNode]) {
                // If so, update the shortest distance we've found so far.
                distances[neighborNode] = distanceToNeighbor;
                // And add it to our list of nodes to explore, so we can
                // check its neighbors later.
                priorityQueue.enqueue(neighborNode, distanceToNeighbor);
            }
        }
    }

    return distances;
}

// --- Example Usage ---
const myGraph = {
    'A': { 'B': 4, 'C': 2 },
    'B': { 'A': 4, 'C': 5, 'D': 10 },
    'C': { 'A': 2, 'B': 5, 'D': 3 },
    'D': { 'B': 10, 'C': 3 }
};

const shortestPathsFromA = dijkstra(myGraph, 'A');
console.log(shortestPathsFromA); // Output: { A: 0, B: 4, C: 2, D: 5 }
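
The function above returns only the costs, not the routes themselves. A common variation, sketched below under the same assumptions (dijkstraWithPath and reconstructPath are illustrative names, not part of the original example), records each node’s predecessor whenever a shorter distance is found and then walks that chain backwards from the destination:

// Sketch: track predecessors alongside distances so the route can be rebuilt.
function dijkstraWithPath(graph, startNode) {
    const distances = {};
    const previous = {}; // previous[x] = the node we came from on the best path to x
    const priorityQueue = new PriorityQueue();

    for (const node in graph) {
        distances[node] = Infinity;
        previous[node] = null;
    }
    distances[startNode] = 0;
    priorityQueue.enqueue(startNode, 0);

    while (!priorityQueue.isEmpty()) {
        const currentNode = priorityQueue.dequeue();
        if (currentNode === null) {
            break;
        }

        for (const neighborNode in graph[currentNode]) {
            const candidate = distances[currentNode] + graph[currentNode][neighborNode];
            if (candidate < distances[neighborNode]) {
                distances[neighborNode] = candidate;
                previous[neighborNode] = currentNode; // remember how we got here
                priorityQueue.enqueue(neighborNode, candidate);
            }
        }
    }

    return { distances, previous };
}

// Walk the predecessor chain backwards to recover the actual route.
function reconstructPath(previous, endNode) {
    const path = [];
    for (let node = endNode; node !== null; node = previous[node]) {
        path.unshift(node);
    }
    return path;
}

// Example: reconstructPath(dijkstraWithPath(myGraph, 'A').previous, 'D')
// returns ['A', 'C', 'D'], matching the distance of 5 reported above.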

Limitations and What’s Next

Dijkstra’s is powerful, but it has one critical weakness: it fails if there are negative edge weights. Its greedy nature means that once a node is marked “visited,” the path to it is considered final. A negative edge discovered later could have created a shorter path, but the algorithm would never go back to check. For example, if you were routing money through different financial services and some offered cashback (a negative cost), Dijkstra’s might miss the cheapest path because it finalized a route before discovering the bonus. This limitation, along with its “blind” exploration, opens the door for other, more robust or more efficient algorithms.
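
To make the trap concrete, here is a tiny, made-up directed graph (purely illustrative) on which the classic visited-set version of Dijkstra gives the wrong answer:

// A hypothetical directed graph with a single negative edge.
const negativeEdgeGraph = {
    'A': { 'B': 2, 'C': 3 },
    'B': {},
    'C': { 'B': -2 }
};

// The true shortest path from A to B is A -> C -> B, with cost 3 + (-2) = 1.
// The textbook algorithm finalizes B the moment it is dequeued with cost 2
// (it is the closest unvisited node at that point) and never revisits it,
// so it reports 2 instead of 1.

(The lazy implementation shown above happens to recover in this particular case because it re-enqueues B, but with negative edges it loses its efficiency guarantee and can loop forever if a negative cycle is reachable.)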

  • What if we have negative weights? In our next article, we’ll explore the Bellman-Ford algorithm, which handles this exact problem and can even detect negative-cost cycles (imagine a loop of cashback offers that pays you every time you go around it).
  • Can we search more intelligently? After that, we’ll look at the A* (A-Star) algorithm, a favorite in game development, which enhances Dijkstra’s by using heuristics to guide its search in a more purposeful direction.

Stay tuned as we continue to build the foundation needed to understand the cutting edge of pathfinding.