Graphs
Master this topic from zero to advanced depth.
Graph Algorithms: The Zero-to-Hero Guide
A Graph is the ultimate tool for modeling relationships. From social networks to Google Maps, graphs are everywhere. Mastery involves understanding three things: Representation, Traversal, and Specific Patterns.
1. Representing Graphs: The Choice Matters
| Representation | Memory | Edge Query | Use Case |
|---|---|---|---|
| Adjacency List | O(V + E) | O(deg(u)) | Sparse graphs (standard for most interviews). |
| Adjacency Matrix | O(V^2) | O(1) | Dense graphs where E ≈ V^2. |
| Edge List | O(E) | O(E) | Used in Kruskal's or Bellman-Ford where edge iteration is key. |
2. The Golden Rule of Traversals
A. Breadth-First Search (BFS) - "The Shortest Path King"
- Process: Level-by-level using a Queue.
- Visual: Like a ripple in a pond.
- Guarantee: Finds the shortest path in an unweighted graph.
B. Depth-First Search (DFS) - "The Exhaustive Explorer"
- Process: Deep-dive using Recursion/Stack.
- Visual: Like exploring a maze.
- Best For: Connectivity, Cycle detection, and Pathfinding (Backtracking).
3. Complexity Cheat Sheet
| Algorithm | Goal | Complexity |
|---|---|---|
| BFS/DFS | Connectivity / Traversal | O(V + E) |
| Topological Sort | Ordering Dependencies (DAG) | O(V + E) |
| Dijkstra | Shortest Path (Weighted, No Neg) | O(E log V) |
| Bellman-Ford | Shortest Path (Handles Neg Weights) | O(V · E) |
| Floyd-Warshall | All-Pairs Shortest Path | O(V^3) |
| Prim's / Kruskal's | Minimum Spanning Tree (MST) | O(E log V) |
4. The Expert's Mental Model
- Is it a Graph?: If you see "Relationships", "Connections", or "States", it's a graph. Cells in a Grid are nodes; adjacent cells have edges.
- Directed vs. Undirected: Does A -> B imply B -> A? (e.g., Follower vs. Friend).
- Weighted vs. Unweighted: Does the connection have a cost? (e.g., Distance vs. Connection).
- Implicit Graphs: The graph isn't nodes/edges yet. Nodes are states and edges are logical transitions (e.g., Word Ladder).
[!IMPORTANT] For Grid Problems, avoid building an actual adjacency list. Use directional arrays `dx = {0,0,1,-1}` and `dy = {1,-1,0,0}` to traverse implicitly.
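As an illustration of this tip, here is a minimal Python sketch (the helper name `count_land_neighbors` is ours, not from the guide) that visits a cell's 4 neighbors via direction arrays with a single bounds check:

```python
def count_land_neighbors(grid, r, c):
    """Count 4-directional neighbors of (r, c) that are land (== 1)."""
    dx = [0, 0, 1, -1]
    dy = [1, -1, 0, 0]
    count = 0
    for k in range(4):
        nr, nc = r + dx[k], c + dy[k]
        # One bounds check replaces an explicit adjacency list.
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 1:
            count += 1
    return count
```

The same loop shape drives every grid BFS/DFS in the problems below.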
Course Schedule
There are a total of numCourses courses you have to take, labeled from 0 to numCourses - 1. You are given an array prerequisites where prerequisites[i] = [a_i, b_i] indicates that you must take course b_i first if you want to take course a_i.
Return true if you can finish all courses. Otherwise, return false.
Visual Representation
Courses: 2, Prerequisites: [[1, 0]]
Graph: 0 -> 1
Cycle? No. Result: true
Prerequisites: [[1, 0], [0, 1]]
Graph: 0 <-> 1 (Cycle!)
Cycle? Yes. Result: false
Examples
To take course 1 you should have finished course 0. So it is possible.
Level II: Intermediate (DFS - Cycle Detection)
Intuition
A graph is a Directed Acyclic Graph (DAG) if there are no Back Edges during DFS. We use three states for each node: 0 (unvisited), 1 (visiting/in current path), and 2 (visited). If we encounter a node with state 1, a cycle exists.
Thought Process
- Build an adjacency list.
- For each course, if not visited, launch DFS.
- In DFS:
  - Mark the node as `1` (visiting).
  - Traverse neighbors. If a neighbor is in state `1`, return `false` (cycle).
  - Mark the node as `2` (fully processed).
- If no cycle is found for any node, return `true`.
Pattern: DFS Cycle Detection
Detailed Dry Run
pre=[[1,0],[0,1]]
| Node | State | Neighbors | Result |
|---|---|---|---|
| 0 | 1 | [1] | Launch DFS(1) |
| 1 | 1 | [0] | 0 is state 1! CYCLE! |
| Return | - | - | false |
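The three-state DFS above can be sketched in Python as follows; this is a minimal illustrative version, not the guide's own code:

```python
def can_finish(num_courses, prerequisites):
    adj = [[] for _ in range(num_courses)]
    for a, b in prerequisites:
        adj[b].append(a)  # b -> a: take b before a
    state = [0] * num_courses  # 0 unvisited, 1 visiting, 2 done

    def dfs(u):
        if state[u] == 1:
            return False  # back edge on the current path -> cycle
        if state[u] == 2:
            return True   # already fully processed
        state[u] = 1
        for v in adj[u]:
            if not dfs(v):
                return False
        state[u] = 2
        return True

    return all(dfs(u) for u in range(num_courses))
```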
Level III: Optimal (Kahn's Algorithm - BFS)
Intuition
Kahn's algorithm uses In-Degrees to peel off nodes with no dependencies. This is the gold standard for Topological Sort as it's iterative and avoids recursion stack issues.
Thought Process
- Calculate the in-degree of each course (number of prerequisites).
- Add all courses with `in-degree == 0` to a queue.
- While the queue is not empty:
  - Pop a course, increment the `completed` count.
  - For each course that depends on it, decrement its in-degree.
  - If an in-degree becomes 0, add that course to the queue.
- If `completed == numCourses`, return `true`.
Pattern: Topological Sort
Detailed Dry Run
num=2, pre=[[1,0]]
| Pop | Queue | In-Degrees | Count |
|---|---|---|---|
| - | [0] | [0:0, 1:1] | 0 |
| 0 | [1] | [1:0] | 1 |
| 1 | [] | - | 2 |
| Result: 2 == 2 (True) |
Number of Islands
Given an m x n 2D binary grid grid which represents a map of '1's (land) and '0's (water), return the number of islands.
An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are all surrounded by water.
Visual Representation
Grid:
1 1 0 0 0
1 1 0 0 0
0 0 1 0 0
0 0 0 1 1
Islands: 3 (Top-left, Middle, Bottom-right)
Examples
Three distinct connected groups of '1's.
Level II: Intermediate (Union-Find)
Intuition
Each land cell '1' can be seen as a node. Adjacent '1's belong to the same connected component. We can use the Disjoint Set Union (DSU) to group these cells and count the number of resulting sets.
Thought Process
- Count total '1's in the grid (initial component count).
- Iterate through every cell. If it's a '1', check its Right and Down neighbors.
- If a neighbor is also '1', perform a `union`. If they were in different sets, decrement the component count.
- Return the final count of components.
Pattern: Disjoint Set Union (DSU)
Detailed Dry Run
Grid: [[1,1], [0,1]]
| Cell | Neighbors | Action | Sets |
|---|---|---|---|
| (0,0) | (0,1) is '1' | Union(0,1) | {0,1}, {1,1} (Count: 2) |
| (0,1) | (1,1) is '1' | Union(1,1) | {0,1,1,1} (Count: 1) |
| Result: 1 |
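A minimal DSU sketch of the steps above, assuming the grid stores the characters '1'/'0' as in the problem statement (naming is ours):

```python
def num_islands_dsu(grid):
    rows, cols = len(grid), len(grid[0])
    parent = {}
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                parent[(r, c)] = (r, c)
                count += 1  # every land cell starts as its own component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '1':
                continue
            for nr, nc in ((r, c + 1), (r + 1, c)):  # right and down only
                if (nr, nc) in parent:
                    ra, rb = find((r, c)), find((nr, nc))
                    if ra != rb:
                        parent[ra] = rb
                        count -= 1  # two components merged into one
    return count
```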
Level III: Optimal (DFS / BFS)
Intuition
Each cell '1' is a node, and adjacent '1's have edges. We can use a traversal to find Connected Components. When we find a '1', we launch a DFS/BFS to visit all reachable '1's and mark them as '0' ("sinking" the island) to avoid re-visiting.
Thought Process
- Iterate through every cell `(r, c)`.
- If `grid[r][c] == '1'`:
  - Increment `count`.
  - Launch DFS starting from `(r, c)`.
- In DFS:
  - Reached water or out of bounds? Return.
  - Mark the current cell as '0' (visited).
  - Recurse in 4 directions.
Pattern: Connected Components / Flood Fill
Detailed Dry Run
Grid: [[1,1], [0,1]]
| i, j | Val | Action | Count |
|---|---|---|---|
| 0,0 | '1' | Start DFS. Sink (0,0), (0,1), (1,1) | 1 |
| 0,1 | '0' | Skip | 1 |
| 1,1 | '0' | Skip | 1 |
| Final Count: 1 |
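The sinking DFS above can be sketched as follows (a minimal version; note that it mutates the input grid, per the "sink the island" idea):

```python
def num_islands(grid):
    rows, cols = len(grid), len(grid[0])

    def sink(r, c):
        # Stop at water or out-of-bounds cells.
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != '1':
            return
        grid[r][c] = '0'  # mark visited by sinking the cell
        sink(r + 1, c)
        sink(r - 1, c)
        sink(r, c + 1)
        sink(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                count += 1   # new component discovered
                sink(r, c)   # erase it so it is counted once
    return count
```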
Clone Graph
Given a reference of a node in a connected undirected graph, return a deep copy (clone) of the graph.
Each node in the graph contains a value (int) and a list (List[Node]) of its neighbors.
Visual Representation
1 -- 2
| |
4 -- 3
Clone:
1' -- 2'
| |
4' -- 3'
Examples
Node 1 is connected to 2 and 4. Node 2 is connected to 1 and 3, etc.
Level II: Intermediate (BFS + HashMap)
Intuition
BFS uses a queue to visit nodes level-by-level. By using a HashMap to store original -> clone, we can manage complexity and cycles. Every time we encounter a neighbor, if it's not in the map, we clone it and add to the queue.
Thought Process
- Use a HashMap to store `originalNode -> clonedNode`.
- Push the start node to a queue and create its clone.
- While the queue is not empty:
  - Pop `curr`. For each neighbor `nbr`:
    - If `nbr` is NOT in the map (unvisited):
      - Create its clone and add it to the map.
      - Push `nbr` to the queue.
    - Add `cloneMap.get(nbr)` to `cloneMap.get(curr)`'s neighbors list.
- Return the clone of the input node.
Pattern: BFS with Mapping
Detailed Dry Run
1 -- 2
| Map | Queue | Action |
|---|---|---|
| {1:1'} | [1] | Init |
| {1:1', 2:2'} | [2] | Pop 1, clone 2, add 2' to 1' neighbors |
| {1:1', 2:2'} | [] | Pop 2, 1 is in map, add 1' to 2' neighbors |
Level III: Optimal (DFS + HashMap)
Intuition
DFS is often more concise for graph cloning as it mimics the structure of the graph through the call stack. The HashMap acts as our "visited" set while also storing the deep copies.
Thought Process
- Use a HashMap to store `originalNode -> clonedNode`.
- In the DFS function:
  - If the node is NULL, return NULL.
  - If the node is already in the map, return the existing clone.
  - Otherwise, create a new clone and add it to the map.
  - For each neighbor of the original node, recursively call DFS and add the result to the clone's neighbor list.
- Return the DFS call for the input node.
Pattern: DFS with Mapping
Detailed Dry Run
1 -> [2, 4]
| Map | Call | Action |
|---|---|---|
| {} | DFS(1) | Create 1', Map {1:1'}, Call DFS(2) |
| {1:1'} | DFS(2) | Create 2', Map {1:1', 2:2'}, Call DFS(3) |
| ... | ... | ... |
| {1:1', 2:2', 3:3', 4:4'} | DFS(4) | Return 4' to 1' neighbors list |
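A Python sketch of the DFS clone; the small `Node` class below is assumed to match the problem's node shape (val + neighbors list):

```python
class Node:
    def __init__(self, val=0, neighbors=None):
        self.val = val
        self.neighbors = neighbors or []

def clone_graph(node):
    clones = {}  # original node -> its clone; doubles as the visited set

    def dfs(n):
        if n is None:
            return None
        if n in clones:
            return clones[n]       # reuse the existing clone
        copy = Node(n.val)
        clones[n] = copy           # register BEFORE recursing to break cycles
        copy.neighbors = [dfs(nbr) for nbr in n.neighbors]
        return copy

    return dfs(node)
```

Registering the clone before recursing is the key step: it is what stops the recursion from looping forever on the undirected cycle.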
Pacific Atlantic Water Flow
There is an m x n rectangular island that borders both the Pacific Ocean and Atlantic Ocean. The Pacific Ocean touches the island's left and top edges, and the Atlantic Ocean touches the island's right and bottom edges.
Water can flow to a neighboring cell (north, south, east, west) if the neighboring cell's height is less than or equal to the current cell's height.
Return a list of grid coordinates where rain water can flow to both the Pacific and Atlantic oceans.
Visual Representation
Pacific Ocean
| 1 2 2 3 2 |
| 3 2 3 4 4 |
| 2 4 5 3 1 | Atlantic Ocean
| 6 7 1 4 5 |
| 5 1 1 2 4 |
Examples
Cells at these coordinates can flow to both oceans.
Level I: Brute Force (DFS from Every Cell)
Intuition
For every single cell in the grid, we start a traversal (DFS) to see if we can reach both the Pacific and Atlantic oceans. A cell can reach an ocean if there is a path of decreasing or equal heights to the boundary.
Thought Process
- For each cell `(r, c)`:
  - Run `canReachPacific(r, c)` using DFS.
  - Run `canReachAtlantic(r, c)` using DFS.
  - If both are true, add `(r, c)` to the results.
Why it's inefficient
We repeat the same exploration many times for neighboring cells. Many paths are re-calculated.
Detailed Dry Run
Grid: [[1, 2], [2, 1]]
| Cell | Reach Pac? | Reach Atl? | Result |
|---|---|---|---|
| (0,0) | Yes (Edge) | No (Trapped) | No |
| (0,1) | Yes (Edge) | Yes (Edge) | YES |
Level III: Optimal (DFS from Ocean Coasts)
Intuition
Instead of asking "Can this cell reach the ocean?", we ask "What cells can the ocean reach?". Water flows from high to low, so if we start at the ocean and go uphill (height >= current), we can find all reachable cells efficiently.
Thought Process
- Create two boolean grids: `canReachPacific` and `canReachAtlantic`.
- Start DFS from all Pacific boundary cells (left and top edges) and mark reachable uphill cells in `canReachPacific`.
- Start DFS from all Atlantic boundary cells (right and bottom edges) and mark reachable uphill cells in `canReachAtlantic`.
- Cells marked in both grids are the result.
Pattern: Inverse Search / Multi-Source DFS
Detailed Dry Run
Grid: [[1,2],[2,1]]
| Ocean | Start Cells | Reachable (Uphill) |
|---|---|---|
| Pacific | (0,0),(0,1),(1,0) | (0,0), (0,1), (1,0) |
| Atlantic | (1,1),(0,1),(1,0) | (1,1), (0,1), (1,0) |
| Intersection: (0,1), (1,0) |
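A sketch of the ocean-first search in Python; `reachable` runs an iterative DFS from all coast cells, walking uphill, and the function returns the sorted intersection as `(r, c)` tuples (names are ours):

```python
def pacific_atlantic(heights):
    rows, cols = len(heights), len(heights[0])

    def reachable(starts):
        seen = set(starts)
        stack = list(starts)
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                # Walk uphill: water could have flowed FROM (nr,nc) TO (r,c).
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in seen
                        and heights[nr][nc] >= heights[r][c]):
                    seen.add((nr, nc))
                    stack.append((nr, nc))
        return seen

    pacific = reachable([(r, 0) for r in range(rows)] +
                        [(0, c) for c in range(cols)])
    atlantic = reachable([(r, cols - 1) for r in range(rows)] +
                         [(rows - 1, c) for c in range(cols)])
    return sorted(pacific & atlantic)
```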
Rotting Oranges
You are given an m x n grid where each cell can have one of three values:
`0` representing an empty cell, `1` representing a fresh orange, or `2` representing a rotten orange.
Every minute, any fresh orange that is 4-directionally adjacent to a rotten orange becomes rotten.
Return the minimum number of minutes that must elapse until no cell has a fresh orange. If this is impossible, return -1.
Visual Representation
Minute 0: [[2,1,1],[1,1,0],[0,1,1]]
Minute 1: [[2,2,1],[2,1,0],[0,1,1]]
Minute 2: [[2,2,2],[2,2,0],[0,1,1]]
Minute 3: [[2,2,2],[2,2,0],[0,2,1]]
Minute 4: [[2,2,2],[2,2,0],[0,2,2]]
Result: 4
Examples
Fresh oranges at (0,1), (0,2), (1,0), (1,1), (2,1), (2,2) rot in 4 minutes.
Level I: Brute Force (Simulation)
Intuition
Simulate the rotting process minute by minute. In each minute, look for fresh oranges that are adjacent to any rotten orange. Mark them to rot in the next minute. Repeat this until no new oranges rot.
Thought Process
- In a loop:
  - Check every cell. If it's fresh ('1') and has a rotten neighbor ('2'), mark it to rot.
  - Update the grid. Increment time.
  - If no orange rotted in this round, stop.
- After the loop, check if any fresh oranges remain.
Why it's inefficient
We iterate through the entire grid multiple times (up to m · n rounds in the worst case), leading to O((m · n)^2) work. Many paths are re-calculated.
Detailed Dry Run
Grid: [[2,1,1],[0,1,1],[1,0,1]]
- Min 1: (0,0) rots (0,1). Grid: [[2,2,1],...]
- Min 2: (0,1) rots (0,2). Grid: [[2,2,2],...]
- ...
Level III: Optimal (Multi-source BFS)
Intuition
Rotting spreads like a wave. BFS is perfect for level-by-level spreading. Instead of one source, we start with all initial rotten oranges in the queue.
Thought Process
- Count `fresh` oranges and add all rotten orange coordinates to a queue.
- While the queue is not empty and `fresh > 0`:
  - For all oranges currently in the queue (this level/minute):
    - Poll an orange and visit its 4 neighbors.
    - If a neighbor is fresh: mark it rotten, decrement `fresh`, and add it to the queue.
  - Increment `minutes` after processing each level.
- Return `minutes` if `fresh == 0`, else `-1`.
Pattern: Multi-source BFS
Detailed Dry Run
Grid: [[2,1,1],[1,1,0],[0,1,1]]
- Queue: [(0,0)], Fresh: 6
- Min 1: Rot (0,1), (1,0). Queue: [(0,1),(1,0)], Fresh: 4
- Min 2: Rot (0,2), (1,1). Queue: [(0,2),(1,1)], Fresh: 2 ...
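The multi-source BFS above can be sketched in Python (an illustrative version; it mutates the grid to mark rotten cells):

```python
from collections import deque

def oranges_rotting(grid):
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    fresh = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                queue.append((r, c))   # every rotten orange is a BFS source
            elif grid[r][c] == 1:
                fresh += 1

    minutes = 0
    while queue and fresh:
        for _ in range(len(queue)):    # one queue level == one minute
            r, c = queue.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                    grid[nr][nc] = 2
                    fresh -= 1
                    queue.append((nr, nc))
        minutes += 1
    return minutes if fresh == 0 else -1
```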
Course Schedule II
There are a total of numCourses courses you have to take, labeled from 0 to numCourses - 1. You are given an array prerequisites where prerequisites[i] = [a_i, b_i] indicates that you must take course b_i first if you want to take course a_i.
Return the ordering of courses you should take to finish all courses. If there are many valid answers, return any of them. If it is impossible to finish all courses, return an empty array.
Visual Representation
Prerequisites: [[1, 0], [2, 0], [3, 1], [3, 2]]
Graph:
0 -> 1 -> 3
0 -> 2 -> 3
Possible Order: [0, 1, 2, 3] or [0, 2, 1, 3]
Examples
To take course 3 you should have finished both courses 1 and 2. Both courses 1 and 2 should be taken after you finished course 0.
Level II: Intermediate (DFS - Post-order)
Intuition
In a DAG, if we perform DFS and record nodes in Post-order (after visiting all neighbors), and then reverse that list, we get a topological sort. We must also check for cycles using the recursion stack.
Thought Process
- Build the adjacency list.
- Use a `visited` set to track globally visited nodes and a `path` (recursion stack) set for cycle detection.
- In DFS(u):
  - If `u` in `path`, return `False` (cycle found).
  - If `u` in `visited`, return `True` (already processed).
  - Add `u` to `path` and `visited`.
  - DFS neighbors. If any fails, return `False`.
  - Remove `u` from `path` and append `u` to a result list.
- Call DFS for all unvisited nodes. Reverse the result list.
Pattern: DFS Topological Sort
Detailed Dry Run
num=2, pre=[[1,0]]
| Call | Path | Result | Action |
|---|---|---|---|
| DFS(0) | {0} | [] | Visit 1 |
| DFS(1) | {0,1} | [1] | No neighbors, backtrack |
| - | {0} | [1,0] | Done with 0 |
| Reverse [1,0] -> [0,1] |
Level III: Optimal (Kahn's Algorithm - BFS)
Intuition
Kahn's algorithm uses the concept of in-degrees (number of incoming edges). Nodes with 0 in-degree are ready to be taken. We repeatedly take such nodes and 'remove' their outgoing edges, potentially creating new 0 in-degree nodes.
Thought Process
- Calculate `indegree` for all nodes.
- Add all nodes with `indegree == 0` to a queue.
- While the queue is not empty:
  - Pop `u`, add it to the `result` list.
  - For each neighbor `v` of `u`:
    - Decrement `indegree[v]`.
    - If `indegree[v] == 0`, add `v` to the queue.
- If the `result` length == `numCourses`, return it; otherwise, a cycle exists.
Pattern: Topological Sort (BFS)
Detailed Dry Run
2 courses, 1 -> 0
- Indegrees: {0:1, 1:0}
- Queue: [1]
- Pop 1, Res: [1], Indegree 0 becomes 0, Queue [0]
- Pop 0, Res: [1, 0]. Result: [1, 0]
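Kahn's algorithm as described above can be sketched in Python (a minimal illustrative version):

```python
from collections import deque

def find_order(num_courses, prerequisites):
    adj = [[] for _ in range(num_courses)]
    indegree = [0] * num_courses
    for a, b in prerequisites:
        adj[b].append(a)       # must take b before a
        indegree[a] += 1

    queue = deque(u for u in range(num_courses) if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)        # u has no remaining prerequisites
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # Fewer than numCourses processed means a cycle blocked some courses.
    return order if len(order) == num_courses else []
```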
Graph Valid Tree
You have a graph of n nodes labeled from 0 to n - 1. You are given an array edges where edges[i] = [a_i, b_i] indicates that there is an undirected edge between nodes a_i and b_i in the graph.
Return true if the edges of the given graph make up a valid tree, and false otherwise.
Visual Representation
n=5, edges=[[0,1],[0,2],[0,3],[1,4]]
    0
  / | \
 1  2  3
 |
 4
Valid Tree! Result: true
Examples
All nodes are connected and there are no cycles.
Level I: Brute Force (DFS / BFS)
Intuition
A graph is a tree if it is connected and has no cycles. We can perform a traversal from any node and check:
- Did we visit all nodes?
- Did we encounter any already-visited nodes during traversal (cycle)?
Thought Process
- Build the adjacency list.
- Start DFS from node 0.
- Keep track of `visited` nodes. For each neighbor, make sure it's not the immediate parent (to handle undirected edges).
- If a neighbor is already in `visited`, return `False` (cycle).
- After DFS, check if `visited.size() == n`.
Pattern: Cycle Detection in Undirected Graph
Detailed Dry Run
n=3, edges=[[0,1],[1,2],[2,0]]
- DFS(0): mark 0 visited, neighbor 1.
- DFS(1): mark 1 visited, neighbor 0 (skip), neighbor 2.
- DFS(2): mark 2 visited, neighbor 1 (skip), neighbor 0 (ALREADY VISITED!) -> Cycle found, return False.
Level III: Optimal (Union-Find)
Intuition
This is the most efficient way to check for both conditions. As we add edges, if two nodes already have the same root, adding an edge between them creates a cycle. Connectivity is guaranteed if we process exactly n - 1 edges without cycles.
Thought Process
- Fundamental Tree Property: a valid tree with n nodes MUST have exactly n - 1 edges. If `len(edges) != n - 1`, return `False`.
- Use Union-Find (DSU) with Path Compression.
- For each edge `(u, v)`:
  - Find the roots of `u` and `v`.
  - If the roots are the same, return `False` (cycle detected).
  - Otherwise, union them.
- If all edges are processed, return `True`.
Pattern: Disjoint Set Union (DSU)
Detailed Dry Run
n=4, edges=[[0,1],[1,2],[2,3]]
- Edge (0,1): Union(0,1). Roots: 0,1. OK.
- Edge (1,2): Union(1,2). Roots: 0,2. OK.
- Edge (2,3): Union(2,3). Roots: 0,3. OK. All connected, no cycles. True.
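A compact Union-Find sketch of the checks above (illustrative naming):

```python
def valid_tree(n, edges):
    if len(edges) != n - 1:    # a tree on n nodes has exactly n - 1 edges
        return False
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False       # edge inside one component -> cycle
        parent[ru] = rv        # union the two components
    # n - 1 successful unions on n nodes guarantee connectivity.
    return True
```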
Minimum Height Trees
A tree is an undirected graph in which any two vertices are connected by exactly one path. In other words, any connected graph without simple cycles is a tree.
For a tree of nodes, you can choose any node as the root. The height of the tree is the maximum distance between the root and any leaf.
Return a list of all MHTs' root labels.
Visual Representation
n = 4, edges = [[1, 0], [1, 2], [1, 3]]
  0
  |
  1
 / \
2   3
Root 1 has height 1. Other roots have height 2.
Result: [1]
Examples
Root 1 gives the minimum height tree.
Level I: Brute Force (DFS/BFS from every node)
Intuition
To find the root that minimizes height, we can treat every node as the root one-by-one, perform a BFS/DFS to find its height, and pick the minimum.
Thought Process
- For each node i from 0 to n - 1:
  - Start a BFS/DFS from i.
  - Find the maximum distance from i to any other node (this is the height).
- Record the height of each node. Find the minimum height.
- Return all nodes that produce this minimum height.
Why it's inefficient
For each of the n nodes, we traverse all n nodes and edges. This leads to O(n^2) complexity.
Detailed Dry Run
n=3, edges=[[0,1],[1,2]]
- Root 0: Height 2 (path 0-1-2)
- Root 1: Height 1 (paths 1-0, 1-2)
- Root 2: Height 2 (path 2-1-0) Min Height: 1. Result: [1]
Level III: Optimal (Leaf Removal - BFS)
Intuition
The center of a tree is limited to 1 or 2 nodes. If we continuously remove the 'outer layers' (leaves), we will eventually converge on the center. This is similar to Kahn's algorithm but for undirected trees.
Thought Process
- Use an adjacency list and calculate the `degree` of each node.
- Add all nodes with `degree == 1` to a queue (the initial leaves).
- In a loop, while `remainingNodes > 2`:
  - Subtract the current leaves count from `remainingNodes`.
  - For each leaf in the queue:
    - Pop it and find its only neighbor `nbr`.
    - Decrement `degree[nbr]`.
    - If `degree[nbr]` becomes 1, add `nbr` to the next level's queue.
- The final remaining nodes in the queue are the roots of the MHTs.
Pattern: Topological Trimming
Detailed Dry Run
n=4, edges=[[1,0],[1,2],[1,3]]
- Degrees: {0:1, 1:3, 2:1, 3:1}. Leaves: [0, 2, 3]
- remainingNodes = 4; since 4 > 2, trim the leaves.
- Remove 0, 2, 3. Each removal decrements 1's degree (3 -> 2 -> 1), so 1 becomes a leaf.
- remainingNodes = 1; the loop stops. Result: [1]
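The leaf-trimming loop can be sketched in Python as follows (an illustrative version; sets make neighbor removal O(1)):

```python
from collections import deque

def find_min_height_trees(n, edges):
    if n <= 2:
        return list(range(n))  # 1 or 2 nodes: every node is a valid center
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    leaves = deque(u for u in range(n) if len(adj[u]) == 1)
    remaining = n
    while remaining > 2:
        remaining -= len(leaves)
        for _ in range(len(leaves)):   # process exactly one layer of leaves
            leaf = leaves.popleft()
            nbr = adj[leaf].pop()      # a leaf has exactly one neighbor
            adj[nbr].discard(leaf)
            if len(adj[nbr]) == 1:
                leaves.append(nbr)     # nbr became a leaf for the next layer
    return sorted(leaves)
```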
Network Delay Time
You are given a network of n nodes, labeled from 1 to n. You are also given times, a list of travel times as directed edges times[i] = (u_i, v_i, w_i), where u_i is the source node, v_i is the target node, and w_i is the time it takes for a signal to travel from source to target.
We will send a signal from a given node k. Return the minimum time it takes for all the n nodes to receive the signal. If it is impossible for all the n nodes to receive the signal, return -1.
Visual Representation
Nodes: 4, Times: [[2,1,1],[2,3,1],[3,4,1]], K: 2
(1) <--1-- (2) --1--> (3) --1--> (4)
Signal at 2 reaches 1 and 3 at T=1.
Signal reaches 4 at T=2.
Result: 2
Examples
Signal reaches all nodes in 2 units of time.
Level I: Brute Force (Bellman-Ford)
Intuition
Bellman-Ford is a simpler shortest path algorithm that works by repeatedly relaxing all edges. After n - 1 iterations, all shortest paths are guaranteed to be found.
Thought Process
- Initialize the `dist` array with infinity, `dist[k] = 0`.
- For n - 1 iterations:
  - For each edge `(u, v, w)` in `times`:
    - If `dist[u] + w < dist[v]`, update `dist[v] = dist[u] + w`.
- After the iterations, check `max(dist)`. If any node is still at infinity, return `-1`.
Why it's Level I
It is computationally slower than Dijkstra but much simpler to implement correctly as it doesn't require a priority queue.
Detailed Dry Run
n=4, k=2, edges=[[2,1,1],[2,3,1],[3,4,1]]
- Init: dist=[inf, 0, inf, inf] (using 1st index for node 1)
- Iter 1:
- edge (2,1,1): dist[1]=1
- edge (2,3,1): dist[3]=1
- edge (3,4,1): dist[4]=1+1=2
- Iter 2 & 3: No changes. Max dist = 2
Level III: Optimal (Dijkstra's Algorithm)
Intuition
This is a single-source shortest path problem on a weighted graph with non-negative weights. Dijkstra's algorithm uses a priority queue to always expand the node with the smallest known distance.
Thought Process
- Build an adjacency list: `u -> (v, w)`.
- Initialize a `dist` array with infinity, `dist[k] = 0`.
- Use a min-priority queue to store `(time, node)` pairs.
- While the queue is not empty:
  - Pop `(t, u)`. If `t > dist[u]`, continue (stale entry).
  - For each neighbor `(v, weight)` of `u`:
    - If `dist[u] + weight < dist[v]`, update `dist[v] = dist[u] + weight` and push to the queue.
- The answer is `max(dist)`. If any node is unreachable (infinity), return `-1`.
Pattern: Shortest Path (Dijkstra)
Detailed Dry Run
n=4, k=2, edges=[[2,1,1],[2,3,1],[3,4,1]]
| PQ (t,u) | dist[] | Action |
|---|---|---|
| (0,2) | [inf, 0, inf, inf] | Init |
| (1,1), (1,3) | [1, 0, 1, inf] | Pop 2, relax 1 & 3 |
| (1,3) | [1, 0, 1, inf] | Pop 1 |
| (2,4) | [1, 0, 1, 2] | Pop 3, relax 4 |
| Max(dist) = 2 |
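Dijkstra with a lazy-deletion heap, as described above, can be sketched in Python (an illustrative version):

```python
import heapq
from collections import defaultdict

def network_delay_time(times, n, k):
    adj = defaultdict(list)
    for u, v, w in times:
        adj[u].append((v, w))

    dist = {node: float('inf') for node in range(1, n + 1)}
    dist[k] = 0
    pq = [(0, k)]  # (best-known time, node)
    while pq:
        t, u = heapq.heappop(pq)
        if t > dist[u]:
            continue  # stale entry: a shorter path to u was already settled
        for v, w in adj[u]:
            if t + w < dist[v]:
                dist[v] = t + w
                heapq.heappush(pq, (t + w, v))

    longest = max(dist.values())
    return longest if longest < float('inf') else -1
```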
Cheapest Flights Within K Stops
There are n cities connected by some number of flights. You are given an array flights where flights[i] = [from_i, to_i, price_i] indicates that there is a flight from city from_i to city to_i with cost price_i.
You are also given three integers src, dst, and k. Return the cheapest price from src to dst with at most k stops. If there is no such route, return -1.
Visual Representation
(0) --100--> (1) --100--> (2)
| ^
|----------500------------|
K = 1:
Path 0 -> 1 -> 2: Price 200. Stops: 1 (OK)
Result: 200
K = 0:
Path 0 -> 2: Price 500. Stops: 0 (OK)
Path 0 -> 1 -> 2: Stops: 1 (FAIL)
Result: 500
Examples
0 to 1 then to 2 is cheapest within 1 stop.
Level I: Brute Force (BFS with Depth)
Intuition
We can use BFS to explore paths level by level. Each level represents one 'stop'. We stop exploring when we exceed k stops.
Thought Process
- Build the adjacency list.
- The queue stores `(node, current_price, current_stops)`.
- Keep a `min_prices` array to prune paths that are more expensive than what we've already seen at a node.
- While the queue is not empty:
  - Pop `(u, price, stops)`.
  - If `stops > k`, skip.
  - For each neighbor `v` with flight cost `w`:
    - If `price + w < min_prices[v]`:
      - Set `min_prices[v] = price + w`.
      - Push `(v, price + w, stops + 1)`.
Pattern: Level-wise BFS
Detailed Dry Run
k=1, src=0, dst=2, edges=[[0,1,100],[1,2,100],[0,2,500]]
- Queue: [(0, 0, -1)] (stops starts at -1 as first flight is 0 stops)
- Pop (0,0,-1). Neighbors: (1, 100, 0), (2, 500, 0). min_prices: {1:100, 2:500}
- Pop (1,100,0). Neighbors: (2, 200, 1). 200 < 500. min_prices: {2:200}
- Pop (2,500,0). Neighbors: None.
- Pop (2,200,1). Stops = k. Done. Result: 200
Level III: Optimal (Bellman-Ford / BFS)
Intuition
Dijkstra focuses on shortest distance, but here we have a constraint on the number of edges (stops). Bellman-Ford is perfect for this: we run the relaxation process exactly k+1 times to find the shortest path with at most k edges.
Thought Process
- Initialize the `prices` array with infinity, `prices[src] = 0`.
- For `i` from 0 to `k`:
  - Create a `temp` copy of `prices` to avoid using prices updated in the SAME iteration (level-wise relaxation).
  - For each flight `(u, v, cost)`:
    - If `prices[u]` is not infinity, update `temp[v] = min(temp[v], prices[u] + cost)`.
  - Set `prices = temp`.
- Return `prices[dst]` if reachable, else `-1`.
Pattern: Constrained Shortest Path
Detailed Dry Run
k=1, src=0, dst=2, flights=[[0,1,100],[1,2,100],[0,2,500]]
| Iter | 0 | 1 | 2 | Actions |
|---|---|---|---|---|
| Init | 0 | inf | inf | - |
| 1 | 0 | 100 | 500 | Relax edges from 0 |
| 2 | 0 | 100 | 200 | Relax edge 1->2 (100+100=200) |
| Result: 200 |
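The edge-bounded Bellman-Ford above can be sketched in Python (a minimal illustrative version; the `temp` snapshot is what caps each round at one extra edge):

```python
def find_cheapest_price(n, flights, src, dst, k):
    prices = [float('inf')] * n
    prices[src] = 0
    for _ in range(k + 1):     # at most k stops => at most k + 1 edges
        temp = prices[:]       # snapshot: relax against the last round only
        for u, v, cost in flights:
            if prices[u] != float('inf') and prices[u] + cost < temp[v]:
                temp[v] = prices[u] + cost
        prices = temp
    return prices[dst] if prices[dst] != float('inf') else -1
```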
Word Ladder
A transformation sequence from word beginWord to word endWord using a dictionary wordList is a sequence of words beginWord -> s_1 -> s_2 -> ... -> s_k such that:
- Every adjacent pair of words differs by a single letter.
- Every `s_i` for `1 <= i <= k` is in `wordList`.
- `s_k == endWord`.
Return the number of words in the shortest transformation sequence, or 0 if no such sequence exists.
Visual Representation
Begin: "hit", End: "cog", List: ["hot","dot","dog","lot","log","cog"]
"hit" -> "hot" -> "dot" -> "dog" -> "cog"
Path length: 5
Examples
hit -> hot -> dot -> dog -> cog (5 words).
Level I: Brute Force (BFS with Word Comparison)
Intuition
The simplest BFS approach is to compare the current word with every word in the dictionary to find neighbors (words with only 1 character difference).
Thought Process
- Use a queue for BFS, starting with `(beginWord, 1)`.
- In each step:
  - Pop `word, dist`.
  - Iterate through every `w` in `wordList`:
    - If `w` is not visited and `isNeighbor(word, w)`:
      - If `w == endWord`, return `dist + 1`.
      - Mark `w` as visited and add it to the queue.
Why it's Level I
If N is the number of words and L is the word length, this involves O(N · L) comparisons for every word we visit, leading to O(N^2 · L) total work.
Detailed Dry Run
"hit" -> "cog", ["hot", "dot", "dog", "cog"]
- "hit": neighbors? Scan list. Found "hot".
- "hot": neighbors? Scan list. Found "dot", "lot".
- "dot": neighbors? Scan list. Found "dog".
- "dog": neighbors? Scan list. Found "cog". DONE.
Level III: Optimal (BFS with Char Change)
Intuition
Instead of comparing with every word, we can generate all 26 possible neighbors for each character position. This is much faster when the alphabet size is small.
Thought Process
- Store all words in a set for fast lookups.
- Use a queue for BFS, starting with `(beginWord, 1)`.
- While the queue is not empty:
  - Pop `word, dist`.
  - For each character position `i` in `word`:
    - Replace `word[i]` with every letter from 'a' to 'z'.
    - If the new string exists in the set:
      - If it's the `endWord`, return `dist + 1`.
      - Add it to the queue and remove it from the set (marking it visited).
Pattern: BFS State Generation
Detailed Dry Run
hit -> hot -> dot -> dog -> cog
- hit: change h to i,j,k... change i to a,b,c... change t to a,b,c,s,o (hot!)
- Add hot to queue, remove from set.
- Process 'hot' similarly.
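The character-substitution BFS can be sketched in Python as follows (illustrative naming):

```python
from collections import deque
import string

def ladder_length(begin_word, end_word, word_list):
    words = set(word_list)      # O(1) membership tests
    if end_word not in words:
        return 0

    queue = deque([(begin_word, 1)])
    while queue:
        word, dist = queue.popleft()
        for i in range(len(word)):
            for ch in string.ascii_lowercase:   # 26 candidates per position
                nxt = word[:i] + ch + word[i + 1:]
                if nxt == end_word:
                    return dist + 1
                if nxt in words:
                    words.remove(nxt)           # mark visited by deletion
                    queue.append((nxt, dist + 1))
    return 0
```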
Shortest Path in Binary Matrix
Given an n x n binary matrix grid, return the length of the shortest clear path in the matrix. If there is no clear path, return -1.
A clear path in a binary matrix is a path from the top-left cell (i.e., (0, 0)) to the bottom-right cell (i.e., (n - 1, n - 1)) such that:
- All the visited cells of the path are `0`.
- All the adjacent cells of the path are 8-directionally connected.
- The length of a clear path is the number of visited cells of this path.
Visual Representation
Grid:
0 0 0
1 1 0
1 1 0
Path: (0,0) -> (0,1) -> (1,2) -> (2,2)
Length: 4
Examples
Clear path found with 4 units length.
Level II: Intermediate (BFS without In-place Modification)
Intuition
A standard BFS using a visited set instead of modifying the input grid. This is often preferred in production code to avoid side effects.
Thought Process
- Check starting and ending cells.
- Use a `visited` set to store `(r, c)`.
- The queue stores `(r, c, distance)`.
- Standard 8-directional exploration.
Pattern: BFS with Visited Set
Detailed Dry Run
3x3 grid, (0,0) is clear.
- Queue: [(0,0,1)], Visited: {(0,0)}
- Pop (0,0,1). Neighbors: (0,1), (1,1), (1,0). Add all to Visited and Queue.
- Continue until (n-1, n-1) is reached.
Level III: Optimal (BFS with In-place Marking)
Intuition
To save space, we can mark visited cells directly in the grid (e.g., set grid[r][c] = 1) instead of keeping a separate visited set. BFS finds the shortest path in an unweighted grid.
Thought Process
- Early exits for a blocked start or end.
- Use a queue for BFS, starting at `(0, 0, 1)`.
- Mark `grid[0][0] = 1`.
- Explore all 8 neighbors using a predefined `directions` array.
- If a neighbor is `0`, mark it as `1` and queue it.
Pattern: 8-Direction BFS
Detailed Dry Run
2x2 grid [[0,1],[1,0]]
- (0,0) Start. Mark grid[0][0]=1.
- Neighbors: (0,1)-X, (1,0)-X, (1,1)-OK.
- (1,1) reached. Return 2.
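The 8-direction BFS with in-place marking can be sketched in Python (an illustrative version; it mutates the grid):

```python
from collections import deque

def shortest_path_binary_matrix(grid):
    n = len(grid)
    if grid[0][0] or grid[n - 1][n - 1]:
        return -1              # blocked start or end
    queue = deque([(0, 0, 1)]) # (row, col, path length so far)
    grid[0][0] = 1             # mark visited in place
    while queue:
        r, c, d = queue.popleft()
        if r == n - 1 and c == n - 1:
            return d
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):   # all 8 directions (dr == dc == 0 is a no-op)
                nr, nc = r + dr, c + dc
                if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] == 0:
                    grid[nr][nc] = 1
                    queue.append((nr, nc, d + 1))
    return -1
```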
Alien Dictionary
There is a new alien language that uses the English alphabet. However, the order among the letters is unknown to you.
You are given a list of strings words from the alien language's dictionary, where the strings in words are sorted lexicographically by the rules of this new language.
Return a string of the unique letters in the new alien language sorted in lexicographically increasing order by the new language's rules. If there is no solution, return "". If there are multiple solutions, return any of them.
Visual Representation
Words: ["wrt","wrf","er","ett","rftt"]
Comparisons:
1. "wrt" vs "wrf" -> t comes before f (t -> f)
2. "wrf" vs "er" -> w comes before e (w -> e)
3. "er" vs "ett" -> r comes before t (r -> t)
4. "ett" vs "rftt" -> e comes before r (e -> r)
Graph: w -> e -> r -> t -> f
Result: "wertf"
Examples
From the comparisons, the order is w < e < r < t < f.
Level II: Intermediate (Topological Sort / DFS)
Intuition
Alternatively, you can use DFS to perform a topological sort. For each character, we explore all its 'neighbors' and then add the character to a stack.
Thought Process
- Graph preparation is the same as Level III.
- Use a `state` map: 0 = unvisited, 1 = visiting (checking for cycles), 2 = visited.
- In DFS(u):
  - If the state is 1, a cycle is found.
  - If the state is 2, return OK.
  - Set the state to 1.
  - DFS neighbors. If any fails, return false.
  - Set the state to 2 and prepend `u` to the result list.
Pattern: DFS Cycle Detection
Detailed Dry Run
["wrt","wrf","er"]
- t->f, w->e
- DFS(w): DFS(e), prepend e, prepend w. Order: we...
- DFS(r)... final order wertf.
Level III: Optimal (Topological Sort / Kahn's)
Intuition
The sorted list of words gives us relative ordering of characters. This is a classic Topological Sort problem. We extract dependencies from adjacent words and then apply Kahn's algorithm.
Thought Process
- Initialize a graph `adj` and an `indegree` map for all unique characters.
- Compare every two adjacent words `word1` and `word2`:
  - Find the first non-matching characters `c1` in `word1` and `c2` in `word2`.
  - Add the edge `c1 -> c2` and increment `indegree[c2]`.
  - Edge case: if `word2` is a prefix of `word1` (e.g., "abc", "ab"), return `""` (invalid).
- Apply Kahn's Algorithm (standard BFS topological sort).
- If the resulting string length != the number of unique characters, a cycle exists; return `""`.
Pattern: Lexicographical Ordering / Kahn's
Detailed Dry Run
["wrt","wrf","er"]
- t -> f, w -> e
- Unique: {w,r,t,f,e}
- Indegrees: f:1, e:1, others:0
- Queue: [w, r, t]
- Pop w: Order "w", e's indegree -> 0, Queue [r, t, e]
- ... final order "wertf"
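A compact sketch of the Kahn's-algorithm approach (naming is illustrative, not from the source):

```python
from collections import deque

def alien_order(words):
    """Kahn's BFS topological sort over the character graph (sketch)."""
    adj = {c: set() for w in words for c in w}
    indeg = {c: 0 for c in adj}
    for w1, w2 in zip(words, words[1:]):
        for c1, c2 in zip(w1, w2):
            if c1 != c2:
                if c2 not in adj[c1]:        # avoid double-counting indegree
                    adj[c1].add(c2)
                    indeg[c2] += 1
                break
        else:
            if len(w2) < len(w1):            # "ab" after "abc" is invalid
                return ""

    queue = deque(c for c in indeg if indeg[c] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # If a cycle blocked some characters, fewer than len(adj) were emitted.
    return "".join(order) if len(order) == len(adj) else ""
```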
Minimum Cost to Connect All Points
You are given an array points representing integer coordinates of some points on a 2D-plane, where points[i] = [x_i, y_i].
The cost of connecting two points [x_i, y_i] and [x_j, y_j] is the manhattan distance between them: |x_i - x_j| + |y_i - y_j|.
Return the minimum cost to make all points connected. All points are connected if there is exactly one simple path between any two points.
Visual Representation
Points: [[0,0],[2,2],[3,10],[5,2],[7,0]]
(3,10)
|
|
(0,0)-(2,2)-(5,2)-(7,0)
Min Cost to connect all points: 20
Examples
Points are connected with total manhattan distance 20.
Level II: Intermediate (Kruskal's Algorithm)
Intuition
Kruskal's algorithm works by sorting all potential edges by weight and adding them one by one if they don't form a cycle (using Union-Find).
Thought Process
- Generate all possible edges with their Manhattan distances.
- Sort edges by weight.
- Use Union-Find to process edges.
- Add edge to MST if endpoints are in different components.
- Stop when `n - 1` edges have been added.
Why it's Level II
It's very intuitive, but on this dense graph (all O(N^2) pairwise edges) sorting costs O(N^2 log N), slightly slower than Prim's O(N^2) array version.
Detailed Dry Run
Points: [0,0], [2,2], [7,0]
- Edges: (0,1):4, (1,2):7, (0,2):7.
- Sort: [(0,1):4, (1,2):7, (0,2):7]
- Add (0,1): Cost=4. OK.
- Add (1,2): Cost=4+7=11. OK. Result: 11
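The Kruskal steps above can be sketched with a small Union-Find (a minimal version under the problem's Manhattan-distance setup; names are illustrative):

```python
def min_cost_kruskal(points):
    """Kruskal's MST over all pairwise Manhattan edges (sketch)."""
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            edges.append((abs(x1 - x2) + abs(y1 - y2), i, j))
    edges.sort()                        # cheapest edges first

    parent = list(range(n))
    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost = used = 0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                    # skip edges that would form a cycle
            parent[ri] = rj
            cost += w
            used += 1
            if used == n - 1:           # MST complete
                break
    return cost
```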
Level III: Optimal (Prim's Algorithm)
Intuition
For a dense graph (where every point connects to every other point), Prim's algorithm with a simple array is more efficient than Kruskal's or Prim's with a heap.
Thought Process
- Maintain a `minDist` array where `minDist[i]` is the shortest distance from the current MST to point `i`.
- Start with point 0, `minDist[0] = 0`.
- In each of `n` iterations:
  - Find the unvisited point `u` with the smallest `minDist`.
  - Add `minDist[u]` to the total cost.
  - Mark `u` as visited.
  - For every unvisited point `v`, update `minDist[v] = min(minDist[v], dist(u, v))`.
Pattern: Prim's on Dense Graph
Detailed Dry Run
Points: [0,0], [2,2], [3,10]
- Start with node 0.
- Node 0 to 1: dist 4. Node 0 to 2: dist 13.
- Choose node 1 (min dist 4). Total = 4.
- Node 1 to 2: dist 9. 9 < 13. Update minDist[2] = 9.
- Choose node 2. Total = 4 + 9 = 13. Result: 13
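A sketch of the array-based Prim's loop described above (illustrative naming; no heap needed on a dense graph):

```python
def min_cost_prim(points):
    """Prim's MST with a plain minDist array, O(n^2) for dense graphs (sketch)."""
    n = len(points)
    INF = float("inf")
    min_dist = [INF] * n                # distance from the growing MST to each point
    min_dist[0] = 0                     # start the tree at point 0
    visited = [False] * n
    total = 0
    for _ in range(n):
        # Pick the closest unvisited point to the current tree.
        u = min((i for i in range(n) if not visited[i]), key=lambda i: min_dist[i])
        visited[u] = True
        total += min_dist[u]
        # Relax distances through the newly added point u.
        for v in range(n):
            if not visited[v]:
                d = abs(points[u][0] - points[v][0]) + abs(points[u][1] - points[v][1])
                min_dist[v] = min(min_dist[v], d)
    return total
```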
Swim in Rising Water
You are given an n x n integer matrix grid where each value grid[i][j] represents the elevation at that point (i, j).
At time t, the water level everywhere is t. You can swim from a square to another 4-directionally adjacent square if and only if the elevation of both squares are at most t. You can swim infinite distances in zero time. You start at the top left square (0, 0) at time 0.
Return the least time until you can reach the bottom right square (n - 1, n - 1).
Visual Representation
Grid:
0 1 2
3 4 5
6 7 8
You start at 0. You need time 8 to reach 8 because the bottom-right corner itself is 8.
At t=8, all cells are <= 8, so a path exists.
Result: 8
Examples
With this grid, the water must rise to t = 8 before the bottom-right cell can be entered.
Level II: Intermediate (Binary Search on Answer + BFS)
Intuition
Since the possible time answers range from 0 to n*n - 1, we can binary search for the minimum time t such that there is a path from (0, 0) to (n-1, n-1) using only cells with elevation <= t.
Thought Process
- Search range `low = 0, high = N*N - 1`.
- For each `mid` (candidate time):
  - Run a BFS/DFS from `(0,0)`.
  - Only move to neighbors with `grid[nr][nc] <= mid`.
  - If `(N-1, N-1)` is reachable, `mid` is possible; try lower.
  - Otherwise, `mid` is impossible; try higher.
Why it's Level II
It's very robust and often easier to come up with than the Dijkstra/heap-based approach.
Detailed Dry Run
3x3, grid[0][0]=3, grid[2][2]=5
- high=8, low=0. mid=4. Can we reach 2,2? Start (0,0) is 3. OK. BFS explores neighbors <= 4. If path exists, high=4.
- mid=2. grid[0][0]=3 > 2. Impossible. low=3.
- Continue binary search.
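The binary-search-on-answer approach can be sketched as follows (a minimal version; function and variable names are my own):

```python
from collections import deque

def swim_binary_search(grid):
    """Binary search on the answer t; feasibility is checked by BFS (sketch)."""
    n = len(grid)

    def reachable(t):
        """Can we reach (n-1, n-1) using only cells with elevation <= t?"""
        if grid[0][0] > t:
            return False
        seen = {(0, 0)}
        queue = deque([(0, 0)])
        while queue:
            r, c = queue.popleft()
            if (r, c) == (n - 1, n - 1):
                return True
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen \
                        and grid[nr][nc] <= t:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return False

    lo, hi = grid[0][0], n * n - 1      # answer is at least the start elevation
    while lo < hi:                      # find the smallest feasible t
        mid = (lo + hi) // 2
        if reachable(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```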
Level III: Optimal (Dijkstra's Algorithm)
Intuition
This is a pathfinding problem where the weight of a path is the maximum elevation on that path. We want to find the path with the minimum such weight. This is a variant of Dijkstra where the 'distance' is the max-so-far elevation.
Thought Process
- Use a Min-Heap/Priority Queue to store `(max_elevation_so_far, r, c)`.
- Start with `(grid[0][0], 0, 0)`.
- Keep a `visited` set to avoid cycles.
- In each step:
  - Pop `(t, r, c)`. This `t` is the minimum time needed to reach this cell.
  - If `(r, c)` is `(n-1, n-1)`, return `t`.
  - For each 4-directional neighbor `(nr, nc)`:
    - The time needed to reach the neighbor is `max(t, grid[nr][nc])`.
    - If unvisited, push it to the queue.
Pattern: Min-Max Path (Dijkstra)
Detailed Dry Run
Grid: [[0,2],[1,3]]
- PQ: [(0, 0, 0)]
- Pop (0,0,0). Neighbors: (0,1): max(0,2)=2, (1,0): max(0,1)=1.
- PQ: [(1, 1, 0), (2, 0, 1)]
- Pop (1,1,0). Neighbors: (1,1): max(1,3)=3.
- PQ: [(2, 0, 1), (3, 1, 1)]
- Pop (2,0,1). Neighbors: (1,1): max(2,3)=3. (Already in PQ with 3, no update).
- Pop (3,1,1). Reached (1,1). Return 3. Result: 3
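The min-max Dijkstra variant above can be sketched as (illustrative naming):

```python
import heapq

def swim_dijkstra(grid):
    """Dijkstra variant where 'distance' is the max elevation on the path (sketch)."""
    n = len(grid)
    pq = [(grid[0][0], 0, 0)]           # (max_elevation_so_far, r, c)
    seen = {(0, 0)}
    while pq:
        t, r, c = heapq.heappop(pq)
        if (r, c) == (n - 1, n - 1):
            return t                    # first pop of the target is optimal
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen:
                seen.add((nr, nc))
                heapq.heappush(pq, (max(t, grid[nr][nc]), nr, nc))
    return -1                           # unreachable (cannot happen on a valid grid)
```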
Critical Connections in a Network
There are n servers numbered from 0 to n - 1 connected by undirected server-to-server connections forming a network where connections[i] = [a_i, b_i] represents a connection between servers a_i and b_i.
Any connected graph with no cycles is a tree. A critical connection is a connection that, if removed, will make some servers unable to reach some other server.
Return all critical connections in the network in any order.
Visual Representation
Servers: 4, Connections: [[0,1],[1,2],[2,0],[1,3]]
0---1---3
\ /
2
Removing [1,3] splits the graph.
Removing [0,1], [1,2], or [2,0] doesn't split it (cycle 0-1-2).
Result: [[1,3]]
Examples
[1,3] is the only bridge.
Level I: Brute Force (Edge Removal + Connectivity)
Intuition
Try removing each edge one by one and check if the graph remains connected. If it doesn't, that edge is a critical connection.
Complexity
- Time: O(E * (V + E)) — one connectivity check per removed edge.
- Space: O(V + E)
Detailed Dry Run
4 nodes, edges [[0,1],[1,2],[2,0],[1,3]]. Remove [1,3]: node 3 becomes unreachable. Critical!
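The brute force can be sketched as follows (a minimal version; names are my own):

```python
from collections import deque

def critical_connections_bruteforce(n, connections):
    """Remove each edge in turn and test connectivity by BFS: O(E * (V + E)) (sketch)."""
    def connected_without(skip):
        adj = [[] for _ in range(n)]
        for idx, (a, b) in enumerate(connections):
            if idx != skip:             # rebuild the graph minus one edge
                adj[a].append(b)
                adj[b].append(a)
        seen = {0}
        queue = deque([0])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return len(seen) == n           # still one connected component?

    return [connections[i] for i in range(len(connections))
            if not connected_without(i)]
```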
Level III: Optimal (Tarjan's Bridge-Finding Algorithm)
Intuition
Tarjan's algorithm uses DFS to find bridges by tracking discovery times and the lowest discovery time reachable via back-edges.
Pattern: Bridge Finding (Tarjan)
Detailed Dry Run
Graph: 0-1, 1-2, 2-0, 1-3. Bridge condition: low[v] > disc[u]. At node 1, child 3 has low[3]=3, disc[1]=1. 3 > 1 is TRUE. [1,3] is bridge.
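Tarjan's bridge condition from the dry run can be sketched as (illustrative naming; assumes no parallel edges, as in the problem):

```python
def critical_connections(n, connections):
    """Tarjan's bridge finding: low[v] > disc[u] marks tree edge (u, v) as a bridge (sketch)."""
    adj = [[] for _ in range(n)]
    for a, b in connections:
        adj[a].append(b)
        adj[b].append(a)

    disc = [-1] * n                     # discovery times, -1 = unvisited
    low = [0] * n                       # lowest discovery time reachable via back-edges
    bridges = []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue                # don't go straight back up the tree edge
            if disc[v] == -1:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:    # v cannot climb back above u: bridge
                    bridges.append([u, v])
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges
```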
Longest Path in a DAG
Given a Directed Acyclic Graph (DAG) and a source vertex src, find the distance to the farthest vertex from src.
In a general graph, finding the longest path is NP-hard. However, in a DAG, it can be solved in linear time using Topological Sort.
Visual Representation
Nodes: 0 to 5
Edges: (0,1,5), (0,2,3), (1,3,6), (1,2,2), (2,4,4), (2,5,2), (2,3,7), (3,4,-1), (4,5,-2)
Topological Order: 0, 1, 2, 3, 4, 5
Relax edges in this order to find max distances.
Examples
Distances from node 1: to 3 is 9 (1->2->3 is 2+7), to 4 is 8, etc.
Level I: Brute Force (Recursion + Memoization)
Intuition
The longest path from a node u is max(weight(u, v) + longestPath(v)) for all neighbors v. Since the graph is a DAG, we can use memoization to avoid redundant computations.
Thought Process
- Define `dfs(u)` as the longest path starting from node `u`.
- In `dfs(u)`:
  - If `u` is already in `memo`, return `memo[u]`.
  - Initialize `max_dist = 0` (stopping at `u` is the base case).
  - For each neighbor `v` with weight `w`: `max_dist = max(max_dist, w + dfs(v))`.
  - Store and return `max_dist`.
Complexity
- Time: O(V + E) — memoization ensures each node and edge is processed once.
- Space: O(V) for the memo table and recursion stack.
Detailed Dry Run
src=1. With base case 0, a path may stop at any node (important with negative weights).
- dfs(5) = 0 (sink).
- dfs(4) = max(0, -2 + dfs(5)) = 0.
- dfs(3) = max(0, -1 + dfs(4)) = 0.
- dfs(2) = max(0, 4 + dfs(4), 2 + dfs(5), 7 + dfs(3)) = 7.
- dfs(1) = max(0, 6 + dfs(3), 2 + dfs(2)) = 9, via 1 -> 2 -> 3.
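The memoized recursion can be sketched as follows (a minimal version; the base case 0 lets a path stop at any node, which is what makes negative edges safe here):

```python
from functools import lru_cache

def farthest_distance(src, n, edges):
    """Longest distance from src in a DAG via memoized DFS (sketch).
    dfs(u) may stop at u (base case 0), so it equals the distance
    to the vertex farthest from u."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))

    @lru_cache(maxsize=None)            # memoization: each node solved once
    def dfs(u):
        best = 0                        # stopping at u is always allowed
        for v, w in adj[u]:
            best = max(best, w + dfs(v))
        return best

    return dfs(src)
```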
Level III: Optimal (Topological Sort + Relaxation)
Intuition
Processing nodes in topological order allows finding the longest path in O(V + E).
Pattern: DAG Relaxation
Detailed Dry Run
Order: 0,1,2,3,4,5. Relax edges from 1. 1->2->3 (weight 2+7=9) beats the direct edge 1->3 (weight 6).
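The topological-order relaxation can be sketched as (illustrative naming; Kahn's algorithm supplies the order):

```python
from collections import deque

def dag_longest_distances(src, n, edges):
    """Topological sort (Kahn) + edge relaxation for longest distances (sketch)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1

    # Kahn's algorithm yields a topological order.
    queue = deque(i for i in range(n) if indeg[i] == 0)
    topo = []
    while queue:
        u = queue.popleft()
        topo.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    NEG_INF = float("-inf")
    dist = [NEG_INF] * n
    dist[src] = 0
    for u in topo:                      # relax all outgoing edges in topo order
        if dist[u] == NEG_INF:
            continue                    # u is not reachable from src
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist
```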
Reconstruct Itinerary
You are given a list of airline tickets where tickets[i] = [from_i, to_i] represent the departure and the arrival airports of one flight. Reconstruct the itinerary in order and return it.
All of the tickets belong to a man who departs from "JFK", thus, the itinerary must begin with "JFK". If there are multiple valid itineraries, you should return the itinerary that has the smallest lexical order when read as a single string.
Visual Representation
Tickets: [["MUC","LHR"], ["JFK","MUC"], ["SFO","SJC"], ["LHR","SFO"]]
Graph: JFK -> MUC -> LHR -> SFO -> SJC
Result: ["JFK", "MUC", "LHR", "SFO", "SJC"]
Examples
Standard path starting from JFK.
Level III: Optimal (Hierholzer's Algorithm)
Intuition
This is finding an Eulerian Path in a directed graph. An Eulerian Path visits every edge exactly once. Hierholzer's algorithm finds this by performing DFS and appending nodes to the result only when they have no more outgoing edges.
Thought Process
- Build a graph where each node leads to a PriorityQueue of destinations (to handle lexicographical order requirement).
- Start DFS from "JFK".
- In DFS:
- While the current airport has outgoing tickets:
- Pop the smallest destination and recurse.
- Add current airport to the head of the result list (or append and reverse at the end).
- While the current airport has outgoing tickets:
Pattern: Eulerian Path (Hierholzer)
Detailed Dry Run
JFK -> [MUC, SFO], MUC -> [LHR], LHR -> [JFK]
- DFS(JFK): Pop MUC, DFS(MUC)
- DFS(MUC): Pop LHR, DFS(LHR)
- DFS(LHR): Pop JFK... Final order built in reverse: [SFO, JFK, LHR, MUC, JFK]. Reverse it.
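Hierholzer's post-order trick can be sketched as follows (a minimal version; a min-heap per airport gives the lexicographically smallest itinerary):

```python
import heapq
from collections import defaultdict

def find_itinerary(tickets):
    """Hierholzer's algorithm for the lexicographically smallest Eulerian path (sketch)."""
    adj = defaultdict(list)             # airport -> min-heap of destinations
    for src, dst in tickets:
        heapq.heappush(adj[src], dst)

    route = []
    def visit(airport):
        while adj[airport]:             # always follow the smallest destination first
            visit(heapq.heappop(adj[airport]))
        route.append(airport)           # append only when no outgoing tickets remain

    visit("JFK")
    return route[::-1]                  # the route was built in reverse post-order
```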
Shortest Path to Get All Keys
You are given an m x n grid grid where:
- '.' is an empty cell.
- '#' is a wall.
- '@' is the starting point.
- lowercase letters are keys.
- uppercase letters are locks.
You start at the starting point and one move consists of walking one space in one of the four cardinal directions. You cannot walk outside the grid, or through a wall. If you walk over a key, you pick it up. If you walk over a lock and have the matching key, you can pass through it.
Return the lowest number of moves to acquire all keys. If it's impossible, return -1.
Visual Representation
Grid:
@ . a . #
# # # . #
b . A . B
Starting at @. Path:
1. Pick 'a'
2. Use 'a' to open 'A'
3. Pick 'b'
Done! Return shortest path length.
Examples
Shortest path to pick up all keys 'a' and 'b'.
Level III: Optimal (BFS with State - Bitmask)
Intuition
The 'state' of our BFS is not just our position (r, c), but also which keys we possess. We can represent the set of obtained keys as a bitmask. This turns the problem into a standard shortest path BFS in a state space of (r, c, mask).
Thought Process
- Find the start position `@` and count the total number of keys `K`.
- The target mask is `(1 << K) - 1`.
- The queue stores `(r, c, mask, distance)`; `visited[r][c][mask]` tracks seen states.
- In each step:
  - If the current cell is a key, update `newMask = mask | (1 << (key - 'a'))`.
  - If `newMask == targetMask`, return `dist`.
  - If the current cell is a lock and we don't have the matching key, skip it.
  - Otherwise follow the standard 4-direction BFS flow.
Pattern: BFS with State Bitmasking
Detailed Dry Run
2 keys. Target mask = 11 (3). Start (0,0) mask 00.
- Reach 'a' (0,2). Mask = 01.
- Reach 'A' (2,2). Lock 'A' (bit 0) matches mask bit 0. Passage allowed.
- Reach 'b' (2,0). Mask = 11. Target reached!
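The state-space BFS can be sketched as follows (a minimal version over the grid from the visual; names are my own):

```python
from collections import deque

def shortest_path_all_keys(grid):
    """BFS over (row, col, key-bitmask) states (sketch)."""
    m, n = len(grid), len(grid[0])
    keys = 0
    for r in range(m):
        for c in range(n):
            if grid[r][c] == '@':
                start = (r, c)
            elif grid[r][c].islower():
                keys += 1
    target = (1 << keys) - 1            # mask with every key collected

    queue = deque([(start[0], start[1], 0, 0)])   # (r, c, mask, dist)
    seen = {(start[0], start[1], 0)}
    while queue:
        r, c, mask, dist = queue.popleft()
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if not (0 <= nr < m and 0 <= nc < n):
                continue
            cell = grid[nr][nc]
            if cell == '#':
                continue                # wall
            if cell.isupper() and not (mask >> (ord(cell.lower()) - ord('a'))) & 1:
                continue                # locked door and we lack the key
            nmask = mask | (1 << (ord(cell) - ord('a'))) if cell.islower() else mask
            if nmask == target:
                return dist + 1         # BFS: first completion is shortest
            if (nr, nc, nmask) not in seen:
                seen.add((nr, nc, nmask))
                queue.append((nr, nc, nmask, dist + 1))
    return -1
```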
Find the City With the Smallest Number of Neighbors at a Threshold Distance
There are n cities numbered from 0 to n-1. Given the array edges where edges[i] = [from_i, to_i, weight_i] represents a bidirectional weighted edge between cities from_i and to_i, and given the integer distanceThreshold.
Return the city with the smallest number of cities that are reachable through some path and whose distance is at most distanceThreshold. If there are multiple such cities, return the city with the greatest number.
Visual Representation
Cities 4, Edges: [[0,1,3],[1,2,1],[1,3,4],[2,3,1]], Threshold 4
0 --3-- 1 --1-- 2
         \     /
          4   1
           \ /
            3
City 0: Neighbors {1,2} (Dist 3, 4) - Count 2
City 3: Neighbors {1,2} (Dist 2, 1) - Count 2
... compare all, find min count.
Examples
City 3 has 2 neighbors within distance 4, same as city 0, but 3 > 0.
Level III: Optimal (Floyd-Warshall Algorithm)
Intuition
This problem requires finding all-pairs shortest paths to count how many cities are within the threshold for each city. Floyd-Warshall is the standard all-pairs algorithm when the vertex count is small, since it runs in O(n^3).
Thought Process
- Initialize an `n x n` `dist` matrix with infinity, and `dist[i][i] = 0`.
- For each edge `(u, v, w)`, set `dist[u][v] = dist[v][u] = w`.
- Run Floyd-Warshall: three nested loops `k, i, j` update `dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])`.
- For each city `i`, count how many `j` have `dist[i][j] <= distanceThreshold`.
- Track the minimum count and the maximum index among cities with that count.
Pattern: All-Pairs Shortest Path
Detailed Dry Run
N=4, Threshold=4
- After Floyd-Warshall, dist[0][1]=3, dist[0][2]=4, dist[0][3]=5.
- City 0 count (<= 4): 2 (cities 1 and 2).
- City 3 count (<= 4): dist[3][1]=2, dist[3][2]=1, dist[3][0]=5. Count 2 (cities 1 and 2).
- Since counts are equal, pick city 3 (larger index).
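The full pipeline can be sketched as follows (a minimal version; tie-breaking toward the larger index is handled by updating on `<=` while scanning cities in ascending order):

```python
def find_the_city(n, edges, distance_threshold):
    """Floyd-Warshall all-pairs shortest paths, then count reachable cities (sketch)."""
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = dist[v][u] = w     # bidirectional edges

    for k in range(n):                  # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    best_city, best_count = -1, n + 1
    for i in range(n):
        count = sum(1 for j in range(n) if j != i and dist[i][j] <= distance_threshold)
        if count <= best_count:         # ties resolve to the larger index
            best_city, best_count = i, count
    return best_city
```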