The Math

You may have noticed a pattern at this point. Perhaps by reading these articles, maybe by looking at the academic requirements of a computer science degree and noticing how mathematically intensive they tend to be. Or, like me, you learned it the hard way through coding interviews that felt like they had been ripped straight from the pages of a calculus textbook you once threw away in frustration. Either way, the conclusion is hard to avoid: computer science, at its core, is rooted in mathematics. That doesn’t mean you can’t thrive without a formal math background; you absolutely can, and I should know. But understanding, or at least appreciating, the math underneath the surface makes a real difference. It demystifies the discipline, and it turns you from someone who can write code into someone who can reason about systems.

Now here’s the plot twist: in a lot of practical software work, the “math” you work with most often isn’t calculus; it’s growth. It’s the uncomfortable moment when a solution that seems fine on a toy input becomes unusable when the input gets 10, 100, 1,000, or even 1,000,000 times bigger than you initially designed for. And it’s why interviewers reach for the same hammer so often: Big-O notation, the language we use to describe how runtime or memory requirements scale as input size grows, typically focusing on the upper bound and ignoring constants. In other words, Big-O is less “do math” and more “tell me what happens when this gets big.”
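You can make that “what happens when this gets big” question concrete by just counting steps. Here’s a toy sketch (the function names are mine, not from any real library): one function scans a list once, the other compares every pair. Multiply the input by 10 and watch what each count does.

```python
def contains(x, items):
    """Linear scan: steps grow in proportion to n, i.e. O(n)."""
    steps = 0
    for item in items:
        steps += 1
        if item == x:
            break
    return steps

def pair_comparison_steps(items):
    """Compare every pair: steps grow like n * n, i.e. O(n^2)."""
    steps = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):   # each element against every later one
            steps += 1
    return steps

for n in (10, 100, 1000):
    data = list(range(n))
    # Searching for -1 (not present) forces a full scan: the worst case.
    print(n, contains(-1, data), pair_comparison_steps(data))
```

Each row multiplies n by 10: the linear count also goes up 10×, while the pairwise count goes up roughly 100×. That gap is the entire story Big-O is trying to tell.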

If you want to see why CS is mathematical without turning this into a math lecture, search is the perfect lens. Search is what happens when your problem is not “compute one answer” but “navigate a space of possibilities.” A maze is the cleanest example: your state is your location, your actions are legal moves, and your goal is the exit. That framing (state, action, goal) generalizes to routing, scheduling, puzzles, gameplay AI, debugging, even planning in complex systems.

The first approach anyone reaches for, human or machine, is brute force: try everything until something works. And brute force is useful because it reveals the true enemy: combinatorial explosion. Every choice fans out into more choices; the search tree doesn’t grow linearly, it multiplies. In Big-O terms, many brute-force searches look like exponential growth (the classic “branching factor to the depth” shape), which is the practical meaning behind “works in a demo, dies at scale.”
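The “branching factor to the depth” shape is easy to see by counting the leaves of a search tree directly. This hypothetical sketch enumerates a tree where every state offers b choices for d steps; the count is exactly b^d, which is why brute force dies so quickly.

```python
def count_leaves(b, d):
    """Count the leaves of a search tree with branching factor b and depth d.

    Enumerating them the brute-force way makes the exponential cost literal:
    the recursion does b**d units of work, not a formula shortcut.
    """
    if d == 0:
        return 1                    # a single state: nothing left to choose
    return sum(count_leaves(b, d - 1) for _ in range(b))

# Three legal moves per step, a few more steps each time:
for d in (4, 8, 12):
    print(d, count_leaves(3, d))    # 81, 6561, 531441 -- each +4 depth is ~81x
```

Adding a constant amount of depth multiplies the work by a constant factor; that is the signature of exponential growth, and no faster CPU buys you more than a couple of extra levels.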

And Big-O is really just the vocabulary for that feeling. Now, here’s the part most people don’t admit: you can develop solid instincts about performance through intuition and repetition while still lacking the formal language to discuss them. I started coding in 2012 and didn’t properly learn Big-O until I started grad school in 2024. And I absolutely paid for it: I couldn’t tell you how many dozens, if not hundreds, of times even something as simple as fizz buzz tripped me up. Because interviews don’t just want code that works; they want you to justify why it will still work when the input gets big. Once you accept that “try everything” won’t scale, you start caring about strategy, and this is where the math shows up as guarantees and tradeoffs.

Now, the two classic strategies are depth-first search (DFS) and breadth-first search (BFS). DFS dives deep and backtracks; it’s simple and often memory-friendly. BFS expands outward level by level; it tends to use more memory, but it buys you something concrete: in an unweighted graph, BFS is the standard tool for shortest-path discovery. Both DFS and BFS run in linear time relative to the graph size, O(V + E), because they visit each vertex and edge a bounded number of times rather than “wandering” randomly. That’s Big-O doing what it’s supposed to do: giving you a scaling guarantee, not a vibe.
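Here’s a minimal BFS sketch on a grid maze (my own toy representation: strings where `#` is a wall). Because BFS expands level by level, the first time it dequeues the goal, the path it reconstructs is a shortest one; and since each cell enters the queue at most once, the runtime is O(V + E).

```python
from collections import deque

def bfs_shortest_path(maze, start, goal):
    """Shortest path in an unweighted grid maze, or None if unreachable."""
    rows, cols = len(maze), len(maze[0])
    parent = {start: None}              # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:     # walk parent links back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None                         # flooded everything, never hit the goal

maze = ["..#.",
        "..#.",
        "...."]
path = bfs_shortest_path(maze, (0, 0), (0, 3))
print(len(path), path)                  # 8 cells: it must route around the wall
```

The `parent` dictionary is the “receipts”: every cell remembers who discovered it, so the shortest path falls out of the flood for free.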

But BFS can still feel wasteful, because it explores lots of places that “obviously” aren’t headed toward the goal. That’s where heuristics come in: informed guesses that prioritize promising directions. A* is the canonical example, think of it as “BFS with a compass.” Under the right conditions (notably a consistent/admissible heuristic), A* preserves optimality while drastically reducing exploration in many real cases. This is one of the most important recurring themes in computer science: the difference between “impossible” and “practical” is often not hardware, it’s structuring the search.
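To make “BFS with a compass” concrete, here’s a sketch of A* on the same kind of grid, using Manhattan distance as the heuristic. On a 4-connected grid with unit moves, Manhattan distance never overestimates the remaining cost, so A* keeps BFS’s shortest-path guarantee while preferring cells that point toward the goal.

```python
import heapq

def manhattan(a, b):
    """Admissible heuristic on a 4-connected grid: never overestimates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(maze, start, goal):
    """Shortest path via A*: expand by g(cost so far) + h(estimate to goal)."""
    rows, cols = len(maze), len(maze[0])
    g = {start: 0}
    parent = {start: None}
    frontier = [(manhattan(start, goal), start)]    # min-heap keyed on g + h
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != '#':
                new_g = g[cell] + 1
                if new_g < g.get(nxt, float('inf')):  # found a cheaper route
                    g[nxt] = new_g
                    parent[nxt] = cell
                    heapq.heappush(frontier, (new_g + manhattan(nxt, goal), nxt))
    return None

maze = ["....",
        ".##.",
        "...."]
print(astar(maze, (2, 0), (0, 3)))
```

Swap the priority to plain `g[...]` and this degrades into uniform-cost search (BFS’s weighted cousin); the heuristic term is the entire difference, and it only steers, never lies.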

This is also the cleanest place to briefly touch the boundary people gesture at with P vs NP: some problems are easy to check once someone hands you an answer, but hard to find because the search space explodes. You don’t need the formal definitions to feel the intuition: if the only way forward is “search a massive space,” then your Big-O story starts getting ugly fast. And that’s the interview lesson hiding in plain sight: you can’t just code; you have to show you understand how the solution scales. The common mistakes are predictable: assuming nested loops always mean quadratic growth, ignoring what data-structure operations cost, and confusing worst-case with average-case.

Which brings us to Week 3’s demo: a Maze Solver Playground that makes all of this visible. Toggle DFS and watch it commit, backtrack, and sometimes get “lucky.” Toggle BFS and watch it flood outward, then hand you a shortest path with receipts. Toggle A* and watch it stop wasting time in the wrong parts of the maze. The point isn’t to memorize algorithms, it’s to feel why computer science is mathematical: because it’s fundamentally about modeling possibilities and making credible claims about growth. Big-O is just the label we slap on that reality.