Haskell, Determinism, and the Collision with the Real World

Elliot Luque
4 min

There is a very specific moment in almost every computer science student’s life when something clicks. It usually does not happen when you learn a new framework, or when you build your first REST API.

In my case, that moment came while studying Languages, Technologies, and Programming Paradigms, specifically when I started working with Haskell.

Until then, programming felt fairly intuitive:

  • write code
  • run it
  • it does what you expect

But Haskell starts asking uncomfortable questions.


The Initial Collision: Pure Functions and Determinism

In Haskell, one very strong idea appears from day one:

A function, if it is pure, always returns the same result for the same arguments.

Not “usually.” Not “on this machine.” Always.

That forces you to think of a program as a mathematical function, not as a sequence of steps where “things happen.”
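A minimal sketch of what that guarantee looks like (`discount` is a made-up example, not from the course):

```haskell
-- A pure function: the result depends only on its arguments,
-- so the same call always yields the same value.
discount :: Int -> Int
discount price = price - price `div` 10

-- Referential transparency: every occurrence of 'discount 200'
-- can be replaced by 180 without changing the program's meaning.
```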

And then the inevitable question appears:

If this is so clean and so reasonable… why do real programs fail so often?


From Haskell to the Real World

When you leave the academic environment and return to real systems, you find the exact opposite:

  • timeouts that trigger “sometimes”
  • errors that cannot be reproduced
  • bugs that disappear when you add logs
  • different behavior with the same input

And that is where the mental conflict begins:

Wasn’t a program supposed to be a function? Shouldn’t it be deterministic?

The short answer is: no.

The interesting answer is: it cannot be.


A Program Does Not Live in a Vacuum

Haskell teaches you to reason as if the world were ideal:

  • no time
  • no network
  • no concurrency
  • no failures
  • no environment

But a real program does not live there.

A real program is:

a physical process running on a shared machine, governed by an operating system, interacting with a world that constantly changes.

That breaks determinism from every angle.


The Network Does Not Fail: Assumptions Do

One of the first places where this idea becomes obvious is the network.

We say “the network failed” as if it were something simple, but in reality, the network is:

  • hardware
  • drivers
  • operating system
  • protocols
  • timeouts
  • application logic

A network failure can end up being simply:

  • a call returning -1
  • an exception
  • a closed connection

From the program’s point of view, all that external chaos is reduced to a return value.

The world may be on fire, but your code only sees:

ERROR
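In Haskell terms, that reduction is visible in the types. A sketch, assuming a hypothetical `readConfig` helper: whatever broke underneath, the caller only ever receives a value.

```haskell
import Control.Exception (IOException, try)

-- Whatever went wrong underneath (cable, driver, kernel, protocol stack),
-- the program only sees a value: Left with an error, Right with data.
readConfig :: FilePath -> IO (Either IOException String)
readConfig path = try (readFile path)
```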

What About Exceptions? Are They Special?

Here is where intuition clashes again.

In high-level languages, we talk about exceptions as if they were something almost magical, but they are not.

At a low level:

  • exceptions do not exist
  • comparisons exist
  • jumps exist
  • execution-flow changes exist

An exception, whether it is:

  • a NullPointerException
  • a segmentation fault
  • a timeout

always ends up being the same thing:

a non-local transfer of execution control

It is not magic. It is a jump that breaks normal flow.
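A small illustration (the names are invented for this post): `throwIO` abandons the normal flow, and `catch` is the landing point the jump transfers control to.

```haskell
import Control.Exception (ErrorCall (..), catch, throwIO)

-- throwIO abandons normal execution; catch is the landing point.
-- Control jumps from the throw site straight to the handler.
demo :: IO String
demo =
  (throwIO (ErrorCall "boom") >> pure "unreachable")
    `catch` \(ErrorCall msg) -> pure ("jumped to handler: " ++ msg)
```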


The Real Origin of Chaos: Concurrency

If Haskell teaches you the ideal world, concurrency teaches you why that world does not exist.

As soon as there are:

  • multiple threads
  • multiple processes
  • interruptions
  • operating-system scheduling

order stops being fixed.

And without fixed order, there is no determinism.

The same code can run in different orders, producing different results, without any obvious “bug” in the code.
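A toy demonstration, not production code: two threads write to shared state, and the scheduler, not the source order, decides who goes first.

```haskell
import Control.Concurrent (forkIO, modifyMVar_, newMVar, readMVar, threadDelay)

-- Two threads append to shared state; the scheduler decides the order.
-- "ab" and "ba" are both legitimate outcomes of the same program.
race :: IO String
race = do
  out <- newMVar ""
  let worker c = modifyMVar_ out (\s -> pure (s ++ [c]))
  _ <- forkIO (worker 'a')
  _ <- forkIO (worker 'b')
  threadDelay 100000  -- crude 100 ms wait; real code would synchronize properly
  readMVar out
```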

The Detail No One Tells You About: Time

Even if you do not share data, time introduces uncertainty:

  • when a thread runs
  • when a packet arrives
  • when a timeout expires
  • how long an operation takes

Two executions never happen at the exact same physical instant.

Therefore, they are never identical.
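The `timeout` combinator from base’s `System.Timeout` makes this explicit: the very same call can return a result or nothing at all, depending on how long things take on that particular run (`callWithDeadline` is a hypothetical name).

```haskell
import Control.Concurrent (threadDelay)
import System.Timeout (timeout)

-- The same call can return Just a result or Nothing, depending on
-- how long the work happens to take on this particular run.
callWithDeadline :: IO (Maybe ())
callWithDeadline = timeout 50000 (threadDelay 10000)  -- 50 ms budget, ~10 ms of "work"
```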

So… Is Haskell Lying?

No. Haskell is not lying. Haskell bounds the problem.

What it does is separate two worlds:

  • the pure, deterministic, reasonable world
  • the impure world, full of IO, time, and effects

And it forces you to explicitly acknowledge when you cross from one to the other.
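The boundary is visible in the types themselves; a minimal sketch:

```haskell
-- No IO in the signature: no effects, full determinism,
-- trivially testable in isolation.
double :: Int -> Int
double x = x * 2

-- IO in the signature is the explicit admission that the outside
-- world (here, stdin) is involved, and determinism is gone.
readAndDouble :: IO Int
readAndDouble = double . read <$> getLine
```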

More than a limitation, that is a brutal engineering lesson.

From Theory to Engineering

When you understand all of this, your way of programming changes:

  • you stop assuming that “this cannot happen”
  • you stop trusting order
  • you start designing for failure

That is why things like these exist:

  • retries
  • explicit timeouts
  • idempotency
  • queues
  • circuit breakers
  • observability

Not because we are being dramatic, but because failure is not an exception; it is a normal system state.
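A minimal retry sketch in Haskell (`retryIO` and its parameters are invented for illustration, not a library API):

```haskell
import Control.Concurrent (threadDelay)
import Control.Exception (SomeException, try)

-- Treat failure as a normal outcome: catch it, wait, try again,
-- and give up after a bounded number of attempts.
retryIO :: Int -> IO a -> IO (Either SomeException a)
retryIO attempts action = do
  result <- try action
  case result of
    Right value -> pure (Right value)
    Left err
      | attempts <= 1 -> pure (Left err)
      | otherwise     -> threadDelay 100000 >> retryIO (attempts - 1) action
```

Real-world versions add exponential backoff and jitter, but the shape is the same: failure is just another branch of the `case`.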

Conclusion

Studying paradigms like pure functional programming does not directly prepare you to write distributed systems, but it gives you something far more valuable:

a clear mental model of how things should be

And understanding why the real world does not fit that model is what makes the difference between writing code that works “on my machine” and designing systems that survive chaos.