
Linux Has a Built-In Network Simulator

Your program works perfectly on your machine. Fast responses, clean connections, zero errors. You ship it.

Then someone runs it from a hotel WiFi in rural Ohio, or from a corporate network that silently drops one in twenty packets. Your program falls apart — timeouts unhandled, no retry logic, error messages that say “something went wrong” and nothing else.

You never saw it coming because your test environment was perfect. That’s the bug.

A Kernel Capability You Already Have

The kernel has been able to simulate a degraded network since the early 2000s. No tools to install. No Docker container. No external service. It’s part of iproute2, which ships on virtually every Linux distribution, and it operates inside the kernel’s actual network stack — not as a proxy, not as a wrapper, not as a simulation layer sitting in front of your traffic.

It’s called netem — network emulator — and it’s accessed through tc: traffic control.

tc manages the queuing disciplines — qdiscs — that govern how packets move through your network interfaces. By default, your interface runs a simple qdisc that sends packets out as fast as possible in order. netem replaces that default with a qdisc that intercepts every outgoing packet and does something deliberate to it before it hits the wire. Delay, loss, jitter, reordering, corruption — all of it happening at the kernel level, on real packets, on a real interface.

# Add 200ms delay with 20ms jitter to outgoing traffic
sudo tc qdisc add dev eth0 root netem delay 200ms 20ms

# Simulate 5% packet loss
sudo tc qdisc add dev eth0 root netem loss 5%

# Both at once — bad wifi in one line
sudo tc qdisc add dev eth0 root netem delay 200ms 20ms loss 1%

# Simulate a satellite connection
sudo tc qdisc add dev eth0 root netem delay 600ms 50ms loss 2%

# Remove it — restore normal behavior
sudo tc qdisc del dev eth0 root

No config file. No daemon. You just told the kernel to degrade your own network, and the kernel said fine.

When you curl an API with 600ms of latency applied, you are experiencing exactly what users in bad network conditions experience — because it’s the same kernel path, the same interface, the same packet flow.

What You Can Actually Test

The value isn’t in the novelty — it’s in what surfaces when you use it.

Timeout handling. Set 5 seconds of delay and watch whether your application waits forever or fails gracefully. Most don’t fail gracefully the first time.

Retry logic. Apply 10% packet loss and see whether your program retries, gives up immediately, or enters some undefined state where it thinks it succeeded but didn’t.

Connection resilience. Apply loss mid-session and see whether long-lived connections recover or require a restart.

User-facing error messages. Under bad conditions, what does your application actually tell the user? “Error” is not an answer.

These are not edge cases. They are conditions a real percentage of users encounter on any given day.
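On the client side, the first two checks often come down to a few flags. As a minimal sketch using curl, with a placeholder URL and illustrative values rather than recommendations:

```shell
# Bound the whole request and retry a few times before giving up.
# https://example.com/health is a placeholder endpoint; tune --max-time
# and the retry counts for your own service.
curl --silent --max-time 5 \
     --retry 3 --retry-delay 1 --retry-connrefused \
     https://example.com/health > /dev/null \
    && echo "request succeeded" \
    || echo "gave up after retries"
```

Run it once on a clean interface, then again with a netem loss rule applied, and watch which branch fires and how long it takes to get there.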

The Cleanup Problem

Here’s where it gets important.

netem qdiscs are persistent kernel state. They live on the network interface itself — not in your terminal session, not in your shell environment, not attached to the process that created them. When you close your terminal, the rule stays. When you log out, it stays. When you kill the shell, it stays.

The kernel has no concept of “the session that applied this.” It knows the interface has a qdisc, and it applies it to every packet, indefinitely, until something explicitly removes it.

The only thing that clears it automatically is a full reboot, because the kernel reinitializes network interfaces from scratch on boot.

This matters in practice. Developers who use tc netem directly — usually by copying a command from Stack Overflow — frequently forget to run the del command. They finish their test, close the terminal, get on with their day. An hour later their machine feels slow. SSH sessions are laggy. Downloads are crawling. They restart their browser. They check their router. They don’t think to check tc qdisc show dev eth0, because why would they? Nothing should still be running.

But it is.

You can inspect what’s currently applied at any time:

tc qdisc show dev eth0

On a clean interface you’ll see something like qdisc fq_codel or qdisc pfifo_fast — the kernel default. If you see qdisc netem, you left something behind.
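That check is easy to script. A sketch, using a captured line of output as stand-in input so it runs anywhere; a live version would pipe `tc qdisc show dev eth0` instead (interface name is an assumption):

```shell
# Flag a leftover netem qdisc. The sample variable stands in for the
# output of `tc qdisc show dev eth0`.
sample='qdisc netem 8001: root refcnt 2 limit 1000 delay 200ms  20ms loss 5%'

if printf '%s\n' "$sample" | grep -q '^qdisc netem'; then
    echo "netem rule still applied"
else
    echo "interface clean"
fi
```

Swap the sample for the live command and this becomes a cheap guard to run before and after any testing session, or at shell startup.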

Working With It Safely

A few habits that prevent the forgotten-rule problem:

Always have the delete command ready before you run the add command. Type both in the same session so the cleanup is one arrow-key away in your history, and don't run the add and then close the terminal.

Alias the cleanup. Put this in your .bashrc or .bash_aliases:

alias tc-clear='sudo tc qdisc del dev eth0 root 2>/dev/null; echo "cleared"'

Adjust the interface name for your system — ip link shows what you have. Now cleanup is one word.

Check before you test. Run tc qdisc show before and after any network chaos session. Two seconds, saves an afternoon of confusion.

Know your interface name. eth0 is traditional but your system might use enp3s0, wlan0, ens33, or something else. Using the wrong name means your rules silently apply to the wrong interface — or fail with a cryptic error. ip link is the source of truth.
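If you want cleanup to be structural rather than a habit, a shell trap can guarantee it. A sketch of a self-cleaning wrapper: the interface detection and netem parameters are assumptions to adjust, and it falls back to printing the commands when not run as root so you can dry-run it safely.

```shell
#!/usr/bin/env bash
# Run a command under temporary netem impairment, removing the qdisc
# on ANY exit: normal completion, Ctrl-C, or an error.
set -u

# Guess the default-route interface; override with IFACE=... if wrong.
IFACE="${IFACE:-$(ip route show default 2>/dev/null | awk '{print $5; exit}')}"
IFACE="${IFACE:-eth0}"

run_tc() {
    if [ "$(id -u)" -eq 0 ] && command -v tc >/dev/null; then
        tc "$@"
    else
        echo "would run: tc $*"   # dry-run fallback for non-root
    fi
}

cleanup() {
    run_tc qdisc del dev "$IFACE" root
    echo "netem cleared from $IFACE"
}
trap cleanup EXIT   # the kernel won't clean up for you; the trap will

run_tc qdisc add dev "$IFACE" root netem delay 200ms 20ms loss 1%
if [ "$#" -gt 0 ]; then "$@"; fi   # run the test command passed as arguments
```

Because the trap fires on every exit path, forgetting the del command stops being possible for anything launched through the wrapper.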

What You’re Actually Getting

tc netem has been in the kernel since 2.6. It’s precise, it’s kernel-native, and it requires nothing beyond what’s already on your system. The barrier to using it was never installation or compatibility — it was knowing it existed.

Use it deliberately, clean up after yourself, and find the bugs before your users do.