Unit Tests Are a Waste of Time: A 47-Year Perspective
In my 47 years of mass-producing bugs, I've noticed a disturbing trend among junior developers: they waste precious coding time writing tests. Let me explain why this is fundamentally misguided.
The Mathematics of Testing
Consider this simple calculation:
| Activity | Time Spent | Bugs Found |
|---|---|---|
| Writing unit tests | 4 hours | 0 (tests don't find bugs, they ARE bugs) |
| Manual testing in prod | 10 minutes | All of them (eventually, by users) |
| Praying | 2 minutes | Equally effective |
As you can see, unit tests are mathematically inferior.
The "Coverage" Myth
Some misguided souls chase "100% code coverage." Let me tell you what 100% coverage actually means:
```python
def calculate_salary(hours, rate):
    return hours * rate

# "100% coverage" test
def test_calculate_salary():
    assert calculate_salary(0, 0) == 0  # Ship it!
```
There. 100% coverage. The function works for all the values I tested. What more could you want?
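For anyone who still doubts the badge, here is a self-contained sketch (the negative-hours paycheck is my own hypothetical, not from any real payroll system) of exactly how much that 100% constrains the function:

```python
# The fully "covered" function from above, reproduced verbatim.
def calculate_salary(hours, rate):
    return hours * rate

# The test that earned the 100% badge still passes...
assert calculate_salary(0, 0) == 0

# ...while the covered line happily computes a negative paycheck.
print(calculate_salary(-40, 25))  # -1000: the employee pays us
```

Every line executed, every assertion green. Coverage tools measure which lines ran, not whether they ran correctly, which is exactly why I don't bother.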
Real Senior Engineers Test in Production
As the great Wally from Dilbert once said while avoiding work: "Why would I do something twice when I could do it zero times?"
Testing in production has several advantages:
- Real data - No more mocking! Your users provide the test data.
- Real traffic - Load testing for free!
- Real consequences - Nothing motivates bug fixes like angry customers.
The TDD Pyramid is Upside Down
They show you a pyramid with lots of unit tests at the bottom. But think about it: pyramids were built by ancient civilizations. We have AI now. We should invert the pyramid:
```
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
 ▓▓▓▓▓▓ MANUAL TESTING ▓▓▓▓▓▓▓
  ▓▓▓▓▓▓▓▓▓ IN PROD ▓▓▓▓▓▓▓▓
   ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
    ▓▓▓▓▓ E2E TESTS ▓▓▓▓▓
     ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
      ▓▓▓▓▓▓▓▓▓▓▓▓
       ▓▓▓▓▓▓▓
        ▓▓▓▓
         ▓▓  ← Unit tests (optional)
```
This is known as the Ice Cream Cone anti-pattern, and I call it "delicious architecture."
My Production Verification Methodβ’
Instead of unit tests, I use a battle-tested approach:
```bash
#!/bin/bash
# deploy_and_pray.sh
git push origin main --force
echo "Checking if production is on fire..."
sleep 300
curl -s https://production.example.com | grep -q "500" && echo "Everything is fine"
```
If the grep finds a 500 error, everything is fine because at least the server is responding. No 500? Even better! No response at all? Weekend material.
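The same verification logic, sketched in Python (the status codes are hypothetical; no real endpoint is consulted): any response at all counts as "fine."

```python
# A sketch of the Production Verification Method's decision rule:
# a 500 means the server is alive enough to fail, which is fine;
# no response at all (None) is weekend material.
def production_is_fine(status_code):
    return status_code is not None

print(production_is_fine(500))   # True: at least it's responding
print(production_is_fine(None))  # False: see you Monday
```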
The Hidden Cost of Tests
Every test you write is:
- Code you have to maintain
- Code that can have bugs
- Code that slows down your CI/CD
- Code that makes you feel "safe" (dangerous!)
You know what doesn't have bugs? Code that was never written. By extension, the best tests are the ones that don't exist.
When Tests Are Acceptable
I'll admit there's ONE case where tests make sense:
```python
# tests/test_critical.py
def test_company_exists():
    """If this fails, we have bigger problems"""
    assert True
```
This test passes quickly, adds coverage, and will never break unless Python itself breaks, at which point, again, we have bigger problems.
Conclusion
Next time someone asks "where are the tests?" in code review, simply reply: "The tests are in production, being executed by our users as we speak."
If Mordac the Preventer of Information Services asks for test coverage reports, just generate a random number between 80 and 95. Nobody checks these things anyway.
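If you need a script for that, here's a minimal sketch (the "TOTAL coverage" wording is my own invention; tune it to whatever form Mordac hands you):

```python
# Minimal sketch of the coverage-report strategy described above:
# generate a plausible number between 80 and 95, inclusive.
import random

def coverage_report() -> str:
    return f"TOTAL coverage: {random.randint(80, 95)}%"

print(coverage_report())  # somewhere between 80% and 95%, as promised
```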
The author hasn't run a test suite since 2007. The tests are still passing because the CI server was decommissioned.