Writing tests can feel like eating your vegetables. You know it's good for you. You know future-you will thank you. But sometimes you just don't want to do it. Or, worse, you do it begrudgingly, rushing through the task to check it off your list without really stopping to think. We've all been there: you write a quick test that lightly prods at the outer shell of your code, pat yourself on the back for covering your bases, and move on. But then a bug slips through, and you're left wondering, "Didn't I test for this?"
Here's the thing about testing: it's not just an exercise in covering lines of code or getting your "percent coverage" into the green. Effective tests are about understanding why you're testing something and making sure that what you're testing actually matters. If you're not asking yourself, "What exactly am I testing here?" before you start, then you're probably not writing the tests your code really needs. Let's walk through why this matters and how to ask the right questions.
First, let's talk about purpose. What are you trying to achieve with your code? This might seem obvious, but it's the foundational question you need to answer before typing out test assertions. Tests should validate that your code is doing what it's supposed to do — not just that it works in the most obvious way, but that it holds up under the full spectrum of real-world use cases. And while it's tempting to fall back on default mental heuristics like "test the happy path," that's only part of the picture. Sure, test that your perfect scenario works. But what if things aren't perfect? What if the inputs are weird, unpredictable, or downright wrong? Do you want your code to gracefully fail, throw a descriptive error, or plow ahead obliviously? More importantly, how will you know that's what it's doing? Your tests are where you define and enforce these expectations.
But the real kicker? You also need to think about what you don't need to test. Not all code is created equal when it comes to failure risk. Some parts of your codebase are inherently more prone to breakage, whether due to complexity, external dependencies, or just the fact that they're actively evolving. Others are so static or trivial that spending time on them isn't worth your energy. Unless, of course, your organization requires 100% test coverage in a fit of code-aesthetic purity (in which case, my sympathies). Focusing your testing efforts where they will have the highest impact is a time-management superpower. If you wield it correctly, you'll avoid writing mountains of unnecessary tests that no one will ever look at again.
Let's dig into UI testing specifically, because this is a trap I've seen more teams fall into than I can count. UI tests are important, don't get me wrong, but they are also a giant time suck if you let them veer off into the weeds. What's the number one thing people love to test in UI code? You guessed it: how things look. Did the layout render correctly? Is the button blue? Does the page have the expected text? These are easy things to write tests for, and easy traps to fall into because they feel like progress.
But here's the truth: most users will never care if a button changes from blue to green. What users do care about is whether clicking that button does the thing it's supposed to do. Your job as a developer isn't to protect the aesthetics of the UI, it's to protect the behavior of the app. UI tests shouldn't be about pixels. They should be about outcomes.
This means approaching UI tests from a state-and-action perspective. Ask yourself: when I interact with the UI, does the underlying state of the application change the way it's supposed to? If clicking a button fires off a network request and updates a list, you don't need a test to check the exact layout; what you care about is whether interacting with the UI successfully triggers the state change, processes the response, and updates the list correctly. Anything beyond that is window dressing.
Which brings me to edge cases. If happy-path testing is like brushing your teeth, edge-case testing is like flossing: so easy to skip, so essential for preventing disaster. Your edge cases are where bugs breed. When inputs are malformed, states are messy, or events cascade unexpectedly, does your code still hold up? You have to embrace the chaos. Test for the things you think could never happen: extra-long names that break layouts, unsupported file formats, someone mashing buttons like they're playing a video game on turbo mode. These are the scenarios that will bite you in the ass.
Finally, think about the lifespan of your tests. No one likes maintenance-mode testing, but the truth is, your tests should grow and adapt just like your code does. Push yourself to ask: what's likely to change in the future? If a function has volatile business logic, get a test in there. If a critical workflow is prone to iterative updates, lock down its behavior in your suite. Remember, tests are a tool for safekeeping the intentions behind your code as it evolves. They are not sacred documents. Keep them relevant, prune the useless ones, and evolve them alongside your application.
At the end of the day, testing is a skill, and like all skills, it gets sharper with practice. But it starts with awareness: ask yourself what you're testing, why you're testing it, and what matters most in the moment. The more intentional you are, the more value you'll get — not just from your test suite, but from the confidence you carry into every pull request. Because knowing that your code is doing exactly what it's supposed to (and nothing it's not) is a feeling that trumps any arbitrary coverage metric every single time.