Testing Your Tests: How to Review Test Quality Effectively
"We have 90% test coverage" is a dangerous sentence. Coverage only tells you that a line of code was executed during a test—it doesn't tell you that the code was actually validated.
When reviewing a PR, you must review the tests as carefully as the feature code. Here is how.
1. The "False Positive" Check
A test that passes when it should fail is a liability.
- Look for: Tests without `expect` or `assert` statements.
- Watch for: Async tests that don't `await` the result, causing them to finish (and pass) before the code actually runs.
2. Testing Behavior, Not Implementation
If you change a function's internal logic but the output stays the same, your tests should still pass.
- Avoid: Over-reliance on mocking internal private methods.
- Prefer: Testing the public API of the module. If I give it X, do I get Y?
3. The "Fragile Test" Problem
Tests that break whenever a CSS class changes or a string is slightly reworded are a maintenance nightmare.
- Tip: In UI tests, use `data-testid` attributes instead of brittle CSS selectors or text matching.
4. Edge Cases & Boundary Conditions
Don't just test the "Happy Path".
- Ask: What happens if the input is `-1`? `0`? `null`? `999999999`?
- Check: Are we testing the error states as well as the success states?
5. Test Readability
Tests are documentation. If a test fails 6 months from now, will the developer understand what went wrong?
- Bad name: `test('logic works')`
- Good name: `test('should return 400 when user email is invalid')`
How Mesrai Reviews Your Tests
Mesrai doesn't just check whether tests exist. It analyzes the semantic quality of your tests: it can identify "no-op" tests that don't actually assert anything and suggest missing edge cases based on the complexity of your implementation.
