10 Fixable Code Patterns with Testable Examples

October 27, 2025

Did you know that the most damaging flaws often come from small mistakes: avoidable patterns in code? These are the kind we now call fixable code patterns.

I’ve seen teams struggle for days because of one overlooked check, and I’ve watched automated tests miss flaws simply because they never covered a pattern that a real attacker would exploit.

Over time I’ve developed a set of code patterns that are both easy to correct and straightforward to test. I’m going to walk you through ten of them, with clear examples you can copy, adapt, and run in your own projects. These are real-world fixes, not theoretical abstractions, drawn from my work.

1. Unchecked Input Leads to Query Injection

When user input flows directly into a query and bypasses validation, a malicious actor can inject unintended commands or queries. I once watched a system where a seemingly harmless search field allowed an attacker to dump entire user records, all because the code treated input as raw query terms.

In one Node/Express example we had:

// vulnerable: input used directly
const name = req.query.name;
const users = await db.collection('users').find({ name: name }).toArray();

Here the name parameter can arrive as a Mongo operator object rather than a string, for example via ?name[$ne]=, which Express's default (extended) query parser turns into { "$ne": "" }, allowing retrieval of all records.

By adjusting it to:

const name = String(req.query.name || '').trim();
if (!/^[A-Za-z\s-]{1,100}$/.test(name)) {
  return res.status(400).send('invalid name');
}
const users = await db.collection('users').find({ name }).toArray();

we ensure name is a human-readable string, reject strange structures, and pass only clean data into the query.

A test for this change might use a tool like Jest:

test('rejects malicious query object', async () => {
  await request(app)
    .get('/users')
    .query({ name: '{"$ne": null}' })
    .expect(400);
});

Once this test is in your suite, any regression (e.g., someone removing the validation) will fail fast.

2. Concatenated SQL Statements

When code builds SQL by concatenating strings, you open yourself to SQL injection. I remember debugging a legacy codebase where a simple login form turned into a dump of the whole users table, because someone used sprintf instead of a parameterized query.

Here’s a vulnerable Python snippet:

cursor.execute("SELECT * FROM users WHERE email = '%s'" % email)

If email = "x' OR '1'='1" the query returns everything.

Switching to parameterized form:

cursor.execute("SELECT * FROM users WHERE email = ?", (email,))

removes that possibility, because the driver passes email as data, never as SQL. A pytest check would look like:

def test_sql_injection_prevented(conn):
    result = get_user_by_email(conn, "attacker' OR '1'='1")
    assert result == []

That gives you a concrete check and puts the behavior on automated guard.

3. Missing Ownership or Access Checks

Systems often assume that “if you’re authenticated, you’re allowed”. That assumption is fragile. Early in my testing career I found a blog editing system where any authenticated user could delete any post; the delete endpoint never checked ownership. That was a late-night find that could easily have gone unnoticed.

In Express:

await db.collection('posts').deleteOne({ _id: req.params.id });

this line doesn’t check whether the requester owns the post. Fixing it means:

const post = await db.collection('posts').findOne({ _id: req.params.id });
if (!post) return res.sendStatus(404);
if (post.ownerId !== req.user.id) return res.sendStatus(403);
await db.collection('posts').deleteOne({ _id: req.params.id });
res.sendStatus(204);

And a test:

test('user cannot delete someone else’s post', async () => {
  // setup: post.ownerId !== user.id
  await request(app)
    .delete('/posts/abc123')
    .set('Authorization', 'Bearer token-for-userB')
    .expect(403);
});

With this test, you ensure ownership logic is enforced and remains enforced over time.

4. Secrets Living in Source

I’ve seen .env files with production credentials checked into version control. It happens so easily. Granted, the vulnerability is well-known, but the practical remedy requires discipline.

Bad (a credential committed to version control):

DB_PASSWORD=supersecret

Good code:

const dbPassword = process.env.DB_PASSWORD;
if (!dbPassword) throw new Error('DB_PASSWORD not set');

And you write a unit test:

test('throws if DB_PASSWORD missing', () => {
  jest.resetModules(); // clear the require cache so the module re-runs
  delete process.env.DB_PASSWORD;
  expect(() => require('../dbConfig')).toThrow('DB_PASSWORD not set');
});

Once the test fails whenever the credential is missing from the environment, you’ve turned a risky habit into a code-enforced rule.

5. Broad, Permissive Defaults

Often a service is shipped with “open” settings for ease of development and then forgotten. One of my clients had CORS enabled for all origins in production for months. That’s a fixable code pattern.

Vulnerable:

app.use(cors());

Safer:

app.use(cors({ origin: ['https://app.example.com'] }));

And you test that only the allowed origin gets the header, other origins don’t receive Access-Control-Allow-Origin. That gives clarity, and prevents misconfiguration creeping into production.
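
If you want that check spelled out, the allowlist comparison the cors middleware performs can be sketched as a plain function. The origin list is a placeholder for your real production domains, and corsHeaderFor is a hypothetical name: a minimal sketch, not the middleware's actual implementation.

```javascript
// A minimal origin allowlist check mirroring what the cors middleware
// does when configured with an array of origins. The allowed origins
// below are placeholders for real production domains.
const ALLOWED_ORIGINS = ['https://app.example.com'];

function corsHeaderFor(requestOrigin) {
  // Echo the origin back only on an exact allowlist match; null means
  // no Access-Control-Allow-Origin header is sent at all.
  return ALLOWED_ORIGINS.includes(requestOrigin) ? requestOrigin : null;
}

console.log(corsHeaderFor('https://app.example.com')); // 'https://app.example.com'
console.log(corsHeaderFor('https://evil.example'));    // null
```

A supertest-style assertion against the real app would then check that responses to disallowed origins simply omit the header.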

6. Exposing Internal Errors

Stack traces and detailed internal errors can leak architecture, library versions, and schema details. I’ve seen error pages whose stack traces included AWS credentials; it happens more often than you’d think, and it’s avoidable.

Vulnerable handling:

app.use((err, req, res, next) => {
  res.status(500).send(err.stack);
});

Improved (Express recognizes an error handler by its four-argument signature, so keep next even when unused):

app.use((err, req, res, next) => {
  logger.error(err);
  res.status(500).json({ error: 'internal server error' });
});

Test:

await request(app)
  .get('/cause-error')
  .expect(500)
  .then(res => {
    expect(res.body.error).toBe('internal server error');
    expect(res.text).not.toMatch(/at Object/);
  });

Now your logs contain the detail, clients do not, and the test monitors for leakage.

7. Deserializing Untrusted Data

When code uses eval, pickle.loads, or anything that reconstructs objects from user input, the risk is high. Think about a feature that lets admins upload “configuration” which is loaded with pickle; one slip, and you have full remote code execution. Correcting this is possible and testable.

Bad:

user = pickle.loads(request.data)

Safer:

data = json.loads(request.data)
validate_schema(data, USER_SCHEMA)

Test:

def test_invalid_payload_rejected(client):
    res = client.post('/endpoint', data=b'invalidpickle')
    assert res.status_code == 400

Simple and direct: the code rejects the malformed payload, and the test verifies that it keeps doing so.

8. Only Trusting Client-Side Validation

When you rely on browser-side JavaScript to validate fields and the server trusts the result, you’re breaking one of the first rules of defensive coding. I’ve seen forms that looked perfect, yet attackers posted raw JSON that bypassed the UI, and the server happily processed it because “we assumed it was valid”. That assumption is the pattern to fix.

Solution: mirror validation server-side. Use schema validation libraries. Then test bypass cases.
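
As a concrete sketch, here is what mirrored server-side validation might look like for a hypothetical order payload. The field names and bounds (price, quantity) are illustrative assumptions, not a real schema:

```javascript
// Server-side validation that mirrors the browser rules, so a request
// that skips the UI is held to the same constraints. Field names and
// bounds here are illustrative placeholders.
function validateOrder(payload) {
  if (typeof payload !== 'object' || payload === null) {
    return ['payload must be an object'];
  }
  const errors = [];
  if (typeof payload.price !== 'number' || !Number.isFinite(payload.price)) {
    errors.push('price must be a finite number');
  } else if (payload.price < 0) {
    errors.push('price must not be negative');
  }
  if (!Number.isInteger(payload.quantity) || payload.quantity < 1) {
    errors.push('quantity must be a positive integer');
  }
  return errors; // an empty array means the payload passed
}

// In an Express handler you would respond 400 whenever errors is non-empty.
console.log(validateOrder({ price: -1000, quantity: 1 })); // one error: negative price
```

Wiring this into the route makes the bypass test below meaningful, because the rejection now lives on the server.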

Test:

await request(app)
  .post('/submit')
  .send({ price: -1000 })
  .expect(400);

This test simulates someone skipping the UI. If your server rejects it, you’ve closed the loophole.

9. Unchecked Dependencies and Transitive Risks

We’ve all used libraries without checking their full lifecycle. But one small vulnerable dependency can carry a chain of risk. It’s a fixable pattern if you build in checks.

Practical step: pin dependency versions and integrate npm audit or pip-audit into CI. The test here is the pipeline itself: CI should fail when a known vulnerability enters your dependency tree. It’s code-integrated, testable via a CI assertion rather than a library function alone.
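
One way to wire that CI assertion up is a small gate script that reads the JSON report from npm audit --json and fails the build above a severity threshold. The metadata.vulnerabilities summary assumed here is the shape recent npm versions emit; treat that as an assumption and adjust for your tooling:

```javascript
// Fails a CI step when an npm audit report contains findings at or above
// a chosen severity. Assumes the metadata.vulnerabilities summary shape
// emitted by `npm audit --json` in recent npm versions.
const SEVERITY_ORDER = ['info', 'low', 'moderate', 'high', 'critical'];

function shouldFailBuild(auditReport, threshold = 'high') {
  const counts =
    (auditReport.metadata && auditReport.metadata.vulnerabilities) || {};
  const floor = SEVERITY_ORDER.indexOf(threshold);
  // Any non-zero count at or above the threshold severity fails the gate.
  return SEVERITY_ORDER.slice(floor).some((sev) => (counts[sev] || 0) > 0);
}

// A sample report summary with one high-severity finding fails the gate.
const sampleReport = {
  metadata: { vulnerabilities: { info: 0, low: 2, moderate: 0, high: 1, critical: 0 } },
};
console.log(shouldFailBuild(sampleReport)); // true
```

In CI you would pipe npm audit --json into this script and exit non-zero when it returns true.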

10. Blocking or Leaking Resources

Async systems can leak memory, hold onto DB connections, or block event loops. In one project I found requests taking tens of seconds because a forgotten synchronous loop hogged the event loop; the pattern is “doing heavy work synchronously inside a request handler”. Fixing it required rewriting to async and adding finally blocks to free resources.

Example (Node):

// vulnerable: synchronous work blocks the event loop
app.get('/heavy', (req, res) => {
  const result = heavySyncWork();
  res.send(result);
});

// fixed: async work, errors forwarded, resources always released
app.get('/heavy', async (req, res, next) => {
  try {
    const result = await heavyAsyncWork();
    res.send(result);
  } catch (err) {
    next(err);
  } finally {
    cleanupResources();
  }
});

Test: simulate multiple concurrent calls and confirm that memory usage does not grow over time (via a test harness or metrics). While slightly heavier than a simple unit test, this is still a fixable code pattern you can add to your CI or performance suite.
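
The cleanup half of that is cheap to unit-test: fire a batch of concurrent calls, including deliberately failing ones, and assert that the resource counter returns to zero. The handler and counter below are stand-ins for your real route and connection pool, not production code:

```javascript
// Simulates concurrent requests against an async handler that releases
// its resource in a finally block, even when the work throws.
// openResources stands in for a real connection pool's checkout count.
let openResources = 0;

async function handleRequest(shouldFail) {
  openResources += 1; // acquire (e.g. check out a DB connection)
  try {
    if (shouldFail) throw new Error('simulated failure');
    return 'ok';
  } finally {
    openResources -= 1; // release runs on success and failure alike
  }
}

async function burst() {
  // 50 concurrent calls, every fifth one failing.
  const calls = Array.from({ length: 50 }, (_, i) =>
    handleRequest(i % 5 === 0).catch(() => 'failed')
  );
  await Promise.all(calls);
  console.log('open resources after burst:', openResources); // 0
}

burst();
```

Remove the finally block and the counter drifts upward under failures, which is exactly the leak this kind of test is meant to catch.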

Pulling it Together

Each of these ten patterns is something you can see, correct, and write tests for. I’ve found that when I apply this mindset (find the small mistake, correct it, then write a test that would catch its recurrence) the maintenance burden drops and vulnerability risk falls. It’s not magic. It’s disciplined code hygiene, accompanied by test coverage and automation.

Warning: The code snippets and guidance in this article are illustrative, designed to show common fixes and tests, not full production implementations. Always run comprehensive tests, review configurations, and perform security assessments before deploying. For internet-facing or sensitive systems, involve a qualified security professional. The author and publisher assume no responsibility for issues arising from untested or unsupervised use.

Author

  • Daniel John

    Daniel Chinonso John is a tech enthusiast, web designer, penetration tester, and founder of Aree Blog. He writes clear, actionable posts at the intersection of productivity, AI, cybersecurity, and blogging to help readers get things done.
