We were moving fast.
Features shipped every week.
Stakeholders were happy.
The backlog was finally under control.
Then, almost without noticing, everything slowed down. A feature that should have taken a day took three, and a small change broke something unrelated. Fixing bugs started taking longer than building new features.
And at some point, someone said:
“But this used to be faster, right?”
They weren’t wrong. At some point in the past, things were faster, but that speed came at a cost.
A cost that wasn’t visible at the time.
A cost that quietly accumulated.
This is something I’ve often wanted to explain to non-technical stakeholders:
we didn’t suddenly become slower, we’re just paying back what we borrowed.
TL;DR
- Technical debt behaves like a high-interest loan: it feels cheap at first, but becomes expensive over time.
- The real problem isn’t having technical debt, it’s letting it compound unmanaged.
- Refactoring isn’t a cost, it’s an investment with measurable ROI in team velocity.
Table of Contents
- The Illusion of Speed
- Technical Debt as a Financial Model
- Where the Interest Shows Up
- The Compounding Effect
- When Teams Hit Default
- The ROI of Refactoring
- When NOT to Refactor
- Practical Ways to Manage Technical Debt
- Final Thoughts
The Illusion of Speed
Technical debt often starts as a conscious trade-off:
- You skip a refactor.
- You duplicate a bit of logic.
- You hardcode something “just for now”.
And in the moment, it feels like the right decision because:
- You move faster.
- You deliver sooner.
- You hit the deadline.
That’s why it’s so hard to avoid: it works. But what you’re really doing is borrowing time from your future self, and like any loan, that time comes back.
Technical Debt as a Financial Model
One of the most useful ways to think about technical debt is to treat it like an actual financial system.
Not just a metaphor, but a model.
Principal: the shortcut
The principal is the initial shortcut you take:
- skipping a proper abstraction
- duplicating logic instead of extracting it
- shipping a workaround instead of fixing the root cause
Individually, these decisions are often reasonable.
Sometimes even necessary.
Interest: the friction
The interest is what you pay every time you touch that code again. It shows up as:
- extra time to understand what’s happening
- unexpected side effects
- more effort to implement even simple changes
You don’t notice it immediately.
But it’s there, every time.
Compounding: the multiplier
And then comes the real problem: compounding.
Each new feature built on top of messy code increases the cost of the next one.
Not linearly.
Exponentially.
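A toy model makes the shape of that curve concrete. Everything here is an illustrative assumption (a flat 10% friction multiplier per messy feature), not a measurement:

```javascript
// Toy model: cost of the Nth feature when each previously shipped
// messy feature adds a fixed friction multiplier on everything after it.
// baseDays and frictionRate are illustrative assumptions, not data.
function featureCost(n, baseDays = 1, frictionRate = 0.1) {
  // Each prior feature multiplies the cost by (1 + frictionRate)
  return baseDays * Math.pow(1 + frictionRate, n - 1);
}

// Feature 1 costs 1 day; by feature 10 the same kind of work
// already costs more than twice as much.
const costs = [1, 5, 10].map((n) => featureCost(n).toFixed(2));
console.log(costs); // [ '1.00', '1.46', '2.36' ]
```

The exact rate doesn’t matter; what matters is that the cost curve bends upward instead of staying flat.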
Where the Interest Shows Up
Interest doesn’t arrive as one big, visible cost; it shows up as friction.
Small things that make your work just a bit slower, every single day.
- A feature takes longer than expected
- A bug fix introduces another bug
- You spend more time reading code than writing it
- Onboarding a new developer becomes difficult
Or even something like this:
```javascript
// "temporary" workaround from 6 months ago
if (user.role === "admin" && featureFlagX) {
  // special case inside special case
}
```
Nothing here breaks the system, but everything here slows you down.
That’s the interest you’re paying.
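Paying down this kind of micro-debt often costs minutes, not days. One common move (a sketch, reusing the hypothetical `user` and `featureFlagX` from the snippet above) is simply to give the special case a name, so the intent survives the next six months:

```javascript
// Sketch: name the special case instead of leaving an anonymous
// nested condition. `user` and `featureFlagX` are the hypothetical
// values from the workaround above.
function isAdminPreviewEnabled(user, featureFlagX) {
  // The "why" now lives in one place, with room for a comment
  // linking to the ticket that explains the workaround.
  return user.role === "admin" && featureFlagX;
}

const user = { role: "admin" };
console.log(isAdminPreviewEnabled(user, true));  // true
console.log(isAdminPreviewEnabled(user, false)); // false
```

The behavior is identical; what changes is that the next reader pays less interest to understand it.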
The Compounding Effect
This is where things become dangerous.
Technical debt doesn’t just add cost, it multiplies it. Every new feature built on top of unclear or fragile code becomes:
- harder to implement
- harder to test
- harder to change
So the next feature takes longer.
And the next one.
And the next one.
At some point, the system isn’t just complex.
It’s actively resisting change.
And that’s when velocity starts dropping, even if your team hasn’t changed at all.
When Teams Hit Default
In finance, default happens when you can’t repay your debt anymore.
In software, it looks different, but the signal is clear.
You see it when:
- refactoring is always postponed
- certain parts of the codebase are avoided
- every release feels risky
- progress slows down despite increasing effort
At this stage, teams often react the wrong way.
They try to push harder.
More hours.
More pressure.
More “just ship it”.
But the problem isn’t effort.
It’s accumulated complexity.
And no amount of speed can compensate for that.
The ROI of Refactoring
Refactoring is often perceived as a cost: something that slows down delivery, something you “don’t have time for”. But that perspective ignores the return.
Refactoring is an investment.
And like any investment, it pays off over time.
Let’s make it concrete. Imagine:
- A feature currently takes 3 days to implement
- After refactoring, similar features take 1.5 days
That’s a 50% improvement.
Over 10 features, you’ve saved 15 days.
That’s not just cleaner code, that’s recovered velocity.
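That back-of-the-envelope math can be sketched in a few lines. The day counts are the article’s illustrative numbers, not benchmarks, and the break-even helper assumes the refactor itself has a known up-front cost:

```javascript
// Days recovered across future features after a refactor,
// using the article's illustrative numbers (3 days -> 1.5 days).
function refactoringSavings(features, daysBefore, daysAfter) {
  return features * (daysBefore - daysAfter);
}

// Break-even: how many features until the refactor pays for itself?
// refactorCostDays is a hypothetical up-front investment.
function breakEvenFeatures(refactorCostDays, daysBefore, daysAfter) {
  return Math.ceil(refactorCostDays / (daysBefore - daysAfter));
}

console.log(refactoringSavings(10, 3, 1.5)); // 15 days over 10 features
console.log(breakEvenFeatures(6, 3, 1.5));   // pays for itself after 4 features
```

Framing it this way also gives you the number stakeholders actually ask for: not “is the code cleaner?”, but “when does this investment break even?”.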
And just like debt compounds negatively, good code compounds positively:
- faster development
- fewer bugs
- easier onboarding
- more confidence in changes
This is where engineering meets business, because velocity is not just a technical metric, it’s a business advantage.
When NOT to Refactor
Not all debt needs to be repaid immediately, and not all code needs to be perfect.
Refactoring everything blindly can be just as harmful.
Avoid refactoring when:
- the code is rarely touched
- the feature is about to be replaced
- you’re still validating a product idea
- the cost clearly outweighs the benefit
The goal is not to eliminate technical debt; it’s to manage it intentionally.
Practical Ways to Manage Technical Debt
You don’t need a full rewrite to stay in control; you need consistency.
A few habits make a huge difference over time:
- Refactor as you go: improve code while you’re already working on it
- Make debt visible: track it instead of hiding it
- Set a refactoring budget: even 10–20% of time is enough
- Review for maintainability, not just correctness
- Call out complexity early, before it spreads
These are small actions, but they prevent large problems.
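As one concrete way to “make debt visible”: some teams tag PR descriptions with a marker and track the ratio over time. A minimal sketch, where the `[debt: new]` / `[debt: reduced]` tag format is an assumption, not a standard:

```javascript
// Sketch: count hypothetical [debt: new] / [debt: reduced] tags
// across PR descriptions so the trend can be graphed over time.
function debtRatio(prDescriptions) {
  let added = 0;
  let reduced = 0;
  for (const text of prDescriptions) {
    if (text.includes("[debt: new]")) added++;
    if (text.includes("[debt: reduced]")) reduced++;
  }
  return { added, reduced };
}

const prs = [
  "Quick fix for checkout [debt: new]",
  "Extract pricing rules [debt: reduced]",
  "Hotfix under deadline [debt: new]",
];
console.log(debtRatio(prs)); // { added: 2, reduced: 1 }
```

A single number like this is fuzzy on its own, but a ratio that trends the wrong way for months is hard to argue with.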
Final Thoughts
Technical debt isn’t the enemy, it’s a tool.
Sometimes you take on debt to move faster, and that’s a valid decision. But if you ignore it, it becomes a liability and, eventually, starts slowing everything down.
The real problem isn’t having technical debt.
It’s pretending you don’t.
If this resonated with you:
- Leave a ❤️ reaction
- Drop a 🦄 unicorn
- Share the most “expensive” piece of bad code you’ve seen
And if you enjoy this kind of content, follow me here on DEV for more.
Top comments (16)
I read this and nodded… then winced a bit.
Yes, bad code is like a high-interest loan — but the uncomfortable truth is: most teams don’t realize they’re borrowing. Nobody wakes up thinking “today I’ll write something unmaintainable” — it’s usually deadlines, context switching, and just trying to ship.
Where I slightly disagree is this: not all “debt” is the same. Some debt is intentional — you ship fast, learn, and repay it quickly. That’s leverage. The real killer is the silent kind: the hacks that become architecture, the TODOs that become policy, the “we’ll fix it later” that nobody owns. That’s when the interest compounds and starts eating velocity sprint after sprint.
I’ve seen teams blame velocity drops on process, meetings, or even people — when the real culprit was a codebase nobody wanted to touch. When adding a small feature takes 3x longer than it should, you’re not slow — you’re paying interest.
The takeaway for me:
Debt isn’t the problem — unmanaged debt is
Speed isn’t the enemy — unpaid shortcuts are
And refactoring isn’t “nice to have” — it’s how you stop the bleeding
Good article. Just missing one harsh reality:
you don’t notice technical debt when you take it — you notice it when your best engineers start avoiding parts of your system.
Thank you @paolozero , I really appreciate your comment, especially the distinction between intentional and silent debt. That’s a nuance I didn’t fully unpack, and you’re right: not all debt is created equal.
I like how you framed “leverage vs. liability.” I’ve seen that play out too: teams making conscious tradeoffs to learn fast, then actually circling back to clean things up. When that loop exists, debt can be a tool. When it doesn’t… it becomes exactly the kind of drag I was warning about.
Your point about engineers avoiding parts of the system hits hard. That’s usually the moment when debt stops being an abstract concept and becomes a cultural problem. Once people start routing around the code instead of improving it, velocity loss is just the visible symptom.
If I were to extend my own argument after your comment, I’d say: the real danger isn’t just the “interest rate”, it’s losing the team’s willingness to engage with the codebase at all.
Thanks for adding that layer, this is exactly the kind of discussion I was hoping the article would spark.
I really like the framing of bad code as a high-interest loan, it’s one of the clearest ways to explain technical debt to non-engineers.
What stood out to me is how subtle the “interest payments” are. It’s rarely a dramatic failure, more like a constant tax on everything: slower feature delivery, harder debugging, more regressions. As you mentioned, it quietly eats away at team velocity until it becomes the norm.
One thing I’ve seen work well in practice is making that interest visible. Instead of saying “this code is messy,” framing it like:
“This shortcut is adding ~20% extra effort to every change in this area”
suddenly turns a technical concern into a business decision.
Also, I appreciate the implicit point that not all debt is bad, it’s the unmanaged, high-interest kind that kills teams. Strategic shortcuts with a repayment plan can be valuable, but most teams underestimate how fast “we’ll fix it later” turns into never.
Curious: have you found any effective ways to quantify or surface this “interest” to stakeholders without it feeling hand-wavy?
Thanks @lucaferri, you captured exactly what I was trying to get at with the “invisible tax” idea.
That’s been my experience too. The danger isn’t the big failure, it’s the normalization of friction. When everything feels just a bit slower, a bit harder, a bit riskier, teams stop questioning it. It becomes “just how things are.”
I especially like your framing of making the interest visible as a percentage cost. That shift from “messy code” to “ongoing business expense” is powerful, because it moves the conversation out of opinion and into tradeoffs.
To your question, yes, but I’ll be honest: it’s never perfectly precise, and trying to over-quantify it can backfire. What I’ve found works is a mix of lightweight signals rather than a single “number”:
Individually, each of these is a bit fuzzy. Together, they tell a story that’s hard to ignore.
Sometimes I’ll even frame it narratively rather than numerically:
“This feature took 3 days. In a healthier part of the system, it likely would’ve taken 1.”
It’s not scientifically exact, but it’s concrete enough for stakeholders to grasp the cost.
The key, I think, is consistency, not proving the exact interest rate, but repeatedly showing that the same areas incur the same kind of drag. Over time, that pattern builds trust and makes the repay vs defer conversation much easier.
And you’re absolutely right, most teams don’t decide to carry high interest debt, they just underestimate how quickly “later” arrives.
Thank you Gavin for your reply, I really appreciate it
This analogy is spot on, @gavincettolo!
Coming from a Cloud Architecture background, I always think of technical debt as "architectural friction." Like you mentioned with the interest payments, eventually, the team is just burning cycles keeping the lights on rather than building new features.
I particularly liked your point about "The Knowledge Silo Tax." In distributed systems, if the code is "spaghetti" and only one person understands the service's state machine, that’s not just a velocity killer—it's a massive operational risk.
I’ve found that the "interest" is most expensive during a scaling event. If your infrastructure isn't clean, a 10x spike in traffic doesn't just slow you down; it breaks the bank (literally and figuratively).
How do you usually advocate for "debt repayment" sprints when talking to non-technical stakeholders who are focused solely on the roadmap?
Thank you @elenchen for your comment!
I love the “architectural friction” framing, that clicks immediately.
When I talk to non-technical stakeholders, I avoid the word “debt” entirely and reframe it in terms they already care about: risk, speed, and cost.
Instead of saying “we need a refactor sprint”, I’ll say something like:
I also try to tie repayment directly to the roadmap, not against it. For example:
What usually works best is pairing it with a concrete moment, like before a scaling event or a risky launch, exactly like you mentioned. That’s when the cost of not acting is easiest to understand.
Thank you @gavincettolo
I like your POV and I am curious to read your next articles on these topics. You have earned a new follower :)
The financial model framing is spot-on, and Christie's point about cognitive cost being the real blocker resonates hard.
I'll add one dimension that's magnified this for me: programmatic codebases at scale. I maintain an Astro site that generates 89K+ pages across 12 languages. When the original comparison page templates accumulated debt (thin content, bad internal linking, questionable redirects), the interest wasn't "this feature takes an extra day" — it was "Google is crawling 53,000 pages and rejecting them because the template quality is below threshold."
At programmatic scale, every template-level shortcut compounds across thousands of generated pages simultaneously. One bad decision in a stock page template affects 8,000+ tickers × 12 languages. Eventually I had to remove the entire comparison page type — not refactor it, delete it — because the debt had compounded beyond the point where incremental fixes were worth it.
The takeaway for me: in template-driven systems, the compounding interest rate is multiplied by your page count. Debt that's manageable at 10 pages becomes catastrophic at 100K.
Thanks for sharing this, @apex_stack , this is a fantastic extension of the idea, and the example makes it very concrete.
I really like how you push the “high-interest loan” analogy further into programmatic systems. At that scale, the impact of technical debt stops being linear and becomes multiplicative. It’s not just that a change is harder, it’s that every small flaw is instantly replicated across thousands of pages, as you described.
Your point about the type of impact changing is especially insightful. In a typical codebase, we tend to feel the cost as slower development. But in your case, the feedback loop is external and much harsher: search engines effectively act as an unforgiving validator of quality. When template debt accumulates, the penalty isn’t just velocity, it’s visibility and reach.
The fact that the only viable option was to delete the entire page type is telling. That’s the “default moment” of the loan, where incremental repayment is no longer enough and you’re forced into a full reset. It’s a powerful illustration of how ignoring compounding debt can eventually remove optionality altogether.
I also really like your takeaway: in template-driven systems, the interest rate scales with distribution. That’s a great mental model: it suggests we should treat templates and generators as high-leverage assets; the kind where quality standards need to be higher, not lower, precisely because of their amplification effect.
This adds an important dimension to the original argument: technical debt isn’t just about time, it’s about surface area and the larger the surface area you’re projecting onto (like 100K+ generated pages), the less forgiving the system becomes.
Really appreciate you bringing in this perspective, it makes the risks of “small” shortcuts much more tangible.
Really well said, Gavin. The surface area framing is exactly the missing piece in most tech debt discussions. In a traditional codebase, debt slows you down linearly — you ship slower. But in a template-driven system generating 100K+ pages, a single bad decision in the generator compounds across every output. The "interest rate" isn't time, it's distribution.
I learned this the hard way when I had to nuke an entire page type (comparison pages) because the template quality was too low and Google started penalizing the whole domain's crawl signals. That wasn't a refactor — it was exactly the "default moment" you described. The debt had compounded past the point where incremental fixes were viable.
The takeaway for anyone building programmatic systems: your templates are the highest-leverage code you own. Treat them like critical infrastructure, not scaffolding.
Developers don't start avoiding areas because they look bad/messy; they avoid them when the cognitive cost of understanding what's safe to change gets too high. Rebuilding the mental model from scratch just isn't worth it.
That's when people stop "leaving the code better than you found it," because they made the smallest possible change and got out. Anything more than that felt too risky.
Over time, the original author becomes the only person who really understands it, because knowledge isn't distributing across the team. Now you've got a bottleneck and a bus factor problem.
That's why I think of code readability as a performance constraint (especially with the large amounts of code we generate with AI these days).
Thanks so much for this thoughtful comment, @christiecosky . You’ve captured something really important that often gets overlooked.
I completely agree: it’s not the “messiness” of code that drives people away, it’s the uncertainty. When the cognitive cost of rebuilding a mental model gets too high, even experienced developers start optimizing for safety over improvement. At that point, “leave the code better than you found it” quietly turns into “don’t break anything and get out.”
What you’re describing is exactly the moment when technical debt compounds. The system stops being a shared asset and starts becoming territory. And as you said, once knowledge stops distributing, you don’t just have a maintainability issue, you get a coordination bottleneck and a real bus factor risk.
I also really like your framing of readability as a performance constraint. That resonates a lot, especially now. With the rise of AI assisted code generation, we’re producing more code than ever, but not necessarily more understandable code. If anything, the gap between code that works and code that can be safely evolved by a team is getting wider.
To me, this reinforces a subtle shift in how we should think about quality: readability isn’t just a nice to have or a matter of taste, it’s what enables continuous change. Without it, velocity might look fine in the short term, but it’s quietly borrowing against the future, which ties back nicely to the high interest loan analogy.
Really appreciate you adding this perspective. It sharpens the argument in a meaningful way.
The loan analogy hits different when you're in fintech — because we literally deal with loans and interest rates, and the parallels are painfully exact. In our payment platform, we had a "quick fix" in our transaction routing logic from year one. Worked fine at 100 transactions/day. By the time we were doing thousands, that one shortcut was causing cascading retries that inflated our infrastructure costs by 30%. The "interest payment" was invisible until it wasn't. The hardest part isn't identifying the debt — it's convincing yourself to pay it down when shipping new features feels more urgent. What worked for us: we stopped calling it "refactoring time" and started calling it "reducing the cost of the next feature." Same work, but suddenly leadership gets it.
Thank you @mickyarun ! That’s such a perfect real-world example, especially in fintech where the “interest” is literally measurable.
The cascading retries point is exactly it. The system looks fine at low scale, then suddenly the hidden cost surfaces all at once and hits both infra and reliability.
I really like your reframing. “Reducing the cost of the next feature” is the right mental model because it connects directly to delivery, not just code quality.
I’ve seen the same shift work well. As soon as the conversation becomes:
it stops being a tradeoff and starts being an investment.
And you’re right, the hardest part isn’t spotting the debt, it’s choosing to act before it becomes painful. Most teams wait for the spike you described, but the ones that scale smoothly are the ones that treat those early signals seriously.
Something I've been doing that's worked surprisingly well: every PR gets a "debt tag" in the description. Just a quick line like `[debt: new]` or `[debt: reduced]`. It takes 5 seconds, but after a few months you can actually graph the trend. We noticed our ratio was like 8:1 (new debt to reduced debt) during a crunch period. That single metric got leadership to approve a dedicated cleanup sprint more than any amount of "we need to refactor" conversations ever did. Numbers talk.
The other thing that really clicked for me was framing it not as "refactoring time" but as "reducing the cost of the next feature." When you tell a PM "this cleanup means Feature X ships in 3 days instead of 8," suddenly it's not maintenance anymore - it's an investment with a clear payoff timeline.