More than bugs: Why quality within the entire SDLC is critical
When people hear "quality," they often think only of "Quality Assurance" and testing. If something breaks in production, the first reaction is usually to question whether QA caught it, or if there was even a QA team involved. But this perspective is far too narrow.
Quality isn’t just about testing. It’s about building things right at every stage of the Software Development Life Cycle (SDLC). That means using the right tools, following good processes, writing quality code, communicating clearly, and thinking critically. A gap in any part of the process can ripple downstream, creating miscommunication, delays, frustration, and ultimately a lower-quality product.
To really drive it home, it helps to walk through how mistakes or oversights in each SDLC phase can affect not just the product, but the entire team and company. This post assumes you’re familiar with the SDLC. If not, there are plenty of quick refreshers out there. While the terminology and structure may vary across organizations, the core stages are largely the same.
One last note: my experience leans toward requirements, design, development, testing, and the software delivery process, so that’s where I’ll focus. There’s plenty to unpack in each phase, but I’ve aimed to keep things concise. If you're short on time, skimming the headings and subheadings alone will give you the main takeaways.
Requirements Phase
Insufficient customer or product research
First, it's about building the right thing. Without solid research into customer needs and pain points, the team risks solving the wrong problems. Designers may focus on the wrong UX flows, developers may build features no one uses, and QA may validate functionality that ultimately doesn’t matter. The result? A product that doesn't meet the customer's needs, wasting time and money and eroding customer satisfaction.
Incomplete requirements
By failing to examine requirements from all necessary angles (technical, user, business, edge cases), we leave gaps that can bite the team later. Designers have to rework screens, developers hit unexpected blockers, and QA struggles with unclear test scenarios. This leads to context-switching, last-minute discussions, and a general slowdown. Worse, if no one catches the gap, the feature goes live incomplete—hurting product quality and credibility.
Not using a project management tool
When requirements live in disconnected tools like Google Docs or Excel, we lose visibility, version control, accountability, and traceability. PMs can’t track progress or dependencies, developers don’t have a clear backlog, and QA lacks a single source of truth. A proper tool like Jira or Asana helps with task assignment, status updates, linking user stories to bugs or epics, and ensures the whole team is aligned on what’s happening and when.
Allowing scope creep
Scope creep, while sometimes unavoidable, needs to be managed very carefully. When it goes unchecked, PMs scramble to realign priorities, developers feel overworked, designers have to redo their work, and QA has less time to test thoroughly. This reactive cycle creates stress, burnout, and bugs, ultimately damaging product quality and slowing delivery.
Planning Phase
Lack of prioritization
The cliché "when everything is a priority, nothing is a priority" is painfully real. Without clear priorities, teams spread themselves thin, constantly switch context and dig themselves into fire-fighting mode. PMs struggle to provide direction, developers build features in the wrong order, designers design screens no one needs yet, and QA scrambles to test whatever shows up. The result is a chaotic process full of mistakes, delays, and a product that feels fragmented and unfocused.
Unrealistic or no timelines
Unrealistic deadlines lead to rushed work, stress, and burnout. Developers cut corners, QA has less time to test, and designs are pushed through without validation. When the pressure becomes the norm, quality drops and team morale suffers. On the flip side, having no timeline means work drags indefinitely. Teams lose urgency, planning becomes sloppy, and features miss critical market windows. Customers see delays, and the company loses its competitive edge.
Poor people allocation
It’s not just about quantity; it’s about fit. Not having enough people forces teams to cut scope or sacrifice quality. But even with enough people, assigning work to someone with the wrong skill set slows progress. Developers may need hand-holding or rework, designers might miss product context, and QA might lack domain knowledge. The entire SDLC suffers, leading to mediocre output and missed expectations.
Not identifying or considering dependencies
Overlooking dependencies creates invisible blockers. A developer who needs an API that isn’t ready stalls or builds against a mock, which later causes integration issues. QA teams without early access to environments or data delay their testing. Designers waiting on product input can’t finalize their flows. The lack of foresight causes bottlenecks, frustration, and last-minute rushes that hurt product stability and timelines.
Design Phase
Unclear or low-quality designs
When designs lack clarity or fail to address various scenarios (like mobile vs. desktop, error states, or edge cases), they become a bottleneck. Developers are forced to interpret or guess, QA doesn't know what the "correct" behavior looks like, and PMs and Designers must constantly clarify. This confusion leads to rework, inconsistent user experiences, and a drop in overall product polish and cohesion.
Designs without dev or QA input
Without developer input, designers may propose solutions that are complex to build, inefficient, or incompatible with existing architecture. Without QA involvement, potential testability issues or edge cases are overlooked. This disconnect results in delays, brittle implementations, and a user experience that doesn’t align with technical realities.
Lack of technical designs
Skipping technical design or architecture discussions may save time upfront but costs more later. Without planning around scalability, reliability, or integration points, developers may rush into coding with short-sighted solutions. This can lead to performance issues, increased tech debt, or entire features needing rework. As deadlines loom, the team cuts corners to compensate, reducing code quality and product stability. A formal technical design isn't always necessary, but when it's skipped where it matters, the effects are far-reaching.
Development Phase
Not thoroughly reviewing requirements and designs
Developers have another opportunity before coding to validate what they’re building. Skipping a review of the requirements and designs leads to misunderstood functionality, missed edge cases, and wasted effort. Designers and PMs get unexpected implementation questions, QA discovers missed scenarios (the famous "we didn't talk about this requirement, so it's not a bug" debate), and PMs deal with delivery delays or scope debates.
Lack of impact or risk assessment
Failing to ask “what might this change break?” is a fast track to bugs in production. Without risk assessment, developers miss critical scenarios, QA may not know what to regression test (yes, risk assessment is part of QA's job too), support teams face angry customers, and the whole team's work is disrupted to address production issues.
Neglecting scalability, security, or maintainability
If developers skip core principles like scalability and security, the product may work - for now. But as users grow or threat models evolve, performance issues, data breaches, or instability emerge. QA might struggle to simulate load or security edge cases. Maintenance becomes painful, and the team scrambles to fix flaws that should’ve been considered from the start. Long-term product quality takes a massive hit.
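To make one of these concrete, here's a minimal sketch of the kind of security basic that gets skipped under deadline pressure, written in TypeScript with node-postgres; the table, columns, and function names are hypothetical:

```typescript
import { Pool } from "pg";

// Connection settings come from the standard PG* environment variables.
const pool = new Pool();

// Vulnerable: user input concatenated straight into SQL. An attacker who
// submits `' OR '1'='1` can read every row in the (hypothetical) users table.
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT id, email FROM users WHERE email = '${email}'`);
}

// Safer: a parameterized query keeps the input as data, never as SQL,
// at no extra cost to the developer writing it.
async function findUser(email: string) {
  return pool.query("SELECT id, email FROM users WHERE email = $1", [email]);
}
```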
Poor software design practices
Messy and unstructured code that doesn't follow proven design patterns leads to fragile systems. Developers spend more time fixing bugs and less time shipping value. Every small change risks breaking unrelated parts of the system. Without clean architecture, modularity, or reusable components, code becomes hard to test and harder to scale. If unit tests are skipped, even basic failures (like null checks) slip through, causing production incidents and undermining trust in the codebase.
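As a small illustration of what modularity buys you, here's a hedged TypeScript sketch; all the names (RemoteTaxService, PriceCalculator) are invented for the example:

```typescript
// Hard to test: the hidden dependency means every test of total()
// drags in the real tax lookup (imagine a network call inside).
class RemoteTaxService {
  currentRate(): number {
    return 0.2; // placeholder for a remote call
  }
}

class TightlyCoupledCalculator {
  total(amount: number): number {
    return amount * (1 + new RemoteTaxService().currentRate());
  }
}

// Modular and testable: the dependency sits behind a small interface,
// so unit tests can inject a stub and exercise the logic in isolation.
interface TaxRateProvider {
  currentRate(): number;
}

class PriceCalculator {
  constructor(private readonly taxRates: TaxRateProvider) {}

  total(amount: number): number {
    return amount * (1 + this.taxRates.currentRate());
  }
}

// In a test: new PriceCalculator({ currentRate: () => 0.2 }).total(100) === 120
```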
Ineffective or no code reviews
Code reviews catch bugs, ensure requirements are met, and promote shared understanding. Skipping them means low-quality code, increased technical debt, and missed opportunities to improve design.
But improper code reviews are just as bad - reviews that lack depth, context, or accountability waste time and fail to deliver value. The benefits of code reviews are widely documented.
Poor documentation or code comments
Code that lacks clear documentation slows everyone down. Future developers - even the original author - waste time deciphering logic. Important decisions or assumptions get lost, and onboarding new teammates becomes harder and slower. While AI assistants reduce some friction, good in-line comments and concise documentation still save countless hours and avoid missteps. The little time spent documenting pays off quickly across the team.
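For instance, a comment that records the why, not just the what, saves the next reader real time. A hypothetical TypeScript example; the retry numbers and webhook scenario are invented:

```typescript
type WebhookPayload = { orderId: string; body: string };

// Placeholder for the actual HTTP call to the provider.
async function send(payload: WebhookPayload): Promise<void> {}

/**
 * Retries webhook delivery up to three times with exponential backoff.
 *
 * Why three: the provider's delivery window is 30 seconds, so a fourth
 * attempt would land after the order is already marked failed upstream.
 * Assumes the receiving handler is idempotent.
 */
async function deliverWebhook(payload: WebhookPayload): Promise<void> {
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      await send(payload);
      return;
    } catch (err) {
      if (attempt === 3) throw err;
      // Back off 1s, then 2s, before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
}
```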
Lack of unit tests
The absence of unit tests is one of the clearest indicators of a low-quality codebase. It burdens QA, slows down feedback loops, and leads to constant regressions. Relying solely on end-to-end tests makes test suites brittle and hard to maintain. Developers lack confidence in their changes, and features take longer to ship. Unit tests empower faster dev cycles, shift quality left, and let QA focus on other types of testing instead of playing "catch the falling knives". Teams that skip them pay the price in bugs, instability, and speed.
Think about it: why would someone write a Playwright test to cover every happy and unhappy path of a login form when unit tests could replicate all of it with higher coverage, faster runs, and greater stability?
In my opinion, this is one of the most important things a team should be doing. But don't just take my word for it; my primary role is in software quality, so yes, I may be biased. Listen to Martin Fowler (or rather Ham Vocke, in this case).
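To make that tangible, here's what those login-form scenarios might collapse into at the unit level; validateLogin is a hypothetical helper, and the assertions use the Vitest API (Jest's is nearly identical):

```typescript
// login-validation.ts: a hypothetical helper extracted from the form component.
export function validateLogin(email: string, password: string): string | null {
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) return "Invalid email address";
  if (password.length < 8) return "Password must be at least 8 characters";
  return null; // credentials pass client-side validation
}

// login-validation.test.ts
import { expect, test } from "vitest";
import { validateLogin } from "./login-validation";

test("rejects a malformed email", () => {
  expect(validateLogin("not-an-email", "longenough1")).toBe("Invalid email address");
});

test("rejects a short password", () => {
  expect(validateLogin("user@example.com", "short")).toBe(
    "Password must be at least 8 characters"
  );
});

test("accepts valid credentials", () => {
  expect(validateLogin("user@example.com", "longenough1")).toBeNull();
});
```

Each of these runs in milliseconds with no browser, no test environment, and no flaky selectors.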
Testing Phase
Not thinking through or tracing requirements
When QA doesn’t align testing efforts with documented requirements, it results in test coverage gaps. Critical flows go untested, edge cases are overlooked, and features are validated against assumptions. The result? Incomplete products that don't meet customer expectations, leading to rework, missed deadlines, and a hit to the company’s reputation.
Undocumented test cases and test executions
Without documentation, testing becomes a siloed effort that drags down the entire team's capacity for quality work. There’s no visibility into what was tested, which scenarios were skipped, or what passed. This creates bottlenecks when handing off work, hampers knowledge sharing, and reduces traceability where compliance is essential. It also makes it hard to identify gaps (because we lose input from others) or improve test coverage over time. Put simply, writing scenarios down forces us to think them through more clearly, which improves the quality of test coverage.
Incomplete or insufficient bug reports
Bug reports that lack detail waste time. Developers have to guess how to reproduce the issue or may fix the wrong problem entirely. Other QAs can’t verify or retest effectively. Vague or poorly written tickets reduce team efficiency, increase frustration, and allow issues to linger - sometimes slipping into production unnoticed.
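For contrast, here's a sketch of the minimum a useful bug report carries; every detail below is invented for illustration:

```
Title: Checkout button unresponsive on mobile Safari
Steps to reproduce:
  1. Add any item to the cart
  2. Open the cart in iOS Safari
  3. Tap "Checkout"
Expected: the payment page opens
Actual: nothing happens, no error shown to the user
Environment: staging, build 2024.06.1, iPhone 13 / iOS 17.4
Severity: high (blocks the purchase flow)
Attachments: screen recording, device console logs
```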
Not performing regression testing
Regression testing is one of QA’s most critical responsibilities. Skipping it means new changes might break existing functionality, silently degrading the product with each release. This decreases user trust and increases the support team's work. Without consistent regression checks, quality becomes unpredictable, especially in fast-moving environments.
Not communicating testing needs or timelines
If QA doesn’t communicate what they need - be it test data, environment access, or enough time to complete their work - they get squeezed when testing time arrives. This leads to rushed testing, missed scenarios, and skipped regression. Developers and PMs are also left in the dark, unable to plan or adjust timelines. If QA doesn't get the time they need to test, they can't effectively manage risk, and quality suffers.
Neglecting automated tests
Without automation, testing doesn’t scale. Manual regression testing grows with every change and feature, slowing down releases. QA becomes a bottleneck. Under pressure, tests get skipped. Bugs slip through, deadlines are missed, and confidence in testing and deployments drops. Follow the Test Pyramid and ensure an effective automation strategy.
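And even a thin end-to-end layer at the top of that pyramid adds real confidence when it backs a solid base of unit tests. A minimal Playwright sketch, where the URL and labels are placeholders:

```typescript
import { test, expect } from "@playwright/test";

// One broad smoke test at the top of the pyramid: can a user actually log in?
// The fine-grained validation rules stay in unit tests further down.
test("user can log in and reach the dashboard", async ({ page }) => {
  await page.goto("https://app.example.com/login");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("secret-password");
  await page.getByRole("button", { name: "Log in" }).click();
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```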
Software Delivery Process
This is perhaps one of the most overlooked yet most critical aspects of building high-quality software. This is the pot that cooks all the ingredients. A dysfunctional delivery process makes everything else - requirements, design, development, testing, deployments, maintenance - harder to do well. Sure, you can still ship, but it’ll be messy, slow, and painful. Think missed deadlines, increased miscommunication, buggy features, and frustrated teams. We'll focus on Agile, since it’s the dominant approach today.
Not refining the backlog
Without a refined backlog, requirements are vague or missing entirely, technical complexities go unaddressed, and stories aren’t ready for the team at the right time. Developers scramble to figure out what to build, QA doesn't know what to test, and PMs are stuck re-explaining work mid-sprint. Features are either delayed or half-baked, and the product suffers. You're building the ship as you sail it.
Skipping core Agile ceremonies
Ceremonies like standups, refinement, sprint planning, and retrospectives aren’t just boxes to tick. They have a clearly defined purpose. Without standups, blockers go unnoticed. Without backlog refinement and sprint planning, priorities get missed and timelines slip. Without retrospectives, recurring issues go unsolved. You can do all of this via Slack or emails, but in practice, that rarely happens effectively.
Following someone else's Agile process
While Agile has some core practices and principles, every company, team, and individual is different. Copy-pasting Spotify’s model or blindly following a SAFe diagram won’t work if it doesn’t match your team’s size, product, or culture. Misalignment leads to clunky meetings, overloaded roles, and burned-out teams. Morale drops, trust erodes, and the delivery process becomes a source of friction instead of flow, all of which damage quality and velocity.
Not seeing process as an evolving journey
If the team expects perfection from the get-go, they’ll be discouraged the moment things get messy (and they will). Agile thrives on iteration. Teams that treat process improvement as an ongoing journey learn, adapt, and grow stronger. Teams that don’t? They stagnate, blame the framework, and slide back into bad habits (sometimes all the way back to Waterfall delivery).
Making too many process changes too fast
If you’re tweaking everything all at once and constantly - new sprint lengths, new tools, new ceremonies - the team won’t know what’s working and what’s not. Instead of improving, things feel chaotic. People check out. Delivery slows. Quality dips. Sustainable change takes time, reflection, and pacing. Slow down to speed up.