QA needs to be a role in a team, not a separate team. When there's a "QA team", they inevitably end up in an adversarial relationship with development teams. The devs will lie to QA, or withhold the full truth, for a number of reasons: they don't want to be nagged about small things; they don't work for QA so QA has no direct authority; they only interact when things are broken, so the entire relationship is based on negativity; QA comes after the fact and it's too late to change things (or at least too late to change them the right way); QA lacks the engineering background to understand what it is that they're testing (so they test the superficial things and don't test the showstoppers); etc. There are tons of reasons a separate QA team creates an us-vs-them situation where the path of least resistance is deception and hoping no one finds out what's really broken.
It's ok to have a QA 'team' that represents a group of QA specialists and focuses on their unique interests and areas of responsibility, but the members of that team should be embedded in the development teams, working alongside them as they develop, helping to write test plans, identify problem areas before the code is written, clarifying acceptance criteria, etc. If there's a "QA manager", that person should be more like a mentor, or the kind of good PM who runs interference and supports the development effort, but isn't actively directing the development effort.
If your QA people aren't in the room (covid aside) with the developers, and are instead all gathered together in a separate "QA unit", you've already lost. By the time they identify problems, it's too late to fix them well, and they'll never change the development culture into one that learns to produce fewer bugs in the first place.
I disagree, and partially because of the reasons that you describe. Sometimes the devs are so used to the way the software works (they built it, after all) that it doesn't even occur to them that it sucks for people who aren't starting with a mental model of the system's internal state. Likewise, I've met devs who are apparently happy to ship... poor... software - sometimes an adversarial relationship is proof that the system is working.
I agree, this model seems to work best for our team. I've worked with different "agile" teams, both with a QA member embedded in the team and with separate QA teams. With both models, either the dev/QA relationship becomes toxic or the QA person is just a rubber stamp, depending on team dynamics. I found that taking away the safety net of QA from developers actually results in better ownership of product quality by the development teams, because they no longer have someone else to blame for not testing enough. It also has the added benefit that developers will automate most of the testing, because they generally don't want to spend all their time manually testing their own code.
I don't think it's inevitable that the QA/dev team relationship be adversarial. I've been at places where it was, sure. But, at other places, QA and dev worked together blamelessly to improve the quality of the software. Like most things, I think this comes down to establishing a culture of collaboration vs competition, and recognizing that the QA manager isn't just "the person who is always delaying the releases."
I'm sure mileage may vary, but I'm lucky to be at a company where the dev team has a great relationship with QA. I think you're right, having a good culture around it is important. I'm more impressed than I am annoyed when QA finds an obscure bug before things hit production.
I agree with all of this, except I will add that having a "QA Infrastructure" team does make sense. This team should not be responsible for product testing, but for building the cross-team infra and tooling needed to make it easier for the product engineers to write integration/functional tests. Of course, this is only needed once a company is above a certain size.
Sadly, I have seen too many instances where QA integrated into development teams becomes a rubber stamp, if not just a replay of the unit tests the developers already ran themselves. The team may not feel that way, but the support staff will certainly make it known around the water cooler.
Some projects are sensitive enough that a separate team which can do full regression testing, along with proper testing of each deliverable, is required. Too many projects shortchange what is actually tested, and this can lead to stress on the support teams, which in turn feeds back to the development teams.
Another big problem is outsourcing QA. It's a role that requires good communication, which has a high chance of breaking down due to cultural differences and use/understanding of terminology.
As for the 'adversarial' relationship, I'd say it depends on how both teams are managed and how well they communicate. Until recently I was in exactly that situation (a dev team and a dedicated QA team) and I never felt this, partly because the bugs found by QA would otherwise have been found by the clients, and no one at work wants that, and partly because the two managers worked very well together.
> The devs will lie to QA, or withhold the full truth, for a number of reasons
Avoiding a QA silo doesn't automatically fix the root issue. They'll do this with their managers or teammates too if things are sufficiently FUBAR.
As a developer, a good QA team is worth their weight to me in gold. Reach out proactively and engage them about new features you've built that you want 'em to hammer on - they'll be better at finding all the edge cases you didn't think about than you will. By engaging them early, you're more likely to get bug reports for code you still remember, and less likely to have them filing bugs right before launch at the 11th hour where your manager might plead for yet more crunch and overtime. If they're thoroughly testing my code for me, that's less time I need to spend carefully hammering my own code to avoid future blame. They'll help me hammer out good repro cases for strange bugs, and test on more varied hardware with different timings and usage patterns.
> they don't work for QA so QA has no direct authority
If devs aren't held to account for shipping broken shit, embedded QA won't necessarily fare much better.
> QA comes after the fact and it's too late to change things (or at least too late to change them the right way)
You can have quick turnarounds with QA teams. They definitely need to be able to get their hands on builds quick enough to provide devs useful feedback before shipping, though, and can't be siloed to the point of blocking direct communication. Even external QA teams employed by another company can succeed here though.
> QA lacks the engineering background to understand what it is that they're testing (so they test the superficial things and don't test the showstoppers);
Hire better QA and/or educate your existing QA better. They can chase superficial things and checklists, but if you can't explain enough of the basics for them to help chase down a crash or misbehavior you might be worried about, you've got a problem. Poor documentation, poor communication, poor guidance, poor understanding, too much bureaucracy... something fixable.
Perhaps the structure of game development has some tips here - we've often got multiple QA teams:
1. Internal studio-wide QA teams, which help chase down bugs internally so you don't have to - probably in the same building, at least. Rarely gatekeepers in and of themselves, but they do have the ear of your production staff.
2. External second party QA teams - possibly contractors, possibly employed by your publisher, maybe in another city entirely. Slower turnaround, but more vast and a fresh set of eyes not ruined by being too close to the project for too long. Can also include not-quite-"QA" specialty stuff like usability testing. Brought on later into the project.
3. External third party QA teams. Employed by the console vendors, not by you or your publisher. These guys can throw a wrench into your release schedule, and then charge your company for the privilege of having another go if you've failed the certification process a few too many times (or at least, they could at one point.)
Even if you have an adversarial relationship with the console vendor's QA teams, you'll buddy up to your second party and internal QA teams to find enough of the bugs to ship on time with minimal crunch if you have the slightest lick of sense, even if you haven't the slightest bit of shame at shipping broken things. They're the ones who will help you avoid failing certification, after all.
I've found that the orgs I've been in that expect Engineers + Product to own QA have had a healthier dev process than orgs that have a specific QA process (this is all for web platforms). I think it helps create that strong sense of ownership I'm hoping for.
This is probably not universal, though - which side of the org-design fence do people fall on here?
We sell a product in the QA space and get to talk to a lot of companies about this.* The pattern I see is that companies in the 1-10 employee range tend to have a QA process that comes about organically. You start out with zero customers so a bug doesn't really "cost" anything. As you get customers, the cost of a bug grows to a point where it makes sense to do manual test passes - maybe before a feature release, maybe after every deployment. Usually it's in a Google Doc or Google Sheet, and it's either one person running through the whole thing or a bunch of people divide-and-conquering it.
Eventually QA ends up being owned by either the Engineering team or a standalone QA team. I've seen both work, and both not work. Generally if it's not working for Engineering, it means they're not automating enough or the tests are flaky and they're just ignoring it. If the QA team approach is not working, they're probably overworked... Likely because the engineering org has deeper quality issues they need to fix, but they're trying to make up for it by throwing bodies at it in the QA org.
* We have a no-code test automation product that is used by both QA and Engineers: https://reflect.run
I think "manual test pass" is very seductive and very dangerous. Compared to writing good automation, manual testing once is cheaper, easier, and faster. Manual testing indefinitely is more expensive, harder, and slower. Short term thinking gets you stuck with flawed and burdensome manual testing.
In my view, the best approach is rigorous automation and manual exploratory testing which will find the kind of bugs automation misses and inspire new test cases.
I think automation is often a waste of time at the beginning of a project too. Bugs don't cost much (if anything). Building functionality that gets tossed away is routine. Tests don't provide value - getting the code in front of customers and seeing if it's what they really want provides value.
The problem is that once people get used to not having automated tests, it can be difficult to build up the habit.
Unit tests are also often hard (if not impossible) to retrofit, although IMO that's more a problem with unit tests themselves than with doing testing at a later stage of the project.
The problem is not automation vs. manual but creating and maintaining a good test plan (including exploratory tests). And we simply cannot rely on engineers for a good test plan. Once we have a good test plan, we can decide how much should be manual vs. automated. The simple rule of software dev is use before reuse, manual before automation, but some people are just too obsessed with the automate-everything motto.
Every org I've worked for that has had a dedicated QA team has had, by far, the shittiest quality of code and the most amount of bugs in the system/outages/issues in production.
I am not sure _why_ this is, but I think it has something to do with the "Not my problem" mentality, where engineers say "QA will figure out if this works or not" instead of "I'm responsible for figuring out if this works".
It hasn't become a red flag for me _yet_ but it's definitely _a_ flag.
Be that as it may, a team that would have shitty code with QA will likely have shittier code if you fire QA. Case Study: Windows 10.
That said, an understaffed QA team can be really helpful. Not enough people means you can't ship garbage and expect QA to catch it; they'll catch many things, but not everything. Not enough people also encourages efficiency in testing --- test the most important stuff well, test the other things opportunistically.
A QA team also can really help turn customer problem reports into actionable bug reports.
Agree with this, it's very important for engineers to own the quality and reliability of their own code. The team I was on that had embedded QA on each team had barely any unit tests because developers were used to relying on QA to catch bugs for them.
When I've been on teams without formal QA teams, the team was focusing way more on having good automated test practices in place.
This also applies from the SDET/SDE separation, which I see someone else in an adjacent thread mention. I've found a lot of value from taking ownership of both the code and the automated browser testing code. I found when those roles are separate, the browser tests end up breaking too often due to developers not being aware of how coupled they are to the current implementation. I also found when a dedicated team is writing browser tests, it becomes easy to go overboard with how much you test.
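For what it's worth, one thing that helps with that coupling is writing browser tests against user-facing attributes (roles, labels, visible text) instead of implementation details like CSS classes. A minimal Playwright-style sketch; the URL, labels, and headings are placeholders, not from any real product:

```typescript
// login.spec.ts -- hypothetical page; URL, labels, and headings are placeholders
import { test, expect } from '@playwright/test';

test('user can sign in with valid credentials', async ({ page }) => {
  await page.goto('https://example.test/login');

  // Query by accessible label/role rather than CSS class or DOM structure,
  // so a markup refactor doesn't break the test.
  await page.getByLabel('Email').fill('user@example.test');
  await page.getByLabel('Password').fill('correct horse battery staple');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Assert on what the user actually sees after logging in.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

Tests written this way tend to survive markup refactors, which matters a lot when the person maintaining the test isn't the person changing the implementation.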
The red flag for me is when QA is considered a non-coding team, divorced from product. QA engineers should be very technical, and should be writing code or automating. A non-automated QA process kills the feedback loop of the product.
Microsoft used to have separate roles for dev and test/QA (SDE and SDET), and is a good case study here. Sometime around 2013-ish, they merged the SDET role into the SDE role, and made SDEs responsible for automated testing and quality. This was widely recognized as an important decision that improved both product velocity and quality by the time I left a couple years later.
I don’t think this is an anomaly. Every engineering team I’ve been on since has given this responsibility to the developers writing the code, and every one has been stronger for it.
Maybe it enhanced velocity, but I can tell you that for sure it did not improve quality for Office, Windows, or Visual Studio (not Code). I work with all of those products, and have worked with them before and after the big layoff. The quality of those offerings took a serious hit when they outsourced QA to their customers.
I try to submit feedback and half the time the feedback tool itself fails to work, the other half of the time my bug is closed or ignored so that the team can 'focus on issues with a broad customer impact'.
Is there any evidence that Microsoft's product quality has improved recently? The only way I can imagine justifying that point of view is if you divide by the cost to get that quality, which is a useless metric to me as a customer.
Maybe I just experience bugs that no one cares about, but I no longer recognize Microsoft Office or Windows as solid software given the massive increase in the volume of bugs I experience.
I have friends who have worked for Microsoft for ~20 years. I'm a small-business IT consultant.
My MS friends have the feeling that it was a painful transition, yet ultimately a good thing, and that quality has gone up.
My experience on the ground, so to speak, is the exact opposite. The May(!) update still has showstopping bugs for a not-small portion of my clients. Last year, one was bitten by the wipe-the-user-folder upgrade bug. Trust in running Windows updates is at an all-time low amongst general users, and I can't blame them.
I trust my friends. I also trust my own experiences. I wonder where the disconnect is?
> "This was widely recognized as an important decision that improved both product velocity and quality by time I had left a couple years later."
I'm sure that's what the execs say at the all-hands meetings but ask the former MS SDETs who still use MS products and you'd hear a much different opinion. ;)
My standard quip when hitting yet another bug in a MS product, which is an all too frequent occurrence nowadays, is "Gosh, maybe MS should consider hiring some SDETs."
Honestly, I think it goes down to organizational momentum around writing tests.
If you have good engineer-written and maintained tests, then you don't need QA.
The problem comes from relying on developers to do manual tests. It works as long as the product is simple and newcomers can learn the entire product. But, developers doing manual testing won't scale once the product becomes too complicated to remember all the corner cases and gotchas. Even worse, newcomers just won't understand the product as well as the people who originally wrote it.
So, if you have a complicated product, and a lot of debt in developer-written tests, then you need QA.
With regard to the web, it's very easy to write automated tests against server-generated HTML or an HTTP API. In contrast, other platforms are all over the place in how easy / hard it is for a developer to write an automated test.
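To make the HTTP API case concrete, here's a minimal sketch using Node's built-in test runner and fetch; the endpoint and response shape are hypothetical, just to show how little ceremony is involved:

```typescript
// users.api.test.ts -- hypothetical endpoint and response shape
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Assumes the service under test is already running locally.
const BASE_URL = process.env.BASE_URL ?? 'http://localhost:3000';

test('GET /api/users returns a well-formed list', async () => {
  const res = await fetch(`${BASE_URL}/api/users`);
  assert.equal(res.status, 200);

  const users = await res.json();
  assert.ok(Array.isArray(users), 'body should be a JSON array');
  for (const user of users) {
    assert.equal(typeof user.id, 'number');
    assert.equal(typeof user.email, 'string');
  }
});
```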
> With regard to the web, it's very easy to write automated tests against server-generated HTML or an HTTP API.
Funny, I find the opposite. Code for the web is hard to write reliable tests for. Typically I want to test that my component actually exists and is visible and does the right thing in a browser, so I need to produce the whole page instead of just the unit under test. This means that bugs - or just changes - in other parts of the page can make my test fail.
I'm not a front end expert and my colleagues rarely have been either, so maybe I'm missing something about best practices here, but I've definitely had the opposite experience to what you describe.
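One practice that addresses the "whole page" problem, for what it's worth, is component-level testing: render only the unit under test and query it the way a user would. A rough sketch, assuming a React codebase with Jest and Testing Library and a hypothetical SearchBox component; it runs in jsdom, so it complements rather than replaces real-browser tests:

```tsx
// SearchBox.test.tsx -- SearchBox and its props are hypothetical
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { SearchBox } from './SearchBox'; // assumed application component

test('calls onSearch with the entered query', () => {
  const onSearch = jest.fn();
  // Only the component under test is rendered, not the whole page,
  // so unrelated page changes can't break this test.
  render(<SearchBox onSearch={onSearch} />);

  fireEvent.change(screen.getByRole('textbox'), { target: { value: 'widgets' } });
  fireEvent.click(screen.getByRole('button', { name: /search/i }));

  expect(onSearch).toHaveBeenCalledWith('widgets');
});
```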
In my experience as a dev and now PM, there's a few issues with devs doing QA:
1) They are expensive. You can hire 3 or 4 QA people for every dev a lot of times. Why waste that money on devs?
2) They aren't that great at it. They know how the program works, so most struggle to do things outside of the "happy path". I had a QA (actually my PM) paste an entire book into a text field and break the site that way (the kind of check that's easy to automate once someone has thought of it; see the sketch below). Devs don't think of stuff like that. Another uses the browser's back button constantly, which none of the devs seem to do.
That said, it doesn't excuse crappy code. If a code change/addition fails QA, that's on the Dev. However, if that failure hits Prod, that's on QA...
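That "paste an entire book into a text field" check is cheap to pin down in automation once someone has thought of it. A hedged Playwright-style sketch; the URL, field label, and messages are all placeholders:

```typescript
// comment-form.spec.ts -- hypothetical page; URL, label, and messages are placeholders
import { test, expect } from '@playwright/test';

test('comment form survives an extremely long input', async ({ page }) => {
  await page.goto('https://example.test/articles/42');

  // Roughly a short book's worth of text.
  const hugeInput = 'lorem ipsum dolor sit amet '.repeat(40_000);

  await page.getByLabel('Comment').fill(hugeInput);
  await page.getByRole('button', { name: 'Post comment' }).click();

  // Accepting or politely rejecting the input are both fine; crashing is not.
  await expect(
    page.getByText(/comment posted|comment is too long/i)
  ).toBeVisible();
});
```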
Developers aren't paid to test, the majority of their time is spent making new code because that's what management/customers want. Having one person have to wear 2+ hats doesn't work when they can never take one of the hats off.
If they were legitimately given as much time to test as they get to code, your project could take twice as long, but it would be more thoroughly tested. And, likely, it wouldn't take twice as long, because they'd actually have to think about testability and design earlier. And by "test" I do not mean just unit tests. Designing and implementing integration tests, assembling regression test suites, etc.
Devs focused on just unit tests will never catch the errors that a dedicated V&V team can find.
Once you assign QA to your devs, they will (should) figure out ways to automate as much as possible to reduce the burden that QA has on the team.
In my experience, QA teams are usually starved of engineering resources, so a lot of things that can be automated are done manually or the automation is janky and constantly breaks. Once these issues are brought to the forefront, it becomes a priority to fix so that your expensive engineers aren't wasting time chasing bugs or fixing broken automation.
> You can hire 3 or 4 QA people for every dev a lot of times. Why waste that money on devs?
Attitudes like this will get you...not great QA. In my experience a sufficiently strong QA person can both cost and be worth more than adding another dev to the team.
Personally, I think having one QA person on the dev team, with whom the devs can discuss test plans before writing code/tests, is really useful. The QA person will always come up with test cases the dev wouldn't come up with, and together you have a solid basis to start the work and write the appropriate tests. I really enjoyed working that way. The dev also learns over time what to consider when doing new work, especially when the QA is a subject-matter expert (e.g. forex) and you are not.
At the company I work for, QA is part of the development team, alongside the engineers and the product manager; our products are a set of mobile applications for iOS, Android and tvOS.
I think we have a good code-quality/speed-to-release ratio, with well-known tech debt that is actively being cleaned out and then tested after being refactored away.
Having a QA tester be part of the team makes it really easy to adjust features, or adjust the release if an update is needed and it isn't quite passing quality-wise yet. It also makes it frictionless to specify what to expect from a ticket/feature and how to baseline-test it. The tester also performs some tests of his/her own: unit tests, automated UI tests, and manual regression tests.
I've been part of teams that had QA testers outsourced as a dedicated independent team, and it definitely impacted code quality and the release turnaround, mostly because of delayed communication and misalignment on what to test or what the expectations of the test were.
I agree, the best testing process I've experienced still had people whose role was QA, but that was because they were considered experts and would lead the direction of QA and do final acceptance testing before a release.
The process we used was the three amigos process, where the ticket dev, BA and a QA would have a 20-30 minute sit down and draw up a mind map of test cases linked to the acceptance criteria.
Once the mind map was created, anyone technical could pick up the actual writing of the tests/exploratory testing, whether that's the original dev, a QA, or any other dev. Because devs were all testing each other's tickets instead of just hoofing them over the fence to QA I found that I got a much better understanding of the different areas of the projects.
Only downside was it required a lot of meetings to really flesh out the tickets and make the acceptance criteria as detailed as possible.
When I was in pharmaceuticals, QA and QC were notably different processes. I wish software would embrace this. A common maxim was "you can't test quality into a product".
Quality Assurance is (by analogy) design and code reviews, coding standards, and design for success.
Quality Control is testing -- just making sure things are as intended.
A QA is not the same as a tester, but this argument has been going on for 20+ years and will never go away. I'm a tester, and I fit into a QA process (which I may or may not have been involved in).
Call them a tester if that is what they are doing; if they are setting up the QA process, then they are QA. My title is Exploratory Tester, as that's what I've been doing for the last 8 years where I am: working on multiple projects at a time, finding the (few) bugs the devs missed with their automated and manual testing.
I'm fond of V&V (verification and validation) but I have to thank my first boss for insisting on us using that term. Our job was to develop the tests based on specs and requirements, and sometimes run them. The machines were the testers unless we didn't have it automated yet. We also supplied input in requirements and design discussions because of what we knew of the system, even if we weren't technically the ones writing the software (or even, strictly, designing it).
> That’s because the testers’ role is to hunt for untidy coding and to ensure that any user interaction with your software is free from error.
If the requirement can be stated formally, in an ideal world, this should not be the case. You can write automated tests and verify systems with a level of precision that humans cannot rival, and it's not expensive to do so. If your QA teams are catching errors of this variety, it's a symptom of another problem: communication issues. Your QA people shouldn't be catching errors of this variety -- it's way too late in the development process to realize that your developers didn't understand the requirements or didn't write tests to specify them.
There are a lot of reasons why your QA team is doing this kind of work. Common causes are:
1. The development team is pressured to prioritize budgets and deadlines
2. The requirements are not communicated well, never gathered to begin with, or no single person knows what exactly is being built or why
In the first case, developers might not be given the time and space to scrutinize the requirements, write tests specifying the appropriate behaviors, and make the effort to build a system to that specification. Your QA department is basically picking up the slack: the development team pushes what they think meets the requirements and waits for the QA team to give them the thumbs up or down.
The second is also pernicious and makes the former worse.
In an ideal world, the QA team should be saving their time and energy for informally specified requirements and difficult-to-qualify goals such as, "It must look good in IE 11." There's no way to formally specify that requirement, so it must be checked by a human who does understand what that means.
I'd caution that if your QA team is growing with your development team then you should treat it as a symptom of a larger problem. In the real world QA teams end up doing a bit of both and development teams don't always have clear requirements. Aiming for the ideal goes a long way to keeping stress down.
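As one concrete example of a requirement stated formally enough to be machine-checked, here's a small property-based sketch with fast-check; the slugify function and its rules are hypothetical, purely for illustration:

```typescript
// slug.property.test.ts -- slugify() and its rules are hypothetical
import { test } from 'node:test';
import fc from 'fast-check';
import { slugify } from './slug'; // assumed application code

test('slugify output is URL-safe and idempotent', () => {
  fc.assert(
    fc.property(fc.string(), (input) => {
      const slug = slugify(input);
      // The requirement, stated formally: the output contains only lowercase
      // letters, digits, and hyphens, and slugifying a slug changes nothing.
      return /^[a-z0-9-]*$/.test(slug) && slugify(slug) === slug;
    })
  );
});
```

fast-check generates a hundred or so random inputs per run by default, which is the "precision humans cannot rival" part; a manual tester would never try that many variations.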
> If the requirement can be stated formally, in an ideal world, this should not be the case.
Probably, but I submit that nobody can do that perfectly. By all means do as much automated testing as you can! But you still need humans to sanity-check it and introduce new behaviors.
For my employer's B2B product, most of the "technical" QA is done under the direction of the product management team (which encompasses dev), and most of the "functional" QA happens with the "premium support" staff who are contracted to & spend time understanding the requirements of the biggest customers.
I haven’t worked with QA people specifically and haven’t worked in an org that hired them specifically. Though I did consult with a large enterprise that had QA specific people, and they seemed to be doing mostly manual testing or some very limited automated testing started manually. Is this the norm for QA people?
I'm a manual tester who's worked on many different projects. The larger ones have had a separate automation QA team.
On the smaller projects, I'll fetch the code base and suggest missing unit testcases to the developers.
Without writing a wall of text, I feel that dismissing the value of manual QA is like dismissing the value of a business analyst or project manager who doesn't write code.
My value comes from helping make sure the customer got what they wanted and it works - and I can do that without writing much code. But that's not to say I don't use tools and scripts to automate some of my work.
The vast majority of testers are manual testers. Testers with software development skills are used in the FAANGs and other segments where one pretty much has to be a developer in order to understand and write tests for the product but they're as expensive as a junior developer.
For one company I interact with, it makes sense. It's a situation where they're taking several different software and hardware products and integrating them into a single stack, which has even more variables across deployments in the field. What's more, the installations are all on site, not easily changed out due to security policies, and any downtime will impact revenue. Dedicated testing teams that do a lot of manual testing make sense there.
That said I don't know if that situation is unique or the norm out in industry.
Depends on the system you are testing. It can be sufficiently complex that even the dreaded manual testing requires a lot of expertise and effort. People sometimes seem to generalize IT to developing/testing websites, since that's the most common area of work, but of course it isn't everything.
As a QA Engineer of 15 years, I see some good simple points here.
I would always hire a little more QA resource than you think you need. Nothing worse than signing off new features but not having the time to think outside the box and find overlooked issues.
QA here. Any time you find a bug, write a test for it. One of the simplest strategies.
Also there are so many static analysis tools. FXCop etc. Been at so many companies where there are thousands of warnings in the code, many of which were genuine bugs. At one, we fixed 16,000 warnings, and managed to close off a few dozen sev 1 crashes that people had been experiencing for ages.
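The "write a test for every bug" habit can be as lightweight as a unit test pinned to the original report. A minimal sketch, assuming a hypothetical formatCurrency helper and a made-up bug report number:

```typescript
// format-currency.regression.test.ts -- helper and bug number are hypothetical
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { formatCurrency } from './format-currency'; // assumed application code

// Regression test for (hypothetical) bug #1234:
// formatCurrency(0, 'USD') used to return '$' instead of '$0.00'.
test('formatCurrency handles a zero amount', () => {
  assert.equal(formatCurrency(0, 'USD'), '$0.00');
});

// While we're here, pin down the neighbouring edge case too.
test('formatCurrency handles negative amounts', () => {
  assert.equal(formatCurrency(-5, 'USD'), '-$5.00');
});
```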
Start by letting your customers report bugs. Give them an easy way to contact you. Cost will be free.
Ensure that you fix the bugs and write some automated test for this case if possible. Again free.
Pay someone to do a little QA of your product. I'm sure it'll hurt to give away money that you could pocket yourself... but it's the price of getting work done.