
Premature Passes: Why You Might Be Getting Green on Red



Red, green, refactor. The first step in the test-driven development (TDD) cycle is to ensure that your newly-written test fails before you try to write the code to make it pass. But why expend the effort and waste the time to run the tests? If you're following TDD, you write each new test for code that doesn't yet exist, and so it shouldn't pass.

But reality says it will happen--you will undoubtedly get a green bar when you expect a red bar from time to time. (We call this occurrence a premature pass.) Understanding one of the many reasons why you got a premature pass might help save you precious time.
  ‱ Running the wrong tests. This smack-your-forehead event occurs when you think you were including your new test in the run, but were not, for one of myriad reasons. Maybe you forgot to compile or link in the new test, ran the wrong suite, disabled the new test, filtered it out, or coded it improperly so that the tool didn't recognize it as a legitimate test. Suggestion: Always know your current test count, and ensure that your new test causes it to increment.
  ‱ Testing the wrong code. You might have a premature pass for some of the same reasons as "running the wrong tests," such as failure to compile (in which case the "wrong code" that you're running is the last compiled version). Perhaps the build failed and you thought it passed, or your classpath is picking up a different version. More insidiously, if you're mucking with test doubles, your test might not be exercising the class implementation that you think it is (polymorphism can be a tricky beast). Suggestion: Throw an exception as the first line of the code you think you're hitting, and re-run the tests (see the tripwire sketch after this list).
  • Unfortunate test specification. Sometimes you mistakenly assert the wrong thing, and it happens to match what the system currently does. I recently coded an assertTrue where I meant assertFalse, and spent a few minutes scratching my head when the test passed. Suggestion: Re-read (or have someone else read) your test to ensure it specifies the proper behavior.
  • Invalid assumptions about the system. If you get a premature pass, you know your test is recognized and it's exercising the right code, and you've re-read the test... perhaps the behavior already exists in the system. Your test assumed that the behavior wasn't in the system, and following the process of TDD proved your assumption wrong. Suggestion: Stop and analyze your system, perhaps adding characterization tests, to fully understand how it behaves.
  ‱ Suboptimal test order. As you are test-driving a solution, you're attempting to take the smallest possible incremental steps to grow behavior. Sometimes you'll choose a less-than-optimal sequence, and you subsequently get a premature pass because the prior implementation unavoidably grew into a more robust solution than needed. Suggestions: Consider starting over and seeking a different sequence with smaller increments. Try applying Uncle Bob's Transformation Priority Premise (TPP).
  ‱ Linked production code. If you are attempting to devise an API to be consumed by multiple clients, you'll often introduce convenience methods such as isEmpty (which inquires about the size to determine its answer). These convenience methods necessarily duplicate code. If you try to assert against isEmpty every time you assert against size, you'll get premature passes. Suggestions: Create tests that document the link between the convenience method and the core functionality. Or combine the related assertions into a single custom assertion (or helper method); see the sketch after this list.
  • Overcoding. A different form of "invalid assumptions about the system," you overcode when you supply more of an implementation than necessary while test-driving. This is a hard lesson of TDD--to supply no more code or data structure than necessary when getting a test to pass. Suggestion: Hard lessons are best learned with dramatic solutions. Discard your bloated solution and try again. It'll be better, we promise.
  ‱ Testing for confidence. On occasion, you'll write a test that you expect to pass. There's nothing wrong with writing a few additional tests--"I wonder if it works for this edge case"--particularly if they give you confidence, but technically you've stepped outside the realm of TDD and into the realm of TAD (test-after development). Suggestions: Don't hesitate to write more tests to give you confidence, but you should generally have a good idea of whether they will pass or fail before you run them.
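
First, a minimal sketch of the tripwire suggested under "testing the wrong code." The class and method names here are ours, purely for illustration:

    public class PricingService {
        public int discountFor(String customerId) {
            // Tripwire: re-run the tests. If the bar stays green, the tests
            // never reached this method--you're testing the wrong code.
            throw new IllegalStateException("tripwire: discountFor was reached");
            // (the real body stays commented out while the tripwire is in place)
        }
    }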
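
And here's one shape the custom assertion for "linked production code" might take--a single helper that documents the link between size and isEmpty. The Inventory class is hypothetical; the assertions are plain JUnit:

    // Inside a JUnit test class; Inventory is our hypothetical SUT.
    static void assertCount(Inventory inventory, int expected) {
        assertEquals(expected, inventory.size());
        assertEquals(expected == 0, inventory.isEmpty());  // the linked convenience method
    }

    @Test
    public void removingLastItemEmptiesInventory() {
        Inventory inventory = new Inventory();
        inventory.add("widget");
        inventory.remove("widget");
        assertCount(inventory, 0);  // size and the linked isEmpty verified together
    }
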
Two key things to remember:
  • Never skip running the tests to ensure you get a red bar.
  • Pause and think any time you get a premature pass.

Simplify Design With Zero, One, Many


Programmers have to consider cardinality in data. For instance, a simple mailing list program may need to deal with people having multiple addresses, or multiple people at the same address. Likewise, we may have a number of alternative implementations of an algorithm. Perhaps the system can send an email, or fax a PDF, or send paper mail, or SMS, or MMS, or post a Facebook message. It's all the same business, just different delivery means.

Non-programmers don't always understand the significance of these numbers:

Analyst: "Customers rarely use that feature, so it shouldn't be hard to code."


Program features are rather existential--they either have to be written or they don't.  "Simplicity" is largely a matter of how few decisions the code has to make, and not how often it is executed.


The Rule of Zero: No Superfluous Parts
We have no unneeded or superfluous constructs in our system.
  • Building to immediate, current needs keeps our options open for future work. If we need some bit of code later, we can build it later with better tests and more immediate value. 
  • Likewise, if we no longer need a component or method, we should delete it now. Don't worry, you can retrieve anything you delete from version control or even rewrite it (often faster and better than before).

The Rule of One:  Occam's Razor Applied To Software
If we only need one right now, we code as if one is all there will ever be.

  ‱ We've learned (the hard way!) that code needs to be unique--no duplicates. That part of the rule is obvious, but sometimes we forget the implied "so far": one is all there will ever be, so far. Thinking that you might need more than one in a week, tomorrow, or even in an hour isn't enough reason to complicate the solution. If we have a single method of payment today, but we might have many in the future, we still want to treat the system as if there were only going to be one.
  • Future-proofing done now (see the "options" card) gets in the way of simply making the code work. The primary goal is to have working code immediately. 
  ‱ When we originally wrote code with multiple classes and later eliminate all but one, we can often simplify the code by removing the scaffolding that made "many" possible. This leaves us with No Superfluous Parts, which makes the code simple again.

The Rule of Many: In For a Penny, In For a Pound
Code is simpler when we write it to a general case, not as a large collection of special cases.

  • A list or array may be a better choice than a pile of individual variables--provided the items are treated uniformly. Consider "point0, point1, point2." Exactly three variables, hard-coded into a series of names with number sequences. If they had different meanings, they would likely have been given different names (for instance, X, Y, and Z).  What is the clear advantage of saying 'point0' instead of point[0]? 
  • It's usually easier to code for "many" than a fixed non-zero number. For example, a business rule requiring there are exactly three items is easily managed by checking the length of the array, and not so easily managed by coding three discrete conditionals. Iterating over an initialized collection also eliminates the need to do null checking when it contains no elements.
  • Non-zero numbers greater than one tend to be policy decisions, and likely to change over time.
  ‱ When several possible algorithms exist to calculate a result, we might be tempted to use a type flag and a case statement, but if we find a way to treat implementations uniformly we can code for "many" instead of "five." This helps us recognize and implement useful abstractions, perhaps letting us replace case statements with polymorphism (see the sketch below).
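
Here's a rough sketch of that last bullet--treating implementations uniformly so we code for "many" instead of "five." The delivery classes are our own invention:

    interface Delivery {
        void send(String message);
    }

    class EmailDelivery implements Delivery {
        public void send(String message) { /* email-specific delivery */ }
    }

    class SmsDelivery implements Delivery {
        public void send(String message) { /* SMS-specific delivery */ }
    }

    class Dispatcher {
        // Codes for "many," not "five": adding a sixth delivery means adding
        // a class, never another case label.
        void broadcast(java.util.List<Delivery> deliveries, String message) {
            for (Delivery delivery : deliveries)
                delivery.send(message);
        }
    }
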
Naturally, these aren't the only simple rules you will ever need. But simple, evolutionary design is well supported by the ZOM rules regardless of programming language, development methodology, or domain.

Is Your Unit Test Isolated?





(Kudos to the great software guru Jeff Foxworthy for the card phrasing.)

An effective unit test should follow the FIRST prescriptions in order to verify a small piece of code logic (aka “unit”). But what exactly does it mean for a unit test to be I for Isolated? Simply put, an isolated test has only a single reason to fail.

If you see these symptoms, you may have an isolation problem:

Can't run concurrently with any other. If your test can’t run at the same time as another, then they share a runtime environment. This occurs most often when your test uses global, static, or external data.

A quick fix: Find code that uses shared data and extract it to a function that can be replaced with a test double. In some cases, doing so might be a stopgap measure suggesting the need for redesign.
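
Here's one possible shape for that quick fix, with hypothetical names (Clock, SessionTimeout)--the shared resource moves behind a small interface that a test can double:

    interface Clock {
        long now();
    }

    class SessionTimeout {
        private final Clock clock;  // injected, so a test can substitute a double

        SessionTimeout(Clock clock) { this.clock = clock; }

        boolean isExpired(long startedAtMillis, long limitMillis) {
            return clock.now() - startedAtMillis > limitMillis;
        }
    }

    // Production wires in the shared resource; a test passes a fixed time:
    //    new SessionTimeout(System::currentTimeMillis);  // production
    //    new SessionTimeout(() -> 1_000_000L);           // deterministic test double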

Relies on any other test in any way. Should you reuse the context created by another test? For example, your unit test could assume a first test added an object into the system (a “generous leftover”). Creating test inter-dependencies is a recipe for massive headaches, however. Failing tests will trigger wasteful efforts to track down the problem source. Your time to understand what’s going on in any given test will also increase.

Unit tests should assume a clean slate and re-create their own context, never depending on an order of execution. Common context creation can be factored to setup or a helper method (which can then be more easily test-doubled if necessary). You might use your test framework's randomizer mode (e.g. googletest’s --gtest_shuffle) to pinpoint tests that either deliberately or accidentally depend on leftovers.
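
For instance, a minimal JUnit sketch (class names ours) in which every test rebuilds its own context, so execution order never matters:

    public class CatalogTest {
        private Catalog catalog;  // hypothetical SUT

        @Before
        public void setUp() {
            catalog = new Catalog();       // clean slate for every test
            catalog.add("existing-item");  // common context built here, never inherited
        }

        @Test
        public void findsItemAddedInSetup() {
            assertNotNull(catalog.find("existing-item"));
        }

        @Test
        public void reportsUnknownItemAsMissing() {
            assertNull(catalog.find("no-such-item"));
        }
    }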

You might counter that having every test re-create the common setup is wasteful and will slow your test run. Our independent unit tests are ultra-fast, however, so this is never a real problem. See the next item.

Relies on any external service. Your test may rely upon a database, a web service, a shared file system, a hardware component, or a human being who is expected to operate a simulator or UI element. Of these, the reliance on a human is the most troublesome.

SSDD (same solution different day): Extract methods that interact with the external system, perhaps into a new class, and mock it.
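
Sketched with invented names (PaymentGateway, OrderProcessor), it might look like this--the external interaction moves behind an interface, and the unit test supplies a canned double:

    interface PaymentGateway {
        boolean charge(String account, int cents);
    }

    class OrderProcessor {
        private final PaymentGateway gateway;  // sole touchpoint with the outside world

        OrderProcessor(PaymentGateway gateway) { this.gateway = gateway; }

        String process(String account, int cents) {
            return gateway.charge(account, cents) ? "OK" : "DECLINED";
        }
    }

    // The unit test's double: no network, no database, no human at a simulator.
    class AlwaysApproves implements PaymentGateway {
        public boolean charge(String account, int cents) { return true; }
    }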

Requires a special environment. “It worked on my machine!” A Local Hero arises when you write tests for a specific environment, and is a sub-case of Relies on any external service. Usually you uncover a Local Hero the first time you commit your code and it fails during the CI build or on your neighbor’s dev box.

The problem is often a file or system setting, but you can also create problems with local configuration or database schema changes. Once the problem arises, it’s usually not too hard to diagnose on the machine where the test fails.

There are two basic mitigation strategies:
  1. Check in more often, which might help surface the problem sooner
  2. Periodically wipe out and reinstall (“pave”) your development environment

Can’t tell you why it fails. A fragile test has several ways it might fail, in which case it is hard to make it produce a meaningful error message. Good tests are highly communicative and terse. By looking at the name of the test class, the name of the method, and the test output, you should know what the problem is:
CSVFileHandling.ShouldTolerateEmbeddedQuotes -
   Expected "Isn't that grand" but result was "Isn"

You shouldn't normally need to dig through setup code, or worse, production code, to determine why your test failed.

The more of the SUT exercised by your test, the more reasons that code can fail and the harder it is to craft a meaningful message. Try focusing your test on a smaller part of the system. Ask yourself “what am I really trying to test here?”



Your test might be failing because it made a bad assumption. A precondition assertion might be prudent if you are at all uncertain of your test’s current context.


Mocks indirect collaborators. If you are testing public behavior exposed by object A, and object A interacts with collaborator B, you should only be defining test doubles for B. If the tests for A involve stubbing of B’s collaborators, however, you’re entering into mock hell.

Mocks violate encapsulation in a sense, potentially creating tight coupling with implementation details. Implementation detail changes for B shouldn’t break your tests, but they will if your test involves test doubles for B’s collaborators.

Your unit test should require few test doubles and very little preliminary setup. If setup becomes elaborate or fragile, it’s a sign you should split your code into smaller testable units. For a small testable unit, zero or one test doubles should suffice.
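
Here's a sketch of where that boundary sits, with invented names: when testing A, supply a double for its direct collaborator B, and leave B's own collaborators out of it entirely.

    interface Auditor {                        // "B," as object A sees it
        void record(String event);
    }

    class AccountService {                     // "A," the class under test
        private final Auditor auditor;
        AccountService(Auditor auditor) { this.auditor = auditor; }

        void close(String accountId) {
            // ... domain logic under test ...
            auditor.record("closed " + accountId);
        }
    }

    // The real Auditor might talk to queues or files through collaborators of
    // its own; this double neither knows nor cares, so changes there cannot
    // break AccountService's tests.
    class RecordingAuditor implements Auditor {
        final java.util.List<String> events = new java.util.ArrayList<>();
        public void record(String event) { events.add(event); }
    }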



In summary, unit tests--which we get most effectively by practicing TDD--are easier to write and maintain the more they are isolated.

Test Abstraction Smells


In our article Test Abstraction: Eight Techniques to Improve Your Tests (published by PragPub), we whittled down a convoluted, messy test into a few concise and expressive test methods, using this set of smells as a guide. We improved the abstraction of this messy test by emphasizing its key parts and de-emphasizing its irrelevant details.

Stripped down, the "goodness" of a test is largely a matter of how quickly these questions can be answered:
  • What's the point of this test?
  • Why is the assertion expecting this particular answer?
  • Why did this test just fail?
That's a nice short-form summary, but it is descriptive rather than prescriptive. In the Pragmatic Bookshelf article, a real-world code example was used to painstakingly demonstrate the techniques used to improve the clarity of tests by increasing their level of abstraction. Here we provide the "cheat sheet" version:
  • Unnecessary test code. Eliminate test constructs that clutter the flow of your tests without imparting relevant meaning. Most of the time, "not null" assertions are extraneous. Similarly, eliminate try/catch blocks from your tests (in all but negative-case tests themselves).
  ‱ Missing abstractions. Are you exposing several lines of implementation detail to express test set-up (or verification) that represents a single concept? Think in terms of reading a test: "Ok, it's adding a new student to the system. Fine. Then it's creating an empty list, and then adding A to that list, then B, and ..., and that list is then used to compare against the expected grades for the student." The list construction is ugly exposed detail. Reconstruct the code so that your reader, in one glance, can digest a single line that says "ensure the student's expected grades are A, A, B, B, and C" (see the sketch following this list).
  • Irrelevant data. "Say, why does that test pass the value '2' to the SUT? It looks like they pass in a session object, too... does it require that or not?" Every piece of data shown in the test that bears no weight on its outcome is clutter that your reader must wade through and inspect. "Is that value of '2' the reason we get output of '42'?" Well, no, it's not. Take it out, hide it, make it more abstract! (For example, we'll at times use constant names like ARBITRARY_CHARGE_AMOUNT.)
  • Bloated construction. It may take a bit of setup to get your SUT in the state needed for the test to run, but if it's not relevant to core test understanding, move the clutter out. Excessive setup is a design smell, showing that the code being tested needs rework. Don't cope with the problem by leaving extensive setup in your test, but tackle the problem by reworking the code as soon as you have reasonable test coverage. Of course, the problem of untestable code is largely eliminated through the use of TDD.
  • Irrelevant details. Is every step of the test really needed? For example, suppose your operational system requires the presence of a global session object, pre-populated with a few things. You may find that you can't even execute your unit test without having it similarly populate the session object. For most tests, however, the details of populating the session object have nothing to do with the goal of the test. There are usually better ways to design the code unit, and if you can't change your design to use one of those ways, you should at least bury the irrelevant details.
  • Multiple assertions. A well-abstracted unit test represents one case: "If I do this stuff, I observe that behavior." Combining several cases muddies the abstraction--which setup/execution concepts are relevant to which resulting behaviors? Which assertion represents the goal of the test case? Most tests with multiple assertions are ripe for splitting into multiple tests. Where it makes sense to keep multiple assertions in a single test, can you at least abstract its multiple assertions into a single helper method?
  • Misleading organization. You can easily organize tests using AAA (Arrange-Act-Assert). Remember that once green, we re-read any given test (as a document of SUT behavior) only infrequently--either when the test unexpectedly fails or when changes need to be made. Being able to clearly separate the initial state (Arrange) from the activity being tested (Act) from the expected result (Assert) will speed the future reader (perhaps yourself in 5 months) on his way. When setup, actions, and assertions are mixed, the test will require more careful study. Spare your team members the chore of repeatedly deciphering your tests down the road--spend the time now to make them clear and obvious.
  • Implicit meaning. It's not all just getting rid of stuff; abstraction requires amplifying essential test elements. Imagine a test that says "apply a fine of 30 cents on a book checked out December 7 and returned December 31." Why those dates? Why that amount? If we challenged you to tell us the rules that govern the outcome of this test, you'd likely get them wrong. A meaningful test would need to describe, in one manner or another, these relevant concepts and elements:
    • books are normally checked out for 21 days
    • Christmas--a date in the 21-day span following December 7--is a grace day (i.e. no fines are assessed)
    • The due date is thus December 29
    • December 31 is two days past the due date
    • Books are fined 20 cents for the first day late and 10 cents for each additional day.
    • 30c = 20c + 10c
    (Consider that each of these concepts could represent a separate unit test case, and that this detailed scenario represents more of an end-to-end test case.)
    A unit test that requires you to dig through code to discern the reasons for its outcome is wasteful.
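
For the student-grades example under "missing abstractions" above, the fix might look something like this sketch (Student and the helpers are our assumptions, not production code)--the exposed list construction collapses into one declarative helper:

    @Test
    public void newStudentAccumulatesGrades() {
        Student student = enrollNewStudent();
        student.receiveGrades("A", "A", "B", "B", "C");
        assertGrades(student, "A", "A", "B", "B", "C");
    }

    private void assertGrades(Student student, String... expected) {
        assertEquals(java.util.Arrays.asList(expected), student.grades());
    }
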
You could easily survive without our list of test abstraction smells--and figure out on your own what to do about your tests--as long as you keep one thought in your head: "My goal is to ensure that each unit test clearly expresses its intent, and nothing else, to the next person who must understand this system behavior."

Management Theater

Great managers can improve teams in meaningful ways: smoothing the workflow, selecting targets worth hitting, wisely managing budget and schedule, and working to align teams with organizational goals. We have fond memories of great managers, technical or otherwise, who led and mentored us, who helped us reach new plateaus of understanding and productivity.

We're not talking about those great managers today.

Instead, we'll discuss a particular form of management dysfunction often seen in development shops. Daniel Pink (in Drive) points out that programming shops are full of people who are motivated by the work and excited to make progress. Intrinsic motivation tends to be quite high, though exceptions exist (see Esther Derby's Stop Demotivating Me). Most shops face problems with procedure, organization, technological limits, overly ambitious schedules, and shortage of knowledge or practice. Less astute managers don't understand the problems in their teams, and misinterpret these as motivational issues. When the problem is technical, it does not help to attempt solving it through motivation.





You've probably been a witness to most of these. Just in case they're not obvious:
  • Begging: "Please, I just really need you to work harder to get this done. Just this one time."
  • Browbeating: "Stop your whining and get it done before I replace you with someone who gives a darn about their job!"
  • Cheerleading: "I have faith in you, you're the best! Go Team!"
  ‱ Punitive Reporting: "I want to see daily status reports! Twice daily!"
  • Publicity Stunts: "I want every last one of us in this meeting. We need a show of force!"
Such motivational tactics tend to be ineffective. To people struggling with difficult organizational and/or technical problems, emotional appeals seem to be a kind of buffoonery. Of course, if the team succeeds despite the management theater, it merely strengthens the causal connection in the manager's mind. By simply not failing, the team locks their manager into a pattern that ensures that all future crises will be met with emotional and ineffective responses.

We should not be asking how to make managers behave. We should be asking what a team can do to ensure that a manager can provide effective servant leadership. Management theater is not a manager's first choice of action, but rather a tactic of last resort. When a manager does not have sufficient information or timely opportunity to be effective, she must use whatever ethical means remain. Management theater is, therefore, primarily a process smell not of management but of the development team.

Rules for Distributed Teams



As the Greatest Agile-In-A-Flash Card Wielding Coaches So Far(tm), we're often asked for advice (or drawn into arguments) about how to make agile work with distributed teams. Sometimes it's for more humble reasons, though: We've both been remote members of a team, pair-programming with peers daily. We prescribe the following rules for distributed development:
  1. Don't. Traditional organizations should avoid the extra strain, trouble, and expense of remote members without a significant reason. For example, building a better team using remote rockstars might provide some justification, but you might also be better off with a capable, local team that works well together. It's often out of your control, though, and in the hands of a really big boss, bean counter, or entrenched culture. You can make it work: Virtual organizations, startups, and the like find great success using virtual tools, pairing, and non-traditional communication pathways. Make sure you really mean it, because it's not a trouble-free add-on to the way your large organization does things now.
  2. Don't treat remotes as if they were local. Treat your "satellite" developers as competent people with physical limitations. A remote is visually impaired, since he can only see through a web cam, and only when it is on. He cannot see the kanban board, the other side of the camera, and so on. Likewise, he only hears what is said into the microphone and not at all if simultaneous conversations occur. A remote cannot cross the room and talk to people, so interoffice chat (Skype, Jabber, etc.) is essential. The local team has to make concessions, repeat conversations, and be the eyes and ears of the remote employees. Do not treat unequal things as equal--accept that there are compromises.
  3. Don't treat locals as if they were remote. You certainly can install electronic kanban boards, online agile project management tools, instant messaging, cameras, email, and shared document management so that every local can sit alone in a cubicle or office and behave just like a remote employee. Rather than being empowered, you are all equally limited (see point #2). Never limit so that all are equal. Allow all to rise to greatness. The power of people working closely together in teams is significant (see first bullet above).
  4. Latitude hurts, but longitude kills. Just being remote hurts (see first two bullets), but the complications can be overcome when employees share the same time zone and working hours. The further you move the team across time zones, the fewer common hours they have, and the longer any kind of communication takes to make a round-trip. Agile is predicated on short feedback loops, so 24-hour turn-around is out of the question. If you can't be a single team that works together, create separate agile teams.
  5. Don't always be remote. Begin your engagement with a nice long on-site visit. A week is barely enough. Two weeks starts to make it work. Visiting for one week a quarter or even a week a month can keep a feeling of partnership alive. Dealing with difficulties is easy among people who know and respect each other. While constant telepresence (Skype, Twitter, IM, etc.) can minimize the problem of "out-of-sight, out of mind," studies show that distributed team success requires teams with strong interpersonal relationships, built best on face-to-face interaction. The bean counters may not be able to comprehend it, but the investment is well worth the return.
Tim: Would I remote again? In fact, I do most of my Industrial Logic work remote. We are in constant touch with each other and pair frequently. I converse with Joshua Kerievsky more in the course of a week than I have with any other "boss" (he'll hate that I used that word!) in my employment history. We even work across time zones. We would work more fluently and frequently side-by-side, but we'd probably not get to work together if we all had to relocate. It is a compromise that has a surprisingly good return on investment for us. We also have the advantage that we are all mature agilists; it would be very hard for first-timers. It's hard for me, as I have a tendency to "go dark" sometimes.


Jeff: I enjoyed a year of remote development with a now-defunct company one time zone to the east. Pairing saved it for me. The ability to program daily in (virtually) close quarters with another sharp someone on the team helped me keep an essential connection with them. On the flip side, however, I never felt like I was a true part of that team. I missed out on key conversations away from the camera, and I felt that debating things over the phone was intensely ineffective. You do what you must, but I'd prefer to not be remote again.

How to use the deck

People have been asking how to best use the cards in their teams.

We've had several notes via twitter or email about teams reading and discussing them at retrospectives or standup, and even using them in lightning talks at conferences.  We are happy to see them used in this way.

But people have been asking how normal, individual, non-coach team members can use the cards. Jeff provides our recipe for use of Agile in a Flash:
  • Pick a card that's relevant and read its front
  • Read the back
  • Re-read the front to help ingrain the short list
  • Consider tacking the card up, so that it's in your face, helping you ingrain the concepts
  • Seek and re-read related cards to get a more complete picture of the topic
  • Visit the corresponding post here on the blog site if you seek more detail on the topic and why we said some of the things we did
  • Visit the links located on many of the cards and at the blog site if needed
Don't forget to google the subject matter, because there is more written than we could fit in either the deck or the blog. Other opinions and recommendations are equally valid.

Please share and discuss the cards with your community, starting with the local software team and its neighbors but extending to conferences, user groups, or any other willing listeners. New insights can come from anywhere.

Card-Carrying agile author


Jeff Langr, programmer geek and computer addict shows off his Agile in a Flash deck in front of a warm fire in Colorado Springs. Look, Ma, no shrink-wrap! Tim, have you opened yours yet???

Promo Video Contest

Tim and I aren't videographers (although I do like his video, see the previous post in this blog). Nor are we in real, day-to-day teams right now (I'm doing on-customer-site consulting, tough to get permissions and such). So we're looking for someone to provide us with a cool video that shows your team's use of Agile in a Flash, and helps promote the book.

We're offering as a prize a ten-pack of cards ($110 at PragProg) for the video we choose as the best. In return, you'd grant us rights and permissions to use the video, and the likenesses of whoever appears in it, for promotional use.

Please email me (jeff at langrsoft.com) if you have any further questions about the contest.

Contest deadline: February 28, 2011

Legal stuff: This is all arbitrary, and Tim and I reserve all rights, including the right to not choose a video if none suits our promotional needs.

Basic Agile Flow


Tim Ottinger and Jeff Langr

There are many ways to conduct an agile project. Some work with huge backlogs, some with spur-of-the-moment requirements, some have continual release, some have non-time-boxed continual flow. We recommend starting with the structure shown on this Agile in a Flash card.

Following this plan, the customer team puts together a prioritized list (feature backlog) of desired features for the upcoming product release. The release is broken into iterations; at the start of each iteration (no sooner), the team and customer agree on what will be delivered. The iteration is of fixed length, something that allows the team to begin gathering consistent data, which in turn they feed back into their estimates and subsequently a larger plan. After incrementing and iterating a few times, the software reaches a state in which it may be released to the customer. Does the system implement the agreed-upon features (did it pass all of its acceptance tests)? Yes: Release to production!

The flow outlined above is a reasonable starting point for a team transitioning to agile. It represents a kind of "Shu" in the Shu-Ha-Ri cycle, where one follows a certain technique or style for a while to build up their ability to perform it. In fact, both of us started with this basic pattern and found that it worked just fine.

As you move into the "Ha" stage, you might experiment with reducing the size of stories so that more of them are done and "in the can" before the end of the iteration. You might work on making the software releasable at every iteration boundary. You might shorten your iteration period so you can gather data more often, provide smaller increments to certification, and get feedback from users more quickly. You might pick fewer stories per iteration. You may experiment with self-organizing to get work done. It is a waste to spend a lot of time detailing features that may be done in the remote future or not at all, so you may reduce the entire feature backlog to perhaps a handful of stories. You may learn (as Deming recommends) to use more effective quality practices and eliminate a "certification" stage, as indeed many software shops are doing (research topic: continual release).

Once you are into the Ha and the Ri stages, using agile principles and values should lead you to more informal yet more effective approaches. Here's an "agile flow" card for the more seasoned agile team:

In a bit more detail:
- The customer describes a small subset of needs orally, to the team.
- Through negotiation with the customer, the team commits to completing code that satisfies some, most, or all those needs in a given period.
- The team agrees on a working set of rules that define how they will deliver quality code, under good, sustainable working conditions, in the specified period. (Hint: The team might use retrospectives to help derive and tweak the rules.)
- Repeat. This magic word allows the introduction of things like projects containing releases, and releases containing iterations. Or not.

Acceptance Test Design Principles


Jeff Langr and Tim Ottinger

Acceptance tests (ATs) are as enduring and important an artifact as your code. The proper design of each will minimize maintenance efforts. You'll recognize some familiar concepts--Kent Beck's rules for simple design and some classic OO principles apply well to the design of acceptance tests.

Abstract. A test is readable as a document describing system behavior. Uncle Bob's definition for abstraction applies equally well to tests: Amplify your test's essential elements, and bury its irrelevant details and clutter. Anyone, including non-DBA and non-programming staff, should be able to follow the steps taken in the test, and understand why it passes. Extracting duplicate behavior to a common place--the AT analog of programming's extract method--is the main workhorse that allows you to increase abstraction at the same time you remove duplication.

Bona fide. To ensure continual customer trust, a test must always truly exercise a system as close to production as possible. Passing acceptance tests tell the customer that what they asked for is complete and working. But once the customer has doubt as to what your tests exercise, you have severely damaged your ability to continue using them as contracts for completion.

Cohesive. A test expresses one goal accomplished by interacting with the system. Don't prematurely optimize by combining multiple scenarios into a single test. Keep single-goal tests simple by splitting common content into separate fixtures. Yes, this will mean your acceptance test suite runs more slowly, but it's far more important to avoid compromising clean test design. (It's also why your unit test suites should be as fast as possible.)

Decoupled. Each test stands on its own, not depending upon or being impacted by results of other tests. A failure caused by problems in another test can be difficult to decipher.

Expressive. A test is highly readable as documentation, not requiring research to understand. Name it according to the goal it achieves. As with unit tests, refactor each test to improve the ability of a third party to understand its intent. You should always eliminate magic numbers, replacing them with constants as appropriate. Improve visual accessibility by formatting your test using Arrange-Act-Assert (AAA). You should also make it clear what context is being set up in the test; one way is to incorporate additional assertions that act as preconditions.
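
Here's a small illustration, reusing the overdue-book example from the test abstraction post above; the Library API and date() helper are assumptions, not a real fixture:

    @Test
    public void finesThirtyCentsWhenReturnedTwoDaysLate() {
        // Arrange
        Library library = new Library();
        Loan loan = library.checkOut("Moby Dick", date("2010-12-07"));
        assertEquals(date("2010-12-29"), loan.dueDate());  // precondition: context is what we think

        // Act
        int fineCents = library.returnBook(loan, date("2010-12-31"));

        // Assert
        assertEquals(30, fineCents);  // 20c for the first late day + 10c for the second
    }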

Free of duplication. Eliminate duplication across tests (and even within the same test) before it eliminates you! Duplication increases risk and cost, particularly when changes to frequently-copied behavior ripple through dozens or more tests. Duplication also reduces the use of abstraction, making tests more dense and difficult to follow.

Green. Once a story is complete, its associated ATs must always pass. A failing AT should trigger a stop-the-line mentality. Don't allow your test suite to fall into disarray by allowing failures to be ignored!

Agile Success Factors


Jeff Langr and Tim Ottinger
Font: Brown Bag Lunch

Pop quiz, hotshot.

Q. "You're not agile if you don't ... "
A. Select one or more of the following:
a. Have daily stand-up meetings
b. Pair program
c. Do TDD
d. Employ a metaphor
e. Have a ScrumMaster
f. Run iteration planning meetings
g. Use index cards

The answer is H, none of the above. Practices are just that--specific techniques a team might or might not employ to aid them in being agile--whatever agile means. Here's our definition: Agility means you are able to frequently and continually deliver high-quality software that meets the customer's needs.

The agile manifesto lays out four core values ("working software over comprehensive documentation," e.g.) and a dozen or so principles. But what factors truly make an agile team successful?

Freedom to change. Incremental change, one of the other success factors, can only occur if your team is able to change how they work without outside interference. Meddling and micromanaging, never mind the intentions, usually divert the team from what should be everyone's goal of shipping quality software. Get the right people in place in your organization to support the team's rightful decisions rather than trying to change them. We'll be blunt: Conversely, this may mean removing the wrong people from the chain of influence.

Energized team. The successful agile team just gets it. They want to work this way. We can walk into a room and generally see whether or not a team will succeed. A good team is highly transparent and visibly enjoying what they're doing. They've built a true team spirit, and no one talks about "my code" or "my stories." They collaborate without being told; they hold their own stand-ups without a project manager having to crack the whip. They always act to protect the product and its integrity--they don't discard quality controls even when under intense pressure to deliver. They also look for ways to make life better for everyone--"How can I rework this test so that the next developer will understand my intent?" They'll step in and help anyone as needed to deliver quality product, even if it's "not their job."

While we like to think a good, energized team is all it should take, lack of freedom to change will demoralize even the best teams, to the point where your guys who "get it" choose to move on to something less oppressive.

Commo (line of communication) to customer. A product is an intricately detailed ship that must be well understood and constantly steered. The best teams we've seen have been steered by a strong, enthusiastic, dedicated single customer who truly understands what needs to be built. This customer has the time to ensure that everyone else can learn from them what needs to be built. While the strong customer can have a supporting cast, a large, committee-style product management team simply doesn't work. (It's always unfathomable to us that most companies are willing to staff development teams with no end of apathetic dregs, but are unwilling to pay well for strong people who know what product to build.)

Collaboration. So many teams want to be agile but can't get past cube mentality. Sometimes we think stand-ups were designed solely to compensate for cube walls, to make people feel better because they got together for a few minutes in the morning. Collaboration isn't just meetings. It's working together, and more importantly, figuring out how to change workflow so that you have to work together. Due to the heavy intricacy of hundreds of thousands of lines of code interacting together as a single unit, software projects cannot be individual efforts ("you do your code, I'll do mine"). We must learn how to collaborate in the code. Really--collaborate. Work together. We mean it. Those who treat coding as an individual activity don't get it.

Attention to quality. The code is your product, and, unlike most other products, one you will continually build on and shape to meet continual new customer demand. If you fail to pay attention to quality, you will eventually slow down in your ability to meet demand, sometimes to the point of stagnation. The team must ensure that the code is clean enough to accommodate new incremental customer needs. Attention to quality is never a separate task on a plan; it must be embodied in everything you do to build your product, including coding, design, documentation, testing, automation, tracking, communication, and so on. Quality must be incrementally and continually addressed.

Incrementalism. Most of the practices you employ in agile have something to do with ever-smaller steps. Instead of a massive requirements document, you allow the customer to provide features just in time, in small chunks described on little index cards. Instead of a comprehensive up-front design document, you learn how to design on a task-by-task and test-by-test basis. And so on. You must learn to think incrementally.

You must also look to correct course continually and incrementally. For every few lines of code added or changed, take time to ensure the design is as good as you can get it. (Which you can only do if you have enough controls in place to allow frequent code improvement. Best way we know how to get there: TDD.) Not only do you need to correct course in the product, you need to always correct course in your team. You should always be introspecting about your team, probing for ways it's not performing optimally, and working to correct these problems. Retrospectives are a good start.

A successful agile project is not a bunch of hare-like sprints to the finish line. It is a cool-headed, tortoise-like, slow 'n' steady approach of small, well-reasoned steps. Each step is an opportunity to look up and see where it got you--closer to the finish line or further away? It's easy to correct a single misstep. In contrast, a single mad sprint in the wrong direction can take you pretty far off course from the finish line.

Automation. There are numerous ways to waste people's time on a software development effort--running automatable regression tests manually, for example, or suffering a build process that unnecessarily requires multiple manual steps or manual verification. Agile cannot work unless you automate as many menial, tedious, and error-prone tasks as possible. There's simply not enough time across a two-week period to get any real work done if you have to slow down for numerous manual gating processes.

Information Radiators


Tim Ottinger and Jeff Langr
Font: Brown Bag Lunch

In keeping with the agile value of Communication, agile teams often place large charts and graphs in their workspaces to radiate important information such as defect rates, rate of completion, and measures of code goodness (CRC, bugs, test counts). Much is made of Informative Workspaces or Big Visible Charts (BVCs).

You'll find numerous graph types in the agile literature. Some types, such as burn-up, burn-down, and cumulative flow are commonly used to graph progress against a goal such as an epic story. Defects and velocity are also useful things to track, but don't limit yourself. Let your team determine--and let these information radiator principles guide--what you track and publicize.
  • Simple
    A chart or graph should not require minutes or hours of study. Ensure it is brief and concise. Anything dense or complicated will fail to communicate. Also, don't use highly derivative, biased, weighted information. Simple facts speak more profoundly than clever algorithms.
  • Stark
    BVCs don't exist to convince the public of your team's excellence. Nor should you mask errors or problems with them. The purpose of the graphs is to display progress and expose problems. Don't subvert the honesty of the graphs, otherwise your team will stop trusting them, and therefore stop using them to improve their work.
  • Current
    Any information that is not kept current is quickly ignored. You may want to have the CI build graphs based on automatically-generated stats. Information more than a few days old is too stale to have evocative powers.
  • Transient
    Radiators that only expose problems shouldn't be up there long, otherwise it's clear that you aren't solving your problems. Choose a BVC to highlight a current challenge for which you can demonstrably show improvement over the next several days or iterations. Once it's clear the team has gotten the message, and things are back on track, take down the BVC.
  • Influential
    A good information radiator influences the team's daily work. It may also influence managers, customers, developers, or other stakeholders. Ideally it will empower the whole team to make better decisions, otherwise it is not worth preparing and presenting.
  • Highly visible
    An effective information radiator must not only hold information but also transfer it to team members, stakeholders, and passers-by.
  • Minimal in number
    Having too many graphs or charts will cause all of the charts to lose evocative power. Also, exposing too many problems on the wall at one time can demoralize any team, so choose either the most important, timely, or fixable problem to highlight, and defer the rest. Sometimes less truly is more.

Development Iteration



Jeff Langr and Tim Ottinger
Font: Brown Bag Lunch

These half-dozen items provide the core set of principles around the iteration workflow for delivering stories to an eager customer. (Most of the principles would also apply to a kanban environment, with a bit of tweaking.)

Team commits to stories they can complete. Each iteration begins with a short planning session. The primary goal of this session is to ensure that the development team (hereafter "the team" in this blog post) and customer are in sync on what the team will deliver by iteration end. The customer and team jointly produce acceptance criteria that act as the contract for completion. The team commits to only the set of stories they can confidently deliver by iteration end. The workload is thus fixed for the duration of the iteration, although the customer can negotiate a change to the workload with the team. Hopefully this occurs only rarely, and hopefully the team has the sense to insist on work being taken off the iteration's plate.

Team works stories from prioritized card wall. The customer is responsible for managing the flow of work. Their job is to ensure that available stories are presented in priority order. Available developers grab the card with highest priority and move it into an "in progress" bucket. The card wall, whether real or virtual, is visible to everyone involved in the project.

Team minimizes stories in process. Applying non-collaborative approaches to agile asks for the same, lame results you got from waterfall. A typical anti-pattern we've seen (we'll call it "Indivigile"): Every story is close to a full iteration in size, and each developer grabs their "own" story to work on. So much for any of the significant benefits that you might have gained from collaboration (primarily better solutions, increased mindshare, and thus minimized risk). Worse, you dramatically increase the probability that stories will not be delivered by iteration end. A better approach: Team members work on the smallest sensible number of stories at one time, maximizing collaboration and ensuring that each story is truly done before moving on to the next.
Following this approach, you should see almost half of the committed stories 100% complete by midway through the iteration. Overall, the number of stories fully complete by iteration end should increase. Also, many activities that would have otherwise been blocked until iteration end can start earlier (additional testing, clarification/correction, documentation, etc.).

Team collaborates daily. A daily stand-up meeting is a good start for getting the team on the same page, but never sufficient. If the stories-in-process are few, the team must find ways to collaborate frequently throughout each day, to avoid wasting additional time.

Customer accepts only completed stories. Stories must pass all programmer and acceptance tests before the customer looks at the software. The customer needs to play hard-ball at this point: Any story shy of 100% passing gets the team zero credit--incomplete work provides no business value. The lesson for the team to learn (and apply in subsequent iterations) is to not over-commit.

Team reflects on iteration and commits to improvements. Agile is built around frequent incremental delivery, in order to maximize feedback, which in turn provides opportunities to improve. Iterations are not only opportunity points to improve the product, but also for the team to reflect on what the team needs to change, whether to improve quality, throughput (which improving quality will do), team morale, or any other execution concerns that the team recognizes. See our Retrospectives card for guidance on this critical agile practice.

Everything else that happens with respect to iteration execution is "implementation details," and thus up to your team to determine. We don't care if you use software tools or physical card walls. We don't care if your team is distributed, whether you run a formal stand-up meeting, whether you use velocity, planning poker, load factors, yesterday's weather, Scrum Master whip cracking, or any other specific practices for planning and organization. All is generally good as long as you adhere to the above principles for flowing work.

The Only Agile Tools You'll Ever Need


Source: Tim Ottinger and Jeff Langr
Font: BrownBagLunch

Perhaps you were expecting a list of software products?

Agile's raison d'ĂȘtre stems from backlash against "heavyweight" methodologies that insisted upon voluminous documentation, never-ending and never-useful meetings, and vendor-lovin' high-end tools. Yet in 2010, there are at least three major high-price, high-profile agile project management tools and dozens of lower-end ones.

We contend that using one of these expensive software tools says, "you don't get it, and maybe shouldn't bother, because agile may not work well for you." No doubt this will perturb those of you who have found some value in agile management tools, so let's step back and talk through the issues.

Agile is primarily about team communication and collaboration. It is predicated on continual face-to-face interaction. Every step in the opposite direction decreases the chance of success. Kent Beck had it right: The best opportunity for success, still, is having all project members sit in a single room and work as a team, instead of letting a bunch of self-interested individuals retreat to the isolated comfort of their cubes.

There is no room in agile development for "avoidance technology." If you can't find a way to meet its demand for close-contact, continuous communication, then you are not ready. If you don't want your programmers sitting together, talking through problems, writing code socially, then try something else.  Attempting to bolt avoidance technology onto agile methods is like castrating a stud horse.

Once you diminish teamwork, you have to compromise other agile principles. If we're not in the same room, we need more meetings to talk to one another. We need to write more things down and produce more centrally-stored documentation. We need more contractually-oriented agreements between people who really work on the same team, because we can't trust one another, because we can't see each other face-to-face every day. The agile practices topple like dominoes.

There is always questionable rationale about why things must be that way. "Every project has to be split evenly between low-cost [read: offshore] labor and high-cost [local] labor." That's not a definition of project success, just a broad-brush attempt at cost control. Having some remote team members should never force all team members to work as if they were remote.

Perhaps the executives could use their hard-won corporate smarts to organize the teams differently. "Half our projects must be low-cost" should appease the bean-counters, and could just work. Here come the excuses: "But we don't have business experts who can act as customers in Bangalore." Really, that's a problem you can't solve? Having two truly agile teams producing quality code should be better than having two teams pretending to be, and failing at, agile.

If you want to succeed at agile, find a way to put each team in its own room. Give them the tools we list on the card, and they'll rarely ask for more. We guarantee they won't ask for a high-end agile tool, because they don't need one and don't want to be bothered with one. Executives of various stripes might ask for one, but all they really want is a simple way to view and present status across significant numbers of projects. You don't need a high-end, specialized, complex tool to do that, and you definitely don't need to waste the time of everyone across every agile team to do that (hint: see Tracker and Coordinator on the Agile Roles card--this is what you pay project managers to do).

Agile Roles


Some of you might be surprised that there are no farm animals on this card. We'll talk about that later.

You'll also note that each role definition is preceded with the word "helps." Everyone on an agile team can act as any role at any given time, helping the team toward its ultimate goal of producing quality product.

In agile, roles are just that, roles. We don't look to protect our role from being usurped. While we are most comfortable playing in our primary role, we have no qualms about stepping into other roles as needed or appropriate. A tester might act as a tracker to gather useful team metrics; a programmer might assist the customer and testers in defining acceptance criteria for a story; a manager might help execute any remaining manual acceptance tests; and so on.

Extreme Programming (XP) originally designated two roles, programmer and customer (for whom XP described little--it is extreme programming after all). Each camp had specific rights, to protect each camp's interests against likely infringements by the other. A bit divisive? Indeed. You will be hard-pressed to find much remaining literature that mentions "XP Rights."

Within the programmer camp, XP eschewed specialists and hierarchies, fostering instead experts and team collaboration. But things other than programming and steering as a customer still needed to be done. Various adjunct needs, such as coaching and tracking, emerged. Who would do these things? These part-time needs were fulfilled by XP team members (customers or programmers) shifting in and out of secondary roles.

Proponents of Scrum (which predates XP a bit) were never so trusting in a team's ability to self-manage, so they introduced the Scrum master, whose job is to keep the team on track. As for XP, the reality of human behavior brought its lofty goals down a peg, more so as Agile took off. Organizational interests and self-preservation took over. The concept of a self-organizing, title-ignoring team threatened HR, project managers, managers, and others who had vested interests in promoting and maintaining specific titles. In an attempt to keep us from falling back into isolationist and divisive behavior, Kent Beck began referring to the collection of all team members as "the whole team," instead of discussing specific roles.

Today, the agile organization doesn't usually look terribly different than the pre-agile organization, except within some programming teams. You'll still hear long-standing, specific names for specific customer roles, such as "business analyst," "product manager," "UI design specialist," "subject matter expert (SME)," and "stakeholder." But on the programming side of the fence, a team working within an open workspace and pairing usually has lost any real concern over titles and artificial divisions such as "front-end programmer vs. back-end programmer." Since these roles are temporary, it is futile and undesirable to tie hierarchical positions or increased benefits to them.

It is best for us to concentrate more on putting out quality product, and less on relative positions. We understand that it may not be realistic in your organization to shed your title, but you should still consider the ego-less organization a general direction to move toward. Sometimes simply thinking that there are no titles can help you find an answer to your problems.

Astute readers will note that "manager," "project manager," and "Scrum Master" do not appear on this list. We have instead substituted "team coordinator," someone who buffers the day-to-day development team from outside interference and distraction. A team coordinator can communicate scheduling issues,  handle incoming requests, and smooth interpersonal problems. Within Scrum, this team coordinator also takes on the responsibility of enforcing the rules, something that a team is certainly capable of doing on their own. We suggest that the best Scrum Masters plan their own obsolescence: Their primary job should be to help us mature to the point where the team no longer needs a Scrum Master.

The lack of official leaders and managers in agile is off-putting to some. We've noted that most software teams suffer not from a shortage of management but from a drastic overage, as different departments try to exert control over the isolated "programmers" and "testers." On an agile team we come together as one team, and the work keeps us busy enough that little traditional management is necessary.

As far as the chicken-and-pig roles that some of you might have expected to see on this card: everyone we've met on a well-functioning agile team has been, unsurprisingly, human. We don't believe people who might have something important to say should be stifled. If someone says inappropriate things at inappropriate times, we muster the courage to ask them to take it elsewhere. "Chicken" was deliberately chosen as a derisive moniker, the spirit of which has no place on a real team. (We quote Jeff Sutherland here because his original blog post has been archived and may soon, rightfully, be buried entirely: "People who are not committed to the project and are not accountable for deliverables at the meeting do not get to talk. They are excess overhead for the meeting. ... Whatever we call them it should have a negative connotation because they tend to sap productivity. They are really suckers or parasites that live off the work of others. Sorry to be political [sic] incorrect but others should come up with a euphemism that conveys the right balance between being 'nice' and communicating clearly that eavesdroppers must minimize their impact on the productivity of the team." Perhaps the goal is laudable, but the sentiment is so wrong as to be repulsive.)

Weinberg's (First Three) Laws of Consulting


Font: AndrewScript 1.6
Source: Gerald M. Weinberg, The Secrets of Consulting


I re-read Gerald Weinberg's book The Secrets of Consulting (Dorset House, 1985) once every year or two. The first time I read it, about a dozen years ago, half of it seemed obvious while the other half seemed counter-intuitive. But I discovered that I too often ignored its obvious advice (i.e. "common sense"), and that its counter-intuitive advice is spot on.

Weinberg spills so many secrets ("give away your best ideas" being one of them) that it seems unfortunate to mention only the first three (and their corollaries), but getting past these is key to understanding the rest. And yes, I highly recommend getting a copy of the book so that you can discover the rest of the secrets, even if you don't consider yourself a consultant. Substitute "problem solver" or "agile coach" for "consultant," and most of the book will apply equally well (except perhaps the parts about marketing and pricing).

I actually lost a potential new client because of my inattention to the first law--or, more specifically, to its corollary. After a phone conversation about the client's interest in transitioning to XP (this was about five years ago, when people still uttered the term XP), I met with the leadership team at their offices. On the phone, the person looking to bring me in had talked about some serious challenges they faced. But when I arrived, I heard little about those serious problems--only some vague notions that they wanted to improve how they did things.

Never mind that--I was too wrapped up in my grandiose scheme to solve all their problems using XP. I mentioned the challenges I'd heard about on the phone and indicated that I'd be able to help them fix it all. "What makes you think we have such serious problems?" Oops!

What Weinberg points out is that it's very difficult for us to admit we have serious problems. I didn't get the gig: had I waltzed in and solved many large problems, it would have been far too embarrassing for the people in that room. Since then, I've promised only as much improvement as their pride is willing to admit--per Jerry, 10%. (Of course, there's nothing that says you can't deliver more, as long as you are cautious about who gets the credit--see rule #3.) Just last week, I found the rule dearly applicable in my personal life.

As far as the second law is concerned: we all tend to follow comfortable patterns, and this is often the cause of the problem. Once you're in a rut, it can be hard to get out--"we've always done things this way; that's just the way it has to be." The trick for a consultant is to help someone get out of a rut--which requires a change in direction--without falling into the same rut themselves. You don't want to stay in one place too long as a consultant.

Some people have found Weinberg's "secrets of consulting" to be nothing short of greedy and cynical. They suggest the third law is all about extracting as much hourly money from the client as possible. But Weinberg points out that this notion of not getting paid by the solution hearkens back to the first law: a solution expensive enough to require a consultant implies a problem too big to admit. Is it cynical to work in a manner that meshes realistically with normal human behavior? I don't think so.

I hadn't looked at The Secrets of Consulting in about two years--evidence of the rut I'd been traveling in. I've recently been shoved out of that rut, and I am thankful that someone reminded me (with the utmost subtlety) to revisit the book. I'm looking forward to re-amplifying my impact!

Stopping the Bad Test Death Spiral





Fonts: Daniel, Daniel Black
Source (SCUMmy Cycle): Ben Rady and Rod Coffin of Improving Works, from their Agile2009 presentation.
Source (Remedies): Tim Ottinger & Jeff Langr


The "SCUMmy Cycle" is all-too-common. A team with legacy (untested) code tries TDD hoping they will be able to continue making improvements. First efforts result in integration tests, perhaps because the code is tightly coupled and not cohesive. The team intends to someday replace them with proper unit tests. A team lacking essential understanding of the qualities of a good unit test will write integration tests unwittingly.

Months or years later the tests are abandoned, and a significant investment in their construction and maintenance goes to waste. How does this happen?

Here's how the cycle generally plays out:
  • Only integration tests are written. One common cause: business logic is intertwined with UI or database code, perhaps reflecting the examples found in framework and library tutorials. (The sketch following this list shows the remedy: extract the logic so a true unit test becomes possible.)
  • More tests are added, until running them is slow and painful. Fifty to 100 milliseconds to interact with a database doesn't seem bad. But multiply that by a few hundred or thousand tests, and even a small suite takes several minutes: 3,000 tests at 100ms each run for five minutes.
  • Tests are run less often because developers can't afford to run them. Developers resist running a ten-minute suite more than a few times a day, and less-frequently-run tests are much harder to resolve when they fail. Large tests also tend to be fragile and fail intermittently: they carry runtime dependencies on external elements the tests don't control, and perhaps on side effects of other tests.
  • Tests are disabled because they are unreliable, obsolete from lack of maintenance, or simply too slow to tolerate.
  • Bugs become commonplace, just as they were before the team started doing automated testing. Disabling too many tests lowers coverage, and the remaining tests grow ineffective.
  • Value of automated testing is questioned--"we're no better off than before!" And yet the team still wastes time writing and (sometimes) running tests.
  • Team quits testing in disgust, or managers mandate a stop to testing. The experiment is deemed an expensive failure. Teams are now free to return to the good old days of rapid coding and expensive manual testing. As W. Edwards Deming said, "Let's make toast the American way: I'll burn, and you scrape."
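To picture that first misstep and its remedy, here's a minimal sketch (our illustration, with invented names--not Ben and Rod's code). When a pricing rule is buried inside persistence code, the only test you can write must round-trip through the database; extract the rule into a plain object, and a genuine unit test falls out:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// The pricing rule, extracted from the (hypothetical) repository code that
// previously intertwined it with SQL. As a plain object, it needs no schema,
// no connection, and no fixture data.
class Discount {
    private final int threshold;
    private final double rate;

    Discount(int threshold, double rate) {
        this.threshold = threshold;
        this.rate = rate;
    }

    double apply(int quantity, double unitPrice) {
        double total = quantity * unitPrice;
        return quantity >= threshold ? total * (1.0 - rate) : total;
    }
}

public class DiscountTest {
    // A true unit test: microseconds, deterministic, no external dependencies.
    // Before extraction, verifying this rule meant saving an order and
    // re-reading its total through the database--100ms and a live schema.
    @Test
    public void ordersAtOrAboveThresholdAreDiscounted() {
        Discount tenPercentOverFifty = new Discount(50, 0.10);
        assertEquals(900.00, tenPercentOverFifty.apply(100, 10.00), 0.001);
    }
}
```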
A sad progression, and it's real: both of us (Tim and Jeff) have experiences confirming Rod and Ben's SCUMmy Test Progression. Each step down makes the testing effort harder to salvage. Plenty of teams have started out rough, though, and recovered before reaching the bottom.

One possible lesson: it's cheaper to have no tests than to have bad tests. A better lesson: life is too short to settle for crummy tests.

The flip side of this card lists some therapeutic strategies for each downward step:
  • Only integration tests are written -> Learn unit testing. This ties in with a better understanding of (and adherence to) the Single Responsibility Principle (SRP) and the Law of Demeter (LoD). Consider hiring a short-term coach to instill healthy habits in the team, or invest in better reading materials and the time to absorb them. Attend a training session.
  • Overall suite time slows -> Break the tests into "slow" and "fast" suites. Establish a time limit for the fast suite, and strive to keep it large and fast--thousands of unit tests can easily run in under ten seconds. Consider a tool like Infinitest to run relevant tests continually as you work (though everything works better in a system exhibiting low coupling). The first sketch after this list shows one way to wire up the split.
  • Tests are run less often -> Report test timeouts as build failures. Any limit you institute will be somewhat arbitrary; the key is to continually monitor the health of your test suite, because if the suite slows dramatically, developers will soon skimp on testing. (Per-test timeouts, shown in the first sketch below, are one simple mechanism.)
  • Tests are disabled -> Monitor coverage. New functionality should show coverage in the mid-to-high-90% range, and coverage for the rest of the system should hold steady or increase; reject changes that reduce it. Integration tests provide broad coverage, but you should either replace them with unit tests or elevate them to acceptance tests. Otherwise, delete disabled tests.
  • Bugs become commonplace -> Always write tests to "cover" a bug. Write these tests first, a la TDD--a defect is evidence of inadequate test coverage. Make sure you track defects and understand the root cause of each and every one, and insist that the covering tests be fast tests. (The second sketch below shows the shape.)
  • Value of automated testing is questioned -> Commit to TDD, acceptance testing, and refactoring. Committing to TDD means learning to do it properly--done poorly, it delivers low or even negative value. Note also that many "regressions" are rooted in code duplication; refactoring to eliminate duplication is critical for quality improvement. It is reasonable for a quality crisis to slow the production of new features.
  • Team quits testing in disgust -> Don't wait until it's too late! Once a team has reached the point of admitting defeat, it usually is too late--management won't normally tolerate a second attempt at what they perceive to be the same thing.
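To make the first two remedies concrete, here's a sketch of a slow/fast split using JUnit 4's category support. The marker interface, class names, and two-second budget are our inventions; adapt the mechanism to your own toolchain:

```java
// SlowTest.java -- marker interface for tests that touch a database,
// network, or file system.
public interface SlowTest {}

// CustomerTest.java -- one fast test, one slow test, told apart by the marker.
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class CustomerTest {
    @Test
    public void calculatesLoyaltyPoints() {
        // fast: pure in-memory logic; runs on every build
    }

    @Category(SlowTest.class)
    @Test(timeout = 2000)  // a slow test exceeding its budget fails the build
    public void persistsCustomer() {
        // slow: requires a live database
    }
}

// FastSuite.java -- excludes anything marked slow; keep this suite well
// under your time limit (say, ten seconds) so developers run it constantly.
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Categories.class)
@ExcludeCategory(SlowTest.class)
@SuiteClasses({CustomerTest.class})
public class FastSuite {}
```

The slow suite still runs in continuous integration; it simply stops taxing every save.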
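And the "cover a bug" remedy always takes the same shape: reproduce the defect as a fast, failing test before touching the fix. A second sketch, built around an invented defect (invoice numbers losing their leading zeros):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

class InvoiceNumber {
    // The eventual fix: pad to six digits. (In our invented defect, the buggy
    // version used String.valueOf, silently dropping leading zeros.)
    static String format(int raw) {
        return String.format("%06d", raw);
    }
}

public class InvoiceNumberTest {
    // Written first, from the defect report: red before the fix, green after,
    // and fast enough to run in every build thereafter.
    @Test
    public void retainsLeadingZeros() {
        assertEquals("007042", InvoiceNumber.format(7042));
    }
}
```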
In the adoption of unit testing, a few training sessions and a little time with a good coach can make all the difference in the world.

If you must self-coach, ensure that team members don't view TDD as simply "management forcing programmers to test their code." Ensure that programmers understand the significant design and documentation benefits TDD can deliver, and the scalability advantages of fast automated tests over manual testing.

A team that understands TDD and strives to attain the benefits it offers will avoid the Bad Test Death Spiral.