On an early Monday flight I pretty much have the whole United club to myself…
If you have ever thought about United Club membership, here is a little food for thought:
If you spend as much time at the airport as I do (or even way less), think of club membership in terms of free “faster” WiFi; free drinks, soups and salads; and a relatively quiet space to relax, work and wait for your delayed flight.
Imagine this scenario:
You have an hour and 25 minutes until your next flight: enough time for two glasses of wine, a bowl of soup (or two) and a salad. While enjoying your wine, soup and salad you return a few emails and finish that proposal on your laptop. What’s that? Your flight was canceled and you need to talk to customer service, but so do the other 247 people on your flight? No problem: head to the club and talk to customer service there, “members only”.
So how much does all of this cost?
First let’s do the math…
2 airport glasses of wine – $18
1 airport bowl of soup – $9
1 airport salad – $12
Quiet workspace with space for laptop and faster wifi – $priceless
For a grand total of $39 actual dollars per leg of your trip, plus at least $50 worth of less noise and airport DRAMA. What is your peace of mind worth?
So you get stuck on that flight from SFO to DCA and have to go through ORD because you booked last minute. That’s $39 at SFO, $39 at ORD and $39 at DCA (because it’s better to wait for your bags in the club with a free drink in your hand), or $117 each way and a grand total of $234 round trip. Did I mention you get a plus-one?
The membership is $550, which sounds expensive until you do the math. If you travel for work at least once per month, club membership represents a cost savings. The same goes if you travel for fun with a companion more than four times per year, or if you just drink and snack A LOT. You have a full year to eat and snack your way to your $550 membership fee.
The cost of defects increases exponentially as they slip down the delivery pipeline, so we must make a “shift left” in quality, security and stability by surfacing defects earlier in the pipeline. A defect caught after development, in quality assurance, is about 15x costlier than a defect found in design.
From a development perspective, a defect found during implementation is 6.5x costlier than one found in design, but still about half the cost of finding the same defect in QA. If our process has so many gaps that a defect makes it from design through implementation and QA and somehow finds its way into production, the cost is 100x higher than finding that same defect in design.
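To make the multipliers above concrete, here is a minimal sketch in Python (the $100 baseline is an assumption for illustration, not a figure from the cited study):

```python
# Rough relative cost multipliers by phase, per the figures above
# (design = 1x, implementation = 6.5x, QA = 15x, production = 100x).
DEFECT_COST_MULTIPLIER = {
    "design": 1.0,
    "implementation": 6.5,
    "qa": 15.0,
    "production": 100.0,
}

def defect_cost(phase, baseline_cost=100.0):
    """Estimated cost of fixing a defect caught in `phase`, given the
    cost of fixing the same defect in design (`baseline_cost`)."""
    return baseline_cost * DEFECT_COST_MULTIPLIER[phase]

# A $100 design-phase fix becomes a $10,000 production-phase fix.
print(defect_cost("design"))      # 100.0
print(defect_cost("qa"))          # 1500.0
print(defect_cost("production"))  # 10000.0
```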
A more elusive factor we must also take into account is the soft cost of the very public statement that the dev (and quality assurance) team “is a joke,” as a colleague of mine would say. All joking aside, this is a team effort: it is not only the developers’ fault but also QA’s, infrastructure’s, release management’s and that of anyone else responsible for any part of the pipeline. We sink or swim as a team, and quality is everyone’s responsibility. If we deliver a crappy piece of software, the stock tanks and the company goes out of business, we are ALL out of a job. Reputation cost is difficult to quantify, but suffice it to say that releasing software riddled with defects and poor user experience does not build confidence in your team’s ability to deliver quality solutions to its customers. http://www.isixsigma.com/industries/software-it/defect-prevention-reducing-costs-and-enhancing-quality/
Implementing practices such as Test-Driven Development, Continuous Integration and Automated Acceptance Testing / Behavior-Driven Development can help teams make that “shift left” in quality and reduce the cost of their debt. Since teams typically incur a 10%-30% technical-debt-to-new-work ratio per sprint, it’s a good idea to do everything we can to find defects and reduce debt before they slip down the pipeline toward Maintenance / Production, where the costs skyrocket.
What I find most compelling about the DevOps metrics listed below is that control of almost all of them can be gained by mastering the first metric in the list. The metric that seems to tame all others is the number and frequency of deployments / software releases. If we strive for the seemingly extreme goal of 10+ deploys per day, the milestones we must achieve to reach this goal end up covering all of the remaining metrics. For example, if we are able to deploy 10 times per day without losing or pissing off every single one of our customers, we must have reduced the volume of defects and the number, frequency and cost of outages. As the story goes, in order to deploy 10 times a day nearly everything in the delivery pipeline needs to be automated; that automated testing should reduce the number of defects and the cost of fixing them. By the same token, if the deployment itself is automated, the number and cost of resources associated with deployment should go down. Lastly, in order to deploy 10 times during a normal 8-hour workday (480 minutes), a deployment must go out every 48 minutes, and if we intend for our customers to actually use our solution, the deployment itself must take significantly less than 48 minutes! As a result, Mean Time to Recovery and Mean Time to Change must also drop, since changes and fixes can go out in the next deployment within 48 minutes.
Number and frequency of software releases
Volume of defects
Time/cost per release
MTTR (Mean Time to Recover)
MTTC (Mean Time to Change)
Number and frequency of outages / performance issues
Revenue/profit impact of outages / performance issues
Number and cost of resources
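The deployment-cadence arithmetic above can be sketched in a few lines of Python (the 8-hour workday is the assumption from the example):

```python
WORKDAY_MINUTES = 8 * 60  # the 480-minute workday from the example above

def minutes_between_deploys(deploys_per_day):
    """Average gap between deployments if they are spread evenly
    across the workday; each deploy must finish well inside this window."""
    return WORKDAY_MINUTES / deploys_per_day

print(minutes_between_deploys(10))  # 48.0
```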
When working on an Agile project, solution requirements are gathered through meetings between the customer and the Product Owner, and potentially a Business Analyst. Unlike requirements from a traditional Waterfall project, the solution requirements for an Agile project are written in a simple, nontechnical format that is easily consumable by all project stakeholders. Stakeholders from the customer, nontechnical product owner and executive team to the technical implementation team members should be able to read and understand the meaning behind the simplified requirement. This plain-English, nontechnical, easily consumable format is known as a User Story. User Stories follow a basic format that is consistent regardless of who writes the User Story and their technical background (or lack thereof). This simple format includes the type of user performing the action, the action they are performing and the result they hope to gain by performing said action, for example:
As a customer I want to log in so that I can order the items in my shopping cart without reentering my information for every order.
In this example the type of user is a customer, the action is logging in and the goal is to avoid entering customer details when making a purchase. It’s a nice standard format easily understood by all parties involved, but where are the details? As a developer, how do I know “exactly” what the customer is asking for? As a tester, how do I know what I should test for to ensure that we deliver what the customer really wants? The answer is Acceptance Criteria! Acceptance Criteria is where the details of our User Story are fleshed out through further discussion with the customer. For example, the Product Owner might ask the customer, “What information does the user not want to reenter?” To which the customer might respond, “Shipping address, billing address and payment details.” To further clarify the customer’s intentions for the desired feature, the Product Owner may dig deeper and ask, “What do you mean by payment details?” To which the customer might respond, “Credit card number, name on card, expiration date and the CVV number on the back of the card.”
Once the Product Owner has collected enough detail to have a firm grasp on the functionality requested by the customer, the information is documented in the User Story as Acceptance Criteria. There is no requirement to use a standard format for Acceptance Criteria; however, for the same reasons we use a standard format for the User Story, we should also adopt a standard format for Acceptance Criteria, so that no matter who writes them they are consistent and easily understood by all project stakeholders. A good candidate for this standard is the Gherkin format. Gherkin is used in an acceptance-testing tool called Cucumber, hence the name: a gherkin is a smaller variety of cucumber used to make pickles, so it’s fitting that it names the syntax used to write Acceptance Criteria for an application called “Cucumber”. Gherkin uses a standard format that describes the scenario in question, the action to be performed and the expected result, keeping the statement short and to the point using the Given…When…Then… format.
Given a valid username and password
When I log in
Then I am taken to the profile page
Without this standard format, Acceptance Criteria could appear as a bulleted list, as a long-winded unstructured paragraph or in some other nonstandard form, making it difficult to consume for both technical and nontechnical team members.
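To show how Gherkin-style criteria map onto an automated acceptance test, here is a minimal sketch, written in Python for brevity rather than the C# used elsewhere in this post; the `login` function and its credential store are hypothetical stand-ins for the real system under test:

```python
def login(username, password):
    """Hypothetical system under test: return the page a user
    lands on after attempting to authenticate."""
    valid_users = {"antoine": "s3cret"}  # stand-in credential store
    if valid_users.get(username) == password:
        return "profile"
    return "login"

def test_login_redirects_to_profile():
    # Given a valid username and password
    username, password = "antoine", "s3cret"
    # When I log in
    landing_page = login(username, password)
    # Then I am taken to the profile page
    assert landing_page == "profile"

test_login_redirects_to_profile()
```

Each Gherkin clause becomes one step of the test, which is exactly how tools like Cucumber bind criteria to executable checks.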
During sprint planning, when the delivery team is reviewing the User Stories to be committed to the sprint, the team will first play Planning Poker to reach consensus on the Story Points assigned to each User Story. Then the team will begin to flesh out the tasks necessary to get the story to its Definition of Done. It’s worth noting that the sprint planning meeting should typically be no more than 2-4 hours per week of the sprint. During this 4-8 hour planning meeting (for a typical 2-week sprint) we should expect the team to identify 60-75 percent of the delivery-team tasks necessary to get the User Story to the team’s stated Definition of Done. The additional time needed to identify the remaining 25-40 percent of tasks is far past the point of diminishing returns on the effort-vs-accuracy curve. The more detailed the Acceptance Criteria, the more likely we are to capture a larger percentage of the necessary delivery-team tasks.
Generally speaking, if a User Story has more than 15-20 Gherkin-format Acceptance Criteria, it is probably too large (an Epic) and should be sliced to make it more manageable. As the theory goes, the larger and more complex the User Story, the more Acceptance Criteria it will have and the more likely we are to miss detailed delivery-team tasks during sprint planning.
We do not consider a User Story “Done” until, at a minimum, all Acceptance Criteria have been met.
So, write good User Stories, write detailed Acceptance Criteria and use Gherkin format to keep the Acceptance Criteria consistent regardless of who writes them.
Any feature or method that we add to our solution will have a happy path and one or more unhappy paths. The happy path describes how the feature behaves when everything goes according to plan. If there are any errors, exceptions or omissions, an unhappy path is taken.
Consider a standard authentication User Story for a website with members only content:
As a member I want to log in with my account so that I can access members only content.
If we assume a standard login procedure we can identify a happy path and several possible unhappy paths.
As a member I want to log in with my account so that I can access members only content (Happy)
As a member I want to be able to request a new password when my login fails, so that I can try to log in again (Unhappy)
As a new member I need the option to register a new account so that I can access secured content (Unhappy)
As a site owner I want to block users after 3 failed login attempts in a row so that I can protect the site against hackers (Unhappy)
As you can see, the unhappy paths describe omissions or exceptions in our process. By identifying the various happy and unhappy paths, we ensure that the delivery team fully understands the functionality required by the customer. The more detailed, granular user stories also provide an opportunity for more accurate Acceptance Criteria for each sliced story, and they give the product owner the opportunity to prioritize functionality at a more granular level. For example, in the first release there may only be a few users, so locking accounts after 3 failed logins may not be important yet. The password-reset feature may also be overkill in the first release, as someone in Operations or Support can easily reset a user’s password upon receipt of an email request from the user.
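A minimal sketch of how the lockout story above might behave, written in Python for illustration (the class and return values are assumptions, not a real API):

```python
class LoginService:
    """Tracks consecutive failed attempts per user and locks accounts
    after a limit, as in the 'block users with 3 failed login
    attempts' story above."""
    MAX_FAILED_ATTEMPTS = 3

    def __init__(self, credentials):
        self._credentials = credentials  # username -> password
        self._failed = {}                # username -> consecutive failures
        self._locked = set()             # locked-out usernames

    def login(self, username, password):
        if username in self._locked:
            return "locked"              # unhappy path: account locked
        if self._credentials.get(username) == password:
            self._failed[username] = 0
            return "ok"                  # happy path
        self._failed[username] = self._failed.get(username, 0) + 1
        if self._failed[username] >= self.MAX_FAILED_ATTEMPTS:
            self._locked.add(username)
        return "failed"                  # unhappy path: bad credentials

svc = LoginService({"alice": "pw"})
print(svc.login("alice", "pw"))    # ok
print(svc.login("alice", "no"))    # failed
print(svc.login("alice", "no"))    # failed
print(svc.login("alice", "no"))    # failed (third strike: account is locked)
print(svc.login("alice", "pw"))    # locked
```

Note how even the correct password is rejected once the account is locked; that detail is exactly the kind of thing granular unhappy-path stories surface for the Acceptance Criteria.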
Stay tuned for 6 more methods for slicing User Stories…
In an Agile Development Project, the Solution Requirements are communicated from the customer to the delivery / development team using a standard notation easily understood by the delivery team and all stakeholders. This standard notation is known as a User Story. See our post on What Makes a Good User Story for more details.
When committing a User Story to a sprint in an agile project, it is best that all the tasks necessary to take the User Story to the team’s stated Definition of Done can be completed within a single sprint. In most cases a User Story so large that it cannot be completed within a sprint is a feature or epic that should be broken down into smaller components before being committed to the sprint.
There are several different ways we can go about breaking down or slicing a User Story. We call it slicing to invoke the “Layered Cake Metaphor”.
As the theory goes, we can only truly enjoy our cake if we take a vertical slice, ensuring that we get all of the flavors from each layer, including the frosting between layers. Applied to our layered application architecture, this simply means that to really call a story “Done” we must be able to test and use the features introduced by completing the User Story. If we don’t get each layer of the application framework in our “slice,” we can’t use the feature. For example, a login feature is only useful if we have the login form at the user-interface layer, some authentication logic at the business-rules layer and data-layer logic to compare the given username and password with values stored in a credential store. We need each layer of the cake to complete the story: if we only had the user-interface layer, we could enter the username and password but there would be nothing to compare them with. With this in mind, “how” we slice our cake / User Stories is as important as the slicing itself.
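A vertical slice of that login feature might be sketched like this, one thin piece of each layer; the sketch is in Python and all names are illustrative:

```python
# Data layer: look up stored credentials (stand-in for a real database).
CREDENTIAL_STORE = {"antoine": "s3cret"}

def find_stored_password(username):
    return CREDENTIAL_STORE.get(username)

# Business-rules layer: the authentication decision.
def authenticate(username, password):
    stored = find_stored_password(username)
    return stored is not None and stored == password

# User-interface layer: the login "form" handler.
def handle_login_form(username, password):
    if authenticate(username, password):
        return "Welcome back!"
    return "Invalid username or password."

print(handle_login_form("antoine", "s3cret"))  # Welcome back!
print(handle_login_form("antoine", "wrong"))   # Invalid username or password.
```

Remove any one of the three functions and the feature can no longer be exercised end to end, which is the whole point of slicing vertically rather than layer by layer.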
When migrating your organization’s culture to the DevOps way, automation is a key component: not only automation of builds and testing but also automation of infrastructure components. As most readers are aware, the build-out of infrastructure components usually requires elevated permissions, using credentials that we would prefer not be widely published. How do we accomplish this level of automation while keeping the necessary elevated permissions secure, yet still allow team members who don’t have the required permissions to run the scripts?
Below are a few examples of secure credentials storage in infrastructure scripts.
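One illustrative approach (a Python sketch under assumed names, not the author’s original examples): read the elevated credential from the environment at runtime, so the secret never appears in the script or in source control and can be injected by the CI system or a secrets manager for team members who never see it.

```python
import os

def get_deploy_credential(name):
    """Fetch a credential from an environment variable, failing loudly
    if it is missing, so secrets never live in the script itself."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"Credential {name!r} is not set; "
            "have your CI system or secrets manager inject it before running."
        )
    return value

# The variable name below is an assumption for illustration; in real use
# it would be populated by the pipeline, never hard-coded like this.
os.environ["DEPLOY_ADMIN_PASSWORD"] = "example-only"
password = get_deploy_credential("DEPLOY_ADMIN_PASSWORD")
```

Because the script only ever references the variable name, it can be published and run by anyone on the team, while the actual credential stays in the pipeline’s secret store.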
The Test-Driven Development (TDD) process follows a pattern known as the TDD Rhythm, which dictates the order in which elements of the solution should be created or edited.
Before we can successfully implement TDD, a few key agile constructs must exist. Most importantly, we must have Tasks derived from User Stories (or requirements) that define the details of the required system feature, including the Acceptance Criteria for the feature described in the User Story. We then use the task details and Acceptance Criteria to define our tests.
The TDD Rhythm
1. Write a Failing Test
The first step in the TDD Rhythm is to write a failing test. Using the Task Details, we write a test that exercises the functionality defined by the User Story and expects the returned value to match the value expected per the Acceptance Criteria defined in the Task Details.
2. Run the Failing Test
Run the test to see it fail. This is an interesting step because, depending on your application architecture, some minimal project structure may need to be created and project references made before your failing tests will even compile, let alone run and fail. For example, if you are storing all of your business logic in a Class Library project called BusinessRules that compiles to a Windows .dll, and your tests are centrally stored in a Test project, then the Class Library project must exist, and the namespace, class and method must exist, before your Test project will compile and the tests will run and fail. Fortunately, Visual Studio includes code-generation tools that will create the classes and methods as long as the Class Library project exists and contains at least one class with a namespace statement. The method stub generated by Visual Studio will throw a NotImplementedException, which will obviously cause the test to fail.
3. Write just enough code to pass the test
This can be a difficult concept to get your mind around, especially with the simple code used in common TDD examples. For example, take a method that simply returns a Boolean value: if we start with a test that runs the required Boolean method and expects it to return true, the code to pass the test would simply be return true;
[TestMethod]
public void TestGetBool()
{
    Assert.IsTrue(GetBool());
}
Figure A. Test Method to test GetBool Method
public static bool GetBool()
{
    return true;
}
Figure B. Minimal code needed to pass the test
With this example it may seem like a waste of time to write such minimal code to pass the test: it is clear the test needs a returned value of true in order to pass, so where is the value in writing this seemingly useless passing test? Without a test that exercises the “unhappy path” through our method (the test that expects a return value of false), it is hard to see the value in a method that simply returns true with no additional implementation logic. An eager developer may want to skip straight to writing implementation logic without wasting time on the simplest-code step of the TDD Rhythm, but follow the pattern, young Jedi. As seen in a slightly more complex method that returns a formatted string, understanding how the output should be formatted in order to pass the test can be much more difficult than just returning a Boolean true.
[TestMethod]
public void TestWelcomeBack()
{
    string expected = "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
    string actual = WebSite.BizRules.WelcomeBack(user);
    string message = "We should get " + expected;
    Assert.AreEqual(expected, actual, message);
}
Figure C. Test Method to test WelcomeBack Method
public static string WelcomeBack(object user)
{
    return "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
}
Figure D. Minimal code needed to pass the test
In this example the literal string returned, including the user name Antoine and the last-visit date, would obviously need to change for each user and each visit, but the formatting and welcome statement may also be important and could possibly come from a configuration file somewhere. The point is that the minimal code required to pass the test acts as documentation for the method, including the formatting requirements for returned values. On the next refactoring pass we would update the method to extract the user name from the user object passed in, retrieve the date of their last visit from the membership database and return it in the expected format. The string literal is our formatting template: as we write the implementation code, we know what the expected result should look like.
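That refactoring pass might look something like the sketch below, written in Python rather than the C# of the figures, with a hypothetical User type standing in for the real user object and membership-database lookup:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class User:
    """Stand-in for the real user object; in the article's example the
    last-visit date would come from the membership database."""
    name: str
    last_visit: date

def welcome_back(user):
    # The string literal from the minimal passing code is now a
    # formatting template filled in per user and per visit.
    return (f"Welcome Back {user.name}! "
            f"Your last visit to the site was {user.last_visit:%m/%d/%Y}.")

antoine = User(name="Antoine", last_visit=date(2016, 2, 1))
print(welcome_back(antoine))
# Welcome Back Antoine! Your last visit to the site was 02/01/2016.
```

Because the refactored output still matches the string the test expects, the existing test keeps passing while the hard-coded values are replaced with real data.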
4. Run the Passing Test
At this point our method has just enough code to pass the test, but it does not necessarily meet the business requirement, nor does it allow us to perform the task described by the User Story. This will become obvious as more tests are developed to test the “unhappy path” or as varying return values are expected by other tests. But at this point we understand what must be done for the test to pass, and we can keep that in mind as we refactor the code to meet the business requirement or for optimization purposes.
5. Refactor the Code
Depending on the “minimal” code we wrote to pass the test, our first refactoring pass may add required functionality, or, if the required functionality already exists, we may be refactoring for maintainability, scalability or performance. In any event, as we refactor for whatever reason, we can do so with the confidence that any changes we make are covered by tests that ensure we have not broken already-passing functionality. If you make a change and all of a sudden tests that were passing stop passing, you know you have a problem. The tests can also be used for gated check-ins, which require that any change a developer makes pass the existing tests before the code can be checked into source control, allowing bugs to be identified before they make it into our build and potentially out to customers.
The Expanded TDD Rhythm
6. Run All Tests
Once we have refactored our code to include the desired functionality or to optimize for maintainability, scalability or performance, we need to run all tests to ensure that our changes did not break the method we were working on, or any methods that depend on it or its results. This is a necessary step to avoid failed check-ins on source control or Continuous Integration (CI) servers where gated check-ins are used. With gated check-ins, your check-in cannot break the automated build, and all tests must pass, or your check-in will be rejected and your code kept out of source control until the issues are resolved.
As changes are required, we can continue to repeat this process of writing failing tests, coding, passing tests and refactoring until the code for the features we are adding is “perfect”.