Acceptance Criteria: Are We DONE Yet?

When working on an Agile Project, solution requirements are gathered through meetings between the customer and the Product Owner, and potentially a Business Analyst. Unlike requirements on a traditional Waterfall project, the solution requirements for an Agile Project are written in a simple, nontechnical format that is easily consumable by all project stakeholders. Everyone from the customer, the nontechnical Product Owner and the executive team to the technical implementation team members should be able to read and understand the meaning behind the simplified requirement. This plain-English, nontechnical, easily consumable format is known as a User Story. User Stories follow a basic format that is consistent regardless of who writes the User Story and their technical background (or lack thereof). This simple format includes the type of user performing the action, the action they are performing and the result they hope to gain by performing said action. For example:

As a customer
I want to log in
so that I can order the items in my shopping cart without reentering my information for every order.

In this example the type of user is a customer, the action is logging in and the goal is to avoid reentering customer details when making a purchase. It’s a nice standard format easily understood by all parties involved, but where are the details? As a developer, how do I know “exactly” what the customer is asking for? As a tester, how do I know what I should test for to ensure that we deliver what the customer really wants? The answer is Acceptance Criteria! Acceptance Criteria are where the details of our User Story are fleshed out through further discussion with the customer. For example, the Product Owner might ask the customer “What information does the user not want to reenter?”, to which the customer might respond “Shipping address, billing address and payment details”. To further clarify the customer’s intentions for the desired feature, the Product Owner may dig deeper and ask “What do you mean by payment details?”, to which the customer might respond “Credit card number, name on card, expiration date and the CVV number on the back of the card”.
Once the Product Owner has collected enough detail to have a firm grasp on the functionality requested by the customer, the information is documented in the User Story as Acceptance Criteria. There is no requirement to use a standard format for the Acceptance Criteria; however, for the same reasons we use a standard format for the User Story, we should also adopt a standard format for Acceptance Criteria so that no matter who writes the Acceptance Criteria it is in a consistent format that is easily understood by all project stakeholders. A good candidate for this standard Acceptance Criteria format is the Gherkin format. Gherkin is the syntax used by an acceptance testing tool called Cucumber, hence the name: a gherkin is a smaller variety of cucumber used to make pickles, so it’s fitting that it is the name of the syntax used to write Acceptance Criteria in an application called “Cucumber”. Gherkin uses a standard structure that describes the scenario in question, the action to be performed and the expected result, keeping the statement short and to the point using the Given…When…Then… format.

Given a valid username and password
When I log in
Then I am taken to the profile page

Without this standard format, Acceptance Criteria could appear as a bulleted list, as a long-winded paragraph with no structure, or as some other nonstandard format, making it difficult to consume for both technical and nontechnical team members.
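Applied to the shopping cart story above, the same Gherkin format keeps the details the Product Owner gathered from the customer short and testable (an illustrative example, not the only valid wording):

Given a returning customer with a saved shipping address, billing address and payment details
When the customer logs in and places an order from their shopping cart
Then the order completes without the customer reentering their shipping address, billing address or payment details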
During sprint planning, when the delivery team is reviewing the User Stories to be committed to the sprint, the team will first play Planning Poker to come to a consensus on the Story Points assigned to a User Story. Then the team will begin to flesh out the tasks necessary to get the story to its Definition of Done. It’s worth noting that the sprint planning meeting should typically be no more than 2-4 hours per week of the sprint. During this 4-8 hour planning meeting (for a typical 2 week sprint) we should expect the team to identify 60-75 percent of the delivery team tasks necessary to get the User Story to the team’s stated Definition of Done. The additional time necessary to identify the remaining 25-40 percent of delivery team tasks is well past the point of diminishing returns on the effort vs. accuracy curve. The more detailed the Acceptance Criteria, the more likely we are to capture a larger percentage of the necessary delivery team tasks during planning.
Generally speaking, if a User Story has more than 15-20 Gherkin format Acceptance Criteria it is probably too large (an Epic) and should be sliced to make it more manageable. As the theory goes, the larger and more complex the User Story, the more Acceptance Criteria it will have and the more likely we are to miss the detailed delivery team tasks during sprint planning.
We do not consider a User Story “Done” until, at a minimum, all Acceptance Criteria has been met.
So, write good User Stories, write detailed Acceptance Criteria and use Gherkin format to keep the Acceptance Criteria consistent regardless of who writes them.


Slicing User Stories: Method 1

Slice by Happy vs. Unhappy Path

Any feature or method that we add to our solution will have a happy path and one or more unhappy paths.  The happy path describes how the feature will behave when everything goes according to plan.  If there are any errors, exceptions or omissions then an unhappy path is taken.
Consider a standard authentication User Story for a website with members only content:

As a member I want to log in with my account so that I can access members only content.

If we assume a standard login procedure we can identify a happy path and several possible unhappy paths.

As a member I want to log in with my account so that I can access members only content (Happy)

As a member I want to be able to request a new password when my login fails, so that I can try to log in again (Unhappy)

As a new member I need the option to register a new account so that I can access secured content (Unhappy)

As a site owner I want to block users with 3 failed log in attempts in a row so I can protect the site against hackers (Unhappy)

As you can see, the unhappy paths describe omissions or exceptions in our process.  By identifying the various happy and unhappy paths we ensure that the delivery team fully understands the functionality required by the customer.  The more detailed and granular user stories also provide an opportunity for more accurate Acceptance Criteria for each sliced user story.  The sliced User Stories also give the Product Owner the opportunity to prioritize functionality at a more granular level.  For example, in the first release there may only be a few users, so locking accounts after 3 failed logins may not be important yet.  The password reset feature may also be overkill in the first release, as someone in Operations or Support can easily reset a user’s password upon receipt of an email request from the user.
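Each sliced story can then carry its own focused Acceptance Criteria. For the lockout story above, the Gherkin criteria might look something like this (an illustrative example only):

Given a member who has entered an incorrect password twice in a row
When they enter an incorrect password a third time
Then the account is blocked and further log in attempts are rejected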

Stay tuned for 6 more methods for slicing User Stories…


Slicing User Stories: 7 Methods

In an Agile Development Project, the Solution Requirements are communicated from the customer to the delivery / development team using a standard notation easily understood by the delivery team and all stakeholders.  This standard notation is known as a User Story.  See our post on What Makes a Good User Story for more details.

When committing a User Story to a sprint in an Agile project, it is best that all the tasks necessary to take the User Story to the team’s stated Definition of Done can be completed within a single sprint.  In most cases a User Story so large that it cannot be completed within a sprint is a feature or epic that should be broken down into smaller components before being committed to the sprint.

There are several different ways we can go about breaking down or slicing a User Story.  We call it slicing to invoke the “Layered Cake Metaphor”.

As the theory goes, we can only truly enjoy our cake if we take a vertical slice of the cake, ensuring that we get all of the flavors from each layer including the frosting between layers.  Taking that concept to our layered application architecture, this simply means that to really call a story “Done” we must be able to test and use the features introduced by the completion of the User Story.  If we don’t get each layer of the application framework in our “slice” then we can’t use the feature.  For example, a login feature is only useful if we have the login form at the user interface layer, some authentication logic at the business rules layer and data layer logic to compare the given username and password with values stored in a credential store.  We need each layer of the cake to complete the story.  If we only have the user interface layer we could enter the username and password but there would be nothing to compare it with.  With this in mind, “how” we slice our cake / User Stories is as important as the slicing itself.
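To make the vertical slice concrete, here is a minimal C# sketch of what a login slice might touch at each layer. The class and interface names (LoginController, AuthenticationRules, ICredentialStore) are hypothetical, used only to show that a single story cuts through the user interface, business rules and data layers:

// Data layer: compares the given username and password with values in a credential store
public interface ICredentialStore
{
    bool CredentialsMatch(string username, string password);
}

// Business rules layer: authentication logic for the login feature
public class AuthenticationRules
{
    private readonly ICredentialStore _store;

    public AuthenticationRules(ICredentialStore store)
    {
        _store = store;
    }

    public bool Authenticate(string username, string password)
    {
        return _store.CredentialsMatch(username, password);
    }
}

// User interface layer: the login form posts here and the result drives navigation
public class LoginController
{
    private readonly AuthenticationRules _rules;

    public LoginController(AuthenticationRules rules)
    {
        _rules = rules;
    }

    public string Login(string username, string password)
    {
        return _rules.Authenticate(username, password) ? "Profile" : "LoginFailed";
    }
}

If any one of these layers is missing, the feature cannot be exercised end to end, and the story is not truly “Done”.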

Common methods for slicing user stories are:
Slicing by Happy vs. Unhappy Flow
Slicing by Workflow Steps
Slicing by Test Scenarios
Slicing by Acceptance Criteria Rules
Slicing by Data Types or Parameters
Slicing by Operations
Slicing by Roles
See the posts on each method for details on how to slice or size your user stories for completion in a single sprint.

Storing Infrastructure Secrets in Script

When migrating your organization’s culture to the DevOps way, automation is a key component. Not only automation of builds and testing, but also automation of infrastructure components. As I’m sure most readers are aware, building out infrastructure components usually requires elevated permissions using credentials that we would prefer not be widely published. How do we accomplish this level of automation while keeping the necessary elevated permissions secure, and still allow team members that don’t necessarily have the required permissions to run the scripts?
Below are a few examples of secure credentials storage in infrastructure scripts.
PowerShell: https://blogs.technet.microsoft.com/robcost/2008/05/01/powershell-tip-storing-and-using-password-credentials/
AWS KMS: https://blog.fugue.co/2015-04-21-aws-kms-secrets.html
CyberArk: http://www.cyberark.com/solutions/by-project/application-credential-security/
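As a minimal sketch of the SecureString approach covered in the PowerShell link above: encrypt the password once, store only the encrypted text in a file, and rebuild a PSCredential at run time. The file path and account name below are placeholders, and note that by default the encrypted string can only be decrypted by the same user on the same machine that created it:

# One-time setup: capture the password securely and store only the encrypted string
Read-Host "Enter password" -AsSecureString |
    ConvertFrom-SecureString |
    Set-Content "C:\Scripts\svc-deploy.txt"

# In the infrastructure script: rebuild the credential without exposing plain text
$securePassword = Get-Content "C:\Scripts\svc-deploy.txt" | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PSCredential ("DOMAIN\svc-deploy", $securePassword)

# Use the credential with any cmdlet that accepts -Credential, for example:
# Invoke-Command -ComputerName web01 -Credential $credential -ScriptBlock { ... }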

SharePoint Blog Font changes when publishing from MS Word

If you are using Office 365 and hosting a Blog in addition to your “Public Website”, it’s a good idea to make yourself aware of the default Fonts and Styles and how they differ from the default Fonts and Styles in Microsoft Word if you are using MS Word to author your blog content.
Recently, while discussing templates used to publish Facebook posts, blog posts, articles, courses and labs, the issue came up of the difference in appearance of certain Fonts and Styles as they are published to a SharePoint Blog from MS Word. There is a fix for this if you are willing to dig a little deeper, but for the purpose of this post I will simply illustrate the key differences and a basic workaround that will make your posts look as you intended when they are published to SharePoint.
When we start in Microsoft Word with the Font Styles in Figure 1 we end up with published content on our SharePoint Blog that looks like Figure 2.

Figure 1. Font Styles as they appear in Microsoft Word before publishing to SharePoint Blog.
First, the SharePoint Blog Post Title uses the same style as Heading 2 in the Styles list in MS Word, so using Heading 2 anywhere in your blog post is probably not a good idea. For topic titles I recommend Heading 3, Normal for body text and Intense Emphasis for notes and callouts, with the keywords / phrases in bold.

Figure 2. Font Styles, Colors and Sizes as they appear in the SharePoint Blog after publishing from Microsoft Word.
After publishing we end up with changed Font Styles, Colors and Sizes, with most things appearing larger than they did in MS Word and some appearing smaller. So it’s good to know in advance what those changes will look like so that you and your readers are not “unpleasantly” surprised by the format of the new content you have just published to your SharePoint Blog.

The Test Driven Development Rhythm

The Test-Driven Development (TDD) process follows a pattern known as the TDD Rhythm, which dictates the order in which elements of the solution should be created / edited.

Before we can successfully implement TDD a few key agile constructs must exist.  Most importantly, we must have Tasks derived from User Stories (or requirements) that define the details of the required system feature.  These defined details would include the Acceptance Criteria for the feature described in the User Story.  We would then use the task details and acceptance criteria to define our tests.

The TDD Rhythm

1.  Write a Failing Test

The first step in the TDD Rhythm is to write a failing test.  Using the Task Details, we write a test that exercises the functionality defined by the User Story and asserts that the value returned matches the value expected based on the Acceptance Criteria defined in the Task Details.

2. Run the Failing Test

Run the test to see it fail. This is an interesting step because, depending on your application architecture, some minimal project structure may need to be created and project references made for your failing test to even compile before it can run and fail. For example, if you are storing all of your business logic in a Class Library project called BusinessRules that compiles as a Windows .dll, and your tests are centrally stored in a Test project, then the Class Library project will have to exist and the namespace, class and method will have to exist before your Test project will compile and the test will run and fail. Fortunately, Visual Studio includes code generation tools that will create the classes and methods as long as the Class Library project exists and at least one class with a namespace statement exists. The method stub generated by Visual Studio will throw a NotImplementedException, which will obviously cause the test to fail.
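For example, if the test calls a GetBool method that does not exist yet, the stub Visual Studio generates in the BusinessRules project will look something like this (a sketch of typical generated code; the exact output can vary by Visual Studio version):

public static bool GetBool()
{
    // Generated placeholder: the test now compiles, runs and fails until real code replaces this
    throw new NotImplementedException();
}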

3.  Write just enough code to pass the test

This can be a difficult concept to get your mind around, especially when the common simple TDD example code is used.  For example, take a method that simply returns a Boolean value to illustrate simple TDD method creation: if we start with a test that runs the required Boolean method and expects the method to return true, the code needed to pass the test is simply return true;

[TestMethod]
public void TestGetBool()
{
    Assert.IsTrue(BoolApp.BoolHost.GetBool());
}

Figure A. Test Method to test GetBool Method


public static bool GetBool()
{
    return true;
}

Figure B. Minimal code needed to pass the test

With this example it may seem like a waste of time to write this minimal code to pass the test, as it is clear that the test needs a returned value of true in order to pass, so where is the value in writing this seemingly useless passing test? Without a test that exercises the “unhappy path” through our method (aka a test that expects a return value of false) it is hard to see the value in a method that simply returns true with no additional implementation logic. An eager developer may want to just skip to writing implementation logic without wasting time on the simplest-code step of the TDD Rhythm, but follow the pattern, young Jedi. As seen in a slightly more complex method that returns a formatted string, understanding how the output should be formatted in order to pass the test can potentially be much more difficult than just returning a Boolean value of true.


[TestMethod]
public void TestWelcomeBack()
{
    string expected = "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
    string actual = WebSite.BizRules.WelcomeBack(user);
    string message = "We should get " + expected;
    Assert.AreEqual(expected, actual, message);
}

Figure C. Test Method to test WelcomeBack Method


public static string WelcomeBack(object user)
{
    return "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
}

Figure D. Minimal code needed to pass the test

In this example the literal string returned, including the user name Antoine and the last visit date, would obviously need to be updated for each user and on each daily visit, but the formatting and welcome statement may also be important and could possibly come from a configuration file somewhere. The point is that the minimal code required to pass the test in this case acts as documentation for the method, including formatting requirements for returned values. On the next refactoring pass we would update the code in the method to extract the user name from the user object passed to the method, retrieve the date of their last visit from the membership database and return it in the expected format. The string literal is our formatting template: as we create the implementation code we know what the expected result format looks like.
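That later refactoring pass might end up looking something like the sketch below. The Member type and its Name and LastVisit properties are hypothetical stand-ins for whatever user object and membership data the solution actually uses (here simplified to a property rather than a database call); the point is that the returned string is built to match the format documented by the test:

public class Member
{
    public string Name { get; set; }
    public DateTime LastVisit { get; set; }
}

public static string WelcomeBack(object user)
{
    // Hypothetical member type standing in for the real user object passed to the method
    var member = (Member)user;

    // Build the greeting in the exact format documented by the test's expected string
    return string.Format("Welcome Back {0}! Your last visit to the site was {1:MM/dd/yyyy}.",
        member.Name, member.LastVisit);
}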

4. Run the Passing Test

At this point our method has just enough code to pass the test but does not necessarily meet the business requirement, nor does it allow us to perform the task described by the user story. This will become obvious as more tests are developed to test the “unhappy path” or as varying return values are expected by other tests. But at this point we understand what must be done for the test to pass and can keep that in mind as we refactor the code to make it meet the business requirement or for optimization purposes.
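For example, a second test for a different member (hypothetical name and date, shown only to illustrate the idea) would immediately fail against the hard-coded string above, forcing a real implementation during the refactoring step:

[TestMethod]
public void TestWelcomeBackForAnotherMember()
{
    // A different user with a different last visit date exposes the hard-coded return value
    string expected = "Welcome Back Dana! Your last visit to the site was 03/15/2016.";
    string actual = WebSite.BizRules.WelcomeBack(danaUser);
    Assert.AreEqual(expected, actual);
}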

5. Refactor the Code

Depending on the “minimal” code we wrote to pass the test, our first refactoring pass may be to add required functionality, or if the required functionality already exists we may be refactoring for maintainability, scalability or performance optimization. In any event, as we refactor the code for whatever reason we can do so with the confidence that tests are in place to catch any change that breaks functionality covered by already passing tests. If you make a change and all of a sudden tests that were passing stop passing, you know you have a problem. The tests can also be used for gated check-ins that require any changes a developer makes to pass the existing tests before they can be checked into Source Control, allowing bugs to be identified before they make it into our build and potentially out to customers.

The Expanded TDD Rhythm

6. Run All Tests

Once we have refactored our code to include the desired functionality or to optimize for maintenance, scalability or performance, we need to run all tests to ensure that our changes did not break the method we were working on, or any methods that depend on this method or its results. This is a necessary step to avoid failed check-ins on Source Control or Continuous Integration (CI) servers where gated check-ins are used. With gated check-ins your check-in cannot break the automated build and all tests must pass, or your check-in will be rejected and your code not allowed into source control until the issues are resolved.

7. Repeat

As changes are required we can continue to repeat this process of writing failing tests, coding, passing tests and refactoring until our code for the features we are adding is “perfect”.

https://en.wikipedia.org/wiki/Test-driven_development

https://en.wikipedia.org/wiki/User_story

https://en.wikipedia.org/wiki/Code_refactoring

Scrum / Agile Resources

A few of my favorite Scrum and Agile related videos…
Do you have favorites not in my lists?  Please leave a link in the comments.

Role of the Product Owner

Planning Poker

The Daily Scrum

Intro to Agile

Agile Project Ownership in a Nutshell

S**t Bad Scrum Masters say

Distractions – What the internet is doing to our brains

The Pomodoro Technique

Pair Programming

TDD

QUnit

Hitler Scrum

I want to run an Agile Project