Acceptance Test Automation with Cucumber / SpecFlow in Visual Studio

Acceptance Criteria, Test Automation and Gherkin

What do Acceptance Criteria, Test Automation and a Cucumber have in common? Well, for the uninitiated it may seem that these three things have nothing in common, however if you are an “Agilist” or Test Automation specialist then you are probably very familiar with the similarities. Let’s start with the Cucumber, specifically the Gherkin. The Gherkin, as it happens, is the most common type of cucumber used to make pickles. Gherkin is also the language used to write Automated Acceptance Tests in a tool called Cucumber. Acceptance Criteria are the conditions for success for a new feature, or part of a feature, being added to a solution. If the Acceptance Criteria are written in Gherkin format, most of the heavy lifting of Test Automation has already been done. To automate acceptance tests in Visual Studio we use the Visual Studio version of Cucumber: SpecFlow. To get started using Gherkin Acceptance Criteria to automate Acceptance Testing, the Visual Studio IDE needs to be configured to use the Cucumber / SpecFlow extension.
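For example, a single Acceptance Criterion written in Gherkin might look like the hypothetical scenario below (the login feature and step wording are purely illustrative, not part of this walkthrough); each Given / When / Then line maps directly to an automatable test step:

Feature: Login
Scenario: Valid credentials sign the user in
Given A registered user
When I sign in with valid credentials
Then I should see my account dashboard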

Create a Virtual Machine in AWS, install the SpecFlow Extension in Visual Studio Community Edition, and create an Automated Acceptance Test for a basic Stack object. Detailed step-by-step instructions follow below the video.

Install SpecFlow Extension for Visual Studio

  1. Open Visual Studio 2019 Community Edition
  2. In the Launch dialog choose Continue without code
    VS 2019 Continue without code
  3. In the Extensions Menu Select Manage Extensions
    VS 2019 Manage Extensions
  4. Select Online from the Navigation menu
  5. In the Search box type “SpecFlow”
  6. Select SpecFlow for Visual Studio 2019 and click Download
    VS 2019 Install SpecFlow Extension
  7. Click Close to close the Manage Extensions dialog
    VS 2019 Close Visual Studio to complete SpecFlow Extension Installation
  8. Exit Visual Studio
  9. Click Modify in the VSIX Installer dialog
    VSIX Installer – Modify
  10. When Installation completes click Close in the VSIX Installer dialog
    VSIX Installer – Installation Complete
  11. Restart Visual Studio

Note: Visual Studio will not Install the Cucumber Extension until all Visual Studio windows are closed.

Create a Unit Test Project to Map Acceptance Criteria

  1. In order to Add Gherkin Feature Descriptions we will need to add a Unit Test Project.
  2. In Visual Studio from the File menu select New Project
  3. In the New Project dialog, under Language select C#, under Platform select Windows, and under Project Type select Test
  4. Select Unit Test Project (.NET Framework) and click Next
    Create new Test Project
  5. Name the Project Stack App Acceptance Tests
  6. Name the Solution Stack App
  7. Click Create

    Configure new Project and Solution names
  8. In the Solution Explorer locate UnitTest1.cs
  9. Right click UnitTest1.cs and select Delete
  10. Right click the Project and select Add New Item
  11. Select the SpecFlow folder on the left
  12. Select SpecFlow Feature File
  13. Name the File StackAppFeatures.feature
  14. Click Add

    Create new Stack Feature File
  15. In the Solution Explorer select StackAppFeatures.feature
  16. In the Properties window remove SpecFlowSingleFileGenerator from the Custom Tool Property
    Remove SpecFlowSingleFileGenerator from the Custom Tool property

Note: The Custom tool error will disappear once the SpecFlowSingleFileGenerator text is removed from the Custom Tool Property of the StackAppFeatures.feature file.

Add SpecFlow NuGet Packages to Test Project

  1. Right click the Stack App Acceptance Tests Project
  2. Select Manage NuGet Packages… from the Menu
  3. In the NuGet Stack App Acceptance Tests window select Browse from the menu
  4. In the Search (Ctrl+L) box type SpecFlow and press Enter
  5. Select SpecFlow by TechTalk
  6. In the Detail window to the right click Install

    Install SpecFlow NuGet package in Test Project
  7. In the License Acceptance dialog click I Accept

    License Acceptance – I Agree to accept license terms
  8. Select SpecRun.SpecFlow
  9. In the Detail window to the right click Install
  10. In the License Acceptance dialog click I Accept
  11. In the Search (Ctrl+L) box type SpecFlow.Tools and press Enter
  12. Select SpecFlow.Tools.MsBuild.Generation
  13. In the Detail window to the right click Install

Step Definitions

In the Solution Explorer select StackAppFeatures.feature and replace the default math feature (User Story) and Acceptance Criteria with the text below:

Feature: StackAppFeatures
    As a User
    I need to Stack stuff
    So that I can move it around

@EmptyStack
Scenario: IsEmpty should be true
Given An Empty Stack
When I check IsEmpty
Then IsEmpty should be "true"

@NonEmptyStack
Scenario: IsEmpty should be false
Given A nonEmpty Stack
When I check IsEmpty
Then IsEmpty should be "false"

@PushTests
Scenario: Push Check IsEmpty
Given An Empty Stack
When I Push "Bugga Boo" onto the Stack
And I check IsEmpty
Then IsEmpty should be "false"

@PushPopTests
Scenario: Push Gets Popped
Given An Empty Stack
When I Push "Item 1" onto the Stack
And I Pop a value off the Stack
Then The result should be "Item 1"

@PushPeekTests
Scenario: Push Gets Peeked
Given An Empty Stack
When I Push "Item 1" onto the Stack
And I Peek at a value on the Stack
And I check IsEmpty
Then The result should be "Item 1"
And IsEmpty should be "false"

The purple text indicates that the Gherkin statements do not yet have associated step definitions. We’ll use the code generation tools in the SpecFlow extension to generate the step definitions from the Gherkin Acceptance Criteria (a sketch of the generated skeleton appears after the steps below).

  1. Right click on Given an Empty Stack and select Generate Step Definitions
  2. In the Generate Step Definition Skeleton – SpecFlow dialog click Generate

    Generate Step Definitions
  3. In the Select target step definition class file dialog accept the defaults and click Save

    Accept Default location for Step Definitions class file
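The generated StackAppFeaturesSteps.cs file binds each Gherkin step to a C# method through a regular expression. The exact expressions, method names and namespace depend on your project name and SpecFlow version, but the skeleton should look roughly like the sketch below, with every method body containing the ScenarioContext.Current.Pending(); placeholder:

using TechTalk.SpecFlow;

namespace StackAppAcceptanceTests
{
    [Binding]
    public class StackAppFeaturesSteps
    {
        [Given(@"An Empty Stack")]
        public void GivenAnEmptyStack()
        {
            // Placeholder generated by SpecFlow; replaced in the next section
            ScenarioContext.Current.Pending();
        }

        [When(@"I Push ""(.*)"" onto the Stack")]
        public void WhenIPushOntoTheStack(string p0)
        {
            ScenarioContext.Current.Pending();
        }

        // ...one binding per remaining Given / When / Then step...
    }
}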

Feature Implementation – Test First

Now we will replace the ScenarioContext.Current.Pending(); placeholder code in each of the Given…When…Then… functions in the StackAppFeaturesSteps.cs class file with calls to our code under test. A consolidated sketch of the finished class follows the steps below.

  1. In the Solution Explorer select the StackAppFeaturesSteps.cs class file
  2. Add the following code to the StackAppFeaturesSteps class
    Stack stack;
    String _actual;
    Boolean _isEmpty;
  3. In the public void GivenAnEmptyStack() function replace the ScenarioContext.Current.Pending(); with the code below
    stack = new Stack();
  4. Replace the ScenarioContext.Current.Pending(); code in the GivenANonEmptyStack function with the code below
    stack = new Stack();
    stack.Push("Hello, World!");
  5. Replace the ScenarioContext.Current.Pending(); code in the WhenICheckIsEmpty function with the code below
    _isEmpty = stack.IsEmpty();
  6. Replace the ScenarioContext.Current.Pending(); code in the WhenIPushOntoTheStack function with the code below
    stack.Push(p0);
  7. Replace the ScenarioContext.Current.Pending(); code in the WhenIPopAValueOffTheStack function with the code below
    _actual = stack.Pop();
  8. Replace the ScenarioContext.Current.Pending(); code in the WhenIPeekAtAValueOnTheStack function with the code below
    _actual = stack.Peek();
  9. Replace the ScenarioContext.Current.Pending(); code in the ThenTheResultShouldBe function with the code below
    Assert.AreEqual(p0, _actual);
  10. Replace the ScenarioContext.Current.Pending(); code in the ThenIsEmptyShouldBe function with the code below
    Assert.AreEqual(Boolean.Parse(p0), _isEmpty);
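Putting the pieces together, the completed steps class should look roughly like the sketch below. The attribute regular expressions and parameter names come from the generator and may differ slightly in your project; the namespace shown is illustrative:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;
// A using directive for the Stack Utility project's namespace is added
// once the Stack class is generated in the next section.

namespace StackAppAcceptanceTests
{
    [Binding]
    public class StackAppFeaturesSteps
    {
        Stack stack;
        String _actual;
        Boolean _isEmpty;

        [Given(@"An Empty Stack")]
        public void GivenAnEmptyStack()
        {
            stack = new Stack();
        }

        [Given(@"A nonEmpty Stack")]
        public void GivenANonEmptyStack()
        {
            stack = new Stack();
            stack.Push("Hello, World!");
        }

        [When(@"I check IsEmpty")]
        public void WhenICheckIsEmpty()
        {
            _isEmpty = stack.IsEmpty();
        }

        [When(@"I Push ""(.*)"" onto the Stack")]
        public void WhenIPushOntoTheStack(string p0)
        {
            stack.Push(p0);
        }

        [When(@"I Pop a value off the Stack")]
        public void WhenIPopAValueOffTheStack()
        {
            _actual = stack.Pop();
        }

        [When(@"I Peek at a value on the Stack")]
        public void WhenIPeekAtAValueOnTheStack()
        {
            _actual = stack.Peek();
        }

        [Then(@"The result should be ""(.*)""")]
        public void ThenTheResultShouldBe(string p0)
        {
            Assert.AreEqual(p0, _actual);
        }

        [Then(@"IsEmpty should be ""(.*)""")]
        public void ThenIsEmptyShouldBe(string p0)
        {
            Assert.AreEqual(Boolean.Parse(p0), _isEmpty);
        }
    }
}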

Now that we have our tests, we can use code generation tools to create the class under test. At this point we will also have 5 syntax / reference errors; we will resolve these in the next steps.

  1. In the Solution Explorer right click Solution ‘Stack App’ (1 of 1 project) and select Add > New Project
    In the Add a new project dialog select Class Library (.NET Framework), name the project Stack Utility, and click Create
    In the Solution Explorer you should now see the Stack Utility project
    Add new Class Library project
  2. In the Solution Explorer select the StackAppFeaturesSteps.cs class file
  3. In the StackAppFeaturesSteps.cs class file hover the mouse over the Stack type with the red squiggles
    In the Quick Actions menu (The lightbulb) select Generate type ‘Stack’ > Generate new type…

    Use Quick Actions to Generate Stack Type
  4. In the Generate Type dialog select Stack Utility from the Project dropdown, select the Create new file radio button and click OK
    Note: the red squiggles under Stack are now gone
    Stack exists – No more red Squiggles
  5. In the StackAppFeaturesSteps.cs class file hover the mouse over the call to the stack.Push method and select Generate method ‘Stack.Push’ from the Quick Actions menu
    Use Quick Actions to Generate Push method
  6. Repeat the same process for the IsEmpty, Pop and Peek methods
  7. In the StackAppFeaturesSteps.cs class file hover the mouse over the Assert in Assert.AreEqual and select using Microsoft.VisualStudio.TestTools.UnitTesting; from the Quick Actions menu
    Note: You may now run the tests and see them fail
  8. From the Test menu select Run > All Tests
    Note: In the Test Explorer, 3 of the 6 tests failed. Only 5 were actual tests; 1 was a SpecRun delay for the evaluation version. 3 failed and the other 2 tests were ignored. The ignored tests call methods that were already tested and failed in this test run.
    Run the Tests and see them Fail

Now that the class skeleton has been created, we can implement the 4 methods: IsEmpty, Push, Pop and Peek. A sketch of the finished Stack class follows the steps below.

  1. In the Solution Explorer select the Stack.cs class file in the Stack Utility project
  2. In the Stack class create a new stack variable of type ArrayList by adding the code below:
    ArrayList stack = new ArrayList();
  3. Hover the mouse over the ArrayList type and select using System.Collections; from the Quick Actions menu
  4. In the Push method replace throw new NotImplementedException(); with the code below
    stack.Insert(0,v);
  5. In the IsEmpty method replace throw new NotImplementedException(); with the code below
    return stack.Count == 0;
  6. In the Pop method replace throw new NotImplementedException(); with the code below
    String result = stack[0].ToString();
    stack.RemoveAt(0);
    return result;
  7. In the Peek method replace throw new NotImplementedException(); with the code below
    return stack[0].ToString();
    Note: You may now run the tests and see them pass
  8. From the Test menu select Run > All Tests
    Note: All tests should display a green check mark indicating a pass.
    Green = Good Red = Bad – Run the Tests and see them Pass
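For reference, a minimal sketch of the finished Stack class is shown below. The StackUtility namespace and the v parameter name are illustrative; the names Visual Studio generated in your project may differ:

using System;
using System.Collections;

namespace StackUtility
{
    public class Stack
    {
        // Backing store for the stack; index 0 is the top of the stack
        ArrayList stack = new ArrayList();

        public void Push(string v)
        {
            // Insert at index 0 so the most recently pushed item is on top
            stack.Insert(0, v);
        }

        public bool IsEmpty()
        {
            return stack.Count == 0;
        }

        public string Pop()
        {
            // Read the top item, then remove it by index
            String result = stack[0].ToString();
            stack.RemoveAt(0);
            return result;
        }

        public string Peek()
        {
            // Return the top item without removing it
            return stack[0].ToString();
        }
    }
}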


    Now that we have the Step Definitions (Glue Code) defined, adding new Automated Acceptance Tests is as simple as pasting in new Gherkin statements.  If the Product Owner adds new Acceptance Criteria to a user story, we can simply copy and paste from the Work Item Tracking (Project Management) tool into our Feature file and we are done.  No new coding.  For example, the Gherkin scenario below can simply be added to the feature file:

    @MultiPushPopTests
    Scenario: Multi Push Pop
    Given An Empty Stack
    When I Push "Item 1" onto the Stack
    And I Push "Item 2" onto the Stack
    And I Pop a value off the Stack
    Then The result should be "Item 2"

No additional changes are necessary.


Technical Debt Pay It Forward

Pay it Forward

You can pay now or pay later but trust me, you’re gonna pay! I’m talking about Technical Debt… Technical Debt, like any other debt, has interest, so you can pay now or pay later, but if you pay later it will be much more expensive. See this post on the Increasing cost of Technical Debt for more info.

Increasing cost of technical debt

The short version: defects are said to have a 1x cost at design time, but the cost of fixing them increases exponentially as you move through build and test toward production. The simple point is that the earlier we find and resolve issues and defects, the less it costs. Anything that we can do to simplify or speed up this process of finding and documenting defects reduces our Debt. We want to make a Shift Left in quality by moving quality checks closer to the beginning of the delivery pipeline where defects are cheaper to fix. By creating Test Cases based on the customer requirements and success criteria we can ensure that our tests are mapped to business value. This is no substitute for Unit Tests written at the Function or Method level.  Plan for chaos, write tests to detect it and buffer team capacity to fix it (more on that in a future post).

Slicing User Stories Method 6

Slicing by CRUD or ISUD (AKA Slicing by Operations)

Any User Stories involving a managed entity, such as a Customer, Order, Employee or Product, will almost always require some level of management functionality.  This management functionality will provide the ability to perform a number of operations, at a minimum Create, Read, Update and Delete.  These operations are commonly referred to as CRUD, but that is such an unfortunate acronym as it sounds like something you get between your toes…  Not to mention the fact that in most Relational Database systems, such as MySQL and Microsoft SQL Server, the operations are actually called Insert, Select, Update and Delete, making the acronym ISUD.  ISUD sounds better, soapy and clean to wash away the CRUD between your toes.  So forever more on this site CRUD operations will be referred to as ISUD operations!
ISUD operations are very prevalent when functionality involves the management of entities, such as products, users or orders:
As a Specialty Kite Maker
I want to manage Kites in my ecommerce website
So I can update Kite details and pricing info if it is changed

If we consider the ISUD typically associated with Product management, we can derive the following more specific and granular User Stories:

As a Specialty Kite Maker
I want to add new Kites to my product list
So customers can purchase them;


As a Customer

     I want to view a list of Kites available for purchase
     So that I can buy one;

As a Specialty Kite Maker

     I want to list the Kites in my product list
     So I know what Kites are currently in stock;

As a Specialty Kite Maker

     I want to update existing Kites in my product list
     So I can adjust for changes in Kite details and pricing info;

As a Specialty Kite Maker

     I want to delete Kites from my product list
     So I can remove Kites that I no longer sell;

As a Specialty Kite Maker

     I want to hide Kites in my product list
     So they cannot be purchased for the time being;

When discussing this method, the question often becomes, “do these more granular User Stories actually provide business value?”.  Is our solution really useful if we cannot update or delete products from the system?  If we consider that in the current scenario we are dealing with a “Specialty Kite Maker”, odds are there are a limited number of Kites and Kite Accessories that will be in the product list.  If this is the case then adding, editing or deleting the Kites could be done manually through a database management tool like SQL Server Management Studio for the first few Sprints.  So, for the first Sprint we may just add the list (Select) functionality to support customer purchases and delay the other Update, Delete and Insert User Stories for a later Sprint.

This way we get business value sooner; by minimizing “Work In Progress” (WIP) we are able to increase delivery date confidence and deploy only the features necessary to deliver value to the customer.  In this scenario, the lack of Insert, Update and Delete functionality will not be noticed by the customer because these are admin-only features, therefore we deliver just the customer-facing User Stories.  This allows us to get to market faster and begin collecting customer feedback while we work to complete additional features.  In the case of discontinued or deleted Kites it may be easier to simply add a checkbox that allows the Kite Maker to mark an item as discontinued or deleted.  This approach may keep the record in the database but simply hide it from the customer view, making it easier to implement than an actual Delete operation that may require additional work to enforce referential integrity.

In short, if we break the User Story down by operation we can implement only those operations that provide immediate business value in early Sprints and add other more specific stories once the base functionality is deployed to customers and providing them with “Value”.  “Customer Value” = “Business Value”, which of course in almost every case translates to “Business Revenue” to pay for all of the Solution Development.
Slicing User Stories – Method 5 ***** Slicing User Stories – Method 7

Slicing User Stories Method 5

Slicing by Input Parameter (Datatypes)

In most cases a business process, or whatever function the new feature is intended to automate, requires some data to perform its actions.  For the sake of this discussion we will refer to this data as Input Parameters.  Data of different types will in most cases need to be processed differently.  For example, a search for a customer’s last name would most likely require a String Comparison against the LastName field in a database, while a search for a customer by their Customer ID would require an Integer Comparison against the CustomerID field.  Some User Stories can be split based on the datatypes they return or the parameters they are supposed to handle.

Take, for example, a search function for a standard ecommerce website:

As a Customer

I want to search through available products
So I can view their details and order them;

Since there are potentially many different ways a customer might want to search for a product that they need or have previously ordered, each one of these search methods could be considered as a unique User Story:

As a Customer
I want to search for a product by the order date
So that I can find products that I have ordered before;

As a Customer
I want to search for a product by its Product Id
So that I can find a product that I am familiar with;

As a Customer
I want to search for products within a Price Range
So that the search results are relevant;

As a Customer
I want to search for products by Color
So that the search results are more relevant;

As a Customer
I want to search for products by Category
So that the search results are more relevant;

As we begin to think on a more granular level about the search function, we can more clearly understand the kinds of search criteria the customer might use. This allows us to more accurately implement the customer’s desired functionality, but it also allows a Product Owner to make decisions about priority within the feature and not just at the story level. For example, with just a few products in a new ecommerce web application, paging 10 products at a time may not be necessary. Or maybe some of the search functionality can be implemented in a simplified manner for the time being.  Another example is breaking down the User Story based on how the returned data is displayed.  Perhaps our ultimate goal is to have sales results and product ratings displayed as beautiful 3D charts and animated graphics dynamically produced based on real-time sales data.  But for the first release the sales manager will simply import the sales data into Excel and manually export 3D charts and graphs from Excel on a weekly basis.
Slicing User Stories – Method 4 ***** Slicing User Stories – Method 6

What makes a good User Story

A User Story is intended to be a method of communicating business or application requirements between potentially nontechnical customers, team members who are not developers and the development / operations teams that must implement the required application or features. In other words, the User Story needs to be understood by all but still provide enough detail to allow the technical teams to actually understand the requirements and build the solution. So what makes a good User Story? A User Story should be a concise description of a piece of functionality that will be valuable to a user or owner of the software. To put it another way a User Story is a more universal way of writing a software requirement or deliverable.
A good User Story should describe [Who] the user of the feature is, [What] they need to do and [Why] they need to do it.  User Stories typically follow the format below:

As a [Who]
I need [What]
So that [Why]

or we could say

As a [Type of User]
I need to [Perform some action]
So that [business value received]

Following the INVEST acronym, User Stories should have the following characteristics:
[I]ndependent – Should not depend on any other User Story to be considered complete
[N]egotiable – The delivery team needs to discuss User Stories before committing them to the “sprint” backlog. This discussion should happen during the sprint planning meeting
[V]aluable – The completed User Story should add value for the customer. The customer should understand the resource and time cost associated with a given User Story based on Story Points estimated by the Product Owner and confirmed by the delivery team. This will help the customer and Product Owner decide if the value of the User Story is worth the cost in time and resources and prioritize the User Story accordingly
[E]stimable – The User Story should be specific and granular enough that it can be completed within a single sprint. If the User Story is so complex that it cannot be completed within a sprint, then it is an Epic or Feature and should be broken down into smaller components / User Stories before being added to the backlog.  See this post on Story Point Estimation
[S]mall – User Stories should be small enough that the specifics of the implementation of the User Story should not take more than 10 Developer (or Delivery Team) Tasks to flesh out
[T]estable – The story should have Acceptance Criteria that describe what is required to consider the story complete. These Acceptance Criteria should present a clear path to testing requirements

Below is what seems to be a relatively simple example of a User Story

As a User
I need to register
So that I can manage my account

While the previous User Story seems pretty straightforward at first glance, as we analyze User Stories we may find that we have very large and complex Epics masquerading as User Stories.  In these cases it will be necessary to properly size or “Slice” the User Story or Epic into smaller chunks that can be completed in a single sprint.  See the Slicing User Stories 7 Methods Blog Series for more details.  I wouldn’t go as far as to say the previous User Story is Epic in nature, but it could use a little clarification.  For example, what exactly does account management entail?  What constitutes valid registration information?  We can break the story down to be a little bit more specific without getting technical.

As a new user
I need to register with my Facebook account
so that I can access members only content

As a new user
I need to register with my Google+ account
so that I can access members only content

As a new user
I need to register with my Email Address and Password
so that I can access members only content

As a new user
I need to add my address to my Profile
so that I don’t have to re-enter it for every order

By breaking the story down we ensure that the tasks necessary to get the story to Done will take less than the length of the Sprint.  We have described the [What] in enough detail that the implementation team can complete the work without telling them [How] to do their jobs.
The [How] or the “technical details” are captured in tasks [Work Items] linked to a User Story in a Project Management / Work Item Tracking tool such as Jira or Team Foundation Server. Delivery Team tasks / technical details should be captured as specific tasks that can be completed in 2 hours or less. In a Continuous Integration environment, as these tasks are checked into Source Control a build trigger would cause an automated build and automated test run. If any of the automated tests fail, the system can automatically log a Defect and assign it back to the developer checking the code into source control. This rapid feedback keeps technical debt from slipping down the delivery pipeline.

Common User Story Issues
User Stories are too formal or contain too much detail – Keep the story simple and to the point. The detail should be fleshed out in tasks linked to the User Story during Sprint Planning
Technical tasks masquerading as stories – Remember User Stories should be understood by the customer and the user as well as all non-technical stakeholders. Technical details are for the nested developer tasks.
The conversation is skipped – The team should evaluate User Stories provided by the Product Owner from the Customer in their Sprint Planning meeting. This is a good opportunity to play Planning Poker and make sure all team members are in agreement about the size and complexity of the User Stories planned for the coming sprint.
So remember: keep User Stories simple and to the point, work out the details in the nested Delivery Team Tasks, and be sure the team discusses User Story complexity, Story Points and Acceptance Criteria during Sprint Planning before committing the User Story to the Sprint Backlog.

Happy storytelling…

For more details on User Stories and estimating story points see the Story Points Estimation post

 

Slicing User Stories Method 3

Slice by Test Cases

Slicing User Stories by Test Case is useful when it is hard to break down an Epic based on functionality alone. With a large Feature or Epic, it is helpful to look at possible Test Cases as a way to break the Epic down into smaller chunks that can be completed within a single sprint. Analyzing which Acceptance Criteria Scenarios have to be checked to get the Epic to its Definition of done will provide a good framework for identifying manageable user stories.
Take an e-Commerce website’s Order Entry Feature:
As a customer I want to Order the Items in my shopping cart
If we consider this functionality based on potential scenarios, we can break down the item into:
Test Case 1: If a customer is signed in, Shipping Information from the profile is automatically added to their order
Test Case 2: If a customer is signed in, Billing Information from the profile is automatically added to their order once the credit card verification number is confirmed
Test Case 3: If a customer is not registered or signed in, they must manually enter their shipping information
Test Case 4: If a customer is not registered or signed in, they must manually enter their billing information
Test Case 5: If a product in the customer’s shopping cart is out of stock, it should be automatically added to their wish list
Test Case 6: Orders can be entered using a touchscreen monitor
Using this method for Slicing User Stories can actually help you apply the other methods implicitly. For example, by analyzing potential test cases, you will expose a number of business rules (#1, #2, #3 and #4), (un)happy flows (#3, #4 and #5) and even input options (#6). Occasionally, Test Cases will be very complex due to the work involved in setting up and completing the tests. Once we have created a list of possible test cases we can prioritize based on frequency of use of the feature being tested.  If a Test Case is not high on the priority list (not very common) or does not present a high enough risk, a Product Owner could decide to shelve the feature for the time being and focus on Test Cases that deliver more value. In the case of a very complex Test Case, we may decide to simplify (or Slice) the Test Case to prioritize and complete the most urgent feature elements. In any case the most relevant Test Cases can be easily translated into User Stories and added to the Sprint Backlog or Product Backlog.
Previous Method 2 – Slice by Workflow Steps *******Next Method 4 – Slice by Business Rules

Slicing User Stories Method 2

Slicing by Workflow Steps

Most anything that we would add to a solution and describe in a User Story is a process that has some sort of workflow.  In most cases these workflows can be broken down into individual steps. A large User Story with several workflow steps can be broken down into smaller user stories based on these workflow steps.

Consider the following User Story for an ecommerce website.

As a registered customer I want to purchase the items in my shopping cart so that my products can be delivered to an address I specify.

If we assume a fairly standard shopping cart and order entry process, we could identify the following steps:

As a registered customer I want to log in with my account so I don’t have to re-enter my personal information every time;
As a registered customer I want to review and confirm my order, so I can correct mistakes on my order before I pay;
As a registered customer I want to pay for my order with a credit card, so that I can confirm my order;
As a registered customer I want to pay for my order with a wire transfer, so that I can confirm my order;
As a registered customer I want to receive a confirmation e-mail with my order, so I have proof of my purchase;

As you see, there was more to this seemingly simple User Story than was originally apparent.  By breaking the User Story down into its individual workflow steps and considering the different options that a user may use to pay, we make the customer’s intentions much clearer to the developer implementing the functionality.

We must keep in mind the reason for creating User Stories in the first place; to engage the customer in conversation about desired functionality and clarify expectations before passing requirements on to the delivery team.  The more granular our User Stories, the more specific our discussion of desired functionality and Acceptance Criteria can be.

Knowing the details of the workflow the team can prioritize the functionality based on business value.  For example perhaps in the first release we only allow customers to pay with a credit card and we send the order confirmation manually or perhaps customers are required to enter their address information manually until saving addresses is available in release 2.

In any event having more granular user stories allows the delivery team to have detailed discussions about the desired functionality without missing key workflow steps or Acceptance Criteria.

Slice, dice and discuss!

Previous Method 1 – Slice by Happy vs Unhappy Flow ****** Next Method 3 – Slice by Test Scenarios

Slicing User Stories 7 Methods

In an Agile Development Project, the Solution Requirements are communicated from the customer to the delivery / development team using a standard notation easily understood by the delivery team and all stakeholders.  This standard notation is known as a User Story.  See our post on What Makes a Good User Story for more details.

When committing a User Story to a sprint in an agile project it is best that all the tasks necessary to take the User Story to the team’s stated Definition of Done can be completed within a single sprint.  In most cases a User Story so large that it cannot be completed within a sprint is a feature or epic that should be broken down into smaller components before being committed to the sprint.

There are several different ways we can go about breaking down or slicing a User Story.  We call it slicing to invoke the “Layered Cake Metaphor”.

As the theory goes we can only truly enjoy our cake if we take a vertical slice of the cake ensuring that we get all of the flavors from each layer including the frosting between layers.  Taking that concept to our layered application architecture this simply means that to really call a story “Done” we must be able to test and use the features introduced by the completion of the User Story.  If we don’t get each layer of the application framework in our “slice” then we can’t use the feature.  For example, a login feature is only useful if we have the login form at the user interface layer, some authentication logic at the business rules layer and data layer logic to compare the given username and password with values stored in a credential store.  We need each layer of the cake to complete the story.  If we only have the user interface layer we could enter the username and password but there would be nothing to compare it with.  With this in mind “how” we slice our cake / User Stories is as important as the slicing itself.

Common methods for slicing user stories are:
Slicing by Happy vs. Unhappy Flow
Slicing by Workflow Steps
Slicing by Test Scenarios
Slicing by Acceptance Criteria Rules
Slicing by Data Types or Parameters
Slicing by Operations
Slicing by Roles
See the posts on each method for details on how to slice or size your user stories for completion in a single sprint.

The Test Driven Development Rhythm

The Test-Driven Development (TDD) process follows a pattern known as the TDD Rhythm, which dictates the order in which elements of the solution should be created / edited.

Before we can successfully implement TDD a few key agile constructs must exist.  Most importantly we must have Tasks derived from User Stories (or requirements) that define the details of the required system feature.  These details would include the Acceptance Criteria for the feature described in the User Story.  We would then use the task details and Acceptance Criteria to define our tests.

The TDD Rhythm

1.  Write a Failing Test

The first step in the TDD Rhythm is to Write a Failing Test.  Using the Task Details, we write a test that exercises the functionality defined by the User Story and expects the returned value to match the value expected based on the Acceptance Criteria defined in the Task Details.

2. Run the Failing Test

Run the test to see it fail. This is an interesting step as, depending on your application architecture, it may require some minimal project structure to be created and project references made for your failing tests to even compile before they can run and fail. For example, if you are storing all of your Business Logic in a Class Library Project called BusinessRules that compiles as a Windows .dll and your tests are centrally stored in a Test Project, then your Class Library Project will have to exist and the Namespace, Class and Method will have to exist before your Test Project will compile and the tests will run and fail. Fortunately, Visual Studio includes code generation tools that will create the Classes and Methods as long as the Class Library Project exists and at least one class with a Namespace statement exists. The method stubs generated by Visual Studio will throw a NotImplementedException, which will obviously cause the test to fail.

3.  Write just enough code to pass the test

This can be a difficult concept to get your mind around, especially when the common simple TDD example code is used.  For example, take a method that simply returns a Boolean value to illustrate simple TDD method creation: if we start with a test that runs the required Boolean method and expects the method to return true, the code to pass the test would simply be return true;

[TestMethod]
public void TestGetBool()
{
    Assert.IsTrue(BoolApp.BoolHost.GetBool());
}

Figure A. Test Method to test GetBool Method


public static bool GetBool()
{
    return true;
}

Figure B. Minimal code needed to pass the test

With this example it may seem like a waste of time to write this minimal code to pass the test: it is clear that the test needs a returned value of true in order to pass, so where is the value in writing this useless passing test? Without writing the test that exercises the “unhappy path” through our method (aka the test that expects a return value of false) it is hard to see the value in a method that simply returns true with no additional implementation logic. An eager developer may want to just skip to writing implementation logic without wasting time on the simplest-code step of the TDD Rhythm, but follow the pattern, young Jedi. As seen in the slightly more complex method below that returns a formatted string, understanding how the output should be formatted in order to pass the test can potentially be much more difficult than just returning a Boolean value of true.
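For illustration, an “unhappy path” companion test could look something like the sketch below (a hypothetical example, not part of the original figures). Run against the minimal GetBool above it will fail, which is exactly the point: it forces the next pass to replace the hard-coded return true; with real logic.

[TestMethod]
public void TestGetBoolUnhappyPath()
{
    // Expects false; fails against the hard-coded "return true;" implementation,
    // driving the next refactoring pass to add real decision logic
    Assert.IsFalse(BoolApp.BoolHost.GetBool());
}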


[TestMethod]
public void TestWelcomeBack()
{
    string expected = "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
    string actual = WebSite.BizRules.WelcomeBack(user);
    string message = "We should get " + expected;
    Assert.AreEqual(expected, actual, message);
}

Figure C. Test Method to test WelcomeBack Method


public static string WelcomeBack(object user)
{
    return "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
}

Figure D. Minimal code needed to pass the test

In this example the Literal String returned, including the user name Antoine and the last visit date, would obviously need to be updated for each user and on each visit, but the formatting and welcome statement may also be important and could possibly come from a configuration file somewhere. The point is that the minimal code required to pass the test in this case acts as documentation for the method, including formatting requirements for returned values. On the next refactoring pass we would update the code in the method to extract the user name from the user object passed to the method, retrieve the date of their last visit from the membership database, and return it in the expected format. The String Literal is our formatting template; as we create the implementation code we know what the expected result format looks like.
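A hedged sketch of what that refactoring pass might produce is shown below. The User type, its Name and LastVisit properties, and the idea of loading the last visit from a membership store are illustrative assumptions, not part of the original example:

// Illustrative user type; in a real solution this would come from the membership system
public class User
{
    public string Name { get; set; }
    public System.DateTime LastVisit { get; set; }
}

public static string WelcomeBack(User user)
{
    // Build the greeting from the user's data instead of a hard-coded literal;
    // the format mirrors the expected value documented by the passing test
    return string.Format(
        "Welcome Back {0}! Your last visit to the site was {1:MM/dd/yyyy}.",
        user.Name,
        user.LastVisit);
}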

4. Run the Passing Test

At this point our method has just enough code to pass the test, but it does not necessarily meet the business requirement, nor does it allow us to perform the task described by the user story. This will become obvious as more tests are developed to test the “unhappy path” or as varying return values are expected by other tests. But at this point we understand what must be done for the test to pass and can keep that in mind as we refactor the code to make it meet the business requirement or for optimization purposes.

5. Refactor the Code

Depending on the “minimal” code we wrote to pass the test, our first refactoring pass may be to add required functionality or, if the required functionality already exists, we may be refactoring for Maintainability, Scalability or Performance Optimization. In any event, as we refactor the code for whatever reason, we can do so with the confidence that any changes we make have tests in place to ensure that we have not broken functionality that was already passing tests. If you make a change and all of a sudden tests that were passing stop passing, you know you have a problem. The tests can also be used for Gated Check-ins that require that any changes a developer makes to code must pass existing tests before they can be checked into Source Control, allowing bugs to be identified before they make it into our build and potentially out to customers.

The Expanded TDD Rhythm

6. Run All Tests

Once we have refactored our code to include desired functionality or to optimize for maintenance, scalability or performance, we need to run all tests to ensure that our changes did not break the method we were working on or any methods that depend on this method or its results. This is a necessary step to avoid failed check-ins on Source Control or Continuous Integration (CI) servers where Gated Check-ins are used. With gated check-ins your check-in cannot break the automated build and all tests must pass, or your check-in will be rejected and your code not allowed into source control until the issues are resolved.

7. Repeat

As changes are required we can continue to repeat this process of writing failing tests, coding, passing tests and refactoring until our code for the features we are adding is “perfect”.

https://en.wikipedia.org/wiki/Test-driven_development

https://en.wikipedia.org/wiki/User_story

https://en.wikipedia.org/wiki/Code_refactoring