Follow ProDataMan on Facebook, YouTube and Twitter

We geek out with no limits on Facebook @ProDataMan. There will be posts about underwater hotels that have nothing to do with Programming, SQL Server, or DevOps, but it will always be cool high-tech stuff…

Follow us on Twitter @ProDataMan to be notified when we add a new video to a playlist on the YouTube channel. We are currently curating Cucumber acceptance testing videos for a course I'm working on. Twitter followers will get a notification every time I add a new Cucumber video to the Agile Testing playlist.

The ProDataMan YouTube channel @ProDataManTrains focuses mainly on Information Technology topics with demos and how-to videos. ProDataManTrains also live streams Agile and DevOps-related topics from live courses.

On the ProDataMan blog we try to stick to Information Technology related topics. There will occasionally be a topic too good to pass up, like the recent sale of Google Assistant smart speakers at Best Buy for $29. The Insignia (Best Buy's house brand) smart speaker with Google Assistant has far better sound and bass response than the $129 Google Home.

Facebook – www.Facebook.com/ProDataMan
Twitter – www.Twitter.com/ProDataMan (@ProDataMan)
YouTube – www.YouTube.com/ProDataManTrains
Blog – http://www.prodataman.com/Home/Blog

Best Agile / DevOps Open Source Tool Chain

Historically I have been a Microsoft C# guy, but the more I work with non-Microsoft shops, hybrid environments, and Java guys running around everywhere, the more curious I have become about open source tool chains for Agile and DevOps.
We use Team Foundation Services for work item tracking, planning, Continuous Integration, and Continuous Deployment to QA and Stage environments in Azure. That's all fine and good for projects built almost entirely on the Microsoft platform, but when there are more Java guys on the team than C# guys, the holy wars begin.
I love the deep integration between the tools on the Microsoft stack, obviously born from vendor lock-in, but I am totally open to a more open-source, vendor-agnostic solution. I just haven't been able to find one that provides the features I'm looking for.
Base level requirements are as follows:

  • A tool that provides Epic / Story management and visualization (Kanban / Burndown).
  • A tool for source / version control that integrates well with the work item tracking tool and CI server to allow gated check-ins (reject the check-in if the build or tests fail).
  • A Continuous Integration server that can notify source control of failed builds and tests so the check-in can be rejected, and that notifies the work item tracking tool so a bug work item can be created and assigned to the user who committed the bad code.
  • A release automation tool / plug-in that can trigger a release based on a successful CI build and test run.
Does this tool chain only exist in the land of flying reindeer and unicorns?

Git and GitHub work fine for source / version control and integrate with almost everything, but gated check-ins and automatic bug creation have been elusive thus far.

Anyone have this working already? Any suggestions?

Create a Time Dimension using the SQL Server Data Tools Dimension Wizard

If your database has no Time or Date table, you can use the Dimension Wizard in SQL Server Data Tools (SSDT) to generate your Time Dimension. You can have the tool generate a Time table either in the data source (if you have permissions) or on the server. When creating your Time table with the Wizard, you can specify the Time or Date range; the table will include the dates / times between your specified start and end points.

See the article on MS Docs below for more details on creating Time Dimensions automatically using the Dimension Wizard:

https://docs.microsoft.com/en-us/sql/analysis-services/multidimensional-models/create-a-time-dimension-by-generating-a-time-table?view=sql-server-2017

Currently Reviewing Open Source Agile Tools

I'm looking for the best open source tools for running agile projects. The goal of this little experiment is to create a CI / CD pipeline that includes planning, task management, source control / versioning, triggered build and test, and deployment to the cloud.

Today I'm experimenting with Taiga, an open source planning and task management tool. So far the interface is intuitive, and it has most of the features and data points I would expect to capture during planning.

For free you can have 3 team members and 1 private project (unlimited public projects). There are Epics, Stories, and Sub-tasks to track. There are Sprints, Backlogs, and Kanban boards to view. It even has an issue tracker and a wiki, and you can link your project timeline to a Slack channel to share project updates.

So far this tool is looking pretty good for free. Are there other free tools I should be looking at? I'm looking for integration with Git and Jenkins to automate builds and tests. The golden feature is gated check-ins! If there is a free open source solution that associates a check-in with an assigned sub-task in version control, triggers a build in Jenkins, creates an issue (bug) in work item tracking if the build or tests fail, and deploys to the cloud if they succeed, the contest is over! If you know of this magical free toolset, please leave links in the comments.

I’ll post a video and screenshots shortly with a more detailed review.

Insignia Voice Smart Portable Bluetooth Speaker and Alarm Clock with Google Assistant Multi NS-CSPGASP2 – Best Buy

I have a few smart speakers scattered around my place from both Amazon and Google, and both have their strengths and weaknesses. However, when it comes to usefulness as a nightstand alarm clock, devices from both companies fall short.

First, the Google Home and Google Home Mini do not have screens to view the current time, so you are left to ask what time it is and hope the volume isn't so loud that you wake everyone in the house. Amazon's Echo and Echo Dot devices are no different.

The Google Home Hub and the Echo Show do better, as they both have a display to view the time, photo and video content, and other visual information. However, both devices lack what I consider to be a crucial, make-or-break, deal-breaking feature… a USB port to charge my device while I sleep! I would be happy with a wireless charging pad on top or in the back… but no way to charge my phone at all?? This is an absolute requirement IMO for a device to be considered for the nightstand.

This device, however, has a USB charging port to charge my phone. Charge speed doesn't matter as I will be asleep for at least 4 hours… It has a screen so I can see the current time. And the alarms can be set with your voice, so there's no need to fumble with buttons. For $25 on sale you can't beat the price! You can't buy an Echo Dot or Google Home Mini for close to that (double that in most cases).

I’m ordering one now… I’ll update this post once it’s on the nightstand.

https://www.bestbuy.com/site/insignia-voice-smart-portable-bluetooth-speaker-and-alarm-clock-with-google-assistant-gray-black/5865906.p?skuId=5865906&ref=199&loc=0JlRymcP1YU&acampID=1&siteID=0JlRymcP1YU-TfKqK9fp.vmvBO0HqGM0dQ

CRUD Script and SSMS Toolkit

Using stored procedures in your data access code from ASP.NET applications stops most (not all) SQL injection attacks and also ensures that the query is executed with the same parameters in the same order and format each time, allowing the query optimizer to reuse the same query plan on subsequent executions. So it makes good sense to use stored procedures for almost all access to your database. The only problem with this practice is the time that it takes to create at least 4 stored procedures for each table in your database. We need a procedure for Insert, one for Select, one for Update, and one for Delete. We may even need additional stored procedures to get customers by email or to search for customers by FirstName or LastName. In a database that has 1,000 tables, that means at a minimum we are creating 4,000 stored procedures.
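Before looking at how to generate all of those procedures, here is a minimal C# sketch of what calling one of them from ASP.NET data access code might look like; the procedure, parameter, and column names (dbo.Customer_SelectByEmail, @Email, FirstName) are hypothetical and just for illustration.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class CustomerData
    {
        // Calls a hypothetical dbo.Customer_SelectByEmail stored procedure.
        // CommandType.StoredProcedure plus SqlParameter keeps user input out of
        // the SQL text (blocking most injection) and sends the same parameterized
        // call shape every time, so the cached query plan can be reused.
        public static string GetCustomerFirstName(string connectionString, string email)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.Customer_SelectByEmail", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.Add("@Email", SqlDbType.NVarChar, 100).Value = email;

                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    return reader.Read() ? (string)reader["FirstName"] : null;
                }
            }
        }
    }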
So in order to lighten the DBA's workload, we can use an SSMS add-in (SSMS = SQL Server Management Studio) or a CRUD script (CRUD = Create, Read, Update, Delete) to automate the creation of our Insert, Select, Update, and Delete statements.
I found a nifty little script that creates stored procedures for Select, Insert, Update, and Delete for all of the tables in a database, or for a single table when the TableName variable is set. You can find this script in the Demos folder on the ProDataMan Portal with the name ISUD with Prefix and Schema Support.sql, or you can use the following link to download: CRUDScript
*New: I finally updated CRUDScript for Schema support!
Someone told me about a feature of the SSMS Toolkit, an SSMS add-in available here: SSMS Toolkit
This tool allows you to create CRUD stored procedures for tables based on fully customizable templates that you can change to suit your needs. But this tool does so much more!! See the Features page for more details.

Custom Errors Series: Part 1 – What is an Exception

What is an Exception
An Exception is an object that is created by .NET when an error or unexpected state is encountered while executing application code. All Exceptions inherit from the System.Exception class. The Exception object that is created when an error occurs contains error details, including the stack trace and potentially any underlying (inner) exceptions that may have occurred. The errors that occur fall into three basic categories, only one of which is potentially exposed through Exception objects:

  • Syntax / Compile Errors
  • Runtime Errors
  • Logic Errors

These errors are listed in order from easiest to most difficult to find.

Syntax errors represent a coding error that the compiler can generally find for you, so they are the easiest to find. In most Integrated Development Environments (IDEs) the offending code will be underlined with a red squiggly.

Runtime errors are a little more difficult to find because, as the name implies, they do not occur until runtime, when a user actually runs the application. A runtime error will only occur if the user presses the button or performs the action that runs the piece of code containing the error. When the code is executed and the error occurs, an Exception object is created. It is up to the developer to plan for this and "handle" the Exception in a graceful way. However, if no one ever presses the button, the error may never be found.
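Here is a minimal, hypothetical C# sketch of handling such a runtime error gracefully; the try/catch, StackTrace, and InnerException members are standard .NET, but the scenario (parsing a quantity typed by the user) is just an illustration.

    using System;

    class OrderForm
    {
        // Imagine this runs when the user clicks a Submit button.
        // If the quantity box contains "abc", int.Parse throws a FormatException
        // at runtime; without the catch block the application would crash.
        public static void Submit(string quantityText)
        {
            try
            {
                int quantity = int.Parse(quantityText);   // runtime error if not a number
                Console.WriteLine($"Ordering {quantity} items.");
            }
            catch (FormatException ex)
            {
                // Handle the Exception gracefully: log the details and inform the user.
                Console.WriteLine(ex.Message);        // human-readable description
                Console.WriteLine(ex.StackTrace);     // where the error occurred
                Console.WriteLine(ex.InnerException); // any underlying exception (null here)
                Console.WriteLine("Please enter a whole number for the quantity.");
            }
        }
    }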

Logic errors are the most difficult to find because even when the offending code is executed, no error occurs and no Exception object is created. The error could be as simple as text behind a graphic instead of in front of it, or a decimal point in the wrong place. In any event, it takes a human or an automated test checking the expected output for known inputs to find the error.
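As a hypothetical example of a logic error, the method below runs without ever throwing an exception, but a misplaced decimal point gives the wrong answer; only a human, or an automated test comparing the output for a known input against the expected value, would catch it.

    using System;

    class Pricing
    {
        // Intended to apply a 10% discount, but the decimal point is in the wrong place.
        // No exception is ever thrown; the code simply returns the wrong value.
        public static decimal ApplyDiscount(decimal price)
        {
            return price - (price * 0.010m);   // bug: should be 0.10m (10%), not 1%
        }

        static void Main()
        {
            // A known input and expected output expose the logic error:
            // a 10% discount on 100.00 should be 90.00, but this prints 99.000.
            Console.WriteLine(ApplyDiscount(100m));
        }
    }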

For more on the different types of exceptions, see the next post in this series:

Custom Errors Series – Part 2 – Types of Exceptions

Story Points Estimation

When planning an agile project, creating User Stories and estimating their complexity is an important step to provide your customer and delivery team with a clear understanding of the solution being developed. Estimating the complexity of a User Story is typically done by a Product Owner during or after a meeting with a customer, then verified and approved by the delivery team during release and sprint planning. Make no mistake: this is a consensus process, not a majority-rules estimation process. While the Product Owner gets first stab at story point estimation, it is the delivery team that will be responsible for doing the actual implementation. The delivery team should never commit to adding a story to a sprint without first having a conversation about the delivery team tasks required to bring the story to the team's stated definition of done.

Since we are estimating relative levels of complexity and not actual hours, a modified Fibonacci sequence can be used for estimating the User Stories received by the development team. This will help keep the team from getting bogged down looking for exact estimates and allow them to "round up" to the next level of complexity.

0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100

Complexity vs Hourly Estimates
Humans are not very good at estimating actual time for complex activities. But we happen to be very good at estimating relative complexity: this will be about as hard to do as that was. So when estimating at a high level, such as story points, it is best to keep those estimates at the relative story point level and save the more precise, detailed estimates for the delivery team tasks captured during release and sprint planning. Also, whenever possible it is best to keep User Stories to a size that will fit within a single sprint, and even better to keep them down to 1-3 day sized chunks.

Ultimately, delivery team tasks will be nested beneath the User Stories at a more granular level, so we can save the time estimates for these smaller work items. The sweet spot for tasks nested beneath User Stories is 2 to 4 hour chunks. After 6-10 sprints and sprint planning meetings, your team's story point estimates should be pretty accurate.

Since the team's capacity describes the number of story points the team can finish in a sprint, and a sprint is a time-boxed event, if we accurately estimate the number of story points we can finish in a sprint, we can extrapolate the number of hours required to complete the committed story points.
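As a rough, hypothetical illustration (the velocity and hour figures below are made up, not a rule), a few lines of C# show the extrapolation from story points to hours:

    using System;

    class SprintForecast
    {
        static void Main()
        {
            // Hypothetical numbers: substitute your own team's history.
            double averageVelocity = 30;   // story points finished per sprint
            double sprintTaskHours = 240;  // task hours available in one sprint
            double committedPoints = 28;   // points the team is committing to

            double hoursPerPoint = sprintTaskHours / averageVelocity;  // 8 hours per point
            double estimatedHours = committedPoints * hoursPerPoint;   // 224 hours

            Console.WriteLine($"Roughly {estimatedHours} task hours for {committedPoints} points.");
        }
    }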

A great way to start the conversation about task estimates is to play Planning Poker. Here is a great video to get you started: https://youtu.be/MrIZMuvjTws

User Story Slicing
If a User Story is too large to fit into a single sprint, it may be an Epic or Feature masquerading as a User Story; in this case it is best to break the large, complex User Story down into smaller chunks to make it easier to understand. We call this process slicing or sizing the User Story. If we think of the User Story as a slice of double chocolate layer cake (the flavor is irrelevant, but call it a craving), then we can think of our slicing efforts as cutting the slice from top to bottom, not simply peeling off the top layer (otherwise you miss out on the frosting between the layers).

Slicing the cake vertically means we follow our business process from the User Interface layer all the way through to any data access components that might be involved. In other words, if we have a log-in user story, we can actually log in because the UI, data layer, and database required by the story are all in place.
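As a hypothetical sketch of what a vertically sliced log-in story might deliver (the class names and layers below are illustrative, not prescriptive), notice that one thin path runs from the entry point through a small service down to a data access component, so the story is usable end to end:

    using System;
    using System.Collections.Generic;

    // Data access layer: just enough to look up one user for the log-in story.
    interface IUserRepository
    {
        string GetPasswordHash(string userName);
    }

    class InMemoryUserRepository : IUserRepository
    {
        // Stands in for the real database table this story would also create.
        private readonly Dictionary<string, string> _users =
            new Dictionary<string, string> { { "alice", "hash-of-secret" } };

        public string GetPasswordHash(string userName) =>
            _users.TryGetValue(userName, out var hash) ? hash : null;
    }

    // Business layer: the single behavior the story needs.
    class LoginService
    {
        private readonly IUserRepository _repository;
        public LoginService(IUserRepository repository) => _repository = repository;

        public bool LogIn(string userName, string passwordHash) =>
            _repository.GetPasswordHash(userName) == passwordHash;
    }

    // UI / entry point layer: with this in place the user can actually log in,
    // so the slice delivers every layer instead of only the top one.
    class Program
    {
        static void Main()
        {
            var service = new LoginService(new InMemoryUserRepository());
            Console.WriteLine(service.LogIn("alice", "hash-of-secret")); // True
        }
    }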

Small Increments
Let's break things down into smaller components. The larger the user story, the more moving parts it has and the larger the margin for error. Also, if the user story is too large to fit in a sprint, it will affect the team's apparent capacity and velocity, as the burndown chart will not move until the story is marked as complete. A large user story will have many delivery team tasks nested beneath it; each of these tasks will have its own time estimate, and since the sweet spot for these tasks is 2 hours, a large user story could potentially have tens of tasks associated with it.

Story as a container for work items
The User Story is a high-level, nontechnical customer requirement and is meant to ease communication between the customer, Product Owner, and delivery team. As such, the user story is not the place for technical detail; that is the realm of the delivery team task (formerly known as the developer task). The story point complexity rating has a direct impact on the number and size of delivery team tasks to be expected for each user story. As a general rule, it is best to keep tasks to small, workable chunks created and assigned in 2-hour increments. Two-hour delivery team tasks make estimation far more precise by reducing the margin of error to minutes instead of hours or days. During sprint planning we should strive to identify about two-thirds of the required technical delivery team tasks, since identifying 100% of them up front takes more effort and time than it is worth. We should spend on average 2-4 hours in sprint planning for each week of the sprint.
The more complex the User Story, the larger the delivery team tasks will be. The larger the delivery team task, the less accurate the task time estimates will be. Put simply, the more we reduce the amount and size of work in progress, the more accurate our time and complexity estimates will be. See our post on slicing User Stories for more detail on how to size or slice large and complex User Stories.