Storing Infrastructure Secrets in Script

When migrating your organization's culture to the DevOps way, automation is a key component: not only automation of builds and testing, but also automation of infrastructure components. As I'm sure most readers are aware, building out infrastructure components usually requires elevated permissions, using credentials that we would prefer not be widely published. How do we accomplish this level of automation while keeping those elevated permissions secure, and still allow team members who don't necessarily have the required permissions to run the scripts?
Below are a few examples of secure credentials storage in infrastructure scripts.
PowerShell: https://blogs.technet.microsoft.com/robcost/2008/05/01/powershell-tip-storing-and-using-password-credentials/
AWS KMS: https://blog.fugue.co/2015-04-21-aws-kms-secrets.html
CyberArk: http://www.cyberark.com/solutions/by-project/application-credential-security/
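The PowerShell tip above relies on the Windows Data Protection API (DPAPI) under the hood; the same idea can be sketched in C# with System.Security.Cryptography.ProtectedData, which encrypts a secret so that only the current Windows user (or machine) can decrypt it. The file path and secret below are placeholders, and this sketch assumes Windows (or the ProtectedData NuGet package on .NET Core).

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class SecretStore
{
    // Encrypt a secret with DPAPI so only the current Windows user can
    // decrypt it, then persist the ciphertext to disk.
    public static void SaveSecret(string path, string secret)
    {
        byte[] encrypted = ProtectedData.Protect(
            Encoding.UTF8.GetBytes(secret),
            optionalEntropy: null,
            scope: DataProtectionScope.CurrentUser);
        File.WriteAllBytes(path, encrypted);
    }

    // Read the ciphertext back and decrypt it; fails for any other user.
    public static string LoadSecret(string path)
    {
        byte[] decrypted = ProtectedData.Unprotect(
            File.ReadAllBytes(path),
            optionalEntropy: null,
            scope: DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(decrypted);
    }
}
```

A script run by a team member without the elevated credentials can then load the stored secret only on the machine and account where it was saved, which is exactly the behavior the PowerShell link demonstrates.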

The Test Driven Development Rhythm

The Test-Driven Development (TDD) process follows a pattern known as the TDD Rhythm, which dictates the order in which elements of the solution should be created or edited.

Before we can successfully implement TDD, a few key agile constructs must exist.  Most importantly, we must have Tasks derived from User Stories (or requirements) that define the details of a required system feature.  These details include the Acceptance Criteria for the feature described in the User Story.  We then use the task details and acceptance criteria to define our tests.

The TDD Rhythm

1.  Write a Failing Test

The first step in the TDD Rhythm is to write a failing test.  Using the Task Details, we write a test that exercises the functionality defined by the User Story and expects the returned value to match the value defined by the Acceptance Criteria in the Task Details.

2. Run the Failing Test

Run the test to see it fail. This is an interesting step because, depending on your application architecture, some minimal project structure may need to be created, and project references made, before your failing tests will even compile, let alone run and fail. For example, if you store all of your business logic in a Class Library project called BusinessRules that compiles to a Windows .dll, and your tests are centrally stored in a Test project, then the Class Library project must exist, and the namespace, class, and method must exist, before your Test project will compile and the tests can run and fail. Fortunately, Visual Studio includes code generation tools that will create the classes and methods for you, as long as the Class Library project exists and contains at least one class with a namespace statement. The method stub generated by Visual Studio throws a NotImplementedException, which will obviously cause the test to fail.
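As a sketch, the stub Visual Studio generates for a method like the GetBool example used later in this post would look something like this (the BoolApp namespace and BoolHost class names match that example):

```csharp
using System;

namespace BoolApp
{
    public class BoolHost
    {
        // Stub produced by Visual Studio's code generation: it lets the
        // test project compile, but the test fails at run time because
        // the method throws NotImplementedException.
        public static bool GetBool()
        {
            throw new NotImplementedException();
        }
    }
}
```

With this stub in place the solution builds, the test runs, and we get the "red" result the rhythm calls for.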

3.  Write just enough code to pass the test

This can be a difficult concept to get your mind around, especially when the common simple TDD example code is used.  For example, take a method that simply returns a Boolean value to illustrate simple TDD method creation: if we start with a test that runs the required Boolean method and expects it to return true, the code needed to pass the test is simply return true;

[TestMethod]
public void TestGetBool()
{
    Assert.IsTrue(BoolApp.BoolHost.GetBool());
}

Figure A. Test Method to test GetBool Method


public static bool GetBool()
{
    return true;
}

Figure B. Minimal code needed to pass the test

With this example it may seem like a waste of time to write this minimal code to pass the test: it is clear that the test needs a returned value of true in order to pass, so where is the value in writing this seemingly useless passing test? Without the test that exercises the “unhappy path” through our method (aka the test that expects a return value of false), it is hard to see the value of a method that simply returns true with no additional implementation logic. An eager developer may want to skip straight to writing implementation logic without wasting time on the simplest-code step of the TDD Rhythm, but follow the pattern, young Jedi. As seen in the slightly more complex method below that returns a formatted string, understanding how the output should be formatted in order to pass the test can potentially be much more difficult than just returning a Boolean value of true.


[TestMethod]
public void TestWelcomeBack()
{
    string expected = "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
    string actual = WebSite.BizRules.WelcomeBack(user);
    string message = "We should get " + expected;
    Assert.AreEqual(expected, actual, message);
}

Figure C. Test Method to test WelcomeBack Method


public static string WelcomeBack(object user)
{
    return "Welcome Back Antoine! Your last visit to the site was 02/01/2016.";
}

Figure D. Minimal code needed to pass the test

In this example the literal string returned, including the user name Antoine and the last visit date, would obviously need to change for each user and on each visit, but the formatting and welcome statement may also be important and could possibly come from a configuration file somewhere. The point is that the minimal code required to pass the test in this case acts as documentation for the method, including the formatting requirements for returned values. On the next refactoring pass we would update the method to extract the user name from the user object passed in, retrieve the date of the user's last visit from the membership database, and return both in the expected format. The string literal is our formatting template: as we write the implementation code, we know what the expected result should look like.
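That refactoring pass might look something like the sketch below. The User type, its Name and LastVisit properties, and the exact date format are assumptions for illustration; in the real application the values would come from the user object and the membership database.

```csharp
using System;
using System.Globalization;

// Hypothetical user type standing in for the real membership data.
public class User
{
    public string Name { get; set; }
    public DateTime LastVisit { get; set; }
}

public static class BizRules
{
    public static string WelcomeBack(User user)
    {
        // The string literal from the minimal passing code serves as the
        // formatting template for the refactored implementation.
        return string.Format(
            CultureInfo.InvariantCulture,
            "Welcome Back {0}! Your last visit to the site was {1:MM/dd/yyyy}.",
            user.Name, user.LastVisit);
    }
}
```

The test written in step 1 still passes for Antoine's data, but the method now works for any user rather than only the hard-coded literal.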

4. Run the Passing Test

At this point our method has just enough code to pass the test, but it does not necessarily meet the business requirement, nor does it allow us to perform the task described by the user story. This becomes obvious as more tests are developed to test the “unhappy path,” or as other tests expect varying return values. But at this point we understand what must be done for the test to pass, and we can keep that in mind as we refactor the code to meet the business requirement or for optimization purposes.

5. Refactor the Code

Depending on the “minimal” code we wrote to pass the test, our first refactoring pass may add required functionality; if the required functionality already exists, we may instead refactor for maintainability, scalability, or performance optimization. In any event, as we refactor the code, we can do so with the confidence that any changes we make are covered by tests that will catch a change that breaks already-passing functionality. If you make a change and tests that were passing suddenly stop passing, you know you have a problem. The tests can also be used for gated check-ins, which require that any changes a developer makes pass the existing tests before the code can be checked into source control, allowing bugs to be identified before they make it into the build and potentially out to customers.

The Expanded TDD Rhythm

6. Run All Tests

Once we have refactored our code to include the desired functionality, or to optimize for maintainability, scalability, or performance, we need to run all tests to ensure that our changes did not break the method we were working on, or any methods that depend on it or its results. This is a necessary step to avoid failed check-ins on source control or Continuous Integration (CI) servers where gated check-ins are used. With gated check-ins, your check-in cannot break the automated build, and all tests must pass, or your check-in will be rejected and your code kept out of source control until the issues are resolved.

7. Repeat

As changes are required, we can continue to repeat this process of writing failing tests, coding, passing tests, and refactoring until the code for the features we are adding is “perfect.”

https://en.wikipedia.org/wiki/Test-driven_development

https://en.wikipedia.org/wiki/User_story

https://en.wikipedia.org/wiki/Code_refactoring

SQL Injection Attack Demo

A SQL Injection Attack is one of the many security issues that must be addressed when designing and developing applications that access a database.  The injection vulnerability is potentially present on pages or forms where the user enters a value that is submitted to the server. If the user input is not properly validated and the database doesn't protect itself, then a SQL Injection Attack (SQLiA) can occur. I have posted a sample application under the Demos link in the Downloads section of the main portal. To download the SQL Injection Attack sample web site and SQL script, click here: VB.Net Version or C# Version. To run this demo code you will need Visual Studio 2008 or higher and SQL Server 2000 or higher installed.

SQL Injection Demo:

The SQLInjectionDemo.zip file consists of a T-SQL script file named CustomerOrdersDB_SQLInjection.sql, used to create the database, tables (including sample data), and stored procedures (the stored procedures were created using CRUD Script), and a Visual Studio 2008 solution called Demo.sln. The Visual Studio solution contains 2 pages, SQLInjection.aspx and SQLInjectionFixed.aspx, that, as the names imply, illustrate a page that is vulnerable to SQL Injection and one that is not (not all possible SQL injection attacks are prevented, but most).

To test the search feature that's vulnerable to SQL Injection:

1. Open the Solution (Demo.sln)
2. Select SQLInjection.aspx in the Solution Explorer
3. Press Ctrl + F5 or select Start without Debugging from the Debug menu
4. Type Antoine into the search box
5. Press the Search button

Note: You will notice that the results displayed on the page are filtered to show only Antoine Victor. Try a couple more searches, then continue:

6. Copy the injection statement from the bottom of the page
7. Paste it into the search box
8. Press the Search button

Note: You will see a list of all of the tables defined in the current database and all columns defined in those tables. Think credit card table, or employee table with Social Security Numbers. Armed with this information, a hacker could use the same SQL Injection vulnerability in this page to request columns and rows from the credit card or employees table.
Fortunately there is a relatively easy fix for this. The fix is a 2-part process: first, we validate the user input before sending it to the server and remove any special characters or malicious code; second, we make all calls to the database through stored procedures (created automatically using CRUD Script or the SSMS Toolkit).
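The second part of that fix, calling the database through a stored procedure with parameters instead of concatenating user input into a SQL string, might look like the following sketch. The connection string, procedure name (CustomerSearch), and parameter name are placeholders for illustration, not the actual names from the demo project.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class CustomerSearchPage
{
    // Parameterized stored procedure call: the user's input travels as a
    // typed parameter value and is never concatenated into the SQL text,
    // so injected SQL is treated as data rather than executed.
    public static DataTable Search(string connectionString, string searchText)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("CustomerSearch", conn)) // placeholder proc name
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@SearchTerm", SqlDbType.NVarChar, 50).Value = searchText;

            var results = new DataTable();
            new SqlDataAdapter(cmd).Fill(results);
            return results;
        }
    }
}
```

Contrast this with the vulnerable page, where something like "SELECT ... WHERE Name LIKE '" + searchText + "'" lets the injection statement from the demo rewrite the query itself.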
To see the page with the injection issue resolved, navigate in the current browser window to SQLInjectionFixed.aspx and follow the previous steps. The SQL Injection issue is now resolved.
For a list of other common injection attacks to test with this demo see: SQL Injection Cheat Sheet.

YouTube Demo showing the SQL Injection Fix:

Add stored procedures to prevent SQL Injection:

Auto File Growth by Percentage of file size

In SQL 2000 and before, setting auto file growth to a percentage meant that the file would grow by 10% of the initial file size (regardless of how large the file had grown since its creation).
In SQL 2005 and later, the growth-by-percentage setting is based on the “current” file size, not the “initial” file size.  This means the amount that the file grows will vary greatly from the time the file is created to its size 2 years later.
The ProDataMan way
In SQL 2005 and later, do not use a percentage setting; use a fixed megabyte setting based on the amount of data that your users typically add between maintenance periods.
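For example, a fixed-growth setting could be applied with T-SQL along these lines. The database name, logical file name, and the 256 MB increment are placeholders; choose an increment based on how much data your users actually add between maintenance windows.

```sql
-- Replace percentage growth with a fixed increment (placeholder names/sizes)
ALTER DATABASE CustomerOrdersDB
MODIFY FILE (NAME = CustomerOrdersDB_Data, FILEGROWTH = 256MB);
```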

SQL Server 2005 Bug

I was just made aware of a new bug (oops, I mean undocumented feature) in SQL 2005 that prevents the backup of file groups that scroll beyond the viewable area of the Backup dialog box.
This is not a problem if you have only a few file groups, but if you have enough that you can't see them all without scrolling, then you will only be able to back up the ones you can see before you touch the scroll bar.
Thanks Peter!!