SSDT And Pre-Model Scripts Part 2: Execute Pre-Model Scripts Using Octopus Deploy


Part one of this post covered the notion of SSDT and pre-model scripts. This post covers an actual implementation: executing a SQL script using Octopus Deploy. My pull request on GitHub was approved, so my step template can be exported from the Octopus Library.

The only part that I feel requires some explanation is the SQL scripts and the option to select a previous step. If you’ve read part one of this post you’ll know that you keep the pre-model scripts in a folder in the database solution. These are not compiled (i.e. they are not part of the build) but are included as part of the output and packaged up in the NuGet package. When the NuGet package is unpacked, it’s then a case of locating the scripts and executing them.

First, the order. Extract the NuGet package that includes the dacpac and the post-deploy scripts and then add the execute SQL step:


In the step template for executing SQL there are Octopus parameters. These define the step in which the scripts were unpacked, and the final path to the scripts themselves:

And these are the values of the parameters:


So now when we execute the step, we can see which scripts were executed:
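Under the hood, the step boils down to enumerating the unpacked scripts and running each one in order. Below is a minimal sketch of that loop; $PreModelScriptPath and $TargetConnectionString are hypothetical parameter names used for illustration, not the template’s actual ones:

```powershell
# Find every pre-model script in the unpacked package and run them in name order.
# $PreModelScriptPath and $TargetConnectionString are illustrative names only.
$scripts = Get-ChildItem -Path $PreModelScriptPath -Filter "*.sql" | Sort-Object Name
foreach ($script in $scripts) {
    Write-Host "Executing $($script.Name)"
    Invoke-Sqlcmd -ConnectionString $TargetConnectionString -InputFile $script.FullName
}
```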


When To Use Octopus Deploy Script Modules


Lately I’ve been thinking a lot about script modules in Octopus Deploy. Script modules, for those of you not in the know, are collections of PowerShell cmdlets or functions that exist outside of a script step and can be used across multiple steps in different projects/environments. They don’t have to contain functions exclusively; they can also contain code that will get executed upon the initial import of the module. The issue here is that these script modules are imported for every step in a process. So unless the code you write is idempotent, you’ll probably cause an error in the process. This is by design.
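To illustrate the idempotency point, here is a hypothetical script module of my own devising: the function definitions are safe to import repeatedly, but any top-level code runs on every import, for every step, so it has to be guarded:

```powershell
# Function definitions are harmless to re-import across steps.
function Test-RequiredParameter {
    param ([string] $name, [string] $value)
    if ([string]::IsNullOrWhiteSpace($value)) {
        throw "Parameter '$name' is required."
    }
}

# Top-level code like this executes on EVERY import of the module,
# so guard it to keep it idempotent:
if (-not (Test-Path "C:\Deploy\Logs")) {
    New-Item -ItemType Directory -Path "C:\Deploy\Logs" | Out-Null
}
```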

So at first I thought “put nothing in script modules”, because they’re pretty pointless, right? Anything you write will pertain to a step exclusively, so why would you need to share code across steps? Turns out quite a bit of code is shared across steps, particularly checking and validating parameters.

Then, as step templates got more sophisticated, they got harder to read, so I thought “put everything in script modules!” It aided the legibility of the step templates and allowed them to focus mainly on setting up a process, with the script modules doing the actual work.

There is, however, an issue with this: script modules are the “magic” of a step template. If a step template requires a script module, that dependency is implicit: how do you communicate it? You could mention in the step template that a script module is required, but this is still a manual process that relies on everyone remembering it, or even buying into it.

There is also another issue: I had to make a change to a step template that required a change to the script module. The step template in question was in heavy use, so careful planning was required to make sure that the change to the step template and the script module was not a breaking change. What I mean is that whereas a step template has to be explicitly updated for a given step in a deployment pipeline, the same cannot be said for a script module: whatever the latest version of the script module is, is what a user gets. And if you make a few changes to the step template in an iterative manner, you have to be sure that if anyone updates the step template, all the versions still match one another.

Where Will SSDT Create My Database?

SSDT is great for many things, but one thing it’s not great at is giving the user control over data and log file creation. It uses the predefined variables DefaultDataPath and DefaultLogPath. These variables are system defined and cannot be overridden. Instead it uses the default locations that are set on the instance we are deploying to. Humph. And I’d wager that if these are null then it will just use the system database settings. This is not particularly useful for anyone.

You may think “Well OK, I’ll just use a pre-deployment script”. Nice try. I thought so too. But as it turns out, pre-deployment scripts aren’t pre-deployment scripts per se. If you add a pre-deployment script to create a database, it’ll be run AFTER the initial database creation (go on, try it yourself if you don’t believe me, but I have saved you the trouble already!)

Clearly, we have to create a shell database outside of SSDT, so that we can control where the files, and the primary filegroup, are created. If you’re using something like Octopus Deploy, TeamCity or VSTS, it’s simple enough to create a step that will create this shell database for you. Then everything else (logins, users, tables etc.) can be created using SSDT. And if you have more files/filegroups to be created, you can use SSDT to set these up; what I’m talking about is the initial creation of the database.

There’s no shortage of options as to how to do this: you could run a SQL file with the CREATE statements, and this would work, providing you have the correct “if not exists, create database” logic, but this may not be particularly useful for re-use.

If you want to go for something that can be re-used, then I think SMO is your friend. It can be downloaded via the Feature Pack, so practically any box can have it deployed (by which I mean you don’t need a full SQL Server install to have access to SMO); it works well with PowerShell; and there’s no shortage of examples online of how you can create a script that does what you need. This post from Allen White gives a good overview of SMO in PowerShell. And if you’re using something like Octopus, then this script also helps give a good start on creating a database using SMO. Where I think this step template in the Octopus library falls short is that it only creates the database, and does not set up the files and filegroups. But if you combine Allen’s scripts with the idea of the step template, you can pretty much see where I am coming from. It’s also worth setting the recovery model using SMO, and not worrying about it in SSDT.
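To make this concrete, here is a hedged sketch of creating a shell database with SMO, controlling the file and filegroup locations up front; the server name, database name and paths are illustrative, not taken from the step template:

```powershell
# Load SMO (era-appropriate; on newer boxes the SqlServer module does this for you).
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$db = New-Object Microsoft.SqlServer.Management.Smo.Database($server, "MyShellDb")

# Primary filegroup and data file, created exactly where we want them.
$fg = New-Object Microsoft.SqlServer.Management.Smo.FileGroup($db, "PRIMARY")
$db.FileGroups.Add($fg)
$dataFile = New-Object Microsoft.SqlServer.Management.Smo.DataFile($fg, "MyShellDb_Data")
$dataFile.FileName = "D:\SQLData\MyShellDb.mdf"
$fg.Files.Add($dataFile)

# Log file on its own drive, so it never lands on the C drive by accident.
$logFile = New-Object Microsoft.SqlServer.Management.Smo.LogFile($db, "MyShellDb_Log")
$logFile.FileName = "E:\SQLLogs\MyShellDb.ldf"
$db.LogFiles.Add($logFile)

# Set the recovery model here rather than worrying about it in SSDT.
$db.RecoveryModel = [Microsoft.SqlServer.Management.Smo.RecoveryModel]::Simple

# Only create the shell database if it doesn't already exist.
if ($server.Databases[$db.Name] -eq $null) { $db.Create() }
```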

I’m not wild about the use of scripts outside of the database project to deploy the database; it implies “magic”: behind-the-scenes changes that some people won’t know exist. And usually I’d advocate the use of SSDT pre/post-deploy scripts to manage everything within the database. But databases are hard to deploy: they’re not a bunch of dlls that can be rolled back. A minor error early in the setup of a deployment pipeline can cause massive problems further down the process. And file creation is the first thing you do. What if the log file is created on the C drive? How hard can it be to move log files? Very hard, is how. And creating a secondary log file to compensate is awful. And try removing that first log file afterwards. All these issues could be avoided with a bit of forethought.

Remember it’s no good assuming that the instance you are deploying to has everything set up the way you like it. You have the control over this stuff when you are using CI/CD properly, and you have to take responsibility for database deployments.

PowerShell Snippet: Trimming The String!


When building up URLs from different parameters in something like TeamCity or Octopus, it’s simple enough to end up with a double “//” in the URL if the parameters are not consistent. So little helper functions are always useful to have imported to manage such things. Below is an example of such a thing!

function Update-Parameter {
    param (
        [string] $param,
        [string] $character,
        [bool] $trimstart = $false,
        [bool] $trimend = $false
    )
    if ($trimstart) {
        if ($param.StartsWith($character)) {
            $param = $param.TrimStart($character)
        }
    }
    if ($trimend) {
        if ($param.EndsWith($character)) {
            $param = $param.TrimEnd($character)
        }
    }
    return $param
}

$url = ""
$lan = "/en-us/"
$ext = "/professional-program/data-science/"
$trim = "/"

$output = "Before Trim {0}/{1}/{2}" -f $url, $lan, $ext
Write-Host $output

$url = Update-Parameter $url $trim $false $true
$lan = Update-Parameter $lan $trim $true $true
$ext = Update-Parameter $ext $trim $false $true

$output = "After Trim {0}/{1}/{2}" -f $url, $lan, $ext
Write-Host $output

SqlServer PowerShell Modules NuGet Package Now Available


Back in the July update of SSMS 2016, a bunch of new SQL PowerShell functions were added, plus two neat additions to Invoke-Sqlcmd: -OutputAs, which allows you to output the result set as a data object (e.g. DataRow, DataTable etc.), and -ConnectionString, which allows you to pass in a connection string instead of using the pre-defined parameters. All very useful stuff, go and have a read.

However, this update has two issues. Firstly, it does not update the classic sqlps module, but rather creates a new module: sqlserver. This new module will be regularly updated and delivered with SSMS updates, while sqlps will only be affected by changes from the DB Engine: Cumulative Updates, Service Packs etc. So there’s quite a bit of overlap here.

Secondly, I don’t like how this new module is being delivered via a monthly release of SSMS. That’s a 900MB download and a long install to gain access to a module that is only 2MB in size. To make matters worse, the SQL Server 2016 Feature Pack, which includes a SQL PowerShell msi, IS NOT being updated! Waah!

So this may just seem like a whinge, but I have checked, and it is possible to install this module on its own and use it without having anything else installed other than a couple of prerequisites. So I’m going to be pro-active: I have taken it upon myself to upload the module to NuGet for people to consume. As far as prerequisites go, you only need SMO and SQLCLR installed, both available through the Feature Pack, though they do have to be the SQL Server 2016 versions of these objects.

As mentioned in one of the links above, you have to Remove-Module sqlps first before you import the new module.

I’ll update the NuGet package every time there’s a new release of SSMS.

Oh, and here’s a quick syntax of how to import the module:

Import-Module -Name "C:\nuget\PowerShell.SqlServer.Modules.1.0.1\sqlserver" -Verbose
$conn = ".;Integrated Security=True"
Invoke-Sqlcmd -ConnectionString $conn -Query "SELECT GETDATE() AS TimeOfQuery;"

PowerShell Snippet: Verify Minimum Version of .NET

Occasionally it’s useful to verify the minimum version of .NET installed on a box. This PowerShell does exactly that.

$Netver = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
Write-Host ".NET Version : $Netver"

If ($Netver -lt 394271)
{
    throw "At least .NET Framework 4.6.1 needs to be installed on this tentacle to use Microsoft.Data.Tools.Msbuild."
}

PowerShell Snippet #7: Retrieve SQL Server Licence Key

Copied from somewhere else on the internet, this PowerShell script will return the product key used for a SQL Server instance install. Super useful when changing licenses to SQL Developer on the temporary VMs I spin up and play around with, whose instances have passed the Enterprise evaluation use-by date. Putting this here for my own benefit; I claim no kudos!


function GetSqlServerProductKey {
    ## function to retrieve the license key of a SQL Server 2014 instance
    param ($targets = ".")
    $hklm = 2147483650
    $regPath = "SOFTWARE\Microsoft\Microsoft SQL Server\120\Tools\Setup"
    $regValue1 = "DigitalProductId"
    $regValue2 = "PatchLevel"
    $regValue3 = "Edition"
    Foreach ($target in $targets) {
        $productKey = $null
        $win32os = $null
        $wmi = [WMIClass]"\\$target\root\default:stdRegProv"
        $data = $wmi.GetBinaryValue($hklm,$regPath,$regValue1)
        [string]$SQLver = $wmi.GetStringValue($hklm,$regPath,$regValue2).svalue
        [string]$SQLedition = $wmi.GetStringValue($hklm,$regPath,$regValue3).svalue
        $binArray = ($data.uValue)[0..16]
        $charsArray = "B","C","D","F","G","H","J","K","M","P","Q","R","T","V","W","X","Y","2","3","4","6","7","8","9"
        ## decrypt base24 encoded binary data
        For ($i = 24; $i -ge 0; $i--) {
            $k = 0
            For ($j = 14; $j -ge 0; $j--) {
                $k = $k * 256 -bxor $binArray[$j]
                $binArray[$j] = [math]::truncate($k / 24)
                $k = $k % 24
            }
            $productKey = $charsArray[$k] + $productKey
            If (($i % 5 -eq 0) -and ($i -ne 0)) {
                $productKey = "-" + $productKey
            }
        }
        $win32os = Get-WmiObject Win32_OperatingSystem -computer $target
        $obj = New-Object Object
        $obj | Add-Member Noteproperty Computer -value $target
        $obj | Add-Member Noteproperty OSCaption -value $win32os.Caption
        $obj | Add-Member Noteproperty OSArch -value $win32os.OSArchitecture
        $obj | Add-Member Noteproperty SQLver -value $SQLver
        $obj | Add-Member Noteproperty SQLedition -value $SQLedition
        $obj | Add-Member Noteproperty ProductKey -value $productKey
        $obj
    }
}


PowerShell Snippet #6

Recently I needed to run Invoke-Sqlcmd with a connection built up from a variety of parameters. Below is the script I used to accept whatever parameters would be passed in to Invoke-Sqlcmd. The input file contained the same text as the $query parameter.


$ServerInstance = ".\instance"
$database =  "Northgale"
$uname = "user"
$pword = "password123"
$query = "'SELECT GETDATE() AS TimeOfQuery;'"
$inputFile = "C:\Users\Richard\Documents\inputfile.sql"

$connectionString = "-ServerInstance $ServerInstance -UserName $uname -Password $pword -Database $database -Verbose"
$InvokeSqlcmd = "Invoke-Sqlcmd -InputFile $inputFile $connectionString"
Write-Host $InvokeSqlcmd -Verbose
Powershell.exe -Command "$InvokeSqlcmd"

PowerShell Snippet#5: Running MSBuild

Script on running MSBuild via PowerShell. This ties up neatly with my previous post.

$msbuild = "C:\Windows\Microsoft.Net\Framework\v4.0.30319\MSBuild.exe"
$MsBuildExists = Test-Path $msbuild
If ($MsBuildExists -ne $true) {
    throw "msbuild does not exist at this location. Install Visual Studio 2015 (Community Edition should be adequate)."
}
$buildFile = $PSScriptRoot + "\BuildAllDBProjects.targets.xml"
& $msbuild $buildFile

PowerShell Snippet #4



Below is a function that will search a string and replace a character in a variable. Simple enough. Calling this from a helper file in source control generally saves time when tidying up variables that are passed in to a PowerShell function but aren’t quite formatted as required. A good example of this is upstream build variables with spaces in TeamCity.

Function FormatReplaceVariable {
    param (
        [string] $param,
        [string] $find,
        [string] $substitute
    )
    Write-Host "Replacing characters $find with $substitute in $param" -Verbose
    $param = $param -replace "$find", "$substitute"
    Write-Host "Now parameter looks like $param" -Verbose
    return $param
}

$bob = FormatReplaceVariable -param "as'lfjksaj'JASD;oasdjf'asd aasd asd; ghas'fkasdf;aksd" -find ";" -substitute "!!!!"
write-host $bob