SSDT and Pre-Model Scripts

Hello!

Earlier this week I posted about the need to create the database as a pre-model task: that is, creating the database before we run sqlpackage/DacFx against a server, so that the generated script itself does not have to create the database.
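By way of a recap, here is a minimal sketch of what that create-database step could look like using SMO. The server and database names are assumptions, and it presumes the SMO assemblies are installed with the SQL Server client tools:

# Create the database up front so the generated deployment script never has to.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$serverName   = "localhost"    # assumption: your target instance
$databaseName = "MyDatabase"   # assumption: your target database
$server = New-Object Microsoft.SqlServer.Management.Smo.Server $serverName
if ($server.Databases[$databaseName] -eq $null) {
    $newDatabase = New-Object Microsoft.SqlServer.Management.Smo.Database($server, $databaseName)
    $newDatabase.Create()
}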

Going a step further than this: pre-deploy scripts in SSDT are executed before the main diff script is executed, but not before the diff script is generated. This is an important fact to understand. If you want scripts to be executed before the diff is generated, then you need pre-model scripts.

How you go about this is up to you, but there are a few rules that must be followed:

Idempotent: a big word, borrowed from maths, but don’t let that put you off. What it means in this context is that a script can be re-run and the same result happens. So in the context of altering a primary key, the script would look something like this (the database, table and column names are illustrative; the constraint names are the ones that matter):

IF DB_ID(N'MyDatabase') IS NOT NULL
   AND OBJECT_ID(N'dbo.MyTable') IS NOT NULL
   AND EXISTS (SELECT 1 FROM sys.key_constraints WHERE [name] = N'oldname' AND [type] = 'PK')
BEGIN
    ALTER TABLE dbo.MyTable DROP CONSTRAINT oldname;
    ALTER TABLE dbo.MyTable ADD CONSTRAINT newname PRIMARY KEY (Id);
END

and then you add primary key “newname” to the solution. That way the drop-and-add part of the script will only ever be run once, or again against an older version of the database that still has the old primary key. The “newname” of the primary key guarantees that this will be the case.

Stored in the Solution: within database solutions you can add scripts that are part of the solution but are not executed as part of the deployment.

If you copy the file to the output, you’ll be able to add the required files to your nuspec file and get them deployed as part of the NuGet package. Neat!

[screenshot: the pre-model scripts listed in the nuspec file]
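For illustration, the relevant fragment of the nuspec might look something like this; the source path and target folder are assumptions about your project layout:

<files>
  <!-- hypothetical layout: package the scripts that were copied to the build output -->
  <file src="bin\Release\PreModelScripts\**\*.sql" target="PreModelScripts" />
</files>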

So if you have the scripts in the solution and they are re-runnable, then you can be confident that they will run against older versions of the database, and it keeps a level of audit and history of the database.

Now, how to run the scripts. As with the create-database script, SMO is probably the easiest way to do this. You could also use SQLCMD or Invoke-SqlCmd; it’s entirely up to you.
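As a rough sketch of the SMO route in PowerShell (the folder, server and database names are assumptions, and the SMO assemblies are presumed to be installed):

# Run every .sql file in a folder, in name order, via SMO.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$serverName   = "localhost"           # assumption
$databaseName = "MyDatabase"          # assumption
$scriptFolder = ".\PreModelScripts"   # assumption: the folder shipped in the NuGet package
$server   = New-Object Microsoft.SqlServer.Management.Smo.Server $serverName
$database = $server.Databases[$databaseName]
Get-ChildItem -Path $scriptFolder -Filter *.sql | Sort-Object Name | ForEach-Object {
    Write-Host "Executing $($_.Name)"
    # SMO's ExecuteNonQuery understands GO batch separators, which is the main
    # reason to prefer it over a plain ADO.NET SqlCommand here.
    $database.ExecuteNonQuery((Get-Content $_.FullName -Raw))
}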

Now if you want to use SMO, and you are deploying using Octopus Deploy, then you’re in (some) luck: there are scripts available in the Octopus Library that do something like this, but you will have to alter them to execute a folder full of SQL scripts. Fortunately, I have already completed this, and I will submit my script to the GitHub solution so that it becomes available in the Library. When it is available, assuming it meets the quality control, I’ll blog about how to use it.

edit: it is now available on the Octopus Library. There is also a part two on how to use the step template.

Deploying SSDT Scripts (With My Good Eye Closed)

(If anyone actually gets the reference: kudos. If not, then it’s just that you’re not a Soundgarden fan)

So, there’s something that has been troubling me about deploying to databases with SSDT for some time. Actually, it’s not an SSDT-exclusive problem per se; it’s more of a database/release-management process problem.

What it pertains to is that most people like to create the deployment script for review prior to deploying to production. This makes sense if you are new to the Continuous Deployment process, as sometimes SSDT creates scripts that are not quite what you expected. You may skip the review because you feel rather more confident in your deployments, or because the database is sufficiently simple to deploy without one. But generating the script as a deploy artefact is still a good idea, because auditing.

At any rate, the script is generated and maybe reviewed… so then what? In SSDT there is no way to create and deploy a script in one step; they are two distinct steps. And even if they were one step, this would still not resolve the issue that troubles me. So what is this issue?
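To make those two distinct steps concrete, here is roughly what they look like with sqlpackage; the file names and server are assumptions:

# Step 1: generate the upgrade script for review.
sqlpackage /Action:Script /SourceFile:"MyDatabase.dacpac" /TargetServerName:"localhost" /TargetDatabaseName:"MyDatabase" /OutputPath:"deploy.sql"

# Step 2: deploy. The diff is recalculated here from scratch, so nothing
# guarantees the database still looks the way it did during step 1.
sqlpackage /Action:Publish /SourceFile:"MyDatabase.dacpac" /TargetServerName:"localhost" /TargetDatabaseName:"MyDatabase"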

The issue is that by creating a script and then running the deploy, you cannot be sure that the database is in the exact same state it was in when the initial script was generated. If you don’t already know, SSDT runs a deploy entirely in memory, so, as mentioned, no script is created; you have to explicitly create the script as part of the process. And if you have already created one, it gets re-created at deploy time anyway.
Continue reading “Deploying SSDT Scripts (With My Good Eye Closed)”

Writing Output from MSTest

Unit testing can sometimes throw up unexpected issues that are entirely concerned with the unit tests and not the application under test. Recently I’ve been having issues with DateTime formats affecting the results of the unit tests: the format of the DateTime was different on the build server than it was on my local machine. The stored procedure could not be altered (because reasons*). Continue reading “Writing Output from MSTest”

SSISMSBuild Updated and Moved

New! (Sorta) Improved! (Kinda)

In a long-running series of loosely related posts, I have updated the SSISMSBuild Project that I uploaded to CodePlex some years ago.

But wait, there’s more!

The updates are just reference fixes and a bump of the .NET target framework to keep it current; however, I have also moved it from CodePlex to GitHub, mainly because I’ve started uploading other projects there and I want to keep my git skills fresh: I tend to forget stuff twice as quickly as I learn it!

References!

Once you download the solution, you may find that the references to a few of the dlls are broken. This is a common problem with these dlls, and the solution is to unload the project and edit the references. Chances are the dlls will be in this location: C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio

You will need the client tools installed as part of your SQL Server installation, or the dlls won’t be on your box. Either way, just search for the dll names under “C:\Program Files (x86)\Microsoft SQL Server\” and they will probably turn up.
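If you’d rather not click around, a quick PowerShell search will do it; the filter is just an example pattern, not the exact dll names:

Get-ChildItem "C:\Program Files (x86)\Microsoft SQL Server\" -Recurse -Filter "Microsoft.SqlServer.*.dll" -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty FullName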


Enabling Log Files for TFS Builds

When trying to diagnose what is causing a build to fail, there is no more vital a resource than the build logs. So you can imagine my frustration recently when working on a TFS build agent whose failing build was not producing log files at the end of the build. Now I had two problems: fixing the build, and getting the builds to produce log files.

Clearly it wasn’t a permissions issue, as the build was running fine, compiling all the solutions and producing artifacts. The build account was part of the “Project Collection Build Administrators” TFS group, so I was confident that the account would inherit everything it needed to run a build from this group, right?

Right?

Well, no, actually. Look at the “Member of” tab for the Build Administrators group:

[screenshot: the “Member of” tab for the Project Collection Build Administrators group]

It’s not a member of the build service accounts group. In fact, no one was:

[screenshot: the empty membership of the build service accounts group]

So I added the build account to the build service group and ran a build, and hey presto! Log files! And the problem of my failing build? A missing closing ‘)’ in the PowerShell file.

Pruning Back the Branches


With the release of SQL Server 2016 RC2, the RTM version must surely be nearly upon us. And it has to be one of the most anticipated versions of SQL Server for some time. Now, I’m not one to drink the Kool-Aid on anything and everything, but I read a great post about SQL Server 2016 and how the build/deployment process has changed within the SQL Server team. The most surprising aspect of the post was that the SQL Server code is kept in one branch, and that Azure SQL is deployed directly from this branch. And it got me thinking about branching strategies.

I’ve always advocated a dev/main/release process, but I’ll admit it has weaknesses, not least that testing will usually only take place properly in one branch, and that bugs found in one branch may not find their way “back” or “forward”. To go with one branch, though, means that you are forced to keep the code at production quality at all times and make use of feature switches. Certainly it’s an ambitious way of working, and Microsoft’s ALM documentation suggests that having no branches is reserved for smaller teams, but surely if the SQL Server team at Microsoft are able to do it then it’s a branching strategy worth considering?