Swansong Musings

TL;DR: go here for a mixture of personal/professional stuff, or here for more professional stuff.

Going meta again this week, as I blog about blogging in other places and a certain mistake I made not two weeks ago.

It Must Be True, It’s Written On The Internet

One of the reasons I write a blog is that I'm more likely to remember something if I write it down. So a blog is a useful place. And usually I'm very meticulous in my research, and I provide examples to confirm that what I believe to be the case actually is the case. But on occasion I will get it wrong. And this week I got it wrong. Big time. It is to do with defining the files when creating a database. And whilst I was right that you cannot override the system variables with regard to where files will be written, I overlooked the fact that you can define the locations yourself. Look at that. I got it so wrong I'm using the trifecta of bold, underline and italics to make the point.

Below is a screenshot of an mdf and an ldf file being created in a different location:

[screenshot: mdf and ldf files created in a custom location]

So a pre-model script or a deployment contributor is not really necessary: you just need to define the location in your publish.xml file and parameterise the file path yourself.
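
As a rough sketch of how that can work (the DataFilePath and LogFilePath variable names are my own illustrative choices, not anything SSDT mandates), the file definitions in the project can reference SqlCmd variables, and the publish step supplies the values:

# A minimal sketch, assuming the database project declares SqlCmd variables
# named DataFilePath and LogFilePath and uses them in its file definitions.
# Server, database and paths are placeholders.
& sqlpackage.exe /Action:Publish `
    /SourceFile:".\bin\Release\MyDatabase.dacpac" `
    /Profile:".\MyDatabase.publish.xml" `
    /TargetServerName:"MYSERVER" `
    /TargetDatabaseName:"MyDatabase" `
    /v:DataFilePath="D:\SQLData" `
    /v:LogFilePath="E:\SQLLogs"

The same values could equally live as SqlCmdVariable entries in the publish.xml itself; passing them on the command line just makes the paths easy to vary per environment.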

Sup Dawg, I Heard You Like Reading my Blogs

So, earlier this year I started working at Sabin.IO, the consultancy run by Simon Sabin, he of SQLBits, amongst other distinctions. You may have noticed that around the turn of the year I started posting fewer and fewer DBA-type posts and focused more on Database DevOps, which is exactly what I used to do not so long ago. Invariably I've come back to Database DevOps for a few reasons:

  1. You can genuinely make a difference helping people improve their database deployment story.
  2. I was burnt by (and burnt out by) too many chaotic databases, where the definition of “gold code” was whatever was in live, but there was no control over who or what got deployed there.

But to quote a friend of mine, “there’s nothing negative in pointing out problems. It’s only negative in pointing out problems and doing nothing about it.” And very true that is. So I’m helping my former DBA colleagues by advocating the CI/CD database deployment process.

Anyway, there are a few blog posts over on Sabin.IO written by yours truly about some exciting new features in SQLPackage, and how to leverage those same features through the DacFx API. So why not head over there and give them a read?!

And so as One Door Closes, Another Opens….

Empty platitudes aside, this will be my last blog post here. 4 years, 325 posts, over 100,000 views; not bad going. But I haven't stopped blogging: I will in fact be posting at Sabin.IO. Because I've not only been working at Sabin.IO, I've actually gone permanent as of the end of October. And seeing as:

  • I like to blog
  • I like a wide audience to read it

It makes sense for me to post at Sabin.IO.

This site will hang around for a while, but since the name is a company name it'll revert to a “.wordpress.com” address, and some of the content will appear on the Sabin.IO blog.

See ya pals!

SSDT And Pre-Model Scripts Part 2: Execute Pre-Model Scripts Using Octopus Deploy

Hello!

Part one of this post concerned the notion of SSDT and pre-model scripts. This post relates to an actual implementation of executing a SQL script using Octopus Deploy. My pull request on GitHub was approved, so my step template can be exported from the Octopus Library.

The only parts that I feel require some explanation are the SQL scripts and the option to select a previous step. If you've read part one of this post you'll know that you keep the pre-model scripts in a folder in the database solution. These are not compiled (i.e. they are not part of the build) but are included in the output and packaged up in the NuGet package. When the NuGet package is unpacked, it's then a case of locating the scripts and executing them.
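
At its core the step does something along these lines (a simplified sketch rather than the actual step template; the folder, server and database names are placeholders):

# A simplified sketch of what the execute SQL step does: find every .sql
# file in the unpacked package's script folder and run each in turn.
# $scriptFolder and the connection details are illustrative placeholders.
$scriptFolder = "C:\Octopus\Applications\MyProject\PreModelScripts"

Get-ChildItem -Path $scriptFolder -Filter *.sql | Sort-Object Name | ForEach-Object {
    Write-Output "Executing $($_.Name)"
    Invoke-Sqlcmd -ServerInstance "MYSERVER" -Database "MyDatabase" -InputFile $_.FullName
}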

First, the order: extract the NuGet package that includes the dacpac and the pre-model scripts, and then add the execute SQL step:

[screenshot: step order, with the package extraction step followed by the execute SQL step]

In the step template for execute SQL there are Octopus parameters. These define the step in which the scripts were unpacked, and the final path to the scripts themselves:

[screenshot: step template options]

And these are the values of the parameters:

[screenshot: parameter values]

So now when we execute the step, we can see which scripts were executed:

[screenshot: deployment log listing the executed scripts]
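
For reference, the “previous step” parameter typically ends up resolving a path along these lines (the step name, output variable usage and folder below are illustrative; check them against your own step names):

# Illustrative only: resolving the installation folder of a previous
# "Extract Package" step via Octopus output variables, then appending
# the folder that holds the pre-model scripts.
$installDir = $OctopusParameters["Octopus.Action[Extract Package].Output.Package.InstallationDirectoryPath"]
$scriptFolder = Join-Path $installDir "PreModelScripts"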

The Alternative to Pre-Model Scripts

Hello!

Two of my most recent posts have concerned pre-model scripts: those scripts that need to be run before the dacpac is compared against the production database. These are sometimes necessary, and this necessity usually arises because SSDT may not produce a script that is optimal. One such example is index creation: no index is created “ONLINE”, and this can be a problem if the database being deployed to is in use during the operation. It can be even worse if the table is particularly large.

In fairness to the SSDT team, I can see why this is the case: some editions of SQL Server have the online index feature, some don't. So one solution may be to write a pre-model script that creates the index with the online operation included. And while there's nothing wrong with this, there is an alternative: deployment contributors.
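
As a sketch of what such a pre-model script might look like (the table, column, index and server names are invented for illustration), idempotent and with ONLINE specified:

# A sketch of a pre-model script creating an index ONLINE, executed via
# Invoke-Sqlcmd. All object and server names here are illustrative.
$createIndex = @"
IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE [name] = 'IX_Orders_CustomerId'
               AND [object_id] = OBJECT_ID('dbo.Orders'))
BEGIN
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        WITH (ONLINE = ON);
END
"@

Invoke-Sqlcmd -ServerInstance "MYSERVER" -Database "MyDatabase" -Query $createIndex

Because the index is also defined in the SSDT project, the subsequent diff sees it already in place and leaves it alone, so the script only ever does real work once.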

SSDT and Pre-Model Scripts

Hello!

Earlier this week I posted about the need to create the database as a pre-model task: that is, creating the database before we run sqlpackage/DacFx against a server so that the generated script itself does not create the database.

Going a step further than this, pre-deploy scripts in SSDT may be executed before the main diff script is executed, but not before the diff script is generated. This is an important fact to understand. If you want scripts to be executed before the diff is generated, then you need to execute pre-model scripts.
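
To make that ordering concrete, a deployment under this approach runs roughly like this (a sketch only; script, server and database names are placeholders, and the sqlpackage arguments are trimmed for brevity):

# The ordering that matters: pre-model scripts run first, then sqlpackage
# generates the diff (now aware of the pre-model changes) and wraps it
# with the project's pre- and post-deploy scripts.
Invoke-Sqlcmd -ServerInstance "MYSERVER" -Database "MyDatabase" `
    -InputFile ".\PreModelScripts\01_RenamePrimaryKey.sql"

& sqlpackage.exe /Action:Publish `
    /SourceFile:".\MyDatabase.dacpac" `
    /TargetServerName:"MYSERVER" `
    /TargetDatabaseName:"MyDatabase"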

How you go about this is up to you, but there are a few rules that must be followed:

Idempotent: big word, and one used in maths, but don't let that put you off. What it means in this context is that a script can be re-run and the same result happens. So in the context of altering a primary key, the script would look something like this (the table, column and constraint names are illustrative):

-- Run in the context of the target database; dbo.MyTable, Id and the constraint names are illustrative
IF OBJECT_ID('dbo.MyTable') IS NOT NULL
AND EXISTS (SELECT 1 FROM sys.key_constraints
            WHERE [name] = 'oldname' AND parent_object_id = OBJECT_ID('dbo.MyTable'))
BEGIN
    ALTER TABLE dbo.MyTable DROP CONSTRAINT [oldname];
    ALTER TABLE dbo.MyTable ADD CONSTRAINT [newname] PRIMARY KEY (Id);
END

and then you add primary key “newname” to the solution. That way the drop-and-add part of the script will only ever be run once, or again against an older version of the database that still has the old primary key. The “newname” of the primary key guarantees that this will be the case.

Stored in the Solution: within database solutions you can add scripts that are part of the solution but are not part of the build:

[screenshot: script file properties in the database solution]

If you copy the file to the output, you can add the required files to your nuspec file and get them deployed as part of the NuGet package. Neat!

[screenshot: nuspec file including the pre-model scripts]
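
As a rough idea of the shape of the nuspec entry (the paths here are illustrative and depend on your build output layout):

<files>
  <file src="bin\Release\MyDatabase.dacpac" target="Content" />
  <file src="bin\Release\PreModelScripts\**\*.sql" target="PreModelScripts" />
</files>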

So if you have the scripts in the solution and they are re-runnable, then you can be confident that they will run against older versions of the database. It also keeps a level of audit and history of the database.

Now, how to run the scripts. As with the create database script, SMO is probably the easiest way to do this. You could use SQLCMD, or Invoke-SqlCmd, but it’s entirely up to you.
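
For what it's worth, a minimal SMO-based sketch might look like this (server, database and folder names are placeholders, and error handling is omitted for brevity):

# A minimal SMO sketch: run every .sql file in a folder against the target
# database. SMO's ExecuteNonQuery copes with GO batch separators, which is
# one reason it is convenient here. All names are placeholders.
Add-Type -AssemblyName "Microsoft.SqlServer.Smo"

$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$database = $server.Databases["MyDatabase"]

Get-ChildItem -Path ".\PreModelScripts" -Filter *.sql | Sort-Object Name | ForEach-Object {
    Write-Output "Executing $($_.Name)"
    $database.ExecuteNonQuery((Get-Content -Path $_.FullName -Raw))
}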

Now if you want to use SMO, and you are deploying using Octopus Deploy, then you're in (some) luck: there are scripts available in the Octopus Library that do something like this, but you will have to alter them to execute a folder full of SQL scripts. Fortunately, I have already completed this, and I will add my script to the GitHub solution to make it available in the Library. When it is available, assuming it meets the quality control, I'll blog about how to use it.

edit: it is now available on the Octopus Library. There is also a part two on how to use the step template.