Swansong Musings

TL;DR: go here for a mixture of personal/professional stuff. Or here for more professional stuff.

Going meta again this week, as I blog about blogging in other places and a certain mistake I made not two weeks ago.

It Must Be True, It’s Written On The Internet

One of the reasons I write a blog is because I find I’m more likely to remember something if I write it down. So a blog is a useful place for that. And usually I’m very meticulous in my research, providing examples to confirm that what I believe to be the case actually is the case. But on occasion I will get it wrong. And this week I got it wrong. Big time. And it’s to do with defining the files when creating a database. Whilst I was right that you cannot override the system variables that determine where files will be written, I overlooked the fact that you can define the locations yourself. Look at that. I got it so wrong I’m using the trifecta of bold, underline and italics to make the point.

Below is a screenshot of an mdf and ldf file being created in a different location. So a pre-model script or a deployment contributor is not really necessary: you just need to define the location in your publish.xml file and parameterise the file path yourself.
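By way of illustration, a publish along these lines would do it (the SqlPackage path, the profile name and the SQLCMD variable name DataFilePath are placeholders of my own, not from a real project):

# Illustrative sketch only: publish a dacpac, overriding a user-defined SQLCMD variable
# (here called DataFilePath) that the project’s file definitions reference.
# The SqlPackage path below is the SQL Server 2016 default and may differ on your machine.
$sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe"
& $sqlPackage /Action:Publish `
    /SourceFile:"MyDatabase.dacpac" `
    /Profile:"MyDatabase.publish.xml" `
    /v:DataFilePath="E:\SQLData"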

Sup Dawg, I Heard You Like Reading my Blogs

So, earlier this year I started working at Sabin.IO, the consultancy run by Simon Sabin, he of SQLBits, amongst other distinctions. You may have noticed that around the turn of the year I started posting fewer and fewer DBA-type posts and focused more on Database DevOps, which is exactly what I used to do not so long ago. But invariably I’ve come back to Database DevOps for a few reasons:

  1. You can genuinely make a difference helping people improve their database deployment story.
  2. I was burnt by (and burnt out on) too many chaotic databases, where the definition of “gold code” was whatever was in live, but there was no control over who or what got deployed there.

But to quote a friend of mine, “there’s nothing negative in pointing out problems. It’s only negative in pointing out problems and doing nothing about it.” And very true that is. So I’m helping my former DBA colleagues by advocating the CI/CD database deployment process.

Anyway, there are a few blog posts over on Sabin IO written by yours truly about some exciting new features in SQLPackage, and how to leverage those same features through the DacFx API. So why not head over there and give them a read?!

And so as One Door Closes, Another Opens….

Empty platitudes aside, this will be my last blog post here. 4 years, 325 posts, over 100,000 views, not bad going. But I haven’t stopped blogging: I will in fact be posting at Sabin IO. Because I’ve not only been working at Sabin IO, I’ve actually gone permanent as of the end of October. And seeing as:

  • I like to blog
  • I like a wide audience to read it

It makes sense for me to post at Sabin.IO.

This site will hang around for a while, but what with the name being a company name it’ll revert to a “.wordpress.com” address, and some of the content will appear on the SabinIO blog.

See ya pals!

VSTS Hosted Build Agent Specs

I was interested to know just what the hardware specification of the hosted build agent is. So I added some PowerShell to read out the info below:

 


2016-06-29T09:23:31.3935358Z systemname      Name                                      DeviceID NumberOfCores NumberOfLogicalProcessors Addresswidth
2016-06-29T09:23:31.3935358Z ----------      ----                                      -------- ------------- ------------------------- ------------
2016-06-29T09:23:31.3935358Z TASKAGENT5-0010 Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz CPU0                 2                         2           64
2016-06-29T09:23:31.4095356Z Total memory:  7167.55078125
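Nothing sophisticated was needed to get that; something along these lines produces that output (a reconstruction, and the exact property selection is a guess):

# Minimal sketch: pull the processor spec and total physical memory via WMI
Get-WmiObject -Class Win32_Processor |
    Select-Object systemname, Name, DeviceID, NumberOfCores, NumberOfLogicalProcessors, Addresswidth |
    Format-Table -AutoSize

# TotalPhysicalMemory is reported in bytes, so divide by 1MB to get megabytes
$memoryMb = (Get-WmiObject -Class Win32_ComputerSystem).TotalPhysicalMemory / 1MB
Write-Host "Total memory: " $memoryMb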

What piqued my interest further was that this is the exact same spec as a D2 v2 box available via Azure. Clearly, Microsoft have a build agent template which is built, stored in a pool, and provisioned whenever a build takes place. One must assume that it is disposed of after a single use to prevent any nefarious activity by other people trying to steal code….


Deploying SSDT Scripts (With My Good Eye Closed)

(If anyone actually gets the reference: kudos. If not, then it’s just that you’re not a Soundgarden fan)

So, there’s something that has been troubling me about deploying to databases with SSDT for some time. Actually, it’s not an SSDT-exclusive problem per se; it’s more of a database/release management process problem.

What it pertains to is that most people like to create the deployment script for review prior to deploying to production. This makes sense if you are new to the Continuous Deployment process, and because sometimes SSDT creates scripts that are not quite what you expected. You may skip this step because you feel rather more confident in your deployments, or because the database is sufficiently simple to deploy without a review. But generating the script as a deployment artefact is still a good idea, not least for auditing purposes.

At any rate, the script is generated and maybe reviewed… so then what? In SSDT there is no way to create and deploy a script in one step; they are two distinct steps. And even if they were one step, this would still not resolve the issue that troubles me. So what is this issue?

The issue is that by creating a script and then running the deploy, you cannot be sure that the database is in the exact same state it was in when the initial script was generated. If you don’t already know, SSDT runs a deploy entirely in memory, so no script is created unless you explicitly create one as part of the process. And if you have already created one, the deploy does not execute that script; it effectively re-creates it at deploy time.
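To make the two-step nature concrete, the two SqlPackage calls look roughly like this (server, database and file names are placeholders, and this assumes SqlPackage.exe is on the path):

# Step 1: generate a deployment script for review. Nothing is deployed here.
& SqlPackage.exe /Action:Script `
    /SourceFile:"MyDatabase.dacpac" `
    /TargetServerName:"MyServer" `
    /TargetDatabaseName:"MyDatabase" `
    /OutputPath:"MyDatabase_Deploy.sql"

# Step 2: deploy. Note that this is a fresh comparison against the target at deploy time,
# not an execution of the script generated in step 1, which is exactly the problem described above.
& SqlPackage.exe /Action:Publish `
    /SourceFile:"MyDatabase.dacpac" `
    /TargetServerName:"MyServer" `
    /TargetDatabaseName:"MyDatabase"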
Continue reading “Deploying SSDT Scripts (With My Good Eye Closed)”

Deploy Cubes Using Powershell: The Script

Hello!

Recently someone got in touch with me about one of my earliest posts. I realised I had made a mistake in that post and have since updated it. But I also realised that the script itself is actually part of a PowerShell module. So, wanting to make things as easy as possible, and being the helpful guy that I am, I have altered that example into a standalone script, which I am posting here.

It’s worth stressing that the errors output to the console are fairly generic, and that there is a log file which will contain all the detail if anything does go wrong.
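For context, one common way of doing this from PowerShell is to call the Analysis Services deployment utility and point it at a log file. A minimal sketch (the paths are placeholders, and this is not necessarily how the script in the post goes about it):

# Minimal sketch: deploy an .asdatabase file with the SSAS deployment utility,
# running silently (/s) so that all output goes to a log file.
# The utility's install path varies by SQL Server/SSMS version.
$deployExe  = "C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe"
$asDatabase = "C:\Deploy\MyCube.asdatabase"
$logFile    = "C:\Deploy\MyCube_Deploy.log"

& $deployExe $asDatabase /s:"$logFile"
Write-Host "Deployment finished; full detail is in $logFile"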

Continue reading “Deploy Cubes Using Powershell: The Script”

Using TeamCity To Raise Octopus Defects Updated

Hello!

I recently updated the script that uses both the TeamCity and Octopus APIs to raise defects if a TeamCity build has failed (the reasoning here being that a deployment has succeeded in Octopus, but subsequent automated testing has failed).

This script uses functions and is a little neater than the previous version. You will have to create a snapshot dependency between the build that this step runs in and the build that deploys/runs the tests. You will also need to add parameters to the TeamCity build that runs this script to populate the values below.

Essentially the “UpdateDefect” function is called, and if that finds that the build has failed then it will run “RaiseDefect”.

That’s all for today, have a good one. Happy scripting!


param(
    [parameter(Mandatory=$true)] [string]$buildType,
    [parameter(Mandatory=$true)] [string]$dependentBuildNumber,
    [parameter(Mandatory=$true)] [string]$epicFail,
    [parameter(Mandatory=$true)] [string]$root,
    [parameter(Mandatory=$true)] [string]$user,
    [parameter(Mandatory=$true)] [string]$pass,
    [parameter(Mandatory=$true)] [string]$api,
    [parameter(Mandatory=$true)] [string]$OctopusApiKey,
    [parameter(Mandatory=$true)] [string]$ProjectName,
    [parameter(Mandatory=$true)] [string]$ReleaseVersion
)

Function UpdateDefect
{
    #query the TeamCity API for the status of the dependent build; raise a defect if it failed
    $build = "$buildType #$dependentBuildNumber"
    $secpasswd = ConvertTo-SecureString $pass -AsPlainText -Force
    $credential = New-Object System.Management.Automation.PSCredential($user, $secpasswd)
    $result = Invoke-WebRequest $root -Credential $credential -UseBasicParsing
    [xml]$xml = $result.Content
    [string]$status = $xml.builds.build | Select-Object -First 1 | Format-Table -AutoSize -Property status -HideTableHeaders | Out-String
    $status = $status.Trim()
    if ($status -eq $epicFail)
    {
        Write-Host "Build $build failed. Raising a defect."
        RaiseDefect
    }
    elseif ($status -ne $epicFail)
    {
        Write-Host "Build $build has not failed. Not raising a defect."
    }
}

$buildType #echo the build type to the build log

Function RaiseDefect
{
    [string]$defectMessage = "This is a defect raised by failing TeamCity build $build. Please refer to the logs of the build for the reason of the failure."
    $header = @{ "X-Octopus-ApiKey" = $OctopusApiKey } #header for web requests
    $defectDescription = @{ description = $defectMessage } | ConvertTo-Json #description of defect is required when posting new defect

    #retrieve project id; required for getting release
    $projectUri = "$api/api/projects/$ProjectName"
    $Project = Invoke-WebRequest -Uri $projectUri -Headers $header -UseBasicParsing | ConvertFrom-Json
    $ProjectId = $Project.Id

    $relUri = "$api/api/projects/$ProjectId/releases/$ReleaseVersion"
    $releaseDetails = Invoke-WebRequest -Uri $relUri -Headers $header -UseBasicParsing | ConvertFrom-Json

    $reportDefect = $releaseDetails.Links.ReportDefect #url to report defects
    $defectApiUrl = $releaseDetails.Links.Defects #url to get info on defects
    $resolveDefect = $releaseDetails.Links.ResolveDefect #url to resolve defects

    $getDefectUrl = "$api$defectApiUrl" #make full defect url
    $reportDefectUrl = "$api$reportDefect" #make full report url
    $resolveDefectUrl = "$api$resolveDefect" #make full resolve url

    #check if an unresolved defect exists. If it does, write an error and exit the script
    $StatusesofDefects = Invoke-WebRequest -Uri $getDefectUrl -Method Get -Headers $header -UseBasicParsing | ConvertFrom-Json
    $StatusesofDefects.Items.Status #echo the existing defect statuses to the log
    foreach ($DefectStatus in $StatusesofDefects.Items.Status)
    {
        if ($DefectStatus -eq 'Unresolved')
        {
            Write-Error "An unresolved Octopus defect already exists. Resolve previous Octopus defects before attempting to raise another defect."
            exit
        }
    }
    #we'll only get here if there are no unresolved defects
    Write-Host "raising defect for project" $ProjectName "for release version" $ReleaseVersion
    Invoke-WebRequest -Uri $reportDefectUrl -Method Post -Headers $header -Body $defectDescription -UseBasicParsing | ConvertFrom-Json
    #resolving the defect is commented out. We don't need it to run, but it's here for future reference.
    #Invoke-WebRequest -Uri $resolveDefectUrl -Method Post -Headers $header | ConvertFrom-Json
}

UpdateDefect
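
For reference, a hypothetical invocation (the script name, URLs and values below are made up; in TeamCity you would wire these up as %parameters% on the build step):

# Hypothetical example only: calling the script from a TeamCity PowerShell build step.
# $root points at a TeamCity REST query that returns the list of builds for the deploy/test build type.
.\Raise-OctopusDefect.ps1 `
    -buildType "MyProject_DeployAndTest" `
    -dependentBuildNumber "57" `
    -epicFail "FAILURE" `
    -root "http://teamcity.local/httpAuth/app/rest/builds?locator=buildType:MyProject_DeployAndTest" `
    -user "tcuser" `
    -pass "tcpassword" `
    -api "http://octopus.local" `
    -OctopusApiKey "API-XXXXXXXXXXXXXXXX" `
    -ProjectName "MyWebsite" `
    -ReleaseVersion "1.0.57"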

Using TeamCity To Raise Octopus Defects

In my previous post, I shared a script that used the Octopus API to create a defect, with the intention of adding it to a TeamCity build chained to a deploy/test build step in TeamCity: the aim being to raise an Octopus defect if a test fails whilst the deployment to the environment succeeded. You can read more about it here.

What makes this a challenge is that there is no way to have a chained build that runs if, and only if, a build has failed. So, as with Octopus, you have to use the TeamCity API. In this script I get the status of the last build that deployed/ran the tests, and if that build succeeded I do nothing. So yes, this chained build always has to run after the deploy/test phase.

Where it gets interesting though is if the build failed. Here we raise a defect, but not before checking to see if there are any defects raised that are still unresolved, as only one Octopus defect can be unresolved at any one time. Continue reading “Using TeamCity To Raise Octopus Defects”

Enabling Log Files for TFS Builds

When trying to diagnose what is causing a build to fail, there is no more vital a resource than the build logs. So you can imagine my frustration recently when working on a TFS build agent that was running a failing build which was not producing log files at the end of the build. So now I had two problems: fix the build, and get the builds producing log files.

Clearly it wasn’t a permissions issue, as the build was running fine, compiling all the solutions and producing artifacts. The build account was part of the “Project Collection Build Administrators” TFS group, so I was confident that the account would inherit everything it needed to run a build from this group, right?

Right?

Well no, actually. Look at the “Member Of” tab for Build Administrators:


It’s not a member of the build service accounts group. In fact, no one was.

So I added the build account to the build service group and ran a build, and hey presto! Log files! And the problem of my failing build? A missing closing ‘)’ in the PowerShell file.

Connect VSTS Package Management to Octopus

As part of the Visual Studio Team Services offering, Microsoft have added a Package Management module. Though still only in preview, you are able to set up feeds and then publish NuGet packages either through the NuGet command line or as a step in a Visual Studio build. I’m going to assume you’ve read the documentation and set up a feed, that as part of your build you wish to publish a NuGet package to your new feed, and that you then want to add the feed as a service on Octopus. I’m also going to assume you are using the NuGet package step, rather than OctoPack, to make your NuGet packages. Continue reading “Connect VSTS Package Management to Octopus”
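As a flavour of the push side of that, the command line looks something like this (the account, feed and package names are placeholders; with VSTS feeds the -ApiKey value is essentially a dummy, as authentication is handled by your credentials or the credential provider):

# Hypothetical example: push a package to a VSTS Package Management feed from the command line.
# The feed URL and package name are placeholders.
& nuget.exe push "MyPackage.1.0.0.nupkg" `
    -Source "https://myaccount.pkgs.visualstudio.com/_packaging/MyFeed/nuget/v3/index.json" `
    -ApiKey VSTS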

Pruning Back the Branches


With the release of SQL Server 2016 RC2, the RTM version must surely be nearly upon us. And it has to be one of the most anticipated versions of SQL Server for some time. Now, I’m not one to drink the Kool-Aid on anything and everything, but I read a great post about SQL Server 2016 and how the build/deployment process has changed within the SQL Server team. The most surprising aspect of the post was that the SQL Server code is kept in one branch, and that Azure SQL is deployed directly from this branch. And it got me thinking about branching strategies.

I’ve always advocated a dev/main/release process, but I’ll admit this has weaknesses: not least that testing will usually only take place properly in one branch, and that bugs found in one branch may not find their way “back” or “forward”. To go with one branch means that you are forced to keep the code at production quality and to make use of feature switches. Certainly it’s an ambitious way of working, and Microsoft’s ALM documentation suggests that working without branches is reserved for smaller teams, but surely if the SQL team at Microsoft are able to do it then it’s a branching strategy worth considering?

Running Clean Builds On Visual Studio Team Services

Hello!

Recently I have been using Visual Studio Team Services for builds and unit testing. I’ve actually really enjoyed getting back to working with builds and testing again, and I’m amazed at just how different it is running builds on VSTS instead of on-premises TFS build servers. Whereas I was used to massive MSBuild files with tasks etc, or those awful Workflow builds that I never really adopted[1], the build process online is ridiculously straightforward, and the hosted build machines have pretty much every piece of software you could hope to have on a build server. Continue reading “Running Clean Builds On Visual Studio Team Services”