Deploying SSDT Scripts (With My Good Eye Closed)

(If anyone actually gets the reference: kudos. If not, then it’s just that you’re not a Soundgarden fan)

So, there’s something that has been troubling me about deploying to databases with SSDT for some time. Actually, it’s not an SSDT-exclusive problem per se; it’s more of a database/release management process problem.

What it pertains to is that most people like to create the deployment script for review prior to deploying to production. This makes sense if you are new to the Continuous Deployment process, and because sometimes SSDT creates scripts that are not quite what you expected. You may skip the review because you feel confident in your deployments, or because the database is simple enough to deploy without one. But generating the script as a deploy artefact is still a good idea, because auditing.

At any rate, the script is generated and maybe reviewed… so then what? In SSDT there is no way to create and deploy a script in one step; they are two distinct steps. And even if they were one step, it would still not resolve the issue that troubles me. So what is this issue?

The issue is that by creating a script and then running the deploy, you cannot be sure that the database is in exactly the same state it was in when the initial script was generated. If you don’t already know, SSDT runs a deploy entirely in memory, so as mentioned no script is created; you have to explicitly create the script as part of the process. And even if you have already created one, the deploy effectively re-creates it rather than simply running the script you reviewed.
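
To make that concrete, here is a minimal sketch of the two separate steps as sqlpackage.exe calls (the dacpac, server and database names are placeholders, not from any real project):

# 1) create the deployment script as an artefact, for review and auditing
& SqlPackage.exe /Action:Script /SourceFile:"MyDb.dacpac" /TargetServerName:"MyServer" /TargetDatabaseName:"MyDb" /OutputPath:"MyDb_Deploy.sql"

# 2) ...and then, as a separate step, the actual deploy - which compares against the target again in memory,
#    rather than simply running the script generated above
& SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" /TargetServerName:"MyServer" /TargetDatabaseName:"MyDb"
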
Continue reading “Deploying SSDT Scripts (With My Good Eye Closed)”

Writing Output from MSTest

Unit testing can sometimes throw up unexpected issues that are entirely concerned with the unit tests and not the application under test. Recently I’ve been having issues with DateTime formats affecting the results of the unit tests: the format of the DateTime was different on the build server than it was on my local machine. The stored procedure could not be altered (because reasons*). Continue reading “Writing Output from MSTest”

Import-Module SqlPs Takes Forever To Load

Hello!

I’ve been working on some pre-deployment scripts for SQL Server using PowerShell, and I chose to use Invoke-Sqlcmd as opposed to creating a SQL connection*. However, the first time I ran the script on my local box to test, it seemed to hang. I thought it might be my local instance not running, but that was running fine. So I created the simplest command possible and tried it:


Invoke-Sqlcmd -Query "SELECT GETDATE() AS TimeOfQuery;" -ServerInstance "."

But still it was hanging. And then warning messages started to appear:

[screenshot “slowToImport”: Azure-related warning messages appearing while the module loads]

And at the end of the messages, my query returned. Subsequent re-runs of Invoke-Sqlcmd ran almost instantly. But the warnings pertain to Azure, and I’m not using Azure! I do, however, have the Azure modules installed on my machine. A little bit of Googling led me to this Connect issue. Essentially what is happening is that the Import-Module that is run implicitly when using Invoke-Sqlcmd “incorrectly loads files (e.g. DLLs, .ps1xml, etc) in the context of the current provider, as opposed to the context of the “file system” provider (this causes the SQL Provider to try to connect to entities that may look like SQL Server, but they are not – as stated in the description of this issue; there isn’t anything intrinsically wrong that the SQL Provider is doing here.)”
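
Given that explanation, one thing you can do (this is my own sketch rather than anything from the Connect issue or the fix) is pay the import cost explicitly and up front, from a plain file-system location, so the first Invoke-Sqlcmd in a real script doesn’t appear to hang:

Push-Location C:\                                              # make sure the current provider is the file system
Measure-Command { Import-Module SqlPs -DisableNameChecking }   # shows how long the one-off import takes
Pop-Location

# the module is now loaded, so this returns almost instantly
Invoke-Sqlcmd -Query "SELECT GETDATE() AS TimeOfQuery;" -ServerInstance "."
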

And now for the good news: it’s fixed. Hurrah! But now for the bad news: you need to install SQL Server 2016 to get the fix. I did install just the June 2016 release of SSMS, as it’s far more realistic for people to upgrade that than the database engine, but unfortunately this did not resolve the issue, even after a restart. It was a long shot, but one worth trying. You can also uninstall the Azure modules; it’s arguably easier to uninstall these and run them on another box than to upgrade to SQL Server 2016, but I’ll admit that’s hardly a great solution.

*I have found that setting Invoke-Sqlcmd to verbose and saving any PRINT messages from SQL to a file, which can then be kept as a deployment artifact, is easier than doing it any other way.
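
For what it’s worth, that looks something like this (the script and log file names are placeholders): the PRINT messages come through on PowerShell’s verbose stream, which can be redirected to a file.

# PRINT messages from the SQL batch arrive on the verbose stream (stream 4), so redirect that stream to a file
Invoke-Sqlcmd -ServerInstance "." -InputFile ".\PreDeploy.sql" -Verbose 4> ".\PreDeploy.log"
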

 

Sup Dawg, I Heard You Like NULLS…

Quick note on NULLS: NULL means “Unknown Value”, not “0”. So adding a value to NULL results in NULL:

1 + NULL = NULL

Because NULL means “I Don’t Know”, two NULLs do not equate to the same value; they are still two indeterminate values, so comparing one NULL to another does not return TRUE (it returns UNKNOWN):

NULL = NULL  -->  UNKNOWN, not TRUE

You can work around this in T-SQL by using COALESCE:


DECLARE @i VARCHAR(8) = NULL;
DECLARE @j VARCHAR(8) = NULL;

SELECT @i;                     -- NULL

SELECT @i + 'bob';             -- still NULL: concatenating with an unknown value gives an unknown value

SELECT COALESCE(@i, 'bob');    -- 'bob': COALESCE returns the first non-NULL argument

IF @i = @j                     -- NULL = NULL evaluates to UNKNOWN, so the ELSE branch runs
    SELECT 'yes'
ELSE
    SELECT 'no';               -- returns 'no'

Specifying Multiple Properties in SQLPackage

The MSDN page for SQLPackage is great for all the information you could possibly need to use SQLPackage via the command line. And there are a lot of options, especially the Properties: although these can be set in a publish.xml file, you can override whatever is specified within the publish file by specifying the property name and setting the value. This is especially useful when you don’t want to have many publish.xml files in a solution.

However, one thing the MSDN page misses is examples, which is strange because MSDN pages usually have many examples. Nevertheless it’s still a great resource, but recently I needed to deploy some dacpacs and had to specify multiple properties via the command line. At first I thought it could be done like so:

/p:Storage="Memory";AllowIncompatiblePlatform=True

But this turned out to be incorrect. What you actually need to do is specify “/p:” for each property you want to set:

/p:Storage=Memory /p:AllowIncompatiblePlatform=True
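
Put together, a full publish call might look something like this (the dacpac, server and database names are placeholders):

& SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" /TargetServerName:"MyServer" /TargetDatabaseName:"MyDb" /p:Storage=Memory /p:AllowIncompatiblePlatform=True
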

Forcing SQL Server To Order Operations

The subject of my last post was returning DBCC CHECKDB results from a temp table, which was populated by running DBCC DBINFO against each database hosted on the SQL instance. I shared a query that returned any databases that had not had a CHECKDB run against them for over 7 days. The query is below:

 
SELECT dbi.Value
FROM #db dbi
WHERE field LIKE '%lastknow%'
    -- the CASE expression forces the ISDATE() check to happen before the CAST is attempted
    AND (CASE WHEN ISDATE(dbi.value) = 1 THEN CAST(dbi.Value AS DATETIME2(3)) END) < DATEADD(DAY, -7, GETDATE())

That CASE expression stands out quite a bit, doesn’t it? Let’s explore why this query needs a CASE expression. Begin by creating a temp table that contains a sample of the data stored in the temp table. Continue reading “Forcing SQL Server To Order Operations”

Copy and Delete Data Across Tables When There is no Space

Hello!

Recently I needed to apply data compression to a particularly large table. One of the main reasons for applying compression was that the database was extremely low on space, in both the data and the log files. To make matters worse, the data and log files were nowhere near big enough to accommodate compressing the entire table in one go. If the table had been partitioned then I could have done one partition at a time and all my problems would have gone away. No such luck.

So I had to think quite outside of the box to compress the data. The solution I came up with was to create a new empty table, copy 50 rows at a time (that was the largest batch I could move at any one time; yes, I really was THAT low on space) and then delete the 50 rows copied. But instead of COPY/DELETE, which would seem the logical way, I ran the DELETE first and then ran the INSERT from the output of the delete. This way I could be certain that the same rows that were deleted were copied into the new table, and it meant that I was only scanning the original table once.
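
A rough sketch of the idea, driven from PowerShell with Invoke-Sqlcmd since that’s what I tend to use for this sort of thing (the table, database and threshold are made up for illustration; this is not the original script):

# not the original script - just a sketch of the batched delete-and-copy approach
$moveBatch = @"
DELETE TOP (50) FROM dbo.BigTable
OUTPUT deleted.* INTO dbo.BigTable_Compressed;   -- the deleted rows are exactly the rows copied
SELECT @@ROWCOUNT AS RowsMoved;
"@

do {
    # each batch deletes 50 rows and writes those same rows to the new table in a single statement
    $result = Invoke-Sqlcmd -ServerInstance "." -Database "MyDb" -Query $moveBatch

    # the sort of further check described below: back off if the log file is filling up
    $log = Invoke-Sqlcmd -ServerInstance "." -Database "MyDb" -Query "SELECT used_log_space_in_percent AS UsedPct FROM sys.dm_db_log_space_usage;"
    if ($log.UsedPct -gt 80) { Start-Sleep -Seconds 60 }
} while ($result.RowsMoved -gt 0)
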

I had to build further checks into the script: the database was set to SIMPLE recovery, but the log file would still grow because of ETL processes etc. running on other parts of the db, and so I needed to check to make sure that I wasn’t saturating the log file with each delete. So the logic here will work for the SIMPLE, BULK_LOGGED and FULL recovery models. Continue reading “Copy and Delete Data Across Tables When There is no Space”

Moving TempDB Script

Although it is not an activity that I regularly do, I needed to write a script to move TempDB. This was whilst I was testing different RAID arrays on a SAN. TempDB had quite a few files and I got bored writing out the commands over and over, so I came up with a script that will print out the commands to alter the files for TempDB and, if necessary, will execute the commands as well. This will also work for user databases, the caveat being that whilst SQL Server will re-create TempDB’s files in the new location on an instance restart, user database files have to be moved manually whilst the database is offline. So that’s some manual grunt work that cannot be scripted here. Yeah I know, sucks to be a DBA sometimes…

USE tempdb;
GO

SET NOCOUNT ON;

DECLARE @SQL NVARCHAR(MAX) = '';
DECLARE @EXEC NVARCHAR(MAX) = '';
DECLARE @RUN BIT = 0; -- set to 1 to execute the generated commands as well as print them
DECLARE @newdatalocation NVARCHAR(256) = 'L:\temptempdb\'; -- new location for data files (needs the trailing backslash)
DECLARE @newLogLocation NVARCHAR(256) = 'M:\tempdb_log\';  -- new location for log files (needs the trailing backslash)
DECLARE @fileName NVARCHAR(64);
DECLARE @fileLocation NVARCHAR(256);

-- one row per file belonging to the current database
DECLARE cur CURSOR
FOR
SELECT f.NAME
    ,f.filename
FROM sys.sysfiles f;

OPEN cur;

FETCH NEXT
FROM cur
INTO @fileName
    ,@fileLocation;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- build the ALTER DATABASE ... MODIFY FILE command for this file,
    -- pointing log files at the log location and everything else at the data location
    SELECT @SQL = 'ALTER DATABASE ' + DB_NAME() + ' MODIFY FILE (
NAME = ' + f.NAME + CHAR(10) + ',FILENAME = ' + '''' + CASE
            WHEN RIGHT(f.filename, 3) = 'ldf'
                THEN @newLogLocation
            ELSE @newdatalocation
            END + f.NAME + RIGHT(f.filename, 4) + '''' + ')' + CHAR(10)
    FROM sys.sysfiles f
    WHERE f.NAME = @fileName;

    PRINT @SQL;

    SELECT @EXEC = @EXEC + @SQL;

    FETCH NEXT
    FROM cur
    INTO @fileName
        ,@fileLocation;
END

CLOSE cur;

DEALLOCATE cur;

IF @RUN = 1
BEGIN
    EXEC (@EXEC); -- parentheses are needed so the string is executed rather than treated as a procedure name
END

Happy Scripting!