Automate SSAS Syncing Part 3: Using PowerShell to Check for Disk Space on SSAS Instances

Hello, and welcome to part 3 of the series on syncing databases across servers. The first two parts focused on ensuring that the role exists on the source server with the appropriate permissions to sync. We then checked for the service account of the target instance and added it to that role so that the sync can occur.

To be honest, if your SSAS databases are very small, or you know you have plenty of space on your disks, then these checks are unnecessary. But many of the SSAS databases I sync are hundreds of gigabytes, and synchronisation can take a long time, so it always makes sense to verify up front that enough disk space exists rather than have the sync roll back because of a lack of disk space.

There are several checks that you can make on the disk sizes; it really depends on whether your servers are alike. For instance, we have one very large instance that we use to consolidate multiple smaller instances, so disk checks are more complicated in that scenario. To give you an idea of what we could check for, I have 4 checks below:

  1. Are the disks the same size?
  2. If the target is smaller, how much smaller is the target disk space?
  3. What will be the size of the target database post sync, and does the target disk have sufficient free space to accommodate this growth?
  4. Does the target disk have sufficient free space to have 2 copies of the database?
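To make check 3 concrete, here is the arithmetic with some made-up numbers (all the figures below are hypothetical; the variable names match the script later in the post):

```powershell
# hypothetical sizes, all already converted to GB
$sourcedisksize  = 500; $sourcefreespace = 100   # 400 GB used on the source
$targetdisksize  = 500; $targetfreespace = 150   # 350 GB used on the target

# worst case growth: everything used on the source that is not yet on the target
$expectedGrowth = ($sourcedisksize - $sourcefreespace) - ($targetdisksize - $targetfreespace)
$expectedGrowth   # 50, which fits comfortably in the 150 GB free on the target
```

If `$expectedGrowth` came out greater than `$targetfreespace`, the sync would be at risk of failing before it even reached the two-copies problem in check 4.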

If you’re confused by, or unfamiliar with, how sync works, here’s a very high-level overview:

  1. The sync scans and verifies the metadata of the source and target databases on the two servers and transfers over only files that have been modified. The scanning can take some time.
  2. These files are transferred into a temporary folder on the target server. This is a single-threaded operation, so it can seem to take a long time if there are a lot of files to transfer.
  3. Once these files are transferred into the temp folder, the sync then copies the remaining, unchanged files from the target folder into the temp folder. Any files in the target database that do not exist in the source database are not copied over. This local file copy is faster than the copy over the network.
  4. Once all the files are copied over, you will have two copies of the database on the disk: the original target database and the temporary folder. This is why you need free space on the target disk that is at least as large as the source database; if not, the copy will most likely fail.
  5. The temporary folder is then renamed to the database name with its version number incremented by 1, and the original target folder is deleted. The sync runs this way so that the transaction can be rolled back.
  6. With regards to locking: during synchronisation, locks are placed on the databases to ensure validity. A write lock is applied to the target database before the transfer of the files, and a read commit lock is applied to the source database at almost the same moment. The write lock is released from the target Analysis Services database once the metadata is validated and the transaction is committed. The read commit lock is released at about the same moment as the write lock, as it is taken in this distributed transaction. This means that during the synchronisation process a write lock is applied on the target server (when a target database exists), preventing users from querying and/or writing over the database. There is only a read commit lock on the source server, which prevents processing from committing new data but allows queries to run, so multiple servers can be synchronised from the same source at the same time.
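For reference, the whole process described above is driven by a single XMLA Synchronize command sent to the target instance. A minimal example looks something like this (the server and database names here are placeholders, and `SkipMembership` is just one of the possible security options):

```xml
<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <ConnectionString>Provider=MSOLAP;Data Source=pre1sql02;Integrated Security=SSPI</ConnectionString>
    <Object>
      <DatabaseID>MyDatabase</DatabaseID>
    </Object>
  </Source>
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
```

None of the locking or temp-folder behaviour is configurable from here; the options only control security membership and whether the transfer is compressed.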

So the disk checks below ensure that there is enough space, and the script will quit if there is not. The full script (which will be posted at the end of this series) will include an option to drop the target database and copy the source fully over the network, removing the need for two copies to exist. This is not without disadvantages though, as you will see.
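As a taster, dropping the target database with AMO before syncing is only a few lines. This is a rough sketch rather than the final script, and the instance and database names are placeholders:

```powershell
# assumes the Microsoft.AnalysisServices (AMO) assembly is available
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$targetsvr = New-Object Microsoft.AnalysisServices.Server
$targetsvr.Connect("pre1sql04")                   # placeholder instance name

$db = $targetsvr.Databases.FindByName("MyDatabase")   # placeholder database name
if ($db -ne $null)
{
    # dropping the target means the sync must copy everything over the network,
    # but only one copy of the database ever exists on the target disk
    $db.Drop()
}
$targetsvr.Disconnect()
```

The trade-off is exactly the one mentioned above: a full network copy is much slower than a sync that only transfers changed files, and the target database is unavailable for the whole duration.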

Here are the links for the other parts of the series:
Automate SSAS Syncing Part 1: Create SSAS Database Role
Automate SSAS Syncing Part 2: Create SSAS Role And Add Member
Automate SSAS Syncing Part 3: Using PowerShell to Check for Disk Space on SSAS Instances


[CmdletBinding()]
param(
[Parameter(Position=0,mandatory=$true)]
[string] $sourceInstance,
[Parameter(Position=1,mandatory=$true)]
[string] $targetInstance,
[Parameter(Position=2,mandatory=$true)]
[string] $syncAccount)

[void][System.reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$rolename = "Sync00"
# example values; in practice these are supplied via the mandatory parameters above
# $syncAccount = "PROD\svc_PRE1SQLAnalysis"
# $sourceInstance = "pre1sql02"
# $targetInstance = "pre1sql04"

$targetsvr = new-Object Microsoft.AnalysisServices.Server
$targetsvr.Connect($targetInstance)

$sourcesvr = new-Object Microsoft.AnalysisServices.Server
$sourcesvr.Connect($sourceInstance)

foreach ($sourceDB in $sourcesvr.Databases)
{
$targetDisk = $targetsvr.ServerProperties
$targetDisk = $targetDisk | Select-Object Name, Value | Where-Object {$_.Name -eq 'DataDir'} | Select-Object -Expand Value
$targetDriveLetter = $targetDisk.Substring(0,2)

$targetdisksize = Get-WmiObject Win32_LogicalDisk -ComputerName $targetInstance -Filter "DeviceID='$targetDriveLetter'" | Select-Object -Expand Size
$targetfreespace = Get-WmiObject Win32_LogicalDisk -ComputerName $targetInstance -Filter "DeviceID='$targetDriveLetter'" | Select-Object -Expand FreeSpace
$targetdisksize = $targetdisksize / 1gb
$targetfreespace = $targetfreespace / 1gb

# get disk details for source server

$sourceDisk = $sourcesvr.ServerProperties
$sourceDisk = $sourceDisk | Select-Object Name, Value | Where-Object {$_.Name -eq 'DataDir'} | Select-Object -Expand Value
$sourceDriveLetter = $sourceDisk.Substring(0,2)

$sourcedisksize = Get-WmiObject Win32_LogicalDisk -ComputerName $sourceInstance -Filter "DeviceID='$sourceDriveLetter'" | Select-Object -Expand Size
$sourcefreespace = Get-WmiObject Win32_LogicalDisk -ComputerName $sourceInstance -Filter "DeviceID='$sourceDriveLetter'" | Select-Object -Expand FreeSpace
$sourcedisksize = $sourcedisksize / 1gb
$sourcefreespace = $sourcefreespace / 1gb

$sourceDBfolder = $sourceDB.name

$sourceDBSize =
 Invoke-Command -ComputerName $sourceInstance -ScriptBlock {param ($sourceDBfolder, $sourceDisk)
 # find the folder in the data directory that matches the database name
 $path = Get-ChildItem $sourceDisk | Where-Object {$_.PSIsContainer -eq $true -and $_.Name -match $sourceDBfolder}
 $path = $sourceDisk + "\" + $path
 $fso = New-Object -ComObject Scripting.FileSystemObject
 $folder = $fso.GetFolder($path)
 $size = $folder.Size / 1gb
 $size
 } -Args $sourceDBfolder, $sourceDisk

 # first check; are the target and source disks the same size?

 if ($targetdisksize -ge $sourcedisksize)
{
    $t = Get-Date -format "d MMM yy HH:mm:ss"
    "${t}: both target and source disk sizes are the same or target is larger"
}

# if disks are not the same size, then how much smaller is the target? 
# we still might be able to sync with the smaller disk

else 
{
$total = $sourcedisksize - $targetdisksize # both values are already in gb
    $t = Get-Date -format "d MMM yy HH:mm:ss"
    "${t}: target disk is smaller than the source disk by $total gb "
}

# figure out what the difference is between the sizes of the databases on the source and target servers
# if the expected growth (the difference between how much data is on the source server minus how much data is on target server)
# is greater than the free space on the target server then chances are the sync will fail
# remember that the target database will also drop data that does not exist in source server, so this is worst case scenario
# we will break out of script if there is not enough space
# remember this check does not take into account that there will be 2 copies of the database at the end of the sync, this is handled in the next check

     $expectedGrowth = ($sourcedisksize - $sourcefreespace) - ($targetdisksize - $targetfreespace)
     "source disk size "+ $sourcedisksize
     "source free space "+ $sourcefreespace
     "target disk size "+ $targetdisksize
     "target free space "+ $targetfreespace

    $expectedGrowth = [Math]::Round($expectedGrowth, 4)
    if ($expectedGrowth -ge $targetfreespace)
    {
        # both values are already in gb, so no further conversion is needed
        $t = Get-Date -format "d MMM yy HH:mm:ss"
        Write-Error "${t}: there is not enough free space on the target server for the anticipated growth. Expected growth is $expectedGrowth gb, whilst free space is $targetfreespace gb"
        break
    }


# when a sync runs, it first copies all the data that exists in the source database that does not exist in the target database 
# over the network from the source database into a temporary folder
# the sync job then copies the remaining data from the target database into the temporary folder
# if any data exists on the target server that does not exist on the source server, it is deleted/not copied over
# once the copy is complete, the temp folder is renamed to the database name and the number on the folder name is incremented by 1
# the original folder is then deleted and replaced by the new folder
# this means at any one time when syncing there will be 2 copies of the databases on disk
# this is not a problem if the databases are small. however it is a very big problem if the databases are very big.


if ($targetfreespace -le $sourceDBSize)
{
    $t = Get-Date -format "d MMM yy HH:mm:ss"
    Write-Error "${t}: There may not be enough free space to synchronize the database $($sourceDB.Name). Consider dropping the target database $($sourceDB.Name) on $targetInstance."
    break
}

}

Author: Richie Lee

Full time computer guy, part time runner. Full time Dad, part time blogger. Pokémon Nut. Writer of fractured sentences. Maker of the best damn macaroni cheese you've ever tasted.
