So it’s been a while since I’ve worked on any SSAS, but today I am happy to announce that there is an up-to-date version of the SSAS Activity Monitor. The solution has had a couple of changes:
- The Analysis Services DLL has been updated to SQL Server 2016, so this version will work with 2012, 2014 and 2016 instances of SSAS.
- The .NET target has been raised from 4.0 to 4.5.1.
- An exe is now output as part of the build.
- A redundant project has been removed.
- The code has been moved to GitHub.
If you have been through the commits you’ll see that Marco Russo has contributed most of this. To be honest, since I have not worked with SSAS directly in about 2 years now, I’ve not been terribly interested in developing new features for this project. So the fact that someone in the BI community has actively sought to help maintain this project is a big help, and I encourage anyone else who wants to contribute to get in touch with Marco or myself to make this project even better.
Recently someone got in touch with me about one of my earliest posts. I realised I had made a mistake in that post and have updated it. But I also realised that the script itself is actually part of a PowerShell module. So, wanting to make things as easy as possible, and being the helpful guy that I am, I have altered that example into a standalone script which I am posting here.
It’s worth stressing that the errors output to the console are fairly generic; if anything does go wrong, the log file will contain the full details.
Continue reading “Deploy Cubes Using Powershell: The Script”
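As a rough sketch, deploying an `.asdatabase` file silently with the Analysis Services Deployment Utility looks something like the following. The exe path varies by SQL Server version and install location, and both paths here are placeholders:

```powershell
# Placeholder paths - adjust the version folder (130 = SQL Server 2016)
# and file locations for your own environment.
$deployExe  = "C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe"
$asDatabase = "C:\build\MyCube.asdatabase"

# /s runs the utility in silent mode and writes full details to the given
# log file - which is why the console output alone is fairly generic.
& $deployExe $asDatabase /s:"C:\logs\deploy.log"
```

Check the log file after the run; the console will not tell you much on its own.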
Recently I needed to update some cubes in a test environment after the underlying data warehouse had been refreshed. Typically I’d sync the cubes from prod, but as sync causes locking, and I needed to get this done during the day, I had to reprocess all of the cubes in the test environment. There were a lot of them, ranging in size from a few MB to hundreds of GB. The simplest way to update the cubes was to run a process default on the dimensions and then a process default on the measure groups. Some of the measure groups were partitioned, so I could’ve got clever and fully processed just the unprocessed partitions, but I felt a default would do most of the hard work for me. Continue reading “Processing Cubes After a Data Refresh: Batching Processing Using AMO”
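The dimensions-then-measure-groups approach can be batched through AMO so the server receives one XMLA batch rather than a call per object. A minimal sketch, assuming an instance name of `TestServer` and a database called `MyCubeDb` (both placeholders):

```powershell
# Load AMO - requires the SQL Server Analysis Services client libraries
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("TestServer")
$db = $server.Databases.FindByName("MyCubeDb")

# Capture the process commands into a single XMLA batch
# instead of executing them one at a time
$server.CaptureXml = $true

foreach ($dim in $db.Dimensions) {
    $dim.Process([Microsoft.AnalysisServices.ProcessType]::ProcessDefault)
}
foreach ($cube in $db.Cubes) {
    foreach ($mg in $cube.MeasureGroups) {
        $mg.Process([Microsoft.AnalysisServices.ProcessType]::ProcessDefault)
    }
}

$server.CaptureXml = $false

# Execute the captured batch: transactional = $true, parallel = $true
$results = $server.ExecuteCaptureLog($true, $true)
$server.Disconnect()
```

Running the batch transactionally means a failure rolls the whole refresh back, and parallel execution lets the server decide how many objects to process at once.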
Hello! And welcome to the final part of this series. This script combines the first four parts of the series to create an unabridged process for synchronising an SSAS database from one server to another. Even though I’ve used this script for a while, it still gets tweaks and I’d still consider it a work in progress; however, it’s as complete now as it has ever been. No doubt when I make some more changes I’ll post an update.
The script so far includes:
- checks if the sync role exists, and creates it if it does not
- checks the permissions on the role to ensure that the sync role has Administer permission on the database to be sync’d, and adds it if missing
- checks that the service account of the target server is a member of the sync role (a requirement of syncing) and adds it if it is not
- checks if the source database exists on the target server; the database is copied over in most cases (see point 6), but if it does not exist on the target then point 7 is disregarded
- checks that there is sufficient space on the disk to hold two copies of the database (see below for a full explanation), and includes the option to drop the target database before syncing
- if there is no target database and there is not enough space to sync this database, the sync is skipped
- if the target database is to be dropped before syncing, then the target roles/permissions/data source are stored and are reapplied to the synced database (assuming it existed on the target server prior to syncing)
- this assumes that you are not copying over the permissions from the target cube; the idea here being that these cubes are used for a different environment (test, for example) that has its own data source/roles/members
Continue reading “Automate SSAS Syncing Part 5: Syncing from Start to Finish”
Hello, and welcome to part 4 of the series. In this post, I’m going to do a straight-up sync with none of the disk checks from yesterday’s post. The full script at the end of the series will include the disk checks. Continue reading “Automate SSAS Syncing Part 4: Synchronise Databases Across Servers”
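The bare sync itself is an XMLA `Synchronize` command executed against the target instance. A minimal sketch, with placeholder server and database names; `SkipMembership` keeps the target’s own role members rather than copying the source’s across:

```powershell
# Load AMO - requires the SQL Server Analysis Services client libraries
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

# Synchronize runs ON the target, pulling FROM the source
$target = New-Object Microsoft.AnalysisServices.Server
$target.Connect("TargetServer")

$xmla = @"
<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <ConnectionString>Provider=MSOLAP;Data Source=SourceServer;Integrated Security=SSPI</ConnectionString>
    <Object>
      <DatabaseID>MyCubeDb</DatabaseID>
    </Object>
  </Source>
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
"@

$target.Execute($xmla) | Out-Null
$target.Disconnect()
```

The account running this needs to be a member of the sync role set up in parts 1 and 2, which is exactly why those checks come first.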
Hello, and welcome to part 3 of the series on syncing databases across servers. The first two parts focused on ensuring that the role exists on the source server with the appropriate permissions to sync, and that the service account of the target instance is added to it so that the sync can occur.
Continue reading “Automate SSAS Syncing Part 3: Using PowerShell to Check for Disk Space on SSAS Instances”
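The disk check can be sketched with a remote WMI query for free space compared against the database’s estimated size. Server names, the database name and the drive letter are all placeholders here, and `EstimatedSize` is only an estimate reported by the instance:

```powershell
# Load AMO - requires the SQL Server Analysis Services client libraries
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$source = New-Object Microsoft.AnalysisServices.Server
$source.Connect("SourceServer")
$db = $source.Databases.FindByName("MyCubeDb")

# Free space on the target's data drive, queried remotely via WMI
$disk = Get-WmiObject Win32_LogicalDisk -ComputerName "TargetServer" `
            -Filter "DeviceID='E:'"

# Sync keeps the old copy until the new one has landed,
# so the target needs room for two copies of the database
if ($disk.FreeSpace -lt ($db.EstimatedSize * 2)) {
    Write-Warning "Not enough space on E: for two copies of $($db.Name); skipping sync."
}
$source.Disconnect()
```

Doubling the size is deliberate: during a sync the existing target database stays online until the new copy is complete, so both exist on disk at once.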
Part two of this series expands upon yesterday’s post, which created a database role and assigned permissions, by adding a member to that role. The script adds just one member to the role; if you need more than one, you can either create a domain group, add all the required users to that group and use the group name in place of the user name, or do something clever with arrays.
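Adding a member with AMO boils down to appending a `RoleMember` to the role and calling `Update()`. A minimal sketch, with placeholder server, database, role and account names:

```powershell
# Load AMO - requires the SQL Server Analysis Services client libraries
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("SourceServer")
$db   = $server.Databases.FindByName("MyCubeDb")
$role = $db.Roles.FindByName("SyncRole")

# The target instance's service account - or a domain group
# containing every account that needs membership
$account = "DOMAIN\ssasServiceAccount"

# Add the account only if it is not already a member
if (-not ($role.Members | Where-Object { $_.Name -eq $account })) {
    $role.Members.Add((New-Object Microsoft.AnalysisServices.RoleMember($account))) | Out-Null
    $role.Update()
}
$server.Disconnect()
```

To add several members, loop over an array of account names with the same membership check inside the loop before the single `Update()` call.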
Tomorrow’s post will look at some disk checks that are included to ensure there is enough space to sync cubes.
Here are the links for the other parts of the series
Automate SSAS Syncing Part 1: Create SSAS Database Role
Automate SSAS Syncing Part 2: Create SSAS Role And Add Member
Continue reading “Automate SSAS Syncing Part 2: Create SSAS Role And Add Member”
Every day this week I am going to be posting part of a script that I recently wrote, but broken down into separate components that are useful in their own right. The full script, which I will be sharing at the end of the week, automates the synchronisation of SSAS cubes from one instance to another.
So to make this an automated process that was robust to any deviation, I had to make sure that the destination service account had full admin access to the source database, that the destination disk could withstand the anticipated growth as well as the copy of the sync’d cube, and that the data source’s connection string on the destination database was reset to its original value. Typically the data source does not need updating, but the destination databases are used in a different environment with their own data source. And as the sync also syncs the data source, it is necessary to set the data source back to its original value. Continue reading “Automate SSAS Syncing Part 1: Create SSAS Database Role”
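Creating the role and granting it admin access on the source database is the first building block. A sketch in AMO, with placeholder server, database and role names:

```powershell
# Load AMO - requires the SQL Server Analysis Services client libraries
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("SourceServer")
$db = $server.Databases.FindByName("MyCubeDb")

# Create the role if it does not already exist
$role = $db.Roles.FindByName("SyncRole")
if ($role -eq $null) {
    $role = New-Object Microsoft.AnalysisServices.Role("SyncRole")
    $db.Roles.Add($role) | Out-Null
    $role.Update()
}

# Grant the role Administer permission on the database - a sync
# requires full admin access to the source database
$perm = $db.DatabasePermissions.FindByRole($role.ID)
if ($perm -eq $null) {
    $perm = $db.DatabasePermissions.Add($role.ID)
}
$perm.Administer = $true
$perm.Update()
$server.Disconnect()
```

The existence checks make the script safe to re-run, which matters once it is part of an unattended nightly process.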
Here’s a simple PowerShell script to get the top-level info of the cubes on an instance. It’s a good script for understanding how to use the AMO namespace to get info on cubes via PowerShell. You can add many other properties to this list to expand the info returned. I’ve often mused about moving a lot of the logic that I constantly put in these scripts into higher-tier functions in PowerShell, or even creating an SSAS library, but time is always at a premium… Continue reading “Get CubeInfo”
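The shape of such a script is a walk down the AMO object model: server, databases, cubes. A minimal sketch (the instance name is a placeholder, and the properties shown are just a starting set):

```powershell
# Load AMO - requires the SQL Server Analysis Services client libraries
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")

# Emit one object per cube; pick whichever AMO properties you need
foreach ($db in $server.Databases) {
    foreach ($cube in $db.Cubes) {
        [pscustomobject]@{
            Database      = $db.Name
            Cube          = $cube.Name
            State         = $cube.State
            LastProcessed = $cube.LastProcessed
        }
    }
}
$server.Disconnect()
```

Because each row is a `pscustomobject`, the output pipes straight into `Format-Table`, `Export-Csv` or a `Where-Object` filter.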
One feature that has always irritated me greatly is how you cannot alter the connection for an XMLA query window via the right-click context menu. For .sql, .mdx and .dmx it is an option via the context menu, but not for XMLA! Continue reading “Change Connection for xmla Query Window”