Altering SQL Job Status

Hello!

As an environment grows and you have many jobs across many server instances, it’s important to automate as much as you can, and starting with the small, tedious, time-consuming tasks is never a bad idea! A good example is having to enable or disable SQL Agent jobs during maintenance, or when a replication snapshot is being pushed out and you don’t want certain jobs to run and fail. So here is a stored procedure to alter the status of a SQL Agent job. You only need to pass in the name of the job. If the job is enabled it will disable it, and as an added precaution it will stop the job if it is running. If the job is disabled it will enable it and start it running, though if you don’t want it to run you can always add a bit switch to the sproc.
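
The full procedure is behind the link below, but a minimal sketch of the idea might look something like this (the procedure name is illustrative, and the running-job check via msdb’s sysjobactivity and syssessions tables is just one approach of several):

CREATE PROCEDURE [dbo].[AlterJobStatus] -- illustrative name
(
@JobName sysname
)
AS
BEGIN
SET NOCOUNT ON;

DECLARE @enabled int
SELECT @enabled = enabled FROM msdb.dbo.sysjobs WHERE name = @JobName

IF @enabled = 1
BEGIN
    -- as a precaution, stop the job first if it is currently running
    IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobactivity ja
               INNER JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
               WHERE j.name = @JobName
               AND ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
               AND ja.start_execution_date IS NOT NULL
               AND ja.stop_execution_date IS NULL)
        EXEC msdb.dbo.sp_stop_job @job_name = @JobName
    EXEC msdb.dbo.sp_update_job @job_name = @JobName, @enabled = 0
END
ELSE
BEGIN
    EXEC msdb.dbo.sp_update_job @job_name = @JobName, @enabled = 1
    EXEC msdb.dbo.sp_start_job @job_name = @JobName
END
END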

Happy scripting!

Continue reading “Altering SQL Job Status”

Insert or Update Approaches in SQL Server

Recently I needed to write a stored procedure that would update a row if it existed in a queue table, and insert the row if it did not. Thinking in pseudo code, you’d run “IF EXISTS (SELECT Id FROM TABLE) UPDATE, ELSE INSERT INTO TABLE..” Whilst this approach does work, it is inefficient: it does a table/index scan twice, once for the SELECT and once for the UPDATE.

Depending on your version of SQL Server, there are more efficient ways to do this:

  • In 2005, the efficient approach is to attempt the UPDATE first, and if the @@ROWCOUNT returned from that statement is 0, then INSERT. This reduces the number of scans, although for inserts it is still two operations. As I was developing on a 2005 box, this is the approach I went for.
  • From 2008 onwards you can reduce the insert to one operation by using the MERGE statement. Both approaches are sketched after this list.
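
As a hedged sketch, assuming a queue table dbo.QueueTable with Id and Payload columns, and a procedure taking @Id and @Payload as parameters (all names illustrative):

-- 2005-style: UPDATE first, INSERT only if nothing was updated
UPDATE dbo.QueueTable
SET Payload = @Payload
WHERE Id = @Id;

IF @@ROWCOUNT = 0
    INSERT INTO dbo.QueueTable (Id, Payload)
    VALUES (@Id, @Payload);

-- 2008 onwards: a single MERGE statement does it in one pass
MERGE dbo.QueueTable AS target
USING (SELECT @Id AS Id, @Payload AS Payload) AS source
ON target.Id = source.Id
WHEN MATCHED THEN
    UPDATE SET Payload = source.Payload
WHEN NOT MATCHED THEN
    INSERT (Id, Payload) VALUES (source.Id, source.Payload);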

Continue reading “Insert or Update Approaches in SQL Server”

EXEC sp_executeSQL

Today I am going to talk about the stored procedure sp_executesql and the benefits it has over EXEC @SQL. Microsoft actually recommend that you use sp_executesql instead of the EXECUTE statement. This is because of parameter substitution: the statement stays parameterised, so its execution plan can be reused, which makes sp_executesql a far more efficient method of executing dynamic SQL.

Continue reading “EXEC sp_executeSQL”
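
As a hedged illustration of the difference (the table and column names here are made up):

-- EXEC bakes the value into the string, so each distinct value compiles a new plan
DECLARE @sql nvarchar(max)
DECLARE @name sysname
SET @name = N'Smith'
SET @sql = N'SELECT * FROM dbo.Customers WHERE Surname = ''' + @name + N''''
EXEC (@sql)

-- sp_executesql passes @name as a true parameter, so the plan can be reused
SET @sql = N'SELECT * FROM dbo.Customers WHERE Surname = @name'
EXEC sp_executesql @sql, N'@name sysname', @name = @name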

xp_regRead Cheat Sheet

xp_regread is an undocumented extended stored procedure. It enables you to read a value from the registry path that you specify via T-SQL.


EXEC [master].[dbo].[xp_regread] @rootkey = 'HKEY_LOCAL_MACHINE',
    @key = 'Software\Microsoft\MSSQLServer\Setup',
    @value_name = 'SQLPath'

You can use the OUTPUT parameter with xp_regread to capture the value in a variable.

DECLARE @outputValue nvarchar(128)
EXEC [master].[dbo].[xp_regread] @rootkey = 'HKEY_LOCAL_MACHINE',
    @key = 'Software\Microsoft\MSSQLServer\Setup',
    @value_name = 'SQLPath',
    @value = @outputValue OUTPUT
SELECT @outputValue

You can check who has access to xp_regread by running the following query in the master database:


SELECT
    OBJECT_NAME(major_id) AS [Extended Stored Procedure],
    USER_NAME(grantee_principal_id) AS [User]
FROM sys.database_permissions
WHERE OBJECT_NAME(major_id) = 'xp_regread'

As of SQL Server 2012, it appears that the public role is granted this permission by default.
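
If that is broader than you would like, removing the grant is straightforward; a hedged example (run in master, and test the impact first, as some monitoring tools rely on xp_regread):

USE [master]
GO
REVOKE EXECUTE ON [dbo].[xp_regread] FROM [public]
GO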

SSIS 2012 Deployment Fail Part Two: SSIS Server Maintenance Job

Recently I posted about a timeout issue I was getting when deploying SSIS projects. It was mainly down to too much log data and too many previous project versions being kept in the SSISDB on our test environments. My solution was to run the SSIS Server Maintenance Job prior to every deployment to mitigate the timeout. By and large this has been super effective.

However, that job can take a considerable amount of time to run. And it is not just our test environment that has seen issues: we rely on SSIS heavily, with schedules running every 5 minutes or so, so lots and lots of log data is created.
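
How much history accumulates is governed by the catalog properties in SSISDB. As a hedged aside, dialling these down (the values here are illustrative) reduces how much the maintenance job has to purge:

USE [SSISDB]
GO
-- keep a week of operational logs and only a handful of project versions
EXEC [catalog].[configure_catalog] @property_name = N'RETENTION_WINDOW', @property_value = 7
EXEC [catalog].[configure_catalog] @property_name = N'MAX_PROJECT_VERSIONS', @property_value = 5
GO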

Looking around the internet, I found an article (linked below) that rewrites the stored proc run when deleting the log data so that it truncates the tables instead. I’ve tested the stored proc provided and it works well. It still takes a long time, around 45 minutes, but that is light speed compared to the 6 hours we were getting on a daily basis.

We don’t want this stored proc to replace the one that runs as part of the Maintenance Job, as we’d lose all ability to trace any issues.

In the comments section there is also a suggestion for indexes to create that have helped the purges on subsequent runs. It will be worth seeing if the daily job can now cope; the indexes should also keep the reports more snappy, which is no bad thing.

http://thinknook.com/truncate-ssis-catalog-database-operation-log-tables-2012-11-03/

SSIS 2012 Deployment Fail

(Update: Part 2 focuses on clearing out the SSISDB by creating a new stored proc based on the one used by the maintenance job. If your Maintenance Job is taking hours to run, you need to check this out.)

No WiX Wednesday this week, owing to commitments in real life. Instead, here is something regarding SSIS 2012 Deployment. Enjoy!

As part of our CI and test builds we have automated the deployment of two SSIS projects. One is fairly large and the other contains only two dtsx packages. Recently we have been getting timeout issues when deploying the solutions.

Continue reading “SSIS 2012 Deployment Fail”

Find Largest Tables To Compress

In this post I’m going to show some T-SQL that reports the compression state of your tables and helps you quickly find the objects that are worth compressing.

Last week brentozar.com ran one of their Technology Triage Tuesdays on compression in SQL Server. The video does not appear to be up yet, but it provided good insight, certainly better than the TechNet pages on the same subject. As we have a multi-terabyte data warehouse at work, on an Enterprise-licensed instance of SQL Server 2012, I’m familiar with data compression in SQL Server. Until recently, however, another of our SQL Servers was on a Standard licence. That instance has now been upgraded to Enterprise, so I was able to compress the database: although it is not as large as our data warehouse, some of its larger tables were well worth compressing.
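
The full script is behind the link below, but as a rough sketch, something like this lists the biggest tables alongside their current compression setting, and sp_estimate_data_compression_savings can then size up a candidate (the table name dbo.BigTable is illustrative):

-- Largest tables with their current compression setting (approximate sizes)
SELECT TOP (20)
    s.name + '.' + t.name AS [table],
    p.data_compression_desc AS [compression],
    SUM(a.total_pages) * 8 / 1024 AS [size_mb]
FROM sys.tables t
INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
INNER JOIN sys.partitions p ON p.object_id = t.object_id
INNER JOIN sys.allocation_units a ON a.container_id = p.partition_id
GROUP BY s.name, t.name, p.data_compression_desc
ORDER BY [size_mb] DESC

-- Estimate the savings for a candidate table
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'BigTable',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE'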

Continue reading “Find Largest Tables To Compress”

Delete Old Tables On a Schedule Part Two: The Stored Proc

It’s been a massively busy week, what with deploying clustered SQL Servers on Sunday, a release this weekend, and a new project starting next week for the dev team (which means it starts this week for me), plus the usual drama of month-end loading.

Today, I have been up since 5am working with cubes using PowerShell, so am still telecommuting. My own proclivities towards the activity are documented here, and to further the conversation, linked below is an article on the Business Insider website about how metrics were used to make the decision. Apparently someone checked the VPN to see if people were logging in enough, and apparently they weren’t. I’m good friends with one of the network guys, so I asked him about the VPN stats we have. It turns out you can see how much someone has downloaded in a session, allowing people to see just how much traffic you’re generating. The network guy pulled up the stats and showed me one person who logged on for an hour, but whose traffic was barely enough to have checked his emails. Conversely, some sessions ran into the gigabytes of network traffic. I asked him whether management ever ask to see these stats. “Often” was the reply.

http://www.businessinsider.com/how-marissa-mayer-figured-out-work-at-home-yahoos-were-slacking-off-2013-3

I also discovered that the script I wrote in this post was not running as a job, so I made it into a stored proc and call that another way. I have also made it more agnostic, so you can select the schema you want to drop the tables from.


USE DbName
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[DropTables]
(
@LengthOfTime int, -- age threshold, in hours
@schema sysname    -- schema to drop tables from
)
AS
BEGIN
SET NOCOUNT ON;

DECLARE @tname sysname
DECLARE @sql VARCHAR(max)

-- user tables in the given schema created before the cutoff,
-- i.e. older than @LengthOfTime hours
DECLARE db_cursor CURSOR READ_ONLY FOR
SELECT name
FROM sys.objects
WHERE create_date < DATEADD(hh, -@LengthOfTime, GETDATE())
AND type = 'U'
AND SCHEMA_NAME(schema_id) = @schema
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @tname
WHILE @@FETCH_STATUS = 0
BEGIN
-- QUOTENAME guards against unusual characters in object names
SET @sql = 'DROP TABLE ' + QUOTENAME(@schema) + '.' + QUOTENAME(@tname)
EXEC (@sql)
--PRINT @sql
FETCH NEXT FROM db_cursor INTO @tname
END
CLOSE db_cursor
DEALLOCATE db_cursor
END
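
A hypothetical call, dropping every user table in the staging schema that is more than 24 hours old (the schema name is illustrative):

EXEC [dbo].[DropTables] @LengthOfTime = 24, @schema = 'staging'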

List of Disks and Free Space

Recently I have been supporting our move from TFS 2010 to TFS 2012. The database instance is hosted on a server that I do not have log-on rights to. During the upgrade, it became clear that one of the tables was growing at a rapid rate. This is because rows were being moved from one table to another, and there were a lot of rows to move: I came back from my Christmas break on the 31st and apparently it had been running since the 28th!

So I needed to know how much space the disk had left and whether we needed to increase it, but as I did not have access to the server I had to check from within SQL Server before requesting a disk increase. Running the extended stored procedure below showed me how much free space was left on each disk.

EXEC [master].[dbo].[xp_fixeddrives]

I was able to check which disk the database files were on by running the query in my previous post:

https://phoenixultd.wordpress.com/2012/12/24/sql-server-database-file-size-and-free-space/