Cluster Log

It is possible to get a portion of the cluster log right from your desktop workstation…

 

Open a command prompt, start PowerShell and execute the following command:

Get-ClusterLog -Cluster <name of cluster> -Destination <path on your local workstation for the file> -Node <NetBIOS name of cluster node> -TimeSpan <number of minutes to collect, counting backwards from now>

 

Posted in Clustering

Managing Windows Server Core from the command line

Commands for managing a Server Core instance from the command line…

Disk Space:

wmic logicaldisk where "DeviceID='C:'" get caption,size,freespace (all values in bytes)

PageFile:

wmic pagefile list /format:list

wmic pagefileset set InitialSize=4096,MaximumSize=8192 (all sizes in MB)

Install Windows Features:

https://technet.microsoft.com/en-us/library/jj205467(v=wps.630).aspx

Change startup of windows services:

sc config <service name> start= auto|demand|disabled (note: the space after start= is required)

User Profiles

rundll32.exe sysdm.cpl,EditUserProfiles

Change Drive Name

LABEL drive: newname

LABEL /MP volume newname

Posted in SQL on Server Core

SQL Agent jobs, linked servers and minimal privileges

Basically, this issue is all described here…

https://support.microsoft.com/en-us/kb/2492477

In a nutshell, a T-SQL job step executes in the context of the job owner. If the job owner is sa or another member of the sysadmin role, it is able to make a linked server call successfully.

However, if the SQL Agent T-SQL job step runs under an owner (or proxy account) that is not a member of the sysadmin role, it will fail because of impersonation: when the job owner is not a sysadmin, part of the linked server call uses EXECUTE AS. This is by design, a built-in security feature. The problem is that on one hand Microsoft recommends the use of Windows Authentication, yet on the other hand the suggested fix for this issue is SQL authentication and login mapping. If you are anything like me, you don't have SQL authentication (a.k.a. Mixed Mode) enabled on your server at all.

Without the use of a linked server, there are only a few other ways to connect to another server, those being:

SQLCMD

OPENROWSET

OPENDATASOURCE

Only SQLCMD and OPENDATASOURCE will let you execute a stored procedure. When it comes to executing one from a T-SQL job step, there is only one option: OPENDATASOURCE.

Basically, the syntax is:

EXEC OPENDATASOURCE('SQLNCLI', 'Data Source=<my SQL Server>;Integrated Security=SSPI').msdb.dbo.sp_start_job @job_name='Brent'

In this example, I have specified SQLNCLI as the provider_name. If you are running, for example, SQL2014, then you will have SQLNCLI11 available as a provider. Therefore, if you always supply SQLNCLI as the provider, SQL Server will always redirect to the latest version of the SQL Native Client.

By using this method you don’t have to use elevated privileges and you shouldn’t have to modify your processes too much.

Posted in Security, T-SQL, Tips

AG Management

AG in this case means Availability Group, part of the AlwaysOn Availability Groups feature introduced in SQL Server 2012.

Whilst the databases in the AG are kept in sync for changes, there is no built-in synchronisation method for logins, SQL Agent jobs etc.

Here is how I manage jobs…

All jobs are created on all the nodes in the AG, all with the same schedule, and all enabled.

Each job has a check to see if it is the “active node”.

If it is a single step job, then build this into each job on each node:

DECLARE @dbname NVARCHAR(128) = '<your AG db name goes here>'
IF sys.fn_hadr_is_primary_replica(@dbname) = 1
    BEGIN
        EXEC database.schema.SP
    END
ELSE
    RAISERROR ('This is not the primary', 0, 0)

It is very important that the job step's database is master. If you script out the job and apply it to the other nodes with the replica database set as the database to execute in, the job will generate an error like "Cannot log onto server (local)" when it runs, because it is trying to start in a replica database, which is not accessible on a secondary. With this check in place, the job either executes on the active node or exits gracefully: RAISERROR with a severity lower than 11 will not cause the job to "fail". Because the step runs in master, make sure the database name is included in the EXECUTE, i.e. use the three-part name for the stored procedure.

The second way to do this is with multi-step jobs like so;

The very first job step is this (set up in master):

DECLARE @dbname NVARCHAR(128) = '<your AG db name goes here>'
IF sys.fn_hadr_is_primary_replica(@dbname) = 1
    BEGIN
        RAISERROR('This is the primary', 0, 0)
    END
ELSE
    RAISERROR('This is not the primary', 11, 0)

What is happening here is: if this is the primary node, the RAISERROR with the message "This is the primary" has a low enough severity for the step to succeed. Set the step's On Success action to go to the next step, and its On Failure action to go to the last step, which I will detail in a sec. If this is not the primary node, the step will fail, but to make sure we don't record a "job failure" when it really isn't one, the On Failure action jumps to a last step which is simply:

PRINT 'This is not the primary database replica, exiting without error'

It is important that this step can only be reached from the first step's On Failure action. Set your actual work steps to Quit on Success and Quit on Failure, so you can still be alerted on genuine job failures.

Note the use of the severity values in the RAISERROR statements.

Posted in Availability Groups

SQL Server Job Execution Status Output

It is not possible to use INSERT…EXEC to call sp_help_job and capture the results into a table for querying, e.g. when you want to get the execution status of a job through some form of automation. The error you will receive is this:

An INSERT EXEC statement cannot be nested.

This is because sp_help_job already uses INSERT…EXEC internally (it calls other procedures within).

The only way to do this successfully is by using OPENROWSET. You will have to enable Ad Hoc Distributed Queries (via sp_configure) to be able to do this.
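Enabling that option looks like this (Ad Hoc Distributed Queries is an advanced option, so 'show advanced options' must be turned on first; run as a sysadmin):

```sql
-- Expose advanced options, then enable ad hoc distributed queries
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
```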

Here is an example of how to use this:

SELECT * FROM OPENROWSET('SQLNCLI', 'SERVER=<my_sql_server>;TRUSTED_CONNECTION=YES;', 'EXEC msdb.dbo.sp_help_job @job_name = ''<my_sql_agent_job_name>'', @job_aspect = ''JOB''')

Posted in T-SQL, Tips

Creating a plan guide

I initially had lots of trouble creating my first plan guide. There are some strict rules to getting them to work, but when they do, it’s magical.

Plan guides are unique to a database – you need to make sure you are in the right database when you create it. If a query runs in master but accesses other databases, you need to create your guide in master. Likewise, if the query is local to one database, create it in that database.

To look at existing plan guides…

select * from sys.plan_guides

Once you have determined the reason for creating a plan guide (vendor product, middleware code from entity framework etc), you’ll need to start preparing the script. Plan guides can only be created in T-SQL. The command is:

EXEC sys.sp_create_plan_guide @name = ''
                            , @stmt = ''
                            , @type = ''
                            , @module_or_batch = ''
                            , @params = ''
                            , @hints = ''

@name – Is the name you will give the plan guide.

@stmt – Is the SQL statement. This is the catch: it has to match exactly, so you have to capture it from a SQL trace. Use the TextData field in the trace to identify your query, then right click on the row and select Extract Event Data. Be sure to run a server side trace, i.e. a SQL Trace writing to a file; don't use the Profiler GUI, as there is too much risk of it affecting the SQL Server. You can then find your particular query by using the trace functions to query the trace file:

SELECT * FROM ::fn_trace_gettable('<path to the trace file>', DEFAULT)

Make sure you have the StartTime column in your trace, so you can identify the row easily when you open Profiler to extract it. When you Extract Event Data, save it as a .sql file and then open it in SSMS.

Copy the T-SQL from the Event Data unchanged into the @stmt value. In a parameterised query, do not include the parameter declarations, only the SQL.

@type – Is one of three options; in most cases it will be SQL.

@module_or_batch – Make this NULL.

@params – This is the section of the query taken from the trace that declares the parameters. Include only the parameter declarations, not their values, e.g.

N'@p0 int,@p1 int,@p2 nvarchar(4000)'

@hints – This is the whole reason for creating the plan guide. Enter your hint here, following the query hint rules, e.g.

OPTION (MAXDOP 1)
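Putting the pieces together, a complete call might look like the following. This is a hypothetical illustration: the plan guide name, statement text and parameter list are placeholders, not from the original post.

```sql
-- Hypothetical example: force serial execution for one parameterised query.
-- The @stmt text must match the traced statement exactly, character for character.
EXEC sys.sp_create_plan_guide
      @name            = N'PG_Orders_MaxDop1'
    , @stmt            = N'SELECT * FROM dbo.Orders WHERE CustomerId = @p0'
    , @type            = N'SQL'
    , @module_or_batch = NULL
    , @params          = N'@p0 int'
    , @hints           = N'OPTION (MAXDOP 1)';
```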

Once you have successfully created a plan guide, matching query plans in the cache are removed. If you have lots of single use plans and you've created a plan guide to force parameterisation, you will see all these plans disappear.

The next step is testing the plan guide. The best way to do this is by querying the plan cache and looking at the execution plan. To query the plan cache, use the following, inserting a snippet of the query to find the object in the cache:

SELECT * FROM sys.dm_exec_cached_plans AS cp
INNER JOIN sys.dm_exec_query_stats AS qs ON qs.plan_handle = cp.plan_handle
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '% enter your snippet here %'

Once you have found your query, open the query plan and, in the graphical plan window, right click and select Show Execution Plan XML. Inside the XML, Ctrl-F for "PlanGuide". If this exists, it confirms that the plan guide has been applied.

As mentioned earlier, to view a Plan Guide’s existence, query sys.plan_guides in the correct database.

To drop a plan guide, execute:

EXEC sp_control_plan_guide @operation = 'drop', @name = <name of the plan guide>

If at any time any sort of upgrade occurs, of either the application or SQL Server, you need to re-evaluate the use of the plan guide.

cheers.

Posted in Plan Guides

Scripting out all indexes

The following script will script out all indexes and can be modified to do just clustered or non-clustered indexes by adjusting the upper CTE query's WHERE clause (highlighted with comments). You can add DROP_EXISTING = ON to the bottom query if you want to move an index to a new filegroup, for example.

WITH    indexCTE
AS ( SELECT DISTINCT
o.object_id
, i.index_id
, i.name AS IndexName
, i.is_unique
, i.has_filter
, i.filter_definition
, ds.name AS FilegroupName
, p.data_compression
, i.is_padded
, i.fill_factor
, i.ignore_dup_key
, i.allow_row_locks
, i.allow_page_locks
FROM     sys.objects AS o
INNER JOIN sys.indexes AS i ON i.object_id = o.object_id
INNER JOIN sys.data_spaces AS ds ON ds.data_space_id = i.data_space_id
INNER JOIN sys.partitions AS p ON p.object_id = i.object_id AND p.index_id = i.index_id --inner join causes exclusion of XML indexes
INNER JOIN sys.index_columns AS ic ON ic.object_id = i.object_id
AND ic.index_id = i.index_id
INNER JOIN sys.columns AS c ON c.object_id = ic.object_id
AND c.column_id = ic.column_id
WHERE    o.is_ms_shipped = 0
AND i.index_id > 0
AND i.is_primary_key = 0 --comment out to include PKs
AND i.is_unique_constraint = 0 --comment out to include unique constraints
) ,
indexCTE2
AS ( SELECT   OBJECT_SCHEMA_NAME(indexCTE.[object_id]) AS SchemaName
, OBJECT_NAME(indexCTE.[object_id]) AS TableName
, IndexName
, CASE indexCTE.is_unique WHEN 1 THEN ' UNIQUE' ELSE '' END AS UniqueIndex
, indexCTE.has_filter
, indexCTE.filter_definition
, indexCTE.FilegroupName
, CASE indexCTE.index_id WHEN 1 THEN ' CLUSTERED' ELSE ' NONCLUSTERED' END AS IndexType
, indexCTE.data_compression
, CAST(indexCTE.is_padded AS VARCHAR(3)) AS is_padded
, CAST(indexCTE.fill_factor AS VARCHAR(3)) AS fill_factor
, CAST(indexCTE.ignore_dup_key AS VARCHAR(3)) AS ignore_dup_key
, CAST(indexCTE.allow_row_locks AS VARCHAR(3)) AS allow_row_locks
, CAST(indexCTE.allow_page_locks AS VARCHAR(3)) AS allow_page_locks
, ( SELECT    CASE WHEN ic.is_descending_key = 1
THEN '[' + c.name + '] DESC, '
ELSE '[' + c.name + '] ASC, '
END
FROM      sys.columns AS c
INNER JOIN sys.index_columns AS ic ON c.object_id = ic.object_id
AND ic.column_id = c.column_id
AND ic.is_included_column = 0
WHERE     indexCTE.object_id = ic.object_id
AND indexCTE.index_id = ic.index_id
ORDER BY ic.key_ordinal
FOR
XML PATH('')
) AS ixcols
, ISNULL(( SELECT DISTINCT '[' + c.name + '], '
FROM   sys.columns AS c
INNER JOIN sys.index_columns AS ic ON c.object_id = ic.object_id
AND ic.column_id = c.column_id
AND ic.is_included_column = 1
WHERE  indexCTE.object_id = ic.object_id
AND indexCTE.index_id = ic.index_id
--ORDER BY ic.key_ordinal
FOR
XML PATH('')
), '') AS includedcols
FROM     indexCTE
)
SELECT  'CREATE' + UniqueIndex + IndexType + ' INDEX [' + IndexName + '] ON [' + SchemaName + '].[' + TableName + ']'
+ ' ( ' + SUBSTRING(ixcols, 1, LEN(ixcols) - 1) + CASE LEN(includedcols) WHEN 0 THEN ' )' ELSE ' ) INCLUDE ( ' + SUBSTRING(includedcols, 1, LEN(includedcols) - 1) + ' ) ' END
+ CASE WHEN has_filter = 1 THEN ' WHERE ' + filter_definition ELSE '' END
+ ' WITH ( PAD_INDEX = ' + CASE WHEN is_padded = 1 THEN 'ON' ELSE 'OFF' END + ', '
+ CASE WHEN CAST(SERVERPROPERTY('Edition') AS VARCHAR(128)) LIKE 'Enterprise%' THEN ' ONLINE = ON ' ELSE ' ONLINE = OFF' END + ', '
+ CASE WHEN fill_factor > 0 THEN 'FILLFACTOR = ' + fill_factor + ', ' ELSE '' END
+ 'IGNORE_DUP_KEY =' + CASE WHEN [ignore_dup_key] = 1 THEN ' ON' ELSE ' OFF' END + ', '
+ 'ALLOW_ROW_LOCKS =' + CASE WHEN [allow_row_locks] = 1 THEN ' ON' ELSE ' OFF' END + ', '
+ 'ALLOW_PAGE_LOCKS =' + CASE WHEN [allow_page_locks] = 1 THEN ' ON' ELSE ' OFF' END
+ CASE indexCTE2.data_compression WHEN 0 THEN ' )' WHEN 1 THEN ', DATA_COMPRESSION = ROW )' WHEN 2 THEN ', DATA_COMPRESSION = PAGE )' ELSE '' END
+ ' ON [' + FilegroupName + '];'
FROM    indexCTE2

Posted in T-SQL

Disk allocation unit size

How do you determine what allocation unit size was selected when a disk was formatted?

If you suspect a disk was formatted incorrectly, e.g. a SQL Server data volume, and you want to check what allocation unit size was chosen when it was formatted, use this…

From a CMD window, type FSUTIL FSINFO NTFSINFO <drive letter>:

e.g. FSUTIL FSINFO NTFSINFO D:

and look at the line: Bytes Per Cluster

The accepted rule for all SQL Server drives is 64 KB, which is reported as 65536 bytes.
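If you would rather check from within SQL Server, and xp_cmdshell is enabled on the instance, the same command can be wrapped; a sketch, assuming a D: drive:

```sql
-- Assumes xp_cmdshell has been enabled via sp_configure; D: is a hypothetical drive
EXEC master..xp_cmdshell 'FSUTIL FSINFO NTFSINFO D:';
```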

Posted in Performance, Tips

Linked Server on SQL2014

 

If a linked server was created on a SQL2008 instance specifying a provider of SQLNCLI10 (SQL Native Client 10) and the server is upgraded to SQL2012 or SQL2014, any calls using this linked server will fail with the following error:

Error: 8522, Severity: 16, State: 3.

Microsoft Distributed Transaction Coordinator (MS DTC) has stopped this transaction.

Fix: simply change the linked server to use the SQLNCLI11 provider.

One of the reasons to specify the provider explicitly in a linked server definition is so that you can supply a provider string, for example to specify a failover partner.
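If you cannot simply edit the provider in place, one approach is to drop and recreate the linked server with the new provider. A sketch, with hypothetical server names (and a hypothetical failover partner in the provider string to match the point above):

```sql
-- Hypothetical names; adjust to your environment before running
EXEC master.dbo.sp_dropserver @server = N'MyLinkedServer', @droplogins = 'droplogins';
EXEC master.dbo.sp_addlinkedserver
      @server     = N'MyLinkedServer'
    , @srvproduct = N''
    , @provider   = N'SQLNCLI11'
    , @datasrc    = N'RemoteHost'
    , @provstr    = N'FailoverPartner=RemoteMirrorHost';
```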

 

Posted in SQL2014

MDW (Management Data Warehouse) Missing Index

The cleanup job for the MDW (mdw_purge_data) has a step that performs the following query…

SELECT qt.sql_handle
    INTO #tmp_notable_query_text
    FROM snapshots.notable_query_text as qt
    WHERE NOT EXISTS (SELECT snapshot_id
                      FROM snapshots.query_stats as qs
                      WHERE qs.sql_handle = qt.sql_handle)

and, as a result, suffers from a missing index on:

[snapshots].[query_stats] ([sql_handle])

This index can change the run time of the purge from hours to minutes.
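A sketch of creating it (the index name here is my own choice, not part of the MDW schema; run it in your Management Data Warehouse database):

```sql
-- Hypothetical index name; supports the NOT EXISTS probe in the purge query
CREATE NONCLUSTERED INDEX IX_query_stats_sql_handle
    ON snapshots.query_stats (sql_handle);
```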

 

Posted in MDW