Friday, 5 April 2013

SSDT syntax highlighting disappeared. How I fixed it.

I recently struggled with a problem where I seemed to lose the syntax highlighting in Visual Studio 2012 for SSDT projects.  It took me a while to stumble onto a solution after multiple reinstalls, and I almost gave up and had the PC rebuilt (I like syntax highlighting).

The tl;dr solution is:
  • Open a Visual Studio Command Prompt (as administrator)
  • Run:
    DEVENV /SETUP

The problem (no pretty colours)

I'd been having an issue with my PC build where my SQL Server 2012 installation seemed to lose some WMI settings.  I could temporarily fix that with a mofcomp command, but, getting frustrated with that, I decided to reinstall SQL Server 2012.  That was my first mistake.  After the reinstall of SQL Server 2012 I noticed I'd lost all the syntax highlighting for SSDT projects, as per this Stack Overflow question (not mine, by the way): http://stackoverflow.com/questions/14365678/limited-vs2012-sql-syntax-highlighting

That article has the rather unhelpful "Hey, it just fixed itself" answer.  No such luck for me.  After a general failing of my Google-fu, I thought it would just be easier to reinstall Visual Studio 2012, SSDT etc.  So that's what I did.  Alas, I still had no syntax highlighting, and no options for the SQL "types" to manually update the colours.

To list all I had tried:
  • Removed and reinstalled SQL Server 2012
  • Patched SQL Server 2012 with SP1
  • Reinstalled SSDT for Visual Studio 2012
  • Removed and reinstalled Visual Studio 2012
  • Patched Visual Studio 2012 with Update 1
  • Tried an earlier version of SSDT
  • Updated SSDT (again)
All yielded no difference.

Twitter failed me too.

The solution

I had pretty much given up and decided to get the PC rebuilt.  But one last Google search yielded a couple of potential fixes that I thought I would try.  These were two further Stack Overflow articles about problems that were (at best) vaguely similar to mine:
  • http://stackoverflow.com/questions/15643145/cannot-find-user-keywords-section-in-visual-studio-2012
  • http://stackoverflow.com/questions/11988077/plain-c-sharp-editor-in-visual-studio-2012-no-intellisense-no-indentation-no
The second one looked more promising, but infinitely more difficult for a simple DBA.  So I thought I'd give the first article a go.

The first Stack Overflow article says to perform the following steps:
  • Open a Visual Studio Command Prompt (as administrator)
  • Run:
    DEVENV /SETUP
After all my reinstallations I really couldn't see this making any difference.  I was happy to be wrong!  These simple steps brought back the options to update colours for syntax highlighting.

Monday, 4 March 2013

Data warehouse starter for 10

I've been working on a little project in my spare time, as scarce as that is lately, and as part of that I was putting together some dimensions for a star schema.  This isn't the first time I've been around this block so I dug out some old code.  I also thought "I can't be the first to do, need or want this".

So I took some of the common dimensions and the code to populate them and started a SSDT project.  I've added a few more dimensions since then and have since uploaded it to BitBucket.

I've opened up the repository to allow anyone who is interested or would benefit from the beginnings of a data warehouse model.  You can find it here: https://bitbucket.org/kchenery/warehouse-model

Schema

Currently you'll find it has dimensions for:
  • Date
  • Time
  • Country
  • Continent
  • Currency
  • HTTP and FTP status codes - thanks to David Curlewis (b | t) for the inspiration on these
  • Languages
  • Top Level Domains
If you're in New Zealand, you'll find a couple of specific dimensions:
  • ANZSIC codes
  • NZ Regions
It's far from complete.  So if you find something that needs fixing, or like it but want it expanded, feel free to log an issue/request in BitBucket and I'll see what I can do for you.  For example, I want to add currency symbols to the currency dimension (but haven't yet found a good source).

Data

Along with the schema to create the basic dimensions, I've included post deploy scripts in the SSDT project to pre-populate the data.  Most of these are pretty simple scripts that merge the results of a long SELECT...UNION ALL statement into the target dimension table.
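
As a sketch of that pattern (the table and column names here are made up for illustration rather than taken from the repository), a seed script looks something like this:

-- Illustrative only: dimCurrency and its columns are assumed names
MERGE INTO dbo.dimCurrency AS tgt
USING (
    SELECT 'NZD' AS CurrencyCode, 'New Zealand Dollar' AS CurrencyName UNION ALL
    SELECT 'AUD', 'Australian Dollar' UNION ALL
    SELECT 'USD', 'United States Dollar'
) AS src
    ON tgt.CurrencyCode = src.CurrencyCode
WHEN MATCHED THEN
    UPDATE SET CurrencyName = src.CurrencyName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CurrencyCode, CurrencyName)
    VALUES (src.CurrencyCode, src.CurrencyName);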

The Date dimension has a procedure that creates a range of dates.  It can be called to customise the range loaded, or called again later to extend that range.
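
A call to it might look something like this (the procedure name and parameters below are assumptions for illustration; check the project for the actual signature):

-- Hypothetical name and parameters, for illustration only
EXEC dbo.LoadDimDate @StartDate = '20100101', @EndDate = '20201231';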

You'll also notice each dimension table has a "special" record that represents the "unknown" dimension value.  The unknown record, and any other special records, will always have a negative value for the dimension key, e.g. -1 is always the unknown record.

Holiday Dates

Now a Date dimension, like any dimension or even database, is only as good as the data in it.  You can customise the dates by adding your own regional holiday dates into the dimHolidayDate table.
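
For example, something along these lines (the column names are assumptions; check the dimHolidayDate definition in the project for the real ones):

-- Column names assumed for illustration
INSERT INTO dbo.dimHolidayDate (HolidayDate, HolidayName)
VALUES ('20130206', N'Waitangi Day'),
       ('20131225', N'Christmas Day');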

Saturday, 22 September 2012

SSDT SQL Agent Job Deployment Template

SQL Server Data Tools (SSDT) is a great product for database development.  It even includes the ability to code and publish some "server level" objects:
  • Logins (so we can associate database users)
  • Extended Events
  • Linked Servers
  • Cryptography keys
  • Service Broker queues
  • Server level triggers
  • Server roles and memberships
But one thing currently omitted is the ability to define and deploy SQL Agent jobs.  The only practical solution is to manage these through post deployment scripts.

As I've been working on a project that will have a fairly large number of SQL Agent jobs, I have come up with a pattern for maintaining these that might be of use to other people.  It's a pretty simple approach; it does involve a few manual steps, but as you'll see these aren't too bad.

For my projects I like to structure my deployment scripts into a "Scripts" folder.  Under that I put a number of sub-folders for various tasks, e.g.:
  • Data population/seeding
  • Miscellaneous (usually this just has a script to drop the __Refactor table)
  • Jobs
The "root" Scripts folder just holds a Pre and Post deployment script that will reference other scripts in the various directories.

E.g:
/*
Post-Deployment Script Template       
--------------------------------------------------------------------------------------
 This file contains SQL statements that will be appended to the build script.  
 Use SQLCMD syntax to include a file in the post-deployment script.   
 Example:      :r .\myfile.sql        
 Use SQLCMD syntax to reference a variable in the post-deployment script.  
 Example:      :setvar TableName MyTable       
               SELECT * FROM [$(TableName)]     
--------------------------------------------------------------------------------------
*/

/* Data Population */
:r ".\Data\Seed Product Categories.sql"

/* Job Categories */
:r ".\Jobs\Job Categories.sql"

/* Jobs */
:r ".\Jobs\Example Job.sql"

/* Refactor table tidy up */
:r ".\Misc\Drop the __Refactor table.sql"


This allows me to add and remove scripts quickly and easily.  I can even just comment one out if I want.  It's also a good way to quickly find deployment code (as long as you name your files sensibly).

In the SSDT project, that'd look something like this:

The job template script

As you'll see from the partial screenshot above, there is also a file named "_Template.sql" in the Jobs folder that is not referenced in the PostDeployment.sql file.  This template file contains a stub I use for creating the job deployment scripts.


The _Template.sql file contains stubs for various steps we want to go through:
  1. Define a SQLCMD variable for the job name we want to use.  This parameter is used throughout the script as we reference the name a few times.
  2. Find out if there's an existing job with the same name.  If there is we need to determine if we're going to do nothing and leave it alone or if we're going to redeploy the job.
  3. If there is no existing job with the same name we will deploy the "new" job.
  4. If there is an existing job we will check another SQLCMD variable RedeployJobs.  If that variable is set to "Yes" then we'll drop the existing job and recreate it.
  5. Depending on results of the above we'll either deploy the job or simply do nothing.

It looks like this:
/*
Job Deployment Script Template                            
--------------------------------------------------------------------------------------

Instructions
------------
    1.  Copy the contents of this script (you do not need to copy these comments/instructions)
        to a script that will contain the job definition.

    2.  Update the JobName parameter with the desired name.

    3.  Script the job from the server using SSMS.

    4.  Copy the job category to the Job Categories.sql post deployment script.

    5.  Copy the relevant sections from the generated script to the various sections below.

    6.  Ensure the database name is correct.  I recommend using the built in $(DatabaseName)
        parameter.

*/

/* Copy from below this line to the end of the script */
--:setvar JobName "Enter the job name within these quotes and uncomment this line"

PRINT 'Deploying job: $(JobName)';
DECLARE @JobId  UNIQUEIDENTIFIER;

SELECT  @JobId = job_id
FROM    msdb.dbo.sysjobs
WHERE   name = '$(JobName)';

/*
 * First we check existence of the specified job name and whether the RedeployJobs parameter
 * has been set to "YES".
 */
IF UPPER('$(RedeployJobs)') = 'YES'   /* Redeploy the job */
OR @JobId IS NULL                   /* It does not exist so deploy it anyway */
BEGIN;
    /*
     * Either this is a new job or the RedeployJobs parameter is set to YES.  If its an existing job
     * we need to remove it so that we can "redeploy" it.
     */
    IF @JobId IS NOT NULL
    BEGIN;
        PRINT ' Deleting existing job';
        EXEC msdb.dbo.sp_delete_job @job_id = @JobId;

        /*
         * Set the @JobId variable to NULL for the sp_add_job command later on.  If it is not null the
         * server thinks the job is from an MSX server
         */
        SET @JobId = NULL;
    END;
     
    /*
     * Add the job
     */

    /*
     * Add the job step(s)
     */

    /*
     * Add the job schedule
     */

    /*
     * Add the job server
     */
    EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)';

    PRINT ' Created the job "$(JobName)"';
END;
ELSE
BEGIN;
    PRINT ' Bypassing job "$(JobName)" deployment as job exists and RedeployJobs parameter is "$(RedeployJobs)"';
END;

Hopefully the template is pretty clear as far as determining whether to deploy the job or not.


Deploying a new job

I'm going to assume you've created a job in your development environment that you now want to include in your SSDT project.   Here's how we go about that:
  1. First we need to create a new "PostDeployment" script in our Jobs folder
  2. Right click on the Jobs folder --> Add --> Script...
  3. Select "Post-Deployment Script" and call it the same name as your job (simply so you can find it easily).  For the purposes of this blog post, it will be referred to as Example Job.sql

  4. Click Add
  5. Open up the _Template.sql file and copy its contents into the new Example Job.sql script.
  6. In the Example Job.sql script, uncomment the ":setvar JobName ...." line and enter the name of the job between the quotes:
    :setvar JobName "Example Job"
  7. Now open up SQL Server Management Studio and script out your job to a new query window or file.  From here we'll be copying various sections into our new Example Job.sql file to fill in the missing sections.
  8. From the generated script, first copy the sp_add_job section into the line just below the comment:
    /*
     * Add the job
     */
  9. Modify that step so that the @job_name parameter uses the SQLCMD variable, i.e. @job_name = N'$(JobName)'.
  10. For each job step follow the same approach by copying the sp_add_jobstep commands to the line just below the comment:
    /*
     * Add the job step(s)
     */ 
  11. Note: For any step that references the database this project applies to, it is highly recommended that you update the command to use the SQLCMD variable $(DatabaseName), e.g. @database_name = N'$(DatabaseName)'.

  12. I also add the "sp_update_job" command in this section.  If you prefer you could create your own section for that.
  13. Finally add the job schedule by copying the sp_add_jobschedule command below its comment. I like to name the schedule the same as the job and therefore use the $(JobName) SQLCMD variable here too, e.g. @name = N'$(JobName)'.
  14. You should now have something that resembles this:
    :setvar JobName "Example Job"
    
    PRINT 'Deploying job: $(JobName)';
    DECLARE @JobId  UNIQUEIDENTIFIER;
    
    SELECT  @JobId = job_id
    FROM    msdb.dbo.sysjobs
    WHERE   name = '$(JobName)';
    
    /*
     * First we check existence of the specified job name and whether the RedeployJobs parameter
     * has been set to "YES".
     */
    IF UPPER('$(RedeployJobs)') = 'YES'   /* Redeploy the job */
    OR @JobId IS NULL                   /* It does not exist so deploy it anyway */
    BEGIN;
        /*
         * Either this is a new job or the RedeployJobs parameter is set to YES.  If its an existing job
         * we need to remove it so that we can "redeploy" it.
         */
        IF @JobId IS NOT NULL
        BEGIN;
            PRINT ' Deleting existing job';
            EXEC msdb.dbo.sp_delete_job @job_id = @JobId;
    
            /*
             * Set the @JobId variable to NULL for the sp_add_job command later on.  If it is not null the
              * server thinks the job is from an MSX server
             */
            SET @JobId = NULL;
        END;
         
        /*
         * Add the job
         */
        EXEC msdb.dbo.sp_add_job @job_name=N'$(JobName)', 
      @enabled=1, 
      @notify_level_eventlog=0, 
      @notify_level_email=0, 
      @notify_level_netsend=0, 
      @notify_level_page=0, 
      @delete_level=0, 
      @description=N'This is a SSDT example job deployment', 
      @category_name=N'Database Maintenance', 
      @owner_login_name=N'sa', @job_id = @jobId OUTPUT;
    
        /*
         * Add the job step(s)
         */
        EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Step 1', 
      @step_id=1, 
      @cmdexec_success_code=0, 
      @on_success_action=1, 
      @on_success_step_id=0, 
      @on_fail_action=2, 
      @on_fail_step_id=0, 
      @retry_attempts=0, 
      @retry_interval=0, 
      @os_run_priority=0, @subsystem=N'TSQL', 
      @command=N'SELECT * FROM sys.databases', 
      @database_name=N'$(DatabaseName)', 
      @flags=12;
    
        EXEC msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1;
    
        /*
         * Add the job schedule
         */
        EXEC msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'$(JobName)', 
      @enabled=1, 
      @freq_type=4, 
      @freq_interval=1, 
      @freq_subday_type=4, 
      @freq_subday_interval=10, 
      @freq_relative_interval=0, 
      @freq_recurrence_factor=0, 
      @active_start_date=20120922, 
      @active_end_date=99991231, 
      @active_start_time=0, 
      @active_end_time=235959, 
      @schedule_uid=N'64aed10d-3b89-47ae-8f51-ca6d196b1444';
    
        /*
         * Add the job server
         */
        EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)';
    
        PRINT ' Created the job "$(JobName)"';
    END;
    ELSE
    BEGIN;
        PRINT ' Bypassing job "$(JobName)" deployment as job exists and RedeployJobs parameter is "$(RedeployJobs)"';
    END;
    
  15. Save your file
  16. Update the "master" post deployment script to reference the newly created job script

Before you deploy...

If you were to try and deploy this now, it would fail.  We haven't yet added the $(RedeployJobs) variable to the project.  This is very simple to do:
  1. Open up the properties for the project
  2. Select the "SQLCMD Variables" tab
  3. Add the RedeployJobs variable with Yes as the value for Default and Local.

Deploying

Now when you deploy you can decide whether to override any existing jobs with the same names or leave them as they are. As it uses a SQLCMD variable, you can set this from your continuous integration tools. Even if you specify "No" for RedeployJobs, any job that doesn't already exist on the server will still be deployed.
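
As a rough illustration (my own sketch, not part of the original project; the server, database and dacpac names are made up), a SqlPackage publish from a build server could pass the variable on the command line:

SqlPackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" ^
    /TargetServerName:"MYSERVER" /TargetDatabaseName:"MyDatabase" ^
    /Variables:RedeployJobs=No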

Whilst there are quite a few steps to follow, once you've done this once or twice you'll realise that it's a very easy approach.  The main dangers are forgetting to update the generated job names and database references with the appropriate SQLCMD variables.

Monday, 9 July 2012

Cursors and While loops reply

Earlier in the year David Curlewis ( t | b ) wrote a blog post titled "cursors and while loops". If you haven't read that article, please go read it now, as this is basically a reply or follow-up to that post.

In his article, David showed that for the test he was using the cursor was far and away the higher performing query. The WHILE loop took 30+ seconds whereas the cursor query took closer to 0.5 seconds.

I spotted a potential improvement to make the WHILE loop behave more in line with what I expected and, I think, what David expected.

Let's recap

Since the hardware I am running on is no doubt different to David's, and I ran my tests on SQL Server 2012 with Cumulative Update 2 (11.0.2325), there are going to be some timing differences. I believe I have used the same AdventureWorks sample database for SQL Server 2008 (get it from CodePlex), and I have used the scripts David used.

For each test result I have taken the average of 5 tests.

Rather than repeat ALL the cursor tests that David did, I've selected the one that performed the fastest: a GLOBAL STATIC CURSOR. As I said: go read his article if you haven't yet.

The CURSOR approach

------------------------------------------------
-- CURSOR
------------------------------------------------
--DECLARE cursor_name CURSOR [ LOCAL | GLOBAL ]
--     [ FORWARD_ONLY | SCROLL ]
--     [ STATIC | KEYSET | DYNAMIC | FAST_FORWARD ]
--     [ READ_ONLY | SCROLL_LOCKS | OPTIMISTIC ]
--     [ TYPE_WARNING ]
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
GO
SET NOCOUNT ON;
GO
DECLARE @SalesOrderID INT, @LineCount INT;
DECLARE SOID CURSOR GLOBAL STATIC TYPE_WARNING
FOR
    SELECT  SalesOrderID
    FROM    Sales.SalesOrderHeader
    WHERE   OnlineOrderFlag = 1;
 
OPEN SOID;
FETCH NEXT FROM SOID INTO @SalesOrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @LineCount = ISNULL(@LineCount,0) + ISNULL((
            SELECT  COUNT(*)
            FROM    Sales.SalesOrderDetail
            WHERE   SalesOrderID = @SalesOrderID
            ), 0);
 
    FETCH NEXT FROM SOID INTO @SalesOrderID;
END
CLOSE SOID;
DEALLOCATE SOID;
IF @LineCount <> 60398
BEGIN
    RAISERROR('Bad Total',16,1);
    PRINT @LineCount;
END
GO
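
David's original WHILE loop code isn't reproduced here, so purely as an illustration of the shape of loop being compared (this is my own sketch over the same AdventureWorks tables, not David's code and not necessarily the exact tweak discussed), a key-seek WHILE loop looks roughly like this:

------------------------------------------------
-- WHILE loop (illustrative sketch only)
------------------------------------------------
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
GO
SET NOCOUNT ON;
GO
DECLARE @SalesOrderID INT = 0, @LineCount INT = 0;

WHILE 1 = 1
BEGIN
    /* Seek the next online order above the last one processed */
    SELECT TOP (1) @SalesOrderID = SalesOrderID
    FROM    Sales.SalesOrderHeader
    WHERE   OnlineOrderFlag = 1
    AND     SalesOrderID > @SalesOrderID
    ORDER BY SalesOrderID;

    IF @@ROWCOUNT = 0 BREAK;

    SET @LineCount = @LineCount + ISNULL((
            SELECT  COUNT(*)
            FROM    Sales.SalesOrderDetail
            WHERE   SalesOrderID = @SalesOrderID
            ), 0);
END
IF @LineCount <> 60398
BEGIN
    RAISERROR('Bad Total',16,1);
    PRINT @LineCount;
END
GO

Each iteration seeks the next SalesOrderID rather than re-scanning the whole set.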


Monday, 19 March 2012

Using host names? You are doing it wrong!

This is one of my pet peeves about how many people seem to set up their environment, and in particular what they use in their connection strings.

It comes down to this: If you are using the actual host name to connect to your database, you are doing it wrong!

Thursday, 9 February 2012

Listing SQL Server Instances

As a consultant I go into environments where the client simply does not know how many SQL Server instances they have. Perhaps, as a DBA in a company, you've been asked to do a license audit and need to know about all the SQL Server instances in your company. Or, as is common in New Zealand and probably elsewhere, you've just started at a company as their first DBA because they've grown to a size that requires one. The same company has probably allowed too many people to have too many rights, and all kinds of SQL Server instances have sprung up all over the place.

It's a common situation. You can of course physically go to each machine in your company and check what is installed. But I'm a lazy DBA, or I try to be, and I want a solution that lets me have another coffee before lunch time.

If you do a Bingoogle search for something like "list sql server instances" you'll probably find these articles:
Both of which list this PowerShell command:

[System.Data.Sql.SqlDataSourceEnumerator]::Instance.GetDataSources()

I did. And when I did all I could think was "Sweet! How easy is that!?"

Sadly not as easy as it looks.

Wednesday, 8 February 2012

Deleting files by age with PowerShell

Back in November 2011 I blogged about using FORFILES to delete files by age.  I've since been "playing" with PowerShell and intend to blog quite a bit about it over the coming months.

To say PowerShell is powerful is an understatement!  There's plenty of material on the Internet to help you too.  But I wanted to kick off by replacing that old article with a PowerShell script, basically because I find PowerShell far easier to read!

But rather than just dumping a command with little explanation, I want to walk you through it. This also demonstrates how I think through a problem with PowerShell, and maybe that'll help you too. So I start with trying to display the items I want to deal with (be it report on, delete, move, action etc.):
Get-ChildItem "C:\Example"

That will list all the files in the C:\Example directory. A bit like this:
Mode                LastWriteTime     Length Name                                                                       
----                -------------     ------ ----                                                                       
-a---       7/02/2012  11:41 p.m.        598 File A.txt                                                                 
-a---       7/02/2012  11:41 p.m.         95 File B.txt                                                                 
-a---       7/02/2012  11:41 p.m.        311 File C.txt                                                                 
-a---       7/02/2012  11:41 p.m.         12 File D.dat   

Note: You can use DIR too as it is aliased to Get-ChildItem.

OK... but we're only interested in files older than a certain age.
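
As a sketch of where this is heading (the 30-day cutoff and the choice of LastWriteTime are just example assumptions on my part), we can filter that listing before we ever think about deleting anything:

# List (not delete, yet) files older than 30 days
$cutoff = (Get-Date).AddDays(-30)
Get-ChildItem "C:\Example" | Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -lt $cutoff }

Only once that list looks right would I pipe it on to Remove-Item.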