Introduction to the SQL Server Mobile Report Publisher – Now Live on Pluralsight

My latest Pluralsight course, Introduction to the SQL Server Mobile Report Publisher, just went live!

The Mobile Report Publisher is a powerful, under-appreciated tool that lets you quickly and easily create reports not just for the web, but for a variety of platforms such as Apple’s iOS and Google’s Android.

In this course, Introduction to the SQL Server Mobile Report Publisher, you’ll learn to quickly and easily create dashboards for the web, as well as mobile devices such as phones and tablets.

First, you’ll explore how to use the Report Publisher designer to create the layout of the dashboard.

Next, you’ll see how to create datasets and bind them to the dashboard.

Finally, you’ll learn advanced features such as filters to limit the data on the dashboard, as well as drillthroughs to launch a detailed report or web page.

When you’re finished with this course, you’ll have the skills and knowledge of the Mobile Report Publisher needed to create dashboards on multiple platforms.

What’s that you say? You don’t have a subscription but want to watch my course? Hey, no problem. Pluralsight has a 10-day free trial. Just go to https://www.pluralsight.com/pricing/free-trial and sign up!

What Is SQL Server Blocking?

This is a guest post from my friend Kevin Kline. Kevin serves as Principal Program Manager at SentryOne. He is a founder and former president of PASS and the author of popular IT books like SQL in a Nutshell. Kevin is a renowned database expert, software industry veteran, Microsoft SQL Server MVP and long-time blogger at SentryOne. As a noted leader in the SQL Server community, he blogs about Microsoft Data Platform features and best practices, SQL Server trends as well as professional development for data professionals.

Understanding how SQL Server blocking works is key to ensuring your database runs smoothly and optimally, yet it can be a tricky subject to unpick if you are new to it.

You can get an in-depth explanation of SQL Server blocking by reading this guide from SentryOne, but for a briefer introduction to what is involved, read on.


Basics of blocking

In a SQL Server system, two or more processes may need to use the same hardware and software resources simultaneously. Depending on the nature of each process, one of them is granted a ‘lock’ on the resource, which effectively gives it priority over the other processes and lets it leverage the resource ahead of them.

Any process competing with a lock holder is referred to as blocked, since it is effectively made to wait in a queue until the prioritized, locking process has completed and the resources are freed up.

Blocking is vital to ensuring that the integrity of the data remains uncompromised and that the entire ecosystem can operate as usual.

Lock types

To appreciate why and when blocking occurs, it is necessary to look at the different lock modes available in the SQL Server environment.

The most potent of the pack is the Exclusive (X) lock, which is generally applied to make sure that data is modified in a logical order, rather than allowing multiple processes to tinker with it at once. Update (U) is another lock mode that, as the name suggests, is applied during updates so that they follow the proper sequence.

Locks can be applied at several levels, from the row up through the page and table to the entire database, and this hierarchy is useful in determining where blocks might originate.
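
A quick way to see these modes and levels in action is the sys.dm_tran_locks view. Here is a minimal sketch using PowerShell’s Invoke-Sqlcmd (assuming the SqlServer module, Windows authentication, and a local default instance; the same query runs fine in Management Studio):

# List current locks: which session holds or wants them, at what level, in what mode
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
SELECT request_session_id,   -- session holding or waiting on the lock
       resource_type,        -- level: KEY, RID, PAGE, OBJECT, DATABASE, etc.
       request_mode,         -- mode: S (shared), U (update), X (exclusive), etc.
       request_status        -- GRANT if held, WAIT if blocked
FROM   sys.dm_tran_locks;
"@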

Potential problems

While SQL Server blocking can be considered an entirely routine aspect of database operation, there are circumstances in which problems can arise.

For example, blocking may cascade down a chain of processes: one process is blocked by a lock holder and in turn passes the wait on to subsequent processes that are waiting to be executed.

Of course, working out whether a particular block should be considered an issue is not always straightforward, but in general, as long as a block lasts no more than five seconds or so, it should not be too disruptive.

Deadlocks

SQL Server deadlocks are closely related to blocking, although they are not exactly the same. Rather than queuing processes through locks to allow for ordered execution, a deadlock occurs when two or more processes each hold an exclusive lock on a resource another needs, meaning none of them can complete.

In this instance, the server will terminate one of the conflicting processes (the deadlock victim). Like blocking, a deadlock is not intrinsically bad, but it needs to be monitored and managed by the administrator.
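
On SQL Server 2012 and later, the built-in system_health Extended Events session automatically captures recent deadlock reports, so they can be reviewed after the fact. A minimal sketch, again via Invoke-Sqlcmd against a local default instance:

# Pull recent deadlock graphs from the system_health session's .xel files
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
SELECT CAST(event_data AS XML) AS DeadlockGraph
FROM   sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE  object_name = 'xml_deadlock_report';
"@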

Making use of monitoring

As you may have gathered, while SQL Server blocking is commonplace, acceptable, and ultimately necessary to keep a database in good order, it also pays to keep tabs on it to ensure seamless operation.

It is possible to do this manually, although that is a labor-intensive procedure. Using a modern monitoring solution to automatically track blocks and provide alerts if problems arise is much more efficient.
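
If you do want to spot-check manually, the blocking_session_id column of sys.dm_exec_requests is the usual starting point. A minimal sketch, under the same assumptions as above:

# Show currently blocked sessions, who is blocking them, and what they are running
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS current_statement
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.blocking_session_id <> 0;
"@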

Furthermore, it may be helpful to use monitoring software capable of presenting blocking data visually, so it can be digested quickly. As your experience with managing a SQL Server environment grows, your skill at weeding out worrisome blocks will improve, but even veterans need the right tools to make their lives easier.

If you found this post useful, please see SentryOne’s guide on SQL Server blocking at https://www.sentryone.com/sql-server/sql-server-blocking.

Azure PowerShell Playbook: Azure SQL–Now on Pluralsight!

My latest course is now available on Pluralsight! It’s the Azure PowerShell Playbook: Azure SQL. If you aren’t familiar with Pluralsight’s Playbook format, they are fast-paced courses that are almost 100% demo driven. They are meant to be consumed quickly; my course is just one hour and four minutes long. Great lunchtime viewing!

This course shows you how to use PowerShell to manage and migrate your on-premises database up to Azure SQL. In brief, you’ll learn how to:

  • Create resource groups
  • Create and work with Storage Accounts
  • Create a SQL Server in Azure SQL
  • Package up your local database into a bacpac file
  • Import your bacpac file into a new Azure SQL database
  • Execute commands against your new Azure SQL database
  • Cleanup! I even show how to remove everything you’ve created, individually or as a whole

And all of this with PowerShell!

Additionally, I’ve included functions for just about everything listed, so (assuming your subscription gives you access to the samples) you’ll have a great starting point for your own library of cmdlets. (All the code for the functions appears on screen, so if you have to you could always pause and type it in.)
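
To give you a taste of the flow, here is a minimal sketch using the AzureRM cmdlets of that era. These are plain cmdlet calls with made-up resource names, not the reusable functions from the course:

# Log in, then create a resource group to hold everything
Login-AzureRmAccount
New-AzureRmResourceGroup -Name 'ArcaneRG' -Location 'eastus'

# Create a logical SQL Server, then a database on it
$cred = Get-Credential    # SQL admin login for the new server
New-AzureRmSqlServer -ResourceGroupName 'ArcaneRG' -ServerName 'arcanesqlsrv' `
                     -Location 'eastus' -SqlAdministratorCredentials $cred
New-AzureRmSqlDatabase -ResourceGroupName 'ArcaneRG' -ServerName 'arcanesqlsrv' `
                       -DatabaseName 'ArcaneDB'

# Cleanup: removing the resource group removes everything it contains
Remove-AzureRmResourceGroup -Name 'ArcaneRG' -Force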

You can find my new course at:

https://www.pluralsight.com/courses/azure-powershell-sql-playbook

I also wrote an article for RedGate’s SimpleTalk website that aligns well with this course. I dive deeper into the restartability aspect of the way the functions were coded, something I couldn’t get deep into with the video course due to time constraints.

https://www.red-gate.com/simple-talk/sysadmin/powershell/powershell-functions-reusability-restartability-azure/

What’s that? Yes you in the back row, you say you don’t have a Pluralsight subscription? Well no worries, just email me, free@arcanetc.com and I’ll be glad to send you a code that will be good for 30 days at Pluralsight. During that time you can watch my courses, indeed you can watch any course at Pluralsight.

Mobile Report Publisher–Dashboards Everywhere

My latest article on RedGate’s SimpleTalk site has just come out: Mobile Report Publisher – Dashboards Everywhere.

In it, I give a quick overview of using SQL Server 2016’s Mobile Report Publisher to create a simple but useful dashboard. Mobile Report dashboards can be assembled easily, and quickly, from SSRS (SQL Server Reporting Services) Shared Datasets.

Reports generated with Mobile Report Publisher are great, as they can be used not only on mobile devices such as phones and tablets, as the name suggests, but on websites as well. They fill the gap between a traditional report and the analytic reports created by self-service tools such as Power BI.

Using the instructions in my SimpleTalk article, you should be able to create your first, simple report. If that whets your appetite for more, I can offer two additional learning paths.

First, there’s my book SQL Server 2016 Reporting Services Cookbook, which includes an entire chapter on the Mobile Report Publisher. With an entire chapter I had more space to go deeper, providing instructions on pulling data from multiple sources and covering the use of other components in the dashboard. You can find the book on both Amazon and my publisher Packt’s website.

The second resource is my Pluralsight course What’s New in SQL Server 2016 Reporting Services. The course gives a great overview of all the new features of SSRS 2016 (still applicable to 2017). The bulk of the course focuses on the Mobile Report Publisher, but I also cover other new features, such as the new Report Portal and the ability to host KPIs right on the portal, among others.

Don’t have a Pluralsight subscription? No worries, just email me, free@arcanetc.com and I can send you a code good for 30 days during which you can watch my courses, or any of the great courses at Pluralsight!

What’s New in SQL Server 2016 Reporting Services–Now live at Pluralsight

So you want to see what’s new in SQL Server 2016 Reporting Services? Well I’ve got just the thing for you. My 12th course just went live on Pluralsight, named appropriately “What’s New in SQL Server 2016 Reporting Services”.

This course is designed specifically for those who are already familiar with SQL Server Reporting Services, and just need to come up to speed with the new features in the 2016 release. Even better, you can do it during a long lunch (or two short ones); the course is just under an hour and a half, so you can learn quickly.

The course starts with a look at the brand new Report Portal, the replacement for Report Manager. You’ll see how to create KPIs right in the portal, improvements to subscriptions, and how to custom brand the Report Portal so it can integrate right into your company’s websites. After that you’ll learn about the new HTML5 compatibility, something that makes SSRS integrate even better into your web apps and sites.

Some time is then spent on improvements to what Microsoft calls “Paginated Reports”, essentially the same reports you’ve grown to know and love. You’ll see improvements for embedded reports, as well as the two new chart types, sunburst and treemap. Finally, the long-awaited ability to arrange parameters is covered.

Last, but certainly not least, the course covers what I consider to be the most exciting piece of the 2016 release, the Mobile Report Publisher. You’ll see how to use it to generate reports, and how they can be used on the web as well as mobile devices such as phones and tablets.

What’s that you say? You want to see it, but don’t have a Pluralsight subscription? Hey, no problem. Just shoot me an email, free at arcanetc.com. I can send you a code good for 30 days during which you can watch this and any of the over 5,000 (yes, FIVE THOUSAND) courses.

Zero to Hero with PowerShell and SQL Server–Precon at DevDataDay

Are you looking to learn about PowerShell? Do you use SQL Server? Then boy have I got a bargain for you!

I’m doing a precon for our upcoming devdataday event. The precon will take place on Friday, August 19th, the day before devdataday.

This is designed to be a course for the PowerShell novice. No prior knowledge required! I’ll start the day with an introduction to PowerShell, basic usage and cmdlets. Next we’ll dive into programming scripts in PowerShell, starting with the basics of loops, conditional logic, and functions. We’ll then move into advanced topics, including creating your own reusable modules and testing them.

The real meat of the day comes next: learning how to interact with SQL Server from PowerShell. Both the SQL Provider and the SMO (SQL Management Objects) library will be covered in full. Most importantly, you’ll learn how to read the online SMO documentation and how to convert its examples into PowerShell syntax (see the sketch below). The day will culminate in building a reusable module for many common SQL Server tasks, such as T-SQL code generation or performing health checks on your servers.

The day will wrap up with a look at Pester, the new open source PowerShell testing tool. You’ll see how to test all the code you generated during the day. Don’t be left behind; learn how to leverage the power of PowerShell in your SQL Server environment.
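
To give you a flavor of the SMO portion, here is a minimal sketch of the kind of code we will build on during the day (it assumes the SQL Server client libraries are installed on your workstation):

# Load the SMO library and connect to a local default instance
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo') | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'

# List each database along with how many tables it contains
foreach ($db in $server.Databases)
{
  "{0} : {1} tables" -f $db.Name, $db.Tables.Count
}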

To register, just go to https://devdatadaypowershellprecon.eventbrite.com/

Simple Way to Fix SQL Server Page Level Corruption

I’ve decided to try an experiment, and allow the occasional guest blogger. This will give new bloggers additional exposure, and draw eyes to both our sites.

Our first guest blogger is Andrew Jackson. Andrew is a SQL DBA and SQL Server blogger. He likes to write about SQL Server and the problems related to it, along with their solutions: handling database-related user queries, server and database maintenance, database management, and more.

You can find Andrew online at:

Facebook: https://www.facebook.com/people/Andrew-Jackson/100008825676608

Linkedin: https://www.linkedin.com/in/jackson-andrew-401147a5

Twitter: https://twitter.com/jacksonandrew32

Editor & Blog Contributor at: http://www.sqlmvp.org/

Without further delay, here is Andrew’s first contribution, Simple Way to Fix SQL Server Page Level Corruption.


 

Database corruption is the worst situation for any database administrator. A SQL Server database stores crucial data, and corruption can make that data inaccessible, causing hurdles in the workflow of any organization or business. Whenever corruption is discovered, the admin must resolve it promptly, before the issue spreads through the data infrastructure. It is therefore very necessary to take proper steps to protect the data from such damage. This article explains the causes of SQL Server page level corruption, along with how to troubleshoot and fix it.

Page Level Corruption

Pages are the most basic unit of data storage in a database; all .mdf or .ndf files are logically divided into pages numbered from 0 to n. When pages of a database file are hit by corruption, the DBA can repair each damaged page in isolation. Repairing and restoring a few pages individually is faster than repairing the entire file.
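
A good first step is to see which pages SQL Server itself has flagged as damaged; it records them in the msdb.dbo.suspect_pages table. A minimal sketch using PowerShell’s Invoke-Sqlcmd (a plain query in Management Studio works just as well):

# List pages SQL Server has flagged as damaged
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
SELECT database_id, file_id, page_id,
       event_type,   -- 1-3 indicate errors; 4 and 5 mean restored or repaired
       error_count, last_update_date
FROM   msdb.dbo.suspect_pages;
"@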

Causes:

  • Hardware malfunction – most database corruption occurs due to hardware failure
  • Issues in SQL Server itself – internal problems in SQL Server can sometimes result in corruption
  • An unplanned shutdown while the database is open
  • Virus intrusion
  • Upgrading SQL Server to a newer version, which can also result in corruption

How to Fix Page Level Corruption

Admins generally have plans to tackle SQL Server database corruption, with strategies like replication, database monitoring, and disaster recovery plans. However, there are conditions where an admin has no such solution in place and needs another way to handle the problem. In SQL Server 2005 and later versions there is a feature called page checksum, which creates a checksum value that can be used to detect the scale of the damage in a database.

If the damage leaves the database unreadable by SQL Server, then a complete restore of the database is required. On the other hand, if the damage is not that severe and fewer pages are affected, the DBA can use the DBCC utility, or restore just the damaged pages.
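
When only a handful of pages are damaged and the database uses the full recovery model, a page level restore is often the least disruptive fix. Here is a simplified sketch with illustrative names and page IDs (a real restore also involves taking and restoring a tail-log backup):

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
-- Restore just the damaged page from the last full backup
RESTORE DATABASE AdventureWorks
   PAGE = '1:57'    -- file_id:page_id, e.g. from msdb.dbo.suspect_pages
   FROM DISK = 'C:\Backups\AdventureWorks.bak'
   WITH NORECOVERY;

-- Then roll forward with the log backups to bring the page current
RESTORE LOG AdventureWorks
   FROM DISK = 'C:\Backups\AdventureWorks_Log.trn'
   WITH RECOVERY;
"@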

DBCC CHECKDB Command for Handling Page Level Corruption

DBCC CHECKDB is a command that fixes corruption issues in a SQL Server database. It fixes inconsistencies in the database by performing database consistency checks.

It is a combination of DBCC CHECKCATALOG, DBCC CHECKALLOC, and DBCC CHECKTABLE.

The syntax of DBCC CHECKDB command is given below:

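Reconstructed here in simplified form (note the REPAIR options require the database to be in single-user mode):

DBCC CHECKDB
(
    database_name
    [ , NOINDEX
      | REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD ]
)
[ WITH ALL_ERRORMSGS, NO_INFOMSGS, ... ]

For example, to check a database without repairing anything, from PowerShell:

Invoke-Sqlcmd -ServerInstance 'localhost' -Query "DBCC CHECKDB ('AdventureWorks') WITH NO_INFOMSGS, ALL_ERRORMSGS;"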

Other Possible Solutions

There is also an alternative way to handle page level corruption in a SQL database: the user can opt for a third-party recovery tool. Such tools can recover highly corrupted pages from both primary and secondary database files, and support all versions of SQL Server. Using the approaches described here, a user or DBA can recover from page level corruption.

 


Thanks Andrew for your contribution. If you are interested in becoming a guest blogger, just email me info <@> arcanetc.com.

SQL Saturday 498 Chattanooga

Do you like PowerShell? SQL Server? Are you anywhere close to Chattanooga, TN? Then don’t miss this SQL Saturday, June 25th 2016.

I’m giving two sessions (yes two for the price of one!). The first session will be PowerShell 201. It covers advanced concepts you need to know, such as debugging, remoting, security, and code signing.

The second session covers using PowerShell with SQL Server. We’ll cover the use of the SQL Provider (SQLPS) as well as the more advanced SQL Management Objects (SMO) library.

You’ll find the full schedule here, as well as links to register, directions, and the like.

http://www.sqlsaturday.com/498/Sessions/Schedule.aspx 

Best of all, my demos are already uploaded, so you can download early and play along during the presentation!

I’ll mention Chattanooga is a great vacation town, so bring your whole family. There’s a huge aquarium, discovery museum, Lookout Mountain, and tons of attractions to keep them occupied while you’re having fun at SQL Saturday.

Chattanooga SQL Saturday June 27 2015–Zero to Hero with PowerShell and SQL Server

This Saturday, June 27 2015, I will be at SQL Saturday #410 in Chattanooga, TN. I’ll be presenting a session “Zero to Hero with PowerShell and SQL Server”, in which we’ll start with the basics of PowerShell, then move into working with SQL Server via the PS SQL Provider.

This is going to be an extremely fast paced session, so you may wish to download the code samples ahead of time. I have already uploaded the samples to the event site. Just go to the Sessions menu, then Schedule, and you’ll see a big download button under the session. As of right now I’m the last session of the day, in room 219, but that is subject to change, so be sure to check the schedule upon arrival.

Be warned, there is far more code in the download than we’ll be able to cover in a one hour session. It is well commented though, and you should be able to understand it after the session.

If you are looking for even more PowerShell goodness, my friend Aaron Nelson ( @sqlvariant | blog ) is doing a PowerShell for Data Professionals session just after lunch. Our two sessions should work well together for those interested in using PowerShell in the world of SQL Server.

Hope to see you there!

Zero to Hero with PowerShell and SQL Server at 24 Hours of Pass

On September 9th I am co-presenting “Zero to Hero with PowerShell and SQL Server” for the 24 Hours of PASS. If you’ve not heard of 24 Hours of PASS, it is 24 straight hours of online presentations. This time the sessions are a preview of the SQL PASS Summit in Seattle, WA in November.

At the PASS Summit, along with two co-workers, I am presenting a full-day precon entitled Zero to Hero with PowerShell and SQL Server. I’m also doing a regular session, Make SQL Server POP with PowerShell.

The session for 24 Hours of PASS will take place at 00:00 GMT on September 10th, or for those of us in the states, September the 9th, 8 PM Eastern, 7 PM Central, 6 Mountain, or 5 Pacific. The session is titled the same as the precon, Zero to Hero with PowerShell and SQL Server. Through the preceding link you can see more about the session, get to the full schedule, and most importantly register!

Be sure to check out my co-presenters too, Bradley Ball (@SqlBalls | http://sqlballs.com ) and Jason Strate (@StrateSQL | http://www.jasonstrate.com )

Arcane-SQL–A PowerShell Module for Generating SQL Code

Overview

There are many PowerShell modules available for assisting the busy DBA with managing their SQL Server environment. This isn’t one of them. This module is targeted toward SQL Developers, with special functionality for data warehouse developers. A common task for BI professionals, one that is performed on almost every project, is the creation of a staging area. This might be a set of tables in the data warehouse, perhaps in their own schema, or in an entirely separate database often called an operational data store (ODS).

The staging tables are typically similar in structure to the ones in the source database. Similar, but not identical, as there are some small modifications which are commonly made to the staging tables. First, large data types such as VARCHAR(MAX) are seldom useful in data analysis and thus could be removed. Next, even the most casual user of SSIS will quickly see SSIS prefers to work with the double byte character sets (WSTR in SSIS, which maps to NVARCHAR in T-SQL) as opposed to the single byte (STR/VARCHAR) character sets. It can be helpful to convert these in the staging area.

This module can (optionally) do all of these things and more when it is used to generate CREATE TABLE or SELECT statements. Imagine, if you will, a source system with thousands of tables and the need to create a staging area for it in a new data warehouse. This quickly becomes a long, boring, tedious task. Now imagine being able to write a bit of PowerShell code and generate these tables in just a few minutes’ time.

Before diving in, it is highly suggested you download and review the example script, Arcane-SQL-Example.ps1. This demonstrates the most commonly used functions and provides patterns for their use.

Functionality

While the module is full of functions, there are a few core ones that should be highlighted. Complete documentation can be found in the module itself, which has been fully documented using the native PowerShell Help system. In addition there is an example script file which demonstrates some of the most common tasks.

  • Enable-SqlAssemblies – This is the most important function; without calling it, nothing else works. Be aware the SQL Server assemblies (including SMO – SQL Management Objects – and the SQL Provider) need to be on the machine where this script is run. This module has been tested on, and is intended for, SQL developers with SQL Server Developer Edition installed on their workstations.
  • Join-InvokeInstance and Join-ProviderInstance – Most interaction with the SQL Provider requires the server name and instance name, assembled in a path-like syntax. Invoke-Sqlcmd likewise requires this formatting; however, it has a little quirk. If the instance is "default", Invoke-Sqlcmd requires it to be omitted, while the provider requires it to be present. These two functions reduce the confusion: simply pass in the server name and instance, and they will format things correctly.
  • Get-TablesCollection – When working with tables it is common to iterate over all the tables in a database. This function will generate a PowerShell array of table objects, each object being of type Microsoft.SqlServer.Management.Smo.Table. By having table objects the wide variety of properties for the table are available, such as Schema name, Table name, and Row Count.
  • Get-TableByName – Most commonly scripts will retrieve an array of tables using the above Get-TablesCollection, then iterate over them in a foreach loop. There are times however when only a single table from the collection is desired. For those cases Get-TableByName can be used to retrieve a specific table object based on the name of the table.
  • Remove-SchemasFromTableCollection and Select-SchemasInTableCollection – Get-TablesCollection will return an array of all the tables in a database. Often there is a need to work with only a subset of that table collection. These two functions will filter based on the schema and return a new array. The first, Remove-SchemasFromTableCollection, removes from the array all tables belonging to the schemas passed in. The second, Select-SchemasInTableCollection, retains only those tables in the schemas passed into the function.
  • Remove-TablesFromTableCollection and Select-TablesInTableCollection – These work as filters, similar to the functions above, but based on table name instead of schema. All tables whose names begin with the text passed in are either removed or, with the latter function, the only ones retained.
  • Get-PrimaryKeyIndex – Returns the primary key object for the passed in table object.
  • Get-PrimaryKeyColumnNames – Returns a comma-delimited list of the column names in the primary key.
  • Decode-IsPrimaryKeyColumn – Will determine if the passed in column name is part of the primary key index
  • Get-TruncateStatement – Will generate a SQL Truncate Table statement based on the table object passed in.
  • Get-DropTableStatement – Generates a Drop Table statement, including the check to see if the table exists, for the passed in table object.
  • Get-CreateStatement – To simply say this function generates a CREATE TABLE statement would do it a disservice. It takes a table object and reverse engineers it, generating a CREATE TABLE statement. Unlike other code generators, it has a suite of parameters which allow customization of the generated statement with an eye toward the needs of a data warehouse developer. A few are:
    • DataTypeAlignColumn – Sets the column number to line up the data type declarations on. Passing in a value of 1 will suppress alignment and simply place the data type after the column name. The default is column 50.
    • OverrideSchema – It is common to place staging tables in the data warehouse in their own schema, often named ‘Staging’ or ‘ETL’. Passing in a value here will include the new schema name in the CREATE TABLE declaration. If the table object passed in had a schema other than dbo, the original schema is placed in front of the table name with an underscore. If it was dbo, the source schema is simply omitted.
    • PrependToTableName and AppendToTableName – Allows extra text to be placed before or after the table name. For example, it is common to create tables with _Delete, _Update, and _Insert in the staging area. This provides a simple way to do that.
    • AdditionalColumns – When creating tables in a data warehousing environment, there are often extra columns to hold metadata about the ETL process. A user of this function can create an array of additional columns using the Add-ColumnDefinition function and have them added to the create table statement.
    • Scrub – This is a very powerful switch. When added it will perform a cleanup to make the output suitable for data warehousing. Columns with large data types such as VARCHAR(MAX) are removed. All single byte character sets in the source are converted to double byte sets.
    • SuppressIdentity – Source systems will sometimes use the IDENTITY clause in the primary key column. Using this switch will suppress that identity clause from being generated in the new create table statement.
    • SuppressNotNull – Often staging tables are not concerned with null versus not null values. Using this switch will create all columns as nullable, regardless of their setting in the source.
    • IncludeDropTable – Adding this switch will include a ‘if exists drop table’ style clause prior to the create table statement.
    • PrimaryKeysOnly – Will generate a create table statement that only has the primary keys found in the source system.

Finally, if a column in the source table object has a custom data type, the script will reverse engineer the data type back to its basic SQL data type.

  • Get-SelectStatement – Like its sister function Get-CreateStatement, under the covers this function provides a lot of power and flexibility to the statement it creates. Additional columns can be added, columns can be specified to order the output by, table aliases can be used, and most powerful of all is the ability to generate a HASHBYTES column, including the ability to remove specified columns from the hash byte calculation. Here are some of its parameters:
    • AsColumn – The routine will line up the AS <column alias> at the column number passed in here. The default is 50. To not use aligning, set this to 1.
    • PrependToColumnName – Text to include before each column name.
    • AppendToColumnName – Text to place after each column name.
    • AdditionalColumns – A collection of additional columns to be added to the SELECT statement. Useful for adding metadata columns. All items in the AdditionalColumns collection should be generated using the Add-OutputColumn function.
    • OrderByColumns – A list of columns to add to the ORDER BY clause. All items in the OrderByColumns collection should be generated using the Add-OutputColumn function.
    • TableAlias – Allows user to specify an alias to use for the table. The alias is then put in front of each column name.
    • HashBytes – If included the select statement will include a HASHBYTES function with all columns except the primary keys and any columns included in the OmitFromHashBytes collection. The name passed in this parameter will be used for the name of the HashBytes column.
    • OmitFromHashBytes – A collection of column names that should be excluded from the HashBytes calculated column. Useful for excluding metadata columns. All items in the OmitFromHashBytes collection should be generated using the Add-OutputColumn function.
    • Scrub – When included this will remove certain data columns from the output, such as BINARY, NVARCHAR(MAX), XML, and other large types not normally used in data warehouses. Additionally VARCHAR/CHAR are converted to NVARCHAR/NCHAR, and DATETIME converted to DATETIME2(4).
    • Flatten – When included will return the SELECT statement as one long string, without any Carriage Return / Line Feed characters. Additionally, any additional spacing (such as indicated with the AsColumn) is eliminated.
    • IncludeOrderByPK – When included the Primary Keys in the table object are included in the order by clause. If any columns are passed in the OrderByColumns parameter, the Primary Keys occur first in the Order By clause, followed by any columns in the OrderByColumns parameter.
    • IncludeNoLock – When included, a WITH NOLOCK clause is added to the SELECT statement.
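
Pulling several of the functions above together, here is a rough usage sketch along the lines of Arcane-SQL-Example.ps1. The exact parameter names may differ from what is shown; check the module’s built-in help for the real signatures:

Import-Module Arcane-SQL

# Nothing else works until the SQL assemblies are loaded
Enable-SqlAssemblies

# Build the provider path for a local default instance
$instance = Join-ProviderInstance $env:COMPUTERNAME 'default'

# Grab every table in the database, then filter out schemas we don't want staged
$tables = Get-TablesCollection $instance 'AdventureWorksDW2012'
$tables = Remove-SchemasFromTableCollection $tables @('audit')

# Generate a staging CREATE TABLE statement for each remaining table
foreach ($table in $tables)
{
  Get-CreateStatement $table -OverrideSchema 'Staging' -Scrub -SuppressIdentity -IncludeDropTable
}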

Construction

Those PowerShell experts who review the code may note that in many places it does not follow the most "powershelly" way of doing things. In some places, for example, rather than using pipelining it was decided to use a foreach loop. The intended audience for this module is T-SQL developers, who may not be as comfortable in PowerShell as they are in T-SQL. Code that aligns more closely with T-SQL patterns should be more useful and modifiable by SQL developers.

When development first started, attempts were made to use advanced functions, using the pipeline for input and output. At some point, however, this didn’t make sense for a majority of the functions, and time constraints further hampered the effort. Some future revision may migrate selected functions back to an advanced design, but for now they will have to stand as is.

Development Environment

This module is intended to be used on a developer workstation, not on a server, and especially not on a production server. As such, deployment is simple: just copy the Arcane-SQL folder to the developer’s PowerShell module library. On a standard Windows 7 machine this would be C:\Users\<<usernamehere>>\Documents\WindowsPowerShell\Modules. If the WindowsPowerShell folder and Modules subfolder do not exist, they will need to be created first.

To keep things simple, no attempt was made to sign the script. If this is an issue, the developer using this module can self-sign it on their PC. Check the execution policy on the workstation where the module is installed to ensure sufficient rights to run the module.
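
As an illustration, a deployment on that standard setup might look like this (adjust the paths to taste):

# Copy the module into the current user's module folder and load it
$modulePath = "$HOME\Documents\WindowsPowerShell\Modules"
if (-not (Test-Path $modulePath)) { New-Item -Path $modulePath -ItemType Directory | Out-Null }
Copy-Item -Path .\Arcane-SQL -Destination $modulePath -Recurse
Import-Module Arcane-SQL

# Confirm the execution policy allows local scripts to run
Get-ExecutionPolicy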

This module was developed on machines with SQL Server (Developer Edition) 2008R2 and 2012 installed (some machines with both) and worked without issue. One machine it was tested on had three versions of SQL Server: 2008R2, 2012, and 2014. On that machine there were errors in some of the functions when passing in SMO table objects; those are still being investigated.

SQL security was assumed to be handled using built-in Windows credentials. Thus the logged-in user needs rights, based on Windows credentials, to the SQL Server they are targeting.

The module was developed on machines using PowerShell v4; however, v3 should work as well.

This module was developed using SAPIEN PowerShell Studio 2014. To make life easy for other developers the PowerShell Studio files (Arcane-SQL.psproj, Arcane-SQL.psproj.build, and Arcane-SQL.psprojs) were included in the code. If you are using a different editor, such as the PowerShell ISE, simply discard these files.

Warranty

To put it succinctly, there is none. No guarantee is made for the code in this module, users of this module assume all risks. While I am happy to receive bug reports, I make no promises or guarantees if, or when, they will be fixed.

Contributions

No, not the money kind, code contributions. If anyone wishes to extend the functionality of this module I am happy to collaborate as long as the coding standards demonstrated in this module are adhered to, and the contributions are relevant to the goals of this module. Be aware though this is not a money making effort, so expect no monetary reimbursement for any contributions.

Download

You can download the module and its example at:

http://gallery.technet.microsoft.com/Arcane-SQL-A-PowerShell-185651ad

Importing MongoDB Data Using SSIS 2012

I have embarked on a little quest to learn other database platforms (especially NoSQL), as more and more of our clients at Pragmatic Works have them in their enterprise and want to be able to import data from them into their SQL Server data warehouses using SQL Server Integration Services (SSIS). While I found several articles that showed how to do so, they were outdated due to changes in the MongoDB C# driver. After quite a bit of effort figuring out how to get this working, I thought I’d pass along my hard-fought knowledge.

First, I assume you are familiar with MongoDB (http://www.mongodb.org/) and SQL Server (https://www.microsoft.com/en-us/sqlserver/default.aspx). In my examples I am using SSIS 2012 and MongoDB 2.4.8, along with the C# driver version 1.7 for MongoDB available at http://docs.mongodb.org/ecosystem/drivers/csharp/ .

First, download and install the C# driver. This next step is important, as there was a change that occurred with version 1.5 of the driver: the DLLs are no longer installed in the GAC (Global Assembly Cache) automatically. They must be there, however, for SSIS to be able to use them.

By default, my drivers were installed to C:\Program Files (x86)\MongoDB\CSharpDriver 1.7. You’ll want to open a CMD window in Administrator mode, and navigate to this folder. Next you’ll need GACUTIL, on my computer I found the most recent version at:

C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\x64\

A simple trick to find yours: since you are already in the CMD window, just move to the C:\Program Files (x86) folder and do a "dir /s gacutil.exe". It will list all occurrences of the program; just use the one with the most recent date. Register the DLLs by entering these commands:

"C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\x64\gacutil" /i MongoDB.Bson.dll

"C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\x64\gacutil" /i MongoDB.Driver.dll

Note the quote marks around the path are important; they let the CMD window correctly separate the gacutil program from its parameters.

Once that is done, create a new SQL Server Integration Services project in SQL Server Data Tools (SSDT), what used to be called BIDS in SQL Server 2008R2 (and previous). Put a Data Flow Task on the Control Flow design surface. Then open the Data Flow Task for editing.

Next, drag and drop a Script Component transformation onto the Data Flow design surface. When prompted, change the component type to Source.


Now edit the script transform by double clicking on it. Move to the Inputs and Outputs page. For my test, I am using the dbo.DimCurrency collection I created using the technique I documented in the previous post, Exporting Data from SQL Server to CSV Files for Import to MongoDB Using PowerShell ( https://arcanecode.com/2014/01/13/exporting-data-from-sql-server-to-csv-files-for-import-to-mongodb-using-powershell/ )

I renamed the output from “output” to “MongoDB_DimCurrency”. I then added four columns, CurrencyName, CurrencyAlternateKey, CurrencyKey, and ID.


Make sure to set CurrencyName, CurrencyAlternateKey, and ID to “Unicode string [DT_WSTR]” Data Type. Then change CurrencyKey to “four byte signed integer [DT_I4]”.

Now return to the Script page and click Edit Script. In the Solution Explorer pane, expand References, right click and pick Add Reference. Go to Browse, and navigate to the folder where the MongoDB C# drivers are installed. On my system it was in C:\Program Files (x86)\MongoDB\CSharpDriver 1.7\. Add both MongoDB.Driver.dll and MongoDB.Bson.dll.


Click OK when done, and the two MongoDB references will appear in Solution Explorer.


Now in the script, expand the Namespaces region and add these lines:

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Bson.Serialization;

Now scroll down to the CreateNewOutputRows() procedure. Here is a sample of the code I used:

public override void CreateNewOutputRows()
{
  // Connection information for the local MongoDB server and source database
  string connectionString = "mongodb://localhost";
  string databaseName = "AdventureWorksDW2014";

  // MongoClient is the current way to connect; older driver versions used MongoServer directly
  var client = new MongoClient(connectionString);
  var server = client.GetServer();
  var database = server.GetDatabase(databaseName);
  string CurrencyKey = "";

  // Read every document in the collection and emit one output row per document
  foreach (BsonDocument document in database.GetCollection<BsonDocument>("dbo.DimCurrency").FindAll())
  {
    MongoDBDimCurrencyBuffer.AddRow();

    // Map string fields straight across, replacing nulls with empty strings
    MongoDBDimCurrencyBuffer.CurrencyName = document["CurrencyName"] == null ? "" : document["CurrencyName"].ToString();
    MongoDBDimCurrencyBuffer.CurrencyAlternateKey = document["CurrencyAlternateKey"] == null ? "" : document["CurrencyAlternateKey"].ToString();

    // CurrencyKey is an integer in the output, so grab it as a string then convert
    CurrencyKey = document["CurrencyKey"] == null ? "" : document["CurrencyKey"].ToString();
    MongoDBDimCurrencyBuffer.CurrencyKey = Convert.ToInt32(CurrencyKey);

    // MongoDB's _id becomes the ID output column
    MongoDBDimCurrencyBuffer.ID = document["_id"] == null ? "" : document["_id"].ToString();
  }

}

I start by defining a connection string to the MongoDB server, followed by the database name. I then create a MongoClient object. Note the MongoClient is the new way of connecting to the MongoDB server. In earlier versions of the C# driver, you used MongoServer objects.

I then cycle through each document in the collection “dbo.DimCurrency”, using the FindAll() method. For each item I use the AddRow() method to add a row to the buffer. In order to find the proper name for the buffer I went to the Solution Explorer and expanded the BufferWrapper.cs file. This is a class created by the script transform with the name of the output buffer.


For each column in my outputs, I map a column from the document. Note the use of the ternary operator ?: to strip out nulls and replace them with empty strings. String columns can be mapped directly from the document object to the output buffer’s columns.

The CurrencyKey column, being an integer, had to be converted from a string. To keep it simple I created a string variable to hold the value from the document, then used the Convert class to convert it to an Int32.

Once you’ve done all the above, validate the code by building it. If that all checks out, save your work, close the code window, then close the Script Transformation Editor by clicking OK.

Now place a destination of some kind on the Data Flow. Since I have my company’s Task Factory tools I used a TF Terminator Destination, but you could also use a Row Count destination. On the precedence constraint between the two, right click and Enable Data Viewer. Execute the package; if all goes well, the data viewer will show the rows flowing in from MongoDB.


A few final notes. This test was done using a MongoDB document schema that was flat, i.e. it didn’t have any documents embedded in the documents I was testing with. (Hopefully I’ll be able to test that scenario; it will be the subject of a future post.) Second, the key was registering the DLLs in the GAC; until I did that, I couldn’t get the package to execute. Finally, by using the newer API for the MongoDB objects I’ve ensured compatibility for the future.

Exporting Data from SQL Server to CSV Files for Import to MongoDB Using PowerShell

I’ve been exploring other database systems in order to determine how to import data from them using SQL Server Integration Services (SSIS). My first step, though, was to create some test data. I wanted something familiar, so I decided to export the Adventure Works data warehouse sample database and import it into MongoDB. While I had many options, I decided the simplest way was to first export the data to CSV files, then use the MongoDB utility mongoimport. Naturally I turned to PowerShell to create an automated, reusable process.

First, if you need the Adventure Works DW database, you’ll find it at http://msftdbprodsamples.codeplex.com/. Second, I did my export from a special version of Adventure Works DW I created called AdventureWorksDW2014. This is optional, but if you want to have a version of Adventure Works DW updated with current dates, see my post at https://arcanecode.com/2013/12/08/updating-adventureworksdw2012-for-2014/. Third, I assume you are familiar with MongoDB, but if you want to learn more go to http://www.mongodb.org/.

Below is the PowerShell 3 script I created. The script is broken into four regions. The first, User Settings, contains the variables that you, the user, might need to change to get the script to run. It has things like the name of the SQL Server database, the path to MongoDB, etc.

The second region, Common, establishes variables that are used by the remaining two regions. You shouldn’t need to change or alter these. The third region accesses SQL Server and exports each table in the selected database to a CSV format file.

The final region, “Generate MongoDB import commands”, creates a batch (.BAT) file which has all the commands needed to run mongoimport for each CSV file. I decided not to have the PowerShell script execute the .BAT file so it could be reviewed before it is run. There might be some tables you don’t want to import, etc.

It is also quite easy to adapt this script to just create CSV files from SQL Server without the MongoDB piece. Simply remove the fourth and final region, then in the Common and User Settings regions remove any variables that begin with the prefix “mongo”.

As the comments do a good job of explaining what happens I’ll let you review the included documentation for step by step instructions.

#==================================================================================================
# SQLtoCSVtoMongoDb.ps1
# Robert C. Cain | @ArcaneCode | http://arcanecode.com
#
# If you need a simple way to export data from SQL Server to MongoDb, here is one way to do it.
# The script starts by setting up some variables to the server environment (see the User Settings
# region)
#
# Next, it exports data from each table in the selected database to individual CSV files.
# Finally, it generates a batch file which executes mongoimport for each csv file to import
# into MongoDb.
#
# I broke this into four regions so if all that is desired is a simple export of data to CSVs,
# you can simply omit the final region along with any variables that begin with "mongo".
#
# While I could have gone ahead and run the batch file at the end, I chose not to in order to
# give you time to review the output prior to running the batch file.
#==================================================================================================

Clear-Host

#region User Settings

  # In this section, set the variables so they are appropriate for your project / environment
 
  # This is the spot where you want to store the generated CSVs.
  # Make sure it does NOT end in a \
  $csvPath = "C:\mongodb"

  # If you are running this on a computer other than the server, set the name of the server below
  $sqlServer = $env:COMPUTERNAME

  # If you have a named instance be sure replace "default" with the name of the instance
  $sqlInstance = "\default"

  # Enter the name of the database to export below
  $sqlDatabaseName = "AdventureWorksDW2014"

  # The settings below only apply to the MongoDB code generation
  # Assemble path to mongodb. This assumes utilities are stored in the default bin folder
  $mongoPath = "C:\mongodb"
  $mongoImport = "$mongoPath\bin\mongoimport"

  # Set the server name and port
  $mongoHost = "localhost"   # Leave blank to default to localhost
  $mongoPort = ""            # Leave blank to default to 27017
 
  # Set the user name and password, leave blank if it isn’t needed
  $mongoUser = ""
  $mongoPW = ""

  # Enter the name of the database to import to.
  $mongoDatabaseName = "AdventureWorksDW2014"

  # Upserts are REALLY slow, especially on large datasets. Setting this to $true will turn off
  # the upsert option. If set to true, you are responsible for either deleting all documents
  # in the collection beforehand, or allowing the risk of duplicates.
  #
  # Setting to false will enable the upsert option for mongoimport, and attempt to determine the
  # keys and (if found) add them to the final mongoimport command.
  $mongoNoUpsert = $true

#endregion

#region Common ————————————————————————————
 
  # This section sets variables used by both regions below. There is no need to alter anything
  # in this region.

  # Import the SQLPS provider (if it’s not already loaded)
  if (-not (Get-PSProvider SqlServer -ErrorAction SilentlyContinue))
    { Import-Module SQLPS -DisableNameChecking }

  # Assemble the full servername \ instance (note $sqlInstance already includes the leading \)
  $sqlServerInstance = "$sqlServer$sqlInstance"

  # Assemble the full path for the SQL Provider to get to the database
  $sqlDatabaseLocation = "SQLSERVER:\sql\$sqlServerInstance\databases\$sqlDatabaseName"

  # Now tack on the Tables 'folder' to the SQL Provider path, then move there
  $sqlTablesLocation = $sqlDatabaseLocation + "\Tables"
  Set-Location $sqlTablesLocation

  # Get a list of tables in this database
  $sqlTables = Get-ChildItem

#endregion

#region Export SQL Data —————————————————————————
  # In this section we will export data from each table in the database to a CSV file.
  # WARNING: If the CSV file exists, it will be overwritten.

  # These are just used to display informational messages during processing
  $sqlTableIterator = 0
  $sqlTableCount = $sqlTables.Count

  # Iterate over each table in the database
  foreach($sqlTable in $sqlTables)
  {
    $sqlTableName = $sqlTable.Schema + "." + $sqlTable.Name   

    # I’ll grant you the next little bit of formatting for the progress messages is a bit
    # OCD on my part, but I like my output formatted and easy to read.
    $sqlTableIterator++
    $padCount = " " * (1 + $sqlTableCount.ToString().Length - $sqlTableIterator.ToString().Length)
    $sqlTableIteratorFormatted = $padCount + $sqlTableIterator

    if( $sqlTableName.Length -gt 50 )
      { $padTable = " " }
    else
      { $padTable = " " * (50 - $sqlTableName.Length) }

    Write-Host -ForegroundColor White -NoNewline "Processing Table $sqlTableIteratorFormatted of $sqlTableCount : $sqlTableName $padTable"
   
    # If the instance is "default", we have to exclude it when we use Invoke-SqlCmd
    if($sqlInstance.ToLower() -eq "\default")
      { $sqlSI = $sqlServer }
    else
      { $sqlSI = $sqlServerInstance }

    # Load an object with all the data in the table
    # Note if you have especially large tables you may need to modify this
    # section to break things into smaller chunks.
    $sqlCmd = "SELECT * FROM " + $sqlTableName
    $sqlData = Invoke-Sqlcmd -Query $sqlCmd `
                             -ServerInstance $sqlSI `
                             -SuppressProviderContextWarning

    # Now write the data out.
    # Note utf8 encoding is important, as it is all mongoimport understands
    # Also need to omit the Type Info header PowerShell wants to write out
    Write-Host -ForegroundColor Yellow "    Writing to file $sqlTableName.csv"
    $sqlData | Export-Csv -NoTypeInformation -Encoding "utf8" -Path "$csvPath\$sqlTableName.csv"

  }

  # Just add a blank line after the processing ends
  Write-Host

#endregion

#region Generate MongoDB import commands ———————————————————-

  # In this region we will generate the commands to import our newly exported data
  # into an existing database in MongoDB. This is an example of our desired output (wrapped
  # onto multiple lines for readability, in the output it will be a single line):

  #  C:\mongodb>bin\mongoimport --host localhost --port 27017
  #                             --db AdventureWorksDW2014 --collection DimSalesReason
  #                             --username Me --password mySuperSecureP@ssW0rd!
  #                             --type csv --headerline --file DimSalesReason.csv
  #                             --upsert --upsertFields SalesReasonKey

  # Note several of these parameters are optional, and could use defaults, or be potentially
  # omitted from the final output, based on the choices at the very beginning of this script

  # Feel free to alter the $mongoCommand as needed for other circumstances

  # Final warning, the database must already exist in MongoDb in order to import the data. This
  # script will not generate the database for you.

  # Create the name for the batch file we will generate
  $mongoBat = $csvPath + "\Import_SQL_" + $sqlDatabaseName + "_to_MongoDb_" + $mongoDatabaseName + ".bat"

  # See if file exists, if so delete it
  if (Test-Path $mongoBat)
    { Remove-Item $mongoBat }

  # These are just used to display informational messages during processing
  $sqlTableIterator = 0
  $sqlTableCount = $sqlTables.Count

  # mongoimport allows us to do upserts, helping to eliminate duplicate rows on import.
  #
  # To make an upsert work there has to be a key column to match up on. Fortunately,
  # most tables in the SQL Server world have Primary Keys, so we can find out what
  # columns those are and add it to the command. Note if there is no PK in SQL Server,
  # no upsert will be attempted.
  #
  # Note though that upserts are REALLY slow, so the option to skip them is
  # built into the script and set at the top (mongoNoUpsert). The generated batch file
  # assumes that either a) you have deleted all data from the collection ahead of time,
  # or b) you are OK with the risk of duplicate data.

  # Iterate over each table in the database to build the mongoimport command
  foreach($sqlTable in $sqlTables)
  {
    $sqlTableName = $sqlTable.Schema + "." + $sqlTable.Name

    # A bit more OCD progress messages
    $sqlTableIterator++
    $padCount = " " * (1 + $sqlTableCount.ToString().Length - $sqlTableIterator.ToString().Length)
    $sqlTableIteratorFormatted = $padCount + $sqlTableIterator
    Write-Host -ForegroundColor Green "Building mongoimport command for table $sqlTableIteratorFormatted of $sqlTableCount : $sqlTableName"

    # Begin building the command
    $mongoCommand = "$mongoImport "
   
    if ($mongoHost.Length -ne 0)
      { $mongoCommand += "--host $mongoHost " }

    if ($mongoPort.Length -ne 0)
      { $mongoCommand += "--port $mongoPort " }

    $mongoCommand += "--db $mongoDatabaseName --collection $sqlTableName "

    if ($mongoUser.Length -ne 0)
      { $mongoCommand += " --username $mongoUser --password $mongoPW " }

    $mongoCommand += " --type csv --headerline --file $csvPath\$sqlTableName.csv "
       
    # Build the upsert clause, if the user has elected to use it.
    if ($mongoNoUpsert -eq $false)
    {
      $mongoPKs = ""
      foreach($sqlIndex in $sqlTable.Indexes)
      {
        if($sqlIndex.IndexKeyType -eq 'DriPrimaryKey')
        {
          foreach($sqlCol in $sqlIndex.IndexedColumns)
          {
            if ($mongoPKs.Length -ne 0)
              { $mongoPKs += "," }
            # Note column names are returned with [ ] around them, and must be removed
            # Have to use -replace instead of .Replace() because $sqlCol is a column object, not a string
            $mongoPKs += ($sqlCol -replace "\[", "") -replace "\]", ""
          }
               
          $mongoCommand += " --upsert --upsertFields $mongoPKs"
        }           
      }
    }

    # Append the command to the batch file
    $mongoCommand | Out-File -FilePath $mongoBat -Encoding utf8 -Append

  }

  # Just add a blank line after the processing ends
  Write-Host

#endregion

 

Updating AdventureWorksDW2012 for 2014

A while back I did a post containing a script to update the AdventureWorksDW2012 database to have dates in the 2013 time period. This allows folks to demo date-related queries and simply use things like GETDATE or NOW, without funky math tricks to compensate for the sample database’s pitifully out-of-date contents.

I’ve now updated the script for 2014, and thought I’d pass along the updated version. Note some browsers don’t seem to render the script in the mono-spaced font I intend; just ignore that. Copy and paste it into SQL Server Management Studio and it should work fine.

/*-----------------------------------------------------------------------------------------------*/
/* Updating AdventureWorks2012 for Today                                                         */
/*                                                                                               */
/* Robert C. Cain, http://arcanecode.com @ArcaneCode                                             */
/*                                                                                               */
/* Script Copyright (c) 2013 by Robert C. Cain                                                   */
/* AdventureWorks database Copyright (c) Microsoft.                                              */
/*                                                                                               */
/* This script will make a backup of the AdventureWorksDW2012 database, then copy and restore it */
/* as AdventureWorksDW2014. It will then update it for current dates. 2008 now becomes 2014,     */
/* 2007 is now 2013, and so forth. This script is dependent on the AdventureWorksDW2012 sample   */
/* database already being installed. It won't change AdventureWorksDW2012 in any way.            */
/*                                                                                               */
/* Be warned, if AdventureWorksDW2014 exists, it will be deleted as part of this process.        */
/*                                                                                               */
/*-----------------------------------------------------------------------------------------------*/

PRINT 'Updating AdventureWorks2012 for Today - Starting'
GO

/*-----------------------------------------------------------------------------------------------*/
/* Step 1 - Make a copy of AdventureWorksDW2012 and restore as AdventureWorksDW2014              */
/*-----------------------------------------------------------------------------------------------*/
SET NOCOUNT ON

USE [master]

-- Step 1.1. Make a backup of AdventureWorksDW2012 ----------------------------------------------
PRINT 'Backing up AdventureWorksDW2012'
GO

BACKUP DATABASE [AdventureWorksDW2012] 
    TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2012.bak' 
  WITH NOFORMAT, 
       INIT,  
       NAME = N'AdventureWorksDW2012-Full Database Backup', 
       SKIP, 
       NOREWIND, 
       NOUNLOAD,  
       STATS = 10
GO


-- Step 1.2. Delete the database AdventureWorksDW2014 if it exists ------------------------------
PRINT 'Deleting AdventureWorksDW2014, if it exists'
GO

IF (EXISTS (SELECT 1 
              FROM master.dbo.sysdatabases 
             WHERE name = 'AdventureWorksDW2014' )
   )
   EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'AdventureWorksDW2014'
GO

IF (EXISTS (SELECT 1 
              FROM master.dbo.sysdatabases 
             WHERE name = 'AdventureWorksDW2014' )
   )
   DROP DATABASE [AdventureWorksDW2014]
GO

-- Step 1.3. Restore the database to a new copy -------------------------------------------------
PRINT 'Restoring AdventureWorksDW2012 to AdventureWorksDW2014'
GO

RESTORE DATABASE [AdventureWorksDW2014] 
   FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2012.bak' 
   WITH  FILE = 1,  
   MOVE N'AdventureWorksDW2012_Data' 
     TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2014_Data.mdf',  
   MOVE N'AdventureWorksDW2012_Log' 
     TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2014_log.ldf',  
        NOUNLOAD,  STATS = 5

GO

PRINT 'Done Creating AdventureWorksDW2014'
GO



/*-----------------------------------------------------------------------------------------------*/
/* Step 2. Create a helper function to convert dates to a YYYYMMDD format Date Id.               */
/*-----------------------------------------------------------------------------------------------*/

USE [AdventureWorksDW2014]
GO

IF EXISTS (SELECT [name] FROM [sys].[all_objects] WHERE [name] = 'DateToDateId')
  DROP FUNCTION [dbo].[DateToDateId];
GO

CREATE FUNCTION [dbo].[DateToDateId]
(
  @Date DATETIME
)
RETURNS INT
AS
BEGIN

  DECLARE @DateId  AS INT
  DECLARE @TodayId AS INT

  SET @TodayId = YEAR(GETDATE()) * 10000
               + MONTH(GETDATE()) * 100
               + DAY(GETDATE())         

  -- If the date is missing, or a placeholder for a missing date, set to the Id for missing dates
  -- Else convert the date to an integer
  IF @Date IS NULL OR @Date = '1900-01-01' OR @Date = -1
    SET @DateId = -1  
  ELSE
    BEGIN
      SET @DateId = YEAR(@Date) * 10000
                  + MONTH(@Date) * 100
                  + DAY(@Date)         
    END  
  
  -- If there's any data prior to 2000 it was incorrectly entered, mark it as missing
  IF @DateId BETWEEN 0 AND 19991231 
    SET @DateId = -1

  -- Commented out for this project as future dates are OK
  -- If the date is in the future, don't allow it, change to missing
  -- IF @DateId > @TodayId 
  --   SET @DateId = -1

  RETURN @DateId

END

GO
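-- A quick sanity check for the function, safe to uncomment and run:
-- SELECT [dbo].[DateToDateId]('2014-07-04')   -- returns 20140704
-- SELECT [dbo].[DateToDateId](NULL)           -- returns -1, the missing date Id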




/*-----------------------------------------------------------------------------------------------*/
/* Step 3. Add new dates to the dbo.DimDate table.                                               */
/*-----------------------------------------------------------------------------------------------*/
PRINT 'Adding new dates to dbo.DimDate'
GO

SET NOCOUNT ON

-- Later we will be writing an INSERT INTO... SELECT FROM to insert the new record. I want to 
-- join the day and month name memory variable tables, but need to have something to join to. 
-- Since everything is calculated, we'll just create this little bogus table to have something
-- to select from.
DECLARE @BogusTable TABLE
  ( PK TINYINT)

INSERT INTO @BogusTable SELECT 1;


-- Create a table variable to hold the days of the week with their various language versions
DECLARE @DayNameTable TABLE
  ( [DayNumberOFWeek]      TINYINT
  , [EnglishDayNameOfWeek] NVARCHAR(10)
  , [SpanishDayNameOfWeek] NVARCHAR(10)
  , [FrenchDayNameOfWeek]  NVARCHAR(10)
  )

INSERT INTO @DayNameTable
SELECT DISTINCT 
       [DayNumberOFWeek]
     , [EnglishDayNameOfWeek]
     , [SpanishDayNameOfWeek]
     , [FrenchDayNameOfWeek]
  FROM dbo.DimDate

-- Create a month table to hold the months and their language versions.
DECLARE @MonthNameTable TABLE
  ( [MonthNumberOfYear] TINYINT
  , [EnglishMonthName]  NVARCHAR(10)
  , [SpanishMonthName]  NVARCHAR(10)
  , [FrenchMonthName]   NVARCHAR(10)
  )

INSERT INTO @MonthNameTable
SELECT DISTINCT
       [MonthNumberOfYear] 
     , [EnglishMonthName]  
     , [SpanishMonthName]  
     , [FrenchMonthName]   
  FROM dbo.DimDate

-- This is the start and end date ranges to use to populate the
-- dbo.DimDate dimension. Change these if you run across this script in a later year.
DECLARE @FromDate AS DATE = '2011-01-01'
DECLARE @ThruDate AS DATE = '2015-12-31'

-- CurrentDate will be incremented each time through the loop below.
DECLARE @CurrentDate AS DATE
SET @CurrentDate = @FromDate

-- FiscalDate will be set six months into the future from the CurrentDate
DECLARE @FiscalDate  AS DATE

-- Now we simply loop over every date between the From and Thru, inserting the
-- calculated values into DimDate.
WHILE @CurrentDate <= @ThruDate
BEGIN

  SET @FiscalDate = DATEADD(m, 6, @CurrentDate)

  INSERT INTO dbo.DimDate
  SELECT [dbo].[DateToDateId](@CurrentDate)
       , @CurrentDate
       , DATEPART(dw, @CurrentDate) AS DayNumberOFWeek
       , d.EnglishDayNameOfWeek
       , d.SpanishDayNameOfWeek
       , d.FrenchDayNameOfWeek
       , DAY(@CurrentDate) AS DayNumberOfMonth
       , DATEPART(dy, @CurrentDate) AS DayNumberOfYear
       , DATEPART(wk, @CurrentDate) AS WeekNumberOfYear
       , m.EnglishMonthName
       , m.SpanishMonthName
       , m.FrenchMonthName
       , MONTH(@CurrentDate) AS MonthNumberOfYear
       , DATEPART(q, @CurrentDate) AS CalendarQuarter
       , YEAR(@CurrentDate) AS CalendarYear
       , IIF(MONTH(@CurrentDate) < 7, 1, 2) AS CalendarSemester
       , DATEPART(q, @FiscalDate) AS FiscalQuarter
       , YEAR(@FiscalDate) AS FiscalYear
       , IIF(MONTH(@FiscalDate) < 7, 1, 2) AS FiscalSemester
    FROM @BogusTable
    JOIN @DayNameTable d
      ON DATEPART(dw, @CurrentDate) = d.[DayNumberOFWeek]
    JOIN @MonthNameTable m
      ON MONTH(@CurrentDate) = m.MonthNumberOfYear

  SET @CurrentDate = DATEADD(d, 1, @CurrentDate)
END
GO

-- If you want to verify you can uncomment this line.
-- SELECT * FROM dbo.DimDate WHERE DateKey > 20110000

PRINT 'Done adding new dates to dbo.DimDate'
GO





/*-----------------------------------------------------------------------------------------------*/
/* Step 4. Update the Fact Tables with the new dates.                                            */
/*-----------------------------------------------------------------------------------------------*/


PRINT 'Update Fact Tables'
GO

SET NOCOUNT ON

-- To move forward six years, we simply add 60,000 to the date key

-- 4.1 FactFinance ------------------------------------------------------------------------------
PRINT '  FactFinance'
GO

UPDATE [dbo].[FactFinance]
   SET [DateKey] = [DateKey] + 60000;


-- 4.2 FactInternetSales ------------------------------------------------------------------------
PRINT '  FactInternetSales'
GO

-- There are a few rows where the dates fall on the 2008 leap day. Update these to back off a day
-- so the date add works OK
UPDATE [dbo].[FactInternetSales]
   SET [OrderDateKey] = 20080228
     , [OrderDate] = '2008-02-28'
 WHERE [OrderDateKey] = 20080229

UPDATE [dbo].[FactInternetSales]
   SET [DueDateKey] = 20080228
     , [DueDate] = '2008-02-28'
 WHERE [DueDateKey] = 20080229

UPDATE [dbo].[FactInternetSales]
   SET [ShipDateKey] = 20080228
     , [ShipDate] = '2008-02-28'
 WHERE [ShipDateKey] = 20080229

-- Now update the rest of the days. 
UPDATE [dbo].[FactInternetSales]
   SET [OrderDateKey] = [OrderDateKey] + 60000
     , [DueDateKey] = [DueDateKey] + 60000
     , [ShipDateKey] = [ShipDateKey] + 60000
     , [OrderDate] = DATEADD(yy, 6, [OrderDate])
     , [DueDate] = DATEADD(yy, 6, [DueDate])
     , [ShipDate] = DATEADD(yy, 6, [ShipDate])


-- 4.3 FactResellerSales ------------------------------------------------------------------------
PRINT '  FactResellerSales'
GO

-- As with Internet Sales, there are rows where the dates fall on the 2008 leap day.
-- Update these to back off a day so the date add works OK
UPDATE [dbo].[FactResellerSales]
   SET [OrderDateKey] = 20080228
     , [OrderDate] = '2008-02-28'
 WHERE [OrderDateKey] = 20080229

UPDATE [dbo].[FactResellerSales]
   SET [DueDateKey] = 20080228
     , [DueDate] = '2008-02-28'
 WHERE [DueDateKey] = 20080229

UPDATE [dbo].[FactResellerSales]
   SET [ShipDateKey] = 20080228
     , [ShipDate] = '2008-02-28'
 WHERE [ShipDateKey] = 20080229

-- Now update the table
UPDATE [dbo].[FactResellerSales]
   SET [OrderDateKey] = [OrderDateKey] + 60000
     , [DueDateKey] = [DueDateKey] + 60000
     , [ShipDateKey] = [ShipDateKey] + 60000
     , [OrderDate] = DATEADD(yy, 6, [OrderDate])
     , [DueDate] = DATEADD(yy, 6, [DueDate])
     , [ShipDate] = DATEADD(yy, 6, [ShipDate])

-- 4.4 FactSalesQuota ---------------------------------------------------------------------------
PRINT '  FactSalesQuota'
GO

UPDATE [dbo].[FactSalesQuota] 
   SET [DateKey] = [DateKey] + 60000

-- 4.5 FactSurveyResponse -----------------------------------------------------------------------
PRINT '  FactSurveyResponse'
GO

UPDATE [dbo].[FactSurveyResponse]
   SET [DateKey] = [DateKey] + 60000

-- 4.6 FactCallCenter ---------------------------------------------------------------------------
PRINT '  FactCallCenter'
GO

-- All the rows in call center have a 2010 date, just add 4 years to make these 2014
UPDATE [dbo].[FactCallCenter]
   SET [DateKey] = [DateKey] + 40000


-- 4.7 FactCurrencyRate -------------------------------------------------------------------------
PRINT '  FactCurrencyRate'
GO

-- Because the DateKey is part of the PK, we have to drop the key before we can update it
ALTER TABLE [dbo].[FactCurrencyRate] DROP CONSTRAINT [PK_FactCurrencyRate_CurrencyKey_DateKey]
GO

-- Shift the 2008 Leap Year days to 2012 Leap Year
UPDATE [dbo].[FactCurrencyRate]
   SET [DateKey] = 20120229
 WHERE [DateKey] = 20080229

-- Update everything except the leap year we fixed already
UPDATE [dbo].[FactCurrencyRate]
   SET [DateKey] = [DateKey] + 60000
 WHERE [DateKey] <> 20120229

-- Add the PK back
ALTER TABLE [dbo].[FactCurrencyRate] 
  ADD CONSTRAINT [PK_FactCurrencyRate_CurrencyKey_DateKey] PRIMARY KEY CLUSTERED 
      ( [CurrencyKey] ASC
      , [DateKey] ASC
      )
 WITH ( PAD_INDEX = OFF
      , STATISTICS_NORECOMPUTE = OFF
      , SORT_IN_TEMPDB = OFF
      , IGNORE_DUP_KEY = OFF
      , ONLINE = OFF
      , ALLOW_ROW_LOCKS = ON
      , ALLOW_PAGE_LOCKS = ON
      ) ON [PRIMARY]
GO


-- 4.8 FactProductInventory ---------------------------------------------------------------------
PRINT '  FactProductInventory'
GO

-- As with the previous step, the date is part of the primary key, so we need to drop it first.
ALTER TABLE [dbo].[FactProductInventory] DROP CONSTRAINT [PK_FactProductInventory]
GO

-- Shift the 2008 Leap Year days to 2012 Leap Year
UPDATE [dbo].[FactProductInventory]
   SET [DateKey] = 20120229
 WHERE [DateKey] = 20080229

-- Update everything except the leap year we fixed already
UPDATE [dbo].[FactProductInventory]
   SET [DateKey] = [DateKey] + 60000
 WHERE [DateKey] <> 20120229
 
-- Add the PK back
ALTER TABLE [dbo].[FactProductInventory] 
  ADD CONSTRAINT [PK_FactProductInventory] PRIMARY KEY CLUSTERED 
      ( [ProductKey] ASC
      , [DateKey] ASC
      )
 WITH ( PAD_INDEX = OFF
      , STATISTICS_NORECOMPUTE = OFF
      , SORT_IN_TEMPDB = OFF
      , IGNORE_DUP_KEY = OFF
      , ONLINE = OFF
      , ALLOW_ROW_LOCKS = ON
      , ALLOW_PAGE_LOCKS = ON
      ) ON [PRIMARY]
GO

PRINT 'Done updating the Fact tables'
GO



/*-----------------------------------------------------------------------------------------------*/
/* Step 5. Cleanup, remove the helper function we added earlier.                                 */
/*-----------------------------------------------------------------------------------------------*/
PRINT 'Removing Helper Function'
GO

IF EXISTS (SELECT 1 FROM [sys].[all_objects] WHERE [name] = 'DateToDateId')
  DROP FUNCTION [dbo].[DateToDateId];
GO

/*-----------------------------------------------------------------------------------------------*/
/* All done!                                                                                     */
/*-----------------------------------------------------------------------------------------------*/
PRINT 'Updating AdventureWorks2012 for Today - Completed'
GO


Updating AdventureWorksDW2012 for Today

Like many of my fellow MVPs and Presenters, I use the Adventure Works sample data from Microsoft to do my presentations. Being a BI guy, I specifically use the AdventureWorksDW2012 version, the Data Warehouse of Adventure Works. I think you’d agree, though, that it’s gotten a little long in the tooth. All of the dates range from 2005 to 2008. This is especially irritating when demonstrating features reliant on the current date (think GETDATE() or NOW()).

Before you read further, let me stress again this is NOT for the typical AdventureWorks2012 database. This script is for the Data Warehouse version, AdventureWorksDW2012.

I scoured the search engines but couldn’t find anyone who had taken the time to come up with a way to update the database. Finally fed up, I did it myself. Below is a script which will add five years to each date in AdventureWorksDW2012. 2008 becomes 2013, 2007 becomes 2012, and so on. The script turned out to be pretty simple.

Before you begin, though, a few prerequisites. First, you will need to have AdventureWorksDW2012 installed on your system. A friend and co-worker, Bradley Ball (@SQLBalls | blog), pointed out one issue which I’ll pass along. He had some issues with the version of AdventureWorksDW2012 located at http://msftdbprodsamples.codeplex.com/releases/view/55330. When he just grabbed the mdf file and tried to create the database using the attach_rebuild_log option, it came out corrupted. Instead he suggested the version stored at http://www.wrox.com/WileyCDA/Section/Wrox-Books-Using-the-SQL-Server-2012-RTM-Database-Examples-Download.id-811144.html?DW_1118479580.zip. (I don’t think Wrox will mind, as I and many of my co-workers have written books for them, nice folks.)

Next, please note this script was written with SQL Server 2012 in mind. It could easily be adapted for 2008R2 by tweaking a few paths. Speaking of which, I use the default paths for everything; you’ll need to alter them if you installed to other locations.
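If you’re not sure what paths your instance is using, you can ask SQL Server directly. On SQL Server 2012 these server properties (a quick check, assuming a default instance) return the data and log folders:

SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath
     , SERVERPROPERTY('InstanceDefaultLogPath')  AS DefaultLogPath;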

Not wanting to mess with the original AdventureWorksDW2012, in Step 1 (these steps are numbered in the script below) I make a backup of the existing 2012 version. I then do a restore, renaming it to AdventureWorksDW2013. Be warned: if you have run this before and AdventureWorksDW2013 exists, it will be deleted. This is handy if you want an easy way to reset your 2013 version; if not, alter the script to suit your needs.

Later I will be inserting dates. I have a handy little routine that converts a traditional datetime data type to an integer, using the YYYYMMDD format common for data warehouse date keys. I probably could have done this using some version of FORMAT, but I already had the routine written so I just grabbed and reused it. Note it also does some bounds checking, etc., that really wasn’t needed here, but like I said, it was a grab and reuse. So in Step 2 I create the function.
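The conversion itself is just positional arithmetic, so July 4, 2013 becomes 20130704. You can see it in one line:

-- 2013 * 10000 + 7 * 100 + 4 = 20130704
SELECT YEAR('2013-07-04') * 10000
     + MONTH('2013-07-04') * 100
     + DAY('2013-07-04') AS DateId;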

In Step 3 I tackle the biggest task: inserting new rows into the date dimension. The DimDate table already had dates through the end of 2010, so I only had to generate 2011-2013. Inside a WHILE loop I iterate over each date individually, do the calculations to break out the various pieces of a date such as month number, quarter number, etc., and do an INSERT into the DimDate table. If you recall, the DimDate table in AdventureWorks has multi-language versions of the month and day names. I simply read the existing ones into table variables, then in the SELECT part of the INSERT INTO… SELECT statement do a join to these two table variables.

Of course, to do that I had to have a table to select from. None of my date data, though, existed in a table; each piece was calculated from the CurrentDate variable. So I simply created a third table variable named BogusTable and inserted a single row into it. This gave me something to join the month and day name tables to. I suppose I could have used CASE statements for each of the names, but this was more fun.
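If that sounds odd, here’s the same pattern stripped to its minimum (hypothetical names, and assuming the default DATEFIRST setting where Sunday is day 1); the one-row table exists purely so the calculated SELECT has something to join against:

DECLARE @OneRow TABLE ( PK TINYINT )
INSERT INTO @OneRow SELECT 1;

DECLARE @Days TABLE ( DayNum TINYINT, DayName NVARCHAR(10) )
INSERT INTO @Days VALUES (1, N'Sunday'), (2, N'Monday');

DECLARE @SomeDate DATE = '2013-01-06'  -- a Sunday

-- Every column is calculated from @SomeDate; @OneRow just supplies a row
SELECT DATEPART(dw, @SomeDate) AS DayNumber
     , d.DayName
  FROM @OneRow o
  JOIN @Days d
    ON DATEPART(dw, @SomeDate) = d.DayNum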

With the dates added to DimDate, it was time to move on to the Fact tables. In some cases it was very simple. For example, in Step 4.1 I just add 50,000 to the date key. Why 50,000? Simple date math. The dates are integers; 20080101 is really 20,080,101. To bring it up to 2013, I simply added 50,000, giving 20,130,101, or 20130101.
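You can verify the math right in SSMS; since the keys are plain integers, adding 50,000 bumps the year portion by five:

SELECT 20080101 + 50000 AS NewDateKey;   -- returns 20130101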

The two Sales fact tables had dates on the 2008 leap day. To fix those I simply backed them off a day, shifting them to February 28th. I took a slightly different approach with the Currency Rate fact table, shifting the 2008 leap day to the 2012 leap day, then omitting February 29th from the rest of the update. Also note that on this and the Product Inventory table, the Date Key was actually part of the Primary Key. Thus I had to first drop the Primary Key, make the changes to the dates, then recreate the Primary Key, as sketched below.
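In miniature, the pattern for those two tables looks like this (hypothetical constraint and table names; the real statements are in Steps 4.7 and 4.8 below):

ALTER TABLE [dbo].[SomeFact] DROP CONSTRAINT [PK_SomeFact];

UPDATE [dbo].[SomeFact]
   SET [DateKey] = [DateKey] + 50000
 WHERE [DateKey] <> 20120229;

ALTER TABLE [dbo].[SomeFact]
  ADD CONSTRAINT [PK_SomeFact] PRIMARY KEY CLUSTERED ( [ProductKey], [DateKey] );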

One last note on the Fact tables: all of the dates in the Call Center table were set to 2010. For those I merely added 30,000, shifting them from 2010 to 2013. (Don’t ask me why those have 2010 dates when the rest of the sample data is 2005-2008. I haven’t a clue.)

As a final step, Step 5, I drop the little helper function DateToDateId created way back in Step 2. And that’s it! You now have a handy demo / practice database with dates that are actually current.

A big thanks to my co-workers at Pragmatic Works (@PragmaticWorks | http://pragmaticworks.com ) for helping me test this out and making sure it worked with their stuff.

Enjoy!

 

PS: Most browsers don’t seem to render the code in a monospace font. Be assured that when you paste it into SSMS everything should line back up again, assuming of course you use a monospace font in SSMS.

 

/*-----------------------------------------------------------------------------------------------*/
/* Updating AdventureWorks2012 for Today                                                         */
/*                                                                                               */
/* Robert C. Cain, http://arcanecode.com @ArcaneCode                                             */
/*                                                                                               */
/* Script Copyright (c) 2013 by Robert C. Cain                                                   */
/* AdventureWorks database Copyright (c) Microsoft.                                              */
/*                                                                                               */
/* This script will make a backup of the AdventureWorksDW2012 database, then copy and restore it */
/* as AdventureWorksDW2013. It will then update it for current dates. 2008 now becomes 2013,     */
/* 2007 is now 2012, and so forth. This script is dependent on the AdventureWorksDW2012 sample   */
/* database already being installed. It won't change AdventureWorksDW2012 in any way.            */
/*                                                                                               */
/* Be warned, if AdventureWorksDW2013 exists, it will be deleted as part of this process.        */
/*                                                                                               */
/*-----------------------------------------------------------------------------------------------*/

PRINT 'Updating AdventureWorks2012 for Today - Starting'
GO

/*-----------------------------------------------------------------------------------------------*/
/* Step 1 - Make a copy of AdventureWorksDW2012 and restore as AdventureWorksDW2013              */
/*-----------------------------------------------------------------------------------------------*/
SET NOCOUNT ON

USE [master]

-- Step 1.1. Make a backup of AdventureWorksDW2012 ----------------------------------------------
PRINT 'Backing up AdventureWorksDW2012'
GO

BACKUP DATABASE [AdventureWorksDW2012]
    TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2012.bak'
  WITH NOFORMAT,
       INIT,
       NAME = N'AdventureWorksDW2012-Full Database Backup',
       SKIP,
       NOREWIND,
       NOUNLOAD,
       STATS = 10
GO


-- Step 1.2. Delete the database AdventureWorksDW2013 if it exists ------------------------------
PRINT 'Deleting AdventureWorksDW2013, if it exists'
GO

IF (EXISTS (SELECT 1
              FROM master.dbo.sysdatabases
             WHERE name = 'AdventureWorksDW2013' )
   )
   EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'AdventureWorksDW2013'
GO

IF (EXISTS (SELECT 1
              FROM master.dbo.sysdatabases
             WHERE name = 'AdventureWorksDW2013' )
   )
   DROP DATABASE [AdventureWorksDW2013]
GO

-- Step 1.3. Restore the database to a new copy -------------------------------------------------
PRINT 'Restoring AdventureWorksDW2012 to AdventureWorksDW2013'
GO

RESTORE DATABASE [AdventureWorksDW2013]
   FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2012.bak'
   WITH  FILE = 1,
   MOVE N'AdventureWorksDW2012_Data'
     TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2013_Data.mdf',
   MOVE N'AdventureWorksDW2012_Log'
     TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2013_log.ldf',
        NOUNLOAD,  STATS = 5

GO

PRINT 'Done Creating AdventureWorksDW2013'
GO



/*-----------------------------------------------------------------------------------------------*/
/* Step 2. Create a helper function to convert dates to a YYYYMMDD format Date Id.               */
/*-----------------------------------------------------------------------------------------------*/

USE [AdventureWorksDW2013]
GO

IF EXISTS (SELECT [name] FROM [sys].[all_objects] WHERE [name] = 'DateToDateId')
  DROP FUNCTION [dbo].[DateToDateId];
GO

CREATE FUNCTION [dbo].[DateToDateId]
(
  @Date DATETIME
)
RETURNS INT
AS
BEGIN

  DECLARE @DateId  AS INT
  DECLARE @TodayId AS INT

  SET @TodayId = YEAR(GETDATE()) * 10000
               + MONTH(GETDATE()) * 100
               + DAY(GETDATE())

  -- If the date is missing, or a placeholder for a missing date, set to the Id for missing dates
  -- Else convert the date to an integer
  IF @Date IS NULL OR @Date = '1900-01-01' OR @Date = -1
    SET @DateId = -1
  ELSE
    BEGIN
      SET @DateId = YEAR(@Date) * 10000
                  + MONTH(@Date) * 100
                  + DAY(@Date)
    END

  -- If there's any data prior to 2000 it was incorrectly entered, mark it as missing
  IF @DateId BETWEEN 0 AND 19991231
    SET @DateId = -1

  -- Commented out for this project as future dates are OK
  -- If the date is in the future, don't allow it, change to missing
  -- IF @DateId > @TodayId
  --   SET @DateId = -1

  RETURN @DateId

END

GO




/*-----------------------------------------------------------------------------------------------*/
/* Step 3. Add new dates to the dbo.DimDate table.                                               */
/*-----------------------------------------------------------------------------------------------*/
PRINT 'Adding new dates to dbo.DimDate'
GO

SET NOCOUNT ON

-- Later we will be writing an INSERT INTO... SELECT FROM to insert the new record. I want to
-- join the day and month name memory variable tables, but need to have something to join to.
-- Since everything is calculated, we'll just create this little bogus table to have something
-- to select from.
DECLARE @BogusTable TABLE
  ( PK TINYINT)

INSERT INTO @BogusTable SELECT 1;


-- Create a table variable to hold the days of the week with their various language versions
DECLARE @DayNameTable TABLE
  ( [DayNumberOFWeek]      TINYINT
  , [EnglishDayNameOfWeek] NVARCHAR(10)
  , [SpanishDayNameOfWeek] NVARCHAR(10)
  , [FrenchDayNameOfWeek]  NVARCHAR(10)
  )

INSERT INTO @DayNameTable
SELECT DISTINCT
       [DayNumberOFWeek]
     , [EnglishDayNameOfWeek]
     , [SpanishDayNameOfWeek]
     , [FrenchDayNameOfWeek]
  FROM dbo.DimDate

-- Create a month table to hold the months and their language versions.
DECLARE @MonthNameTable TABLE
  ( [MonthNumberOfYear] TINYINT
  , [EnglishMonthName]  NVARCHAR(10)
  , [SpanishMonthName]  NVARCHAR(10)
  , [FrenchMonthName]   NVARCHAR(10)
  )

INSERT INTO @MonthNameTable
SELECT DISTINCT
       [MonthNumberOfYear]
     , [EnglishMonthName]
     , [SpanishMonthName]
     , [FrenchMonthName]
  FROM dbo.DimDate

-- This is the start and end date ranges to use to populate the
-- dbo.DimDate dimension. Change if it's 2014 and you run across this script.
DECLARE @FromDate AS DATE = '2011-01-01'
DECLARE @ThruDate AS DATE = '2013-12-31'

-- CurrentDate will be incremented each time through the loop below.
DECLARE @CurrentDate AS DATE
SET @CurrentDate = @FromDate

-- FiscalDate will be set six months into the future from the CurrentDate
DECLARE @FiscalDate AS DATE

-- Now we simply loop over every date between the From and Thru, inserting the
-- calculated values into DimDate.
WHILE @CurrentDate <= @ThruDate
BEGIN

  SET @FiscalDate = DATEADD(m, 6, @CurrentDate)

  INSERT INTO dbo.DimDate
  SELECT [dbo].[DateToDateId](@CurrentDate)
       , @CurrentDate
       , DATEPART(dw, @CurrentDate) AS DayNumberOFWeek
       , d.EnglishDayNameOfWeek
       , d.SpanishDayNameOfWeek
       , d.FrenchDayNameOfWeek
       , DAY(@CurrentDate) AS DayNumberOfMonth
       , DATEPART(dy, @CurrentDate) AS DayNumberOfYear
       , DATEPART(wk, @CurrentDate) AS WeekNumberOfYear
       , m.EnglishMonthName
       , m.SpanishMonthName
       , m.FrenchMonthName
       , MONTH(@CurrentDate) AS MonthNumberOfYear
       , DATEPART(q, @CurrentDate) AS CalendarQuarter
       , YEAR(@CurrentDate) AS CalendarYear
       , IIF(MONTH(@CurrentDate) < 7, 1, 2) AS CalendarSemester
       , DATEPART(q, @FiscalDate) AS FiscalQuarter
       , YEAR(@FiscalDate) AS FiscalYear
       , IIF(MONTH(@FiscalDate) < 7, 1, 2) AS FiscalSemester
    FROM @BogusTable
    JOIN @DayNameTable d
      ON DATEPART(dw, @CurrentDate) = d.[DayNumberOFWeek]
    JOIN @MonthNameTable m
      ON MONTH(@CurrentDate) = m.MonthNumberOfYear

  SET @CurrentDate = DATEADD(d, 1, @CurrentDate)
END
GO

-- If you want to verify you can uncomment this line.
-- SELECT * FROM dbo.DimDate WHERE DateKey > 20110000

PRINT 'Done adding new dates to dbo.DimDate'
GO





/*-----------------------------------------------------------------------------------------------*/
/* Step 4. Update the Fact Tables with the new dates.                                            */
/*-----------------------------------------------------------------------------------------------*/


PRINT 'Update Fact Tables'
GO

SET NOCOUNT ON

-- To move forward five years, we simply add 50,000 to the date key

-- 4.1 FactFinance ------------------------------------------------------------------------------
PRINT '  FactFinance'
GO

UPDATE [dbo].[FactFinance]
   SET [DateKey] = [DateKey] + 50000;


-- 4.2 FactInternetSales ------------------------------------------------------------------------
PRINT '  FactInternetSales'
GO

-- There are a few rows where the dates fall on the 2008 leap day. Update these to back off a day
-- so the date add works OK
UPDATE [dbo].[FactInternetSales]
   SET [OrderDateKey] = 20080228
     , [OrderDate] = '2008-02-28'
 WHERE [OrderDateKey] = 20080229

UPDATE [dbo].[FactInternetSales]
   SET [DueDateKey] = 20080228
     , [DueDate] = '2008-02-28'
 WHERE [DueDateKey] = 20080229

UPDATE [dbo].[FactInternetSales]
   SET [ShipDateKey] = 20080228
     , [ShipDate] = '2008-02-28'
 WHERE [ShipDateKey] = 20080229

-- Now update the rest of the days.
UPDATE [dbo].[FactInternetSales]
   SET [OrderDateKey] = [OrderDateKey] + 50000
     , [DueDateKey] = [DueDateKey] + 50000
     , [ShipDateKey] = [ShipDateKey] + 50000
     , [OrderDate] = DATEADD(yy, 5, [OrderDate])
     , [DueDate] = DATEADD(yy, 5, [DueDate])
     , [ShipDate] = DATEADD(yy, 5, [ShipDate])


-- 4.3 FactResellerSales ------------------------------------------------------------------------
PRINT '  FactResellerSales'
GO

-- As with Internet Sales, there are rows where the dates fall on the 2008 leap day.
-- Update these to back off a day so the date add works OK
UPDATE [dbo].[FactResellerSales]
   SET [OrderDateKey] = 20080228
     , [OrderDate] = '2008-02-28'
 WHERE [OrderDateKey] = 20080229

UPDATE [dbo].[FactResellerSales]
   SET [DueDateKey] = 20080228
     , [DueDate] = '2008-02-28'
 WHERE [DueDateKey] = 20080229

UPDATE [dbo].[FactResellerSales]
   SET [ShipDateKey] = 20080228
     , [ShipDate] = '2008-02-28'
 WHERE [ShipDateKey] = 20080229

-- Now update the table
UPDATE [dbo].[FactResellerSales]
   SET [OrderDateKey] = [OrderDateKey] + 50000
     , [DueDateKey] = [DueDateKey] + 50000
     , [ShipDateKey] = [ShipDateKey] + 50000
     , [OrderDate] = DATEADD(yy, 5, [OrderDate])
     , [DueDate] = DATEADD(yy, 5, [DueDate])
     , [ShipDate] = DATEADD(yy, 5, [ShipDate])

-- 4.4 FactSalesQuota ---------------------------------------------------------------------------
PRINT '  FactSalesQuota'
GO

UPDATE [dbo].[FactSalesQuota]
   SET [DateKey] = [DateKey] + 50000

-- 4.5 FactSurveyResponse -----------------------------------------------------------------------
PRINT '  FactSurveyResponse'
GO

UPDATE [dbo].[FactSurveyResponse]
   SET [DateKey] = [DateKey] + 50000

-- 4.6 FactCallCenter ---------------------------------------------------------------------------
PRINT '  FactCallCenter'
GO

-- All the rows in call center have a 2010 date, just add 3 years to make these 2013
UPDATE [dbo].[FactCallCenter]
   SET [DateKey] = [DateKey] + 30000


-- 4.7 FactCurrencyRate -------------------------------------------------------------------------
PRINT '  FactCurrencyRate'
GO

-- Because the DateKey is part of the PK, we have to drop the key before we can update it
ALTER TABLE [dbo].[FactCurrencyRate] DROP CONSTRAINT [PK_FactCurrencyRate_CurrencyKey_DateKey]
GO

-- Shift the 2008 Leap Year days to 2012 Leap Year
UPDATE [dbo].[FactCurrencyRate]
   SET [DateKey] = 20120229
 WHERE [DateKey] = 20080229

-- Update everything except the leap year we fixed already
UPDATE [dbo].[FactCurrencyRate]
   SET [DateKey] = [DateKey] + 50000
 WHERE [DateKey] <> 20120229

-- Add the PK back
ALTER TABLE [dbo].[FactCurrencyRate]
  ADD CONSTRAINT [PK_FactCurrencyRate_CurrencyKey_DateKey] PRIMARY KEY CLUSTERED
      ( [CurrencyKey] ASC
      , [DateKey] ASC
      )
 WITH ( PAD_INDEX = OFF
      , STATISTICS_NORECOMPUTE = OFF
      , SORT_IN_TEMPDB = OFF
      , IGNORE_DUP_KEY = OFF
      , ONLINE = OFF
      , ALLOW_ROW_LOCKS = ON
      , ALLOW_PAGE_LOCKS = ON
      ) ON [PRIMARY]
GO


-- 4.8 FactProductInventory ---------------------------------------------------------------------
PRINT '  FactProductInventory'
GO

-- As with the previous step, the date is part of the primary key, so we need to drop it first.
ALTER TABLE [dbo].[FactProductInventory] DROP CONSTRAINT [PK_FactProductInventory]
GO

-- Shift the 2008 Leap Year days to 2012 Leap Year
UPDATE [dbo].[FactProductInventory]
   SET [DateKey] = 20120229
 WHERE [DateKey] = 20080229

-- Update everything except the leap year we fixed already
UPDATE [dbo].[FactProductInventory]
   SET [DateKey] = [DateKey] + 50000
 WHERE [DateKey] <> 20120229

-- Add the PK back
ALTER TABLE [dbo].[FactProductInventory]
  ADD CONSTRAINT [PK_FactProductInventory] PRIMARY KEY CLUSTERED
      ( [ProductKey] ASC
      , [DateKey] ASC
      )
 WITH ( PAD_INDEX = OFF
      , STATISTICS_NORECOMPUTE = OFF
      , SORT_IN_TEMPDB = OFF
      , IGNORE_DUP_KEY = OFF
      , ONLINE = OFF
      , ALLOW_ROW_LOCKS = ON
      , ALLOW_PAGE_LOCKS = ON
      ) ON [PRIMARY]
GO

PRINT 'Done updating the Fact tables'
GO



/*-----------------------------------------------------------------------------------------------*/
/* Step 5. Cleanup, remove the helper function we added earlier.                                 */
/*-----------------------------------------------------------------------------------------------*/
PRINT 'Removing Helper Function'
GO

IF EXISTS (SELECT 1 FROM [sys].[all_objects] WHERE [name] = 'DateToDateId')
  DROP FUNCTION [dbo].[DateToDateId];
GO

/*-----------------------------------------------------------------------------------------------*/
/* All done!                                                                                     */
/*-----------------------------------------------------------------------------------------------*/
PRINT 'Updating AdventureWorks2012 for Today - Completed'
GO