Zero to Hero with PowerShell and SQL Server–Precon at DevDataDay

{dev = data} DAY Birmingham 2016

Are you looking to learn about PowerShell? Do you use SQL Server? Then boy have I got a bargain for you!

I’m doing a precon for our upcoming devdataday event. The precon will take place on Friday, August 19th, the day before devdataday.

This is designed to be a course for the PowerShell novice. No prior knowledge required! I’ll start the day with an introduction to PowerShell, basic usage and cmdlets. Next we’ll dive into programming scripts in PowerShell, starting with the basics of loops, conditional logic, and functions. We’ll move into advanced topics, including creating your own reusable modules and testing them.
The real meat of the day comes next: learning how to interact with SQL Server from PowerShell. Both the SQL Provider and the SMO (SQL Management Objects) library will be covered in full. Most importantly, you’ll learn how to read the online SMO documentation and how to convert its examples into PowerShell syntax. The day will culminate in building a reusable module for common SQL Server tasks, such as T-SQL code generation or performing health checks on your servers.
The day will wrap up with a look at Pester, the new open source PowerShell testing tool. You’ll see how to test all the code you generated during the day. Don’t be left behind; learn how to leverage the power of PowerShell in your SQL Server environment.

To register, just go to https://devdatadaypowershellprecon.eventbrite.com/

Simple Way to Fix SQL Server Page Level Corruption

I’ve decided to try an experiment and allow the occasional guest blogger. This will give new bloggers additional exposure and draw eyes to both our sites.

Our first guest blogger is Andrew Jackson. Andrew is a SQL Server DBA and blogger. He likes to write about SQL Server, common problems and their solutions, handling database-related user queries, server and database maintenance, database management, and more.

You can find Andrew online at:

Facebook: https://www.facebook.com/people/Andrew-Jackson/100008825676608

Linkedin: https://www.linkedin.com/in/jackson-andrew-401147a5

Twitter: https://twitter.com/jacksonandrew32

Editor & Blog Contributor at: http://www.sqlmvp.org/

Without further delay, here is Andrew’s first contribution, Simple Way to Fix SQL Server Page Level Corruption.


 

Database corruption is one of the worst situations any Database Administrator can face. A SQL Server database stores crucial data, and corruption can make that data inaccessible, disrupting the workflow of any organization or business. Whenever corruption is discovered, the admin must resolve it promptly, before the issue spreads through the data infrastructure. It is therefore essential to take proper steps to protect the data from such damage. This article explains the causes of SQL Server page level corruption, and how to troubleshoot and fix it.

Page Level Corruption

Pages are the most basic unit of data storage in a database; every .mdf or .ndf file is logically divided into pages numbered from 0 to n. When the pages of a database file are hit by corruption, the DBA can repair each damaged page in isolation. Repairing and restoring a few pages individually is faster than repairing or restoring the entire file.

Causes:

· Hardware malfunction – most database corruption occurs due to hardware failure

· Issues in SQL Server itself – internal problems in SQL Server can sometimes result in corruption

· Unplanned shutdown while the database is open

· Virus intrusion

· Upgrading SQL Server to a newer version can also result in corruption

How to Fix Page Level Corruption

Admins usually have plans in place to tackle SQL Server database corruption, with strategies such as replication, database monitoring, and disaster recovery plans. However, there are situations where the admin does not have such a solution ready and needs another way to handle the problem. In SQL Server 2005 and later there is a feature called Page Checksum, which creates a checksum value used to detect the extent of damage in the database.

If the damage is so severe the database is unreadable by SQL Server, a complete restore of the database is required. On the other hand, if the damage is less severe and only a few pages are affected, the DBA can restore just those damaged pages, or use the DBCC utility.
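
As a rough sketch of what restoring a single damaged page can look like, the example below runs the restore from PowerShell using the SqlServer (or older SQLPS) module’s Invoke-Sqlcmd cmdlet. The database name, backup path, and page ID are placeholders only; the IDs of damaged pages can be found in msdb.dbo.suspect_pages.

Import-Module SqlServer   # or the older SQLPS module

# Restore just the damaged page (file 1, page 57 in this example) from the last full backup
$query = @"
RESTORE DATABASE MyDatabase
   PAGE = '1:57'
   FROM DISK = 'C:\Backups\MyDatabase_Full.bak'
   WITH NORECOVERY;
"@

Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query

# Then apply the existing log backups, take and restore a tail-log backup,
# and finish with: RESTORE DATABASE MyDatabase WITH RECOVERY;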

DBCC CHECKDB Command for Handling Page Level Corruption

DBCC CHECKDB is a command that can fix corruption issues in a SQL Server database. It fixes inconsistencies by performing database consistency checks.

It is a combination of DBCC CHECKCATALOG, DBCC CHECKALLOC, and DBCC CHECKTABLE.

The basic usage of the DBCC CHECKDB command is shown below:

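A minimal sketch of its most common forms, run here from PowerShell via Invoke-Sqlcmd; MyDatabase is a placeholder database name, and the repair options require the database to be in single-user mode.

# Check the database for consistency errors without attempting any repairs
Invoke-Sqlcmd -ServerInstance 'localhost' -Query 'DBCC CHECKDB (MyDatabase) WITH NO_INFOMSGS;'

# If repair is unavoidable and no good backup exists, CHECKDB offers two repair levels.
# REPAIR_ALLOW_DATA_LOSS can discard data, so restoring from backup is always preferred.
# DBCC CHECKDB (MyDatabase, REPAIR_REBUILD);
# DBCC CHECKDB (MyDatabase, REPAIR_ALLOW_DATA_LOSS);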

Other Possible Solutions

There is also an alternative way to handle page level corruption in a SQL database: the user can opt for a third-party repair tool. Such tools can recover highly corrupted pages from both primary and secondary database files, and typically support all versions of SQL Server. Using the approaches described above, a user or DBA can recover from page level corruption.

 


Thanks, Andrew, for your contribution. If you are interested in becoming a guest blogger, just email me at info <@> arcanetc.com.

SQL Saturday 498 Chattanooga

Do you like PowerShell? SQL Server? Are you anywhere close to Chattanooga, TN? Then don’t miss this SQL Saturday, June 25th, 2016.

I’m giving two sessions (yes two for the price of one!). The first session will be PowerShell 201. It covers advanced concepts you need to know, such as debugging, remoting, security, and code signing.

The second session focuses on using PowerShell with SQL Server. We’ll cover the use of the SQL Provider (SQLPS) as well as the more advanced SQL Management Object library (SMO).

You’ll find the full schedule here, as well as links to register, directions, and the like.

http://www.sqlsaturday.com/498/Sessions/Schedule.aspx 

Best of all, my demos are already uploaded, so you can download early and play along during the presentation!

I’ll mention Chattanooga is a great vacation town, so bring your whole family. There’s a huge aquarium, discovery museum, Lookout Mountain, and tons of attractions to keep them occupied while you’re having fun at SQL Saturday.

Chattanooga SQL Saturday June 27 2015–Zero to Hero with PowerShell and SQL Server

This Saturday, June 27 2015, I will be at SQL Saturday #410 in Chattanooga, TN. I’ll be presenting a session “Zero to Hero with PowerShell and SQL Server”, in which we’ll start with the basics of PowerShell, then move into working with SQL Server via the PS SQL Provider.

This is going to be an extremely fast-paced session, so you may wish to download the code samples ahead of time. I have already uploaded the samples to the event site. Just go to the Sessions menu, then Schedule, and you’ll see a big download button under the session. As of right now I’m the last session of the day, in room 219, but that is subject to change, so be sure to check the schedule upon arrival.

Be warned, there is far more code in the download than we’ll be able to cover in a one hour session. It is well commented though, and you should be able to understand it after the session.

If you are looking for even more PowerShell goodness, my friend Aaron Nelson ( @sqlvariant | blog ) is doing a PowerShell for Data Professionals session just after lunch. Our two sessions should work well together for those interested in using PowerShell in the world of SQL Server.

Hope to see you there!

Zero to Hero with PowerShell and SQL Server at 24 Hours of Pass

On September 9th I am co-presenting “Zero to Hero with PowerShell and SQL Server” for the 24 Hours of PASS. If you’ve not heard of 24 Hours of PASS, it is 24 straight hours of online presentations. This time the sessions are a preview of the SQL PASS Summit in Seattle, WA in November.

At the PASS Summit, two co-workers and I are presenting a full-day pre-con entitled Zero to Hero with PowerShell and SQL Server. I’m also doing a regular session, Make SQL Server POP with PowerShell.

The session for 24 Hours of PASS will take place at 00:00 GMT on September 10th, or for those of us in the states, September the 9th, 8 PM Eastern, 7 PM Central, 6 Mountain, or 5 Pacific. The session is titled the same as the precon, Zero to Hero with PowerShell and SQL Server. Through the preceding link you can see more about the session, get to the full schedule, and most importantly register!

Be sure to check out my co-presenters too, Bradley Ball (@SqlBalls | http://sqlballs.com ) and Jason Strate (@StrateSQL | http://www.jasonstrate.com )

Arcane-SQL–A PowerShell Module for Generating SQL Code

Overview

There are many PowerShell modules available for assisting the busy DBA with managing their SQL Server environment. This isn’t one of them. This module is targeted toward SQL Developers, with special functionality for data warehouse developers. A common task for BI professionals, one that is performed on almost every project, is the creation of a staging area. This might be a set of tables in the data warehouse, perhaps in their own schema, or in an entirely separate database often called an operational data store (ODS).

The staging tables are typically similar in structure to the ones in the source database. Similar, but not identical, as there are some small modifications which are commonly made to the staging tables. First, large data types such as VARCHAR(MAX) are seldom useful in data analysis and thus could be removed. Next, even the most casual user of SSIS will quickly see SSIS prefers to work with the double byte character sets (WSTR in SSIS, which maps to NVARCHAR in T-SQL) as opposed to the single byte (STR/VARCHAR) character sets. It can be helpful to convert these in the staging area.

This module can (optionally) do all of these things and more when it is used to generate CREATE TABLE or SELECT statements. Imagine, if you will, a source system with thousands of tables and the need to create a staging area for it in a new data warehouse. This quickly becomes a long, boring, tedious task. Now imagine being able to write a bit of PowerShell code and generate those tables in just a few minutes’ time.

Before diving in, it is highly suggested you download and review the example script, Arcane-SQL-Example.ps1. This demonstrates the most commonly used functions and provides patterns for their use.
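
If the download isn’t handy yet, the typical pattern looks roughly like the sketch below. The function names and switches are the ones documented later in this post, but the parameter names are illustrative guesses only; check the module’s built-in help (Get-Help) for the exact signatures.

Import-Module Arcane-SQL

# Load the SMO and SQL Provider assemblies - nothing else works until this is called
Enable-SqlAssemblies

# Assemble the provider style path from the server and instance names
$instancePath = Join-ProviderInstance -ServerName 'localhost' -InstanceName 'default'

# Get every table in the source database, then keep only the dbo schema
$tables = Get-TablesCollection -InstancePath $instancePath -DatabaseName 'AdventureWorks'
$tables = Select-SchemasInTableCollection -TableCollection $tables -Schemas 'dbo'

# Generate staging CREATE TABLE statements, cleaned up for the warehouse
foreach ($table in $tables)
{
    Get-CreateStatement -Table $table -OverrideSchema 'Staging' -Scrub -IncludeDropTable
}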

Functionality

While the module is full of functions, there are a few core ones that should be highlighted. Complete documentation can be found in the module itself, which has been fully documented using the native PowerShell Help system. In addition there is an example script file which demonstrates some of the most common tasks.

  • Enable-SqlAssemblies – This is the most important function; without calling it, nothing else works. Be aware the SQL Server assemblies (including the SMO – SQL Management Objects – and SQL Provider) need to be on the machine where this script is run. This module has been tested on, and is intended for, the workstations of SQL developers with SQL Server Developer Edition installed.
  • Join-InvokeInstance and Join-ProviderInstance – Most of the interaction done with the SQL Provider requires the server name and instance name, assembled in a path-like syntax. Invoke-Sqlcmd likewise requires this formatting; however, it has a little quirk. If the instance is "default", the Invoke requires it to be omitted, while the provider requires it to be present. These two functions reduce the confusion: simply pass in the server name and instance, and they will format things correctly.
  • Get-TablesCollection – When working with tables it is common to iterate over all the tables in a database. This function will generate a PowerShell array of table objects, each object being of type Microsoft.SqlServer.Management.Smo.Table. With table objects, the wide variety of properties each table exposes is available, such as schema name, table name, and row count.
  • Get-TableByName – Most commonly scripts will retrieve an array of tables using the above Get-TablesCollection, then iterate over them in a foreach loop. There are times, however, when only a single table from the collection is desired. For those cases Get-TableByName can be used to retrieve a specific table object based on the name of the table.
  • Remove-SchemasFromTableCollection and Select-SchemasInTableCollection – Get-TablesCollection will return an array of all the tables in a database. Often there is a need to work with only a subset of that table collection. These two functions will filter based on the schema and return a new array. The first, Remove-SchemasFromTableCollection, removes from the array all tables in the schemas that are passed in. The second, Select-SchemasInTableCollection, will retain only those tables in the schemas passed into the function.
  • Remove-TablesFromTableCollection and Select-TablesInTableCollection – These work as filters, similar to the functions above. Instead of the schema, however, they filter on table name. All tables whose names begin with the text passed in are either removed or, with the latter function, the only ones retained.
  • Get-PrimaryKeyIndex – Returns the primary key object for the passed in table object.
  • Get-PrimaryKeyColumnNames – Returns a comma-delimited list of the column names in the primary key.
  • Decode-IsPrimaryKeyColumn – Will determine if the passed in column name is part of the primary key index.
  • Get-TruncateStatement – Will generate a SQL Truncate Table statement based on the table object passed in.
  • Get-DropTableStatement – Generates a Drop Table statement, including the check to see if the table exists, for the passed in table object.
  • Get-CreateStatement – To simply say this function generates a create table statement would do it a disservice. It will take a table object and reverse engineer it, generating a create table statement. Unlike other code generators, it has a suite of parameters which allow customization of the generated statement with an eye toward the needs of a data warehouse developer. A few are:
    • DataTypeAlignColumn – Set the column number to line up the data type declarations on. Passing in a value of 1 will suppress alignment and simply place the data type after the column name. The default is column 50.
    • OverrideSchema – It is common to place staging tables in the data warehouse in their own schema, often named ‘Staging’ or ‘ETL’. Passing in a value here will include the new schema name in the create table declaration. If the table object passed in had a schema other than dbo, it is placed in front of the table name with an underscore. If it was dbo, the source schema is simply omitted.
    • PrependToTableName and AppendToTableName – Allows extra text to be placed before or after the table name. For example, it is common to create tables with _Delete, _Update, and _Insert in the staging area. This provides a simple way to do that.
    • AdditionalColumns – When creating tables in a data warehousing environment, there are often extra columns to hold metadata about the ETL process. A user of this function can create an array of additional columns using the Add-ColumnDefinition function and have them added to the create table statement.
    • Scrub – This is a very powerful switch. When added it will perform a cleanup to make the output suitable for data warehousing. Columns with large data types such as VARCHAR(MAX) are removed. All single byte character sets in the source are converted to double byte sets.
    • SuppressIdentity – Source systems will sometimes use the IDENTITY clause in the primary key column. Using this switch will suppress that identity clause from being generated in the new create table statement.
    • SuppressNotNull – Often staging tables will not be concerned with null versus not null values. Using this switch will create all columns as nullable, regardless of their setting in the source.
    • IncludeDropTable – Adding this switch will include an ‘if exists drop table’ style clause prior to the create table statement.
    • PrimaryKeysOnly – Will generate a create table statement that only has the primary keys found in the source system.

Finally, if a column in the source table object has a custom data type, the script will reverse engineer the data type back to its basic SQL data type.

  • Get-SelectStatement – Like its sister function Get-CreateStatement, under the covers this function provides a lot of power and flexibility to the statement it creates. Additional columns can be added, columns can be specified to order the output by, table aliases can be used, and most powerful of all is the ability to generate a HASHBYTES column, including the ability to remove specified columns from the hash byte calculation. Here are some of its parameters (a usage sketch follows this list):
    • AsColumn – The routine will line up the AS <column alias> at the column number passed in here. The default is 50. To disable alignment, set this to 1.
    • PrependToColumnName – Text to include before each column name.
    • AppendToColumnName – Text to place after each column name.
    • AdditionalColumns – A collection of additional columns to be added to the SELECT statement. Useful for adding metadata columns. All items in the AdditionalColumns collection should be generated using the Add-OutputColumn function.
    • OrderByColumns – A list of columns to add to the ORDER BY clause. All items in the OrderByColumns collection should be generated using the Add-OutputColumn function.
    • TableAlias – Allows user to specify an alias to use for the table. The alias is then put in front of each column name.
    • HashBytes – If included the select statement will include a HASHBYTES function with all columns except the primary keys and any columns included in the OmitFromHashBytes collection. The name passed in this parameter will be used for the name of the HashBytes column.
    • OmitFromHashBytes – A collection of column names that should be excluded from the HashBytes calculated column. Useful for excluding metadata columns. All items in the OmitFromHashBytes collection should be generated using the Add-OutputColumn function.
    • Scrub – When included this will remove certain data columns from the output, such as BINARY, NVARCHAR(MAX), XML, and other large types not normally used in data warehouses. Additionally VARCHAR/CHAR are converted to NVARCHAR/NCHAR, and DATETIME converted to DATETIME2(4).
    • Flatten – When included will return the SELECT statement as one long string, without any Carriage Return / Line Feed characters. Additionally, any additional spacing (such as indicated with the AsColumn) is eliminated.
    • IncludeOrderByPK – When included the Primary Keys in the table object are included in the order by clause. If any columns are passed in the OrderByColumns parameter, the Primary Keys occur first in the Order By clause, followed by any columns in the OrderByColumns parameter.
    • IncludeNoLock – When included, a WITH NOLOCK clause is added to the SELECT statement.
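
As promised above, here is a hypothetical call to Get-SelectStatement. The switches are the ones documented in the list; the other parameter names, the table name, and the ETLLoadDate metadata column are illustrative guesses only.

# Pull a single table out of the collection built in the earlier sketch
$table = Get-TableByName -TableCollection $tables -TableName 'DimCurrency'

# Generate a SELECT with a HASHBYTES column, ordered by the primary key
Get-SelectStatement -Table $table `
                    -TableAlias 'src' `
                    -HashBytes 'RowHash' `
                    -OmitFromHashBytes (Add-OutputColumn 'ETLLoadDate') `
                    -IncludeOrderByPK `
                    -IncludeNoLock `
                    -Scrub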

Construction

Those PowerShell experts who review the code may note that in many places it does not follow the most "powershelly" way of doing things. In some places, rather than using pipelining, it was decided to use a foreach loop, for example. The intended audience for this module is T-SQL developers who may not be as comfortable in PowerShell as they are in T-SQL. Thus, code that aligns more closely with T-SQL patterns is more useful and more easily modified by SQL developers.

When development first started, attempts were made to use advanced functions, using the pipeline for input and output. At some point, however, this didn’t make sense for a majority of the functions. Time constraints further impeded this effort. Some future revision may attempt to migrate selected functions back to an advanced design, but for now they will have to stand as is.

Development Environment

This module is intended to be used on a developer workstation, not on a server, and especially not on a production server. As such, deployment has been made simple: just copy the Arcane-SQL folder to the developer’s PowerShell module library. On a standard Windows 7 machine this would be C:\Users\<<usernamehere>>\Documents\WindowsPowerShell\Modules. If the WindowsPowerShell folder and Modules subfolder do not exist, they will need to be created first.

To keep things simple, no attempt was made to sign the script. If this is an issue, the developer using this module can self-sign it on their PC. Check the execution policy on the workstation where the module is installed to ensure sufficient rights to run the module.
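
A quick sketch of that deployment and the execution policy check might look like this (the source path and the choice of RemoteSigned are examples only):

# Copy the module folder into the user's module library
Copy-Item -Path '.\Arcane-SQL' `
          -Destination "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\Arcane-SQL" `
          -Recurse

# Confirm scripts are allowed to run; RemoteSigned permits local, unsigned scripts
Get-ExecutionPolicy
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

# Load the module into the current session
Import-Module Arcane-SQL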

This module was developed on machines with SQL Server (Developer Edition) 2008R2 and 2012 installed (some machines had both) and worked without issue. One machine it was tested on had three versions of SQL Server: 2008R2, 2012, and 2014. On that machine there were some errors in a few of the functions when passing in SMO table objects. Those are still being investigated.

SQL Server security was assumed to be handled using built-in Windows credentials. Thus the logged-in user will need rights to the target SQL Server based on their Windows credentials.

The development machine was running PowerShell v4; however, v3 should work as well.
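
If you are not sure which version a given machine is running, one line will tell you:

# Displays the PowerShell engine version, e.g. 4.0
$PSVersionTable.PSVersion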

This module was developed using SAPIEN PowerShell Studio 2014. To make life easy for other developers the PowerShell Studio files (Arcane-SQL.psproj, Arcane-SQL.psproj.build, and Arcane-SQL.psprojs) were included in the code. If you are using a different editor, such as the PowerShell ISE, simply discard these files.

Warranty

To put it succinctly, there is none. No guarantee is made for the code in this module, users of this module assume all risks. While I am happy to receive bug reports, I make no promises or guarantees if, or when, they will be fixed.

Contributions

No, not the money kind, code contributions. If anyone wishes to extend the functionality of this module I am happy to collaborate as long as the coding standards demonstrated in this module are adhered to, and the contributions are relevant to the goals of this module. Be aware though this is not a money making effort, so expect no monetary reimbursement for any contributions.

Download

You can download the module and its example at:

http://gallery.technet.microsoft.com/Arcane-SQL-A-PowerShell-185651ad

Importing MongoDB Data Using SSIS 2012

I have embarked on a little quest to learn other database platforms (especially NoSQL) as more and more of our clients at Pragmatic Works have them in their enterprise, and want to be able to import data from them into their SQL Server data warehouses using SQL Server Integration Services (SSIS). While I found several articles that showed how to do so, these were outdated due to changes in the MongoDB C# driver. After quite a bit of effort figuring out how to get this working, I thought I’d pass along my hard-fought knowledge.

First, I assume you are familiar with MongoDB (http://www.mongodb.org/) and SQL Server (https://www.microsoft.com/en-us/sqlserver/default.aspx). In my examples I am using SSIS 2012 and MongoDB 2.4.8, along with the C# driver version 1.7 for MongoDB available at http://docs.mongodb.org/ecosystem/drivers/csharp/ .

To begin, download and install the C# driver. This next step is important, as there was a change that occurred with version 1.5 of the driver: the DLLs are no longer installed in the GAC (Global Assembly Cache) automatically. They must be there, however, for SSIS to be able to use them.

By default, my drivers were installed to C:\Program Files (x86)\MongoDB\CSharpDriver 1.7. You’ll want to open a CMD window in Administrator mode and navigate to this folder. Next you’ll need GACUTIL; on my computer I found the most recent version at:

C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\x64\

A simple trick to find yours: since you are already in the CMD window, just move to the C:\Program Files (x86) folder and do a “dir /s gacutil.exe”. It will list all occurrences of the program; just use the one with the most recent date. Register the DLLs by entering these commands:

"C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\x64\gacutil" /i MongoDB.Bson.dll

"C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\x64\gacutil" /i MongoDB.Driver.dll

Note the quote marks (") around the path are important, so the CMD window correctly separates the gacutil program from its parameters.

Once that is done, create a new SQL Server Integration Services project in SQL Server Data Tools (SSDT), what used to be called BIDS in SQL Server 2008R2 (and previous). Put a Data Flow Task on the Control Flow design surface. Then open the Data Flow Task for editing.

Next, drag and drop a Script Component transformation onto the Data Flow design surface. When prompted, change the component type to Source.


Now edit the script transform by double clicking on it. Move to the Inputs and Outputs page. For my test, I am using the dbo.DimCurrency collection I created using the technique I documented in the previous post, Exporting Data from SQL Server to CSV Files for Import to MongoDB Using PowerShell ( https://arcanecode.com/2014/01/13/exporting-data-from-sql-server-to-csv-files-for-import-to-mongodb-using-powershell/ )

I renamed the output from “output” to “MongoDB_DimCurrency”. I then added four columns, CurrencyName, CurrencyAlternateKey, CurrencyKey, and ID.


Make sure to set CurrencyName, CurrencyAlternateKey, and ID to “Unicode string [DT_WSTR]” Data Type. Then change CurrencyKey to “four byte signed integer [DT_I4]”.

Now return to the Script page and click Edit Script. In the Solution Explorer pane, expand References, right click and pick Add Reference. Go to Browse, and navigate to the folder where the MongoDB C# drivers are installed. On my system it was in C:\Program Files (x86)\MongoDB\CSharpDriver 1.7\. Add both MongoDB.Driver.dll and MongoDB.Bson.dll.


Click OK when done. Your Solution Explorer should now show the MongoDB.Bson and MongoDB.Driver assemblies under References.


Now in the script, expand the Namespaces region and add these lines:

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Bson.Serialization;

Now scroll down to the CreateNewOutputRows() procedure. Here is a sample of the code I used:

public override void CreateNewOutputRows()
{
  // Connection information for the MongoDB server and source database
  string connectionString = "mongodb://localhost";
  string databaseName = "AdventureWorksDW2014";

  // MongoClient is the current way to connect; older driver versions used MongoServer directly
  var client = new MongoClient(connectionString);
  var server = client.GetServer();
  var database = server.GetDatabase(databaseName);
  string CurrencyKey = "";

  // Read every document in the collection and copy it to the output buffer
  foreach (BsonDocument document in database.GetCollection<BsonDocument>("dbo.DimCurrency").FindAll())
  {
    MongoDBDimCurrencyBuffer.AddRow();
    MongoDBDimCurrencyBuffer.CurrencyName = document["CurrencyName"] == null ? "" : document["CurrencyName"].ToString();
    MongoDBDimCurrencyBuffer.CurrencyAlternateKey = document["CurrencyAlternateKey"] == null ? "" : document["CurrencyAlternateKey"].ToString();

    // CurrencyKey is an integer column in the output, so convert the string value
    CurrencyKey = document["CurrencyKey"] == null ? "" : document["CurrencyKey"].ToString();
    MongoDBDimCurrencyBuffer.CurrencyKey = Convert.ToInt32(CurrencyKey);
    MongoDBDimCurrencyBuffer.ID = document["_id"] == null ? "" : document["_id"].ToString();
  }
}

I start by defining a connection string to the MongoDB server, followed by the database name. I then create a MongoClient object. Note the MongoClient is the new way of connecting to the MongoDB server. In earlier versions of the C# driver, you used MongoServer objects.

I then cycle through each document in the collection “dbo.DimCurrency”, using the FindAll() method. For each item I use the AddRow() method to add a row to the buffer. In order to find the proper name for the buffer I went to the Solution Explorer and expanded the BufferWrapper.cs file. This is a class created by the script transform with the name of the output buffer.


For each column in my outputs, I map a column from the document. Note the use of the ternary operator ?: to strip out nulls and replace them with empty strings. String columns can be mapped directly from the document object to the output buffer’s columns.

The CurrencyKey column, being an integer, had to be converted from a string to an integer. To make it simple I created a string variable to hold the return value from the document, then used the Convert class to convert it to an Int32.

Once you’ve done all the above, validate the code by building it. If it all checks out, save your work, close the code window, then close the Script Transformation Editor by clicking OK.

Now place a destination of some kind on the Data Flow. Since I have my company’s Task Factory tools I used a TF Terminator Destination, but you could also use a Row Count destination. On the data flow path between the two, right-click and enable the Data Viewer. Execute the package; if all goes well, the data viewer should pop up showing the rows read from MongoDB.


A few final notes. This test was done using a MongoDB document schema that was flat, i.e. it didn’t have any documents embedded in the documents I was testing with. (Hopefully I’ll be able to test that in the future, but it will be the subject of a future post.) Second, the key was the registering of the DLLs in the GAC. Until I did that, I couldn’t get the package to execute. Finally, by using the newer API for the MongoDB objects I’ve ensured compatibility for the future.