Installing Windows PowerShell Modules on Multiple Users' Computers – Updated

A few years ago (2010, wow, four years now) Ed Wilson, aka the Scripting Guy, wrote a blog post on how to copy PowerShell modules onto multiple users' computers. You'll find the original version of his script at:

http://blogs.technet.com/b/heyscriptingguy/archive/2010/01/19/hey-scripting-guy-january-19-2010.aspx

I got my copy, though, from the new version of his excellent book, Windows PowerShell 4.0 Best Practices.

I have used this for a while, but it turned out there was a drawback. Let me explain.

I'm currently developing what is becoming a rather large module. For manageability, I wanted to break it down into smaller pieces. The solution is to put logical groupings of functions into individual PS1 files, then dot source those from the module:

. $PSScriptRoot\MyPiecesPartsScript.ps1

Then I could break it up as much as I wanted. The problem with the original script was that it only copied PSM1 and PSD1 files. In his script he has a Get-ChildItem which feeds his Copy-Module function. When I added *.PS1 to the list of things to include, the script created a folder for each script. Not what I wanted.

The solution for me was to write another function which would get all the folders in my source and copy the PS1s to the appropriate module folder. I had one other criterion, though. In each of my module folders I have a test script where I can test out my module. I name these with the same name as the module but with a -Test on the end. Naturally I don't want to copy these to the user's module folder.

Here then is the function I created. You could copy and paste this right below the functions in Ed’s script:

function Copy-SupportingScripts ()
{
  [CmdletBinding()]
  param ([Parameter( Mandatory = $true,
                     ValueFromPipeline = $true,
                     ValueFromPipelineByPropertyName = $true,
                     HelpMessage = 'Please pass a Directory object.'
                     )]
         [System.IO.DirectoryInfo] $Folder,
         [Parameter( Mandatory = $false,
                     ValueFromPipeline = $true,
                     ValueFromPipelineByPropertyName = $true,
                     HelpMessage = 'Enter files to exclude.'
                     )]
         [string] $Exclude
        )

  foreach($dir in $Folder)
  {
    $UserPath = $env:PSModulePath.split(";")[0]
    $targetPath = "$UserPath\$($dir.Name)"
    $sourcePath = "$($dir.FullName)\*.ps1"
    Write-Verbose "Copy $sourcePath to $targetPath"
    if ($Exclude.Length -gt 0)
    {
      Write-Verbose "    Excluding $Exclude in the copy."
      Copy-Item -Path $sourcePath `
                -Destination $targetPath `
                -Exclude $Exclude `
                -Force | Out-Null
    }
    else
    {
      Copy-Item -Path $sourcePath `
                -Destination $targetPath `
                -Force | Out-Null
    }
  }
}

To call it, at the bottom of Ed’s original script you can use:

Get-ChildItem -Path C:\PS\Arcane-Modules -Directory |
  ForEach-Object { Copy-SupportingScripts -Folder $_ -Exclude *-Test.ps1 -Verbose }

Here is the final result of my script merged with Ed's. Make sure to give him plenty of kudos for the original.

# -----------------------------------------------------------------------------
# Script: Copy-Modules.ps1
# Author: ed wilson, msft
# Date: 09/07/2013 17:33:15
# Updated by: Robert C. Cain, @ArcaneCode, Pragmatic Works
# Updated Date: 02/20/2014
# Keywords: modules
# comments: installing
# Windows PowerShell 4.0 Best Practices, Microsoft Press, 2013
# Chapter 10
# -----------------------------------------------------------------------------
Function Get-OperatingSystemVersion
{
(Get-WmiObject -Class Win32_OperatingSystem).Version
} #end Get-OperatingSystemVersion

Function Test-ModulePath
{
$VistaPath = "$env:userProfile\documents\WindowsPowerShell\Modules"
$XPPath =  "$env:Userprofile\my documents\WindowsPowerShell\Modules"
if ([int](Get-OperatingSystemVersion).substring(0,1) -ge 6)
   {
     if(-not(Test-Path -path $VistaPath))
       {
         New-Item -Path $VistaPath -itemtype directory | Out-Null
       } #end if
   } #end if
Else
   { 
     if(-not(Test-Path -path $XPPath))
       {
         New-Item -path $XPPath -itemtype directory | Out-Null
       } #end if
   } #end else
} #end Test-ModulePath

Function Copy-Module([string]$name)
{
$UserPath = $env:PSModulePath.split(";")[0]
$ModulePath = Join-Path -path $userPath `
               -childpath (Get-Item -path $name).basename
if ( (Test-Path $modulePath) -eq $false)
   { New-Item -path $modulePath -itemtype directory | Out-Null }
Copy-Item -path $name -destination $ModulePath -force | Out-Null

}

function Copy-SupportingScripts ()
{
  [CmdletBinding()]
  param ([Parameter( Mandatory = $true,
                     ValueFromPipeline = $true,
                     ValueFromPipelineByPropertyName = $true,
                     HelpMessage = 'Please pass a Directory object.'
                     )]
         [System.IO.DirectoryInfo] $Folder,
         [Parameter( Mandatory = $false,
                     ValueFromPipeline = $true,
                     ValueFromPipelineByPropertyName = $true,
                     HelpMessage = 'Enter files to exclude.'
                     )]
         [string] $Exclude
        )

  foreach($dir in $Folder)
  {
    $UserPath = $env:PSModulePath.split(";")[0]
    $targetPath = "$UserPath\$($dir.Name)"
    $sourcePath = "$($dir.FullName)\*.ps1"
    Write-Verbose "Copy $sourcePath to $targetPath"
    if ($Exclude.Length -gt 0)
    {
      Write-Verbose "    Excluding $Exclude in the copy."
      Copy-Item -Path $sourcePath `
                -Destination $targetPath `
                -Exclude $Exclude `
                -Force | Out-Null
    }
    else
    {
      Copy-Item -Path $sourcePath `
                -Destination $targetPath `
                -Force | Out-Null
    }
  }
}

# *** Entry Point to Script ***
$sourceFolder = "C:\PS\Arcane-Modules"

# Ensure the PowerShell folder exists in the user's Documents folder
Test-ModulePath

# Copy the module files (psd1 and psm1)
Get-ChildItem -Path $sourceFolder -Include *.psm1,*.psd1 -Recurse |
  ForEach-Object { Copy-Module -name $_.fullName }

# Copy any supporting ps1 files.
# Remove the -Exclude directive if you don't want to exclude anything.
Get-ChildItem -Path $sourceFolder -Directory |
  ForEach-Object { Copy-SupportingScripts -Folder $_ `
                                          -Exclude *-Test.ps1 `
                                          -Verbose
                 }

One final and very important note: Ed's original script was written in the PowerShell v2 days. My function uses the -Directory switch introduced in PowerShell v3, so you will need at least v3 to make this work.

Exporting Data from SQL Server to CSV Files for Import to MongoDB Using PowerShell

I've been exploring other database systems, in order to determine how to import data from them using SQL Server Integration Services (SSIS). My first step, though, was to create some test data. I wanted something familiar, so I decided to export the Adventure Works Data Warehouse sample database and import it into MongoDB. While I had many options, I decided the simplest way was to first export the data to CSV files, then use the MongoDB utility mongoimport. Naturally I turned to PowerShell to create an automated, reusable process.

First, if you need the Adventure Works DW database, you’ll find it at http://msftdbprodsamples.codeplex.com/. Second, I did my export from a special version of Adventure Works DW I created called AdventureWorksDW2014. This is optional, but if you want to have a version of Adventure Works DW updated with current dates, see my post at https://arcanecode.com/2013/12/08/updating-adventureworksdw2012-for-2014/. Third, I assume you are familiar with MongoDB, but if you want to learn more go to http://www.mongodb.org/.

Below is the PowerShell 3 script I created. The script is broken into four regions. The first, User Settings, contains the variables that you, the user, might need to change to get the script to run. It has things like the name of the SQL Server database, the path to MongoDB, etc.

The second region, Common, establishes variables that are used by the remaining two regions. You shouldn’t need to change or alter these. The third region accesses SQL Server and exports each table in the selected database to a CSV format file.

The final region, “Generate MongoDB import commands”, creates a batch (.BAT) file which has all the commands needed to run mongoimport for each CSV file. I decided not to have the PowerShell script execute the .BAT file so it could be reviewed before it is run. There might be some tables you don’t want to import, etc.

It is also quite easy to adapt this script to just create CSV files from SQL Server without using the MongoDB piece. Simply remove the fourth and final region, then in the Common and User Settings regions remove any variables that begin with the prefix "mongo".

As the comments do a good job of explaining what happens, I'll let you review the included documentation for step-by-step instructions.

#==================================================================================================
# SQLtoCSVtoMongoDb.ps1
# Robert C. Cain | @ArcaneCode | http://arcanecode.com
#
# If you need a simple way to export data from SQL Server to MongoDb, here is one way to do it.
# The script starts by setting up some variables to the server environment (see the User Settings
# region)
#
# Next, it exports data from each table in the selected database to individual CSV files.
# Finally, it generates a batch file which executes mongoimport for each csv file to import
# into MongoDb.
#
# I broke this into four regions so if all that is desired is a simple export of data to CSVs,
# you can simply omit the final region along with any variables that begin with "mongo".
#
# While I could have gone ahead and run the batch file at the end, I chose not to in order to
# give you time to review the output prior to running the batch file.
#==================================================================================================

Clear-Host

#region User Settings

  # In this section, set the variables so they are appropriate for your project / environment
 
  # This is the spot where you want to store the generated CSVs.
  # Make sure it does NOT end in a \
  $csvPath = "C:\mongodb"

  # If you are running this on a computer other than the server, set the name of the server below
  $sqlServer = $env:COMPUTERNAME

  # If you have a named instance, be sure to replace "default" with the name of the instance
  $sqlInstance = "\default"

  # Enter the name of the database to export below
  $sqlDatabaseName = "AdventureWorksDW2014"

  # The settings below only apply to the MongoDB code generation
  # Assemble the path to MongoDB. This assumes the utilities are stored in the default bin folder
  $mongoPath = "C:\mongodb"
  $mongoImport = "$mongoPath\bin\mongoimport"

  # Set the server name and port
  $mongoHost = "localhost"   # Leave blank to default to localhost
  $mongoPort = ""            # Leave blank to default to 27017
 
  # Set the user name and password, leave blank if it isn’t needed
  $mongoUser = ""
  $mongoPW = ""

  # Enter the name of the database to import to.
  $mongoDatabaseName = "AdventureWorksDW2014"

  # Upserts are REALLY slow, especially on large datasets. Setting this to $true will turn off
  # the upsert option. If set to $true, you are responsible for either deleting all documents
  # in the collection beforehand, or accepting the risk of duplicates.
  #
  # Setting to false will enable the upsert option for mongoimport, and attempt to determine the
  # keys and (if found) add them to the final mongoimport command.
  $mongoNoUpsert = $true

#endregion

#region Common ------------------------------------------------------------------------------
 
  # This section sets variables used by both regions below. There is no need to alter anything
  # in this region.

  # Import the SQLPS provider (if it’s not already loaded)
  if (-not (Get-PSProvider SqlServer -ErrorAction SilentlyContinue))
    { Import-Module SQLPS -DisableNameChecking }

  # Assemble the full servername \ instance
  $sqlServerInstance = "$sqlServer$sqlInstance"   # $sqlInstance already includes the leading backslash

  # Assemble the full path for the SQL Provider to get to the database
  $sqlDatabaseLocation = "SQLSERVER:\sql\$sqlServerInstance\databases\$sqlDatabaseName"

  # Now tack on the Tables 'folder' to the SQL Provider path, then move there
  $sqlTablesLocation = $sqlDatabaseLocation + "\Tables"
  Set-Location $sqlTablesLocation

  # Get a list of tables in this database
  $sqlTables = Get-ChildItem

#endregion

#region Export SQL Data ---------------------------------------------------------------------
  # In this section we will export data from each table in the database to a CSV file.
  # WARNING: If the CSV file exists, it will be overwritten.

  # These are just used to display informational messages during processing
  $sqlTableIterator = 0
  $sqlTableCount = $sqlTables.Count

  # Iterate over each table in the database
  foreach($sqlTable in $sqlTables)
  {
    $sqlTableName = $sqlTable.Schema + "." + $sqlTable.Name   

    # I’ll grant you the next little bit of formatting for the progress messages is a bit
    # OCD on my part, but I like my output formatted and easy to read.
    $sqlTableIterator++
    $padCount = " " * (1 + $sqlTableCount.ToString().Length - $sqlTableIterator.ToString().Length)
    $sqlTableIteratorFormatted = $padCount + $sqlTableIterator

    if( $sqlTableName.Length -gt 50 )
      { $padTable = " " }
    else
      { $padTable = " " * (50 - $sqlTableName.Length) }

    Write-Host -ForegroundColor White -NoNewline "Processing Table $sqlTableIteratorFormatted of $sqlTableCount : $sqlTableName $padTable"
   
    # If the instance is "default", we have to exclude it when we use Invoke-SqlCmd
    if($sqlInstance.ToLower() -eq "\default")
      { $sqlSI = $sqlServer }
    else
      { $sqlSI = $sqlServerInstance }

    # Load an object with all the data in the table
    # Note if you have especially large tables you may need to modify this
    # section to break things into smaller chunks.
    $sqlCmd = "SELECT * FROM " + $sqlTableName
    $sqlData = Invoke-Sqlcmd -Query $sqlCmd `
                             -ServerInstance $sqlSI `
                             -SuppressProviderContextWarning

    # Now write the data out.
    # Note utf8 encoding is important, as it is all mongoimport understands
    # Also need to omit the Type Info header PowerShell wants to write out
    Write-Host -ForegroundColor Yellow "    Writing to file $sqlTableName.csv"
    $sqlData | Export-Csv -NoTypeInformation -Encoding "utf8" -Path "$csvPath\$sqlTableName.csv"

  }

  # Just add a blank line after the processing ends
  Write-Host

#endregion

#region Generate MongoDB import commands ----------------------------------------------------

  # In this region we will generate the commands to import our newly exported data
  # into an existing database in MongoDB. This is an example of our desired output (wrapped
  # onto multiple lines for readability, in the output it will be a single line):

  #  C:\mongodb>bin\mongoimport --host localhost --port 27017
  #                             --db AdventureWorksDW2014 --collection DimSalesReason
  #                             --username Me --password mySuperSecureP@ssW0rd!
  #                             --type csv --headerline --file DimSalesReason.csv
  #                             --upsert --upsertFields SalesReasonKey

  # Note several of these parameters are optional, and could use defaults, or be potentially
  # omitted from the final output, based on the choices at the very beginning of this script

  # Feel free to alter the $mongoCommand as needed for other circumstances

  # Final warning, the database must already exist in MongoDb in order to import the data. This
  # script will not generate the database for you.

  # Create the name for the batch file we will generate
  $mongoBat = $csvPath + "\Import_SQL_" + $sqlDatabaseName + "_to_MongoDb_" + $mongoDatabaseName + ".bat"

  # See if file exists, if so delete it
  if (Test-Path $mongoBat)
    { Remove-Item $mongoBat }

  # These are just used to display informational messages during processing
  $sqlTableIterator = 0
  $sqlTableCount = $sqlTables.Count

  # mongoimport allows us to do upserts, helping to eliminate duplicate rows on import.
  #
  # To make an upsert work there has to be a key column to match up on. Fortunately,
  # most tables in the SQL Server world have Primary Keys, so we can find out what
  # columns those are and add it to the command. Note if there is no PK in SQL Server,
  # no upsert will be attempted.
  #
  # Note though that upserts are REALLY slow, so the option to skip them is
  # built into the script and set at the top (mongoNoUpsert). The generated batch file
  # assumes that either a) you have deleted all data from the collection ahead of time,
  # or b) you are OK with the risk of duplicate data.

  # Iterate over each table in the database to build the mongoimport command
  foreach($sqlTable in $sqlTables)
  {
    $sqlTableName = $sqlTable.Schema + "." + $sqlTable.Name

    # A bit more OCD progress messages
    $sqlTableIterator++
    $padCount = " " * (1 + $sqlTableCount.ToString().Length - $sqlTableIterator.ToString().Length)
    $sqlTableIteratorFormatted = $padCount + $sqlTableIterator
    Write-Host -ForegroundColor Green "Building mongoimport command for table $sqlTableIteratorFormatted of $sqlTableCount : $sqlTableName"

    # Begin building the command
    $mongoCommand = "$mongoImport "
   
    if ($mongoHost.Length -ne 0)
      { $mongoCommand += "--host $mongoHost " }

    if ($mongoPort.Length -ne 0)
      { $mongoCommand += "--port $mongoPort " }

    $mongoCommand += "--db $mongoDatabaseName --collection $sqlTableName "

    if ($mongoUser.Length -ne 0)
      { $mongoCommand += " --username $mongoUser --password $mongoPW " }

    $mongoCommand += " --type csv --headerline --file $csvPath\$sqlTableName.csv "
       
    # Build the upsert clause, if the user has elected to use it.
    if ($mongoNoUpsert -eq $false)
    {
      $mongoPKs = ""
      foreach($sqlIndex in $sqlTable.Indexes)
      {
        if($sqlIndex.IndexKeyType -eq 'DriPrimaryKey')
        {
          foreach($sqlCol in $sqlIndex.IndexedColumns)
          {
            if ($mongoPKs.Length -ne 0)
              { $mongoPKs += "," }
            # Note column names are returned with [ ] around them, and must be removed.
            # Have to use -replace instead of .Replace() because $sqlCol is a column object, not a string
            $mongoPKs += ($sqlCol -replace "\[", "") -replace "\]", ""
          }
               
          $mongoCommand += " --upsert --upsertFields $mongoPKs"
        }           
      }
    }

    # Append the command to the batch file
    $mongoCommand | Out-File -FilePath $mongoBat -Encoding utf8 -Append

  }

  # Just add a blank line after the processing ends
  Write-Host

#endregion


Atlanta Code Camp 2013

On Saturday, August 24, 2013, I'm presenting "What's New In PowerShell v3… and 4!" at the Atlanta Code Camp. If you would like a copy of the demos, you can e-mail me at either arcanecode at gmail.com or rcain at pragmaticworks.com.

Or, if you have BitTorrent Sync, you can use this secret key to get a copy of all my demos: BYEHZZ5K2DQ2AEBJDVSS4UUHZEGG646GZ  (Note this is the read-only key.)

I’m Speaking! SQL Saturday Nashville and PowerShell Saturday Atlanta

Just wanted to let folks know I’ll be doing presentations at two upcoming events.

The first is SQL Saturday #145 in Nashville. That’s this weekend, October 13th. I’ll be giving my “Introduction to Data Warehousing / Business Intelligence” presentation. Here is the slide deck I’ll be using: introtodatawarehousing.pdf

My second presentation will be October 27 in Atlanta at PowerShell Saturday #003. Yep, the PowerShell guys are taking the Saturday concept and kicking off a series of PowerShell Saturdays. This is only the third, but I see many more coming in the future.

At PowerShell Saturday I’ll be presenting “Make SQL Server Pop with PowerShell”. I’ll cover both the SMO and SQL Provider during this session.

Looks like it'll be a busy October. Both events are filling up, so don't wait; get registered now!

Preventing PowerShell from Running in the ISE

Once in a great while you will have the need to force your scripts to only run in the console. Most likely you have created special formatting commands that only work in the console, but would cause the script to crash in the ISE. It is prudent then to prevent that code from even executing. But how?

Turns out it's pretty simple. The PowerShell ISE has a built-in variable called $psise. Within the ISE you can use this variable to alter various settings in the environment, such as colors or the position of the code window. When you run from the console, however, $psise will be null.

So the solution is simple: just check to see if $psise is null. To make it even easier, I put the code inside a simple function I can import into my various scripts. It checks to see if we're in the ISE, and if so, presents the user with an error and breaks out.

Note: the break will only be effective if you call this from the outermost-level script, that is, the script you actually run from the console command line. Break returns control to the calling script, and at the outermost level there is nothing left to run, so execution stops.

I admit this is not something you'll use all that often. If it were, you might consider throwing an error as opposed to breaking, so you can nest it at any level.

Here is my simple version of the script.

function Prevent-IseExecution ()
{
<#
  .SYNOPSIS
  Prevents a script from executing any further if it's in the ISE
  
  .DESCRIPTION
  The function determines whether it's running in the ISE or a window, 
  if it's in the ISE it breaks, causing further execution of the script to halt. 
#>

  # If we're in the PowerShell ISE, there will be a special variable
  # named $PsIse. If it's null, then we're in the command prompt
  if ($psise -ne $null)
  {
    "You cannot run this script inside the PowerShell ISE. Please execute it from the PowerShell Command Window."
    break # The break will bubble back up to the parent
  }

}

# Stop the user if we're in the ISE
Prevent-IseExecution

# Code will only continue if we're in command prompt
"I must not be in the ISE"

PowerShell Editors: PowerSE

I've seen interest really growing in PowerShell over the last year or two. The editor included with PowerShell v2, though, leaves much to be desired. I can say from experience the v3 editor is FAR better, but it still has some room to grow.

There are many good editors on the market, but many people just getting into PowerShell need something cheap (i.e. free) to learn on. I've found the PowerSE editor from PowerWF to be an excellent choice to fill this gap.

As I indicated, it is indeed free. And for a free editor it's remarkably full featured. It has a very good help system, IntelliSense, and the ability to see your call stack. It also has a variable pane where you can see all of the variables and their values for the current session.

You can have multiple tabs open, each with a different script. It has some customizability; you can set the font for both the editor and the console. My favorite feature is the very easy to use debugger, complete with breakpoints and the ability to examine variables through the aforementioned variable pane.

I still think it's important to know the PowerShell ISE, as it's on all the latest builds of Windows. But once you have grasped its basics, go grab yourself a copy of PowerSE. Only then will you really be able to enjoy the happiness that is PowerShell.

Simulating SQL Server Work with PowerShell

No, I’m not talking about simulating your job so you can sit home in your PJs and play Call of Duty. (Although if you work out a way to do so, call me!) What I’m speaking of is emulating a workload on your server.

There are many reasons for wanting to do this. For folks like me, who do a lot of demos, it can be a good way to get your server busy so you can demo things like DMVs or the SQL Provider. Another use would be stress or performance testing your applications, ensuring you get good retrieval times when the server is loaded.

I have to give a little shout out here to my friend and fellow MVP Allen White (Blog | Twitter). I had attended one of his sessions back at a SQL Saturday, and when I downloaded his samples I found a "SimulateWork" PowerShell script among the sample code. In Allen's script he used the .Net SqlClient library to do his work. I decided to emulate the concept but rewrite the entire thing using the SQL Provider. I borrowed a few of his queries so I could be sure my results were similar.

The script is pretty simple. I have a function that will load up the SQL Provider, if it’s not already loaded. Next is the SimulateWork function. It has one parameter, how many times you want to execute the code in the loop.

Within the function I load up a series of T-SQL queries against the AdventureWorks2012 database using “here” strings. I then execute them using the Invoke-SqlCmd cmdlet. Finally I have a little code which calls the functions.

My goal with this script was to get the server's RAM loaded up and simulate some I/O so I could demo a few simple things. Astute observers will notice that, because these are just executing the same T-SQL commands over and over, for the most part SQL Server will just hit its plan cache and memory each time. If those are important to you, I'd suggest altering the T-SQL commands to include a WHERE clause of some sort, then each time through the loop appending a new condition to the variable holding the T-SQL, as sketched below.

So without further ado, here’s the full script.

#******************************************************************************
# Simulate Work
#
# This routine will simulate work being done against a SQL Server. Ideal
# for demoing things like the SQL Profiler or DMV Stats. 
#
# The T-SQL queries were tested against the AdventureWorks2012 database. You
# may have to alter some queries if you are on previous versions of
# AdventureWorks.
#
# Author......: Robert C. Cain
# Blog........: http://arcanecode.com
# Twitter.....: http://twitter.com/arcanecode
# Last Revised: July 17, 2012
#
# Credit...: This script is based on one by SQL MVP Allen White.
#   Blog...: http://sqlblog.com/blogs/allen_white/default.aspx
#   Twitter: http://twitter.com/sqlrunr
#
# In his original script, he used the System.Data.SqlClient .Net library to 
# simulate work being done on a SQL Server. I liked the idea, but rewrote 
# using the SQL Provider. A couple of the SQL queries I borrowed from 
# his routine. 
#
# Other uses: The techniques below might also be a good way to stress test
# a system. Alter the t-sql (and probably variable names) and away you go. 
#******************************************************************************

#------------------------------------------------------------------------------
# Loads the SQL Provider into memory so we can use it. If the provider is 
# already loaded, the function does nothing. This makes it safe to call
# multiple times. 
#------------------------------------------------------------------------------
function Load-Provider
{
  # Get folder where SQL Server PS files should be
  $SqlPsRegistryPath = "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps"
  $RegValue = Get-ItemProperty $SqlPsRegistryPath
  $SqlPsPath = [System.IO.Path]::GetDirectoryName($RegValue.Path) + "\"
  
  # Check to see if the SQL provider is loaded. If not, load it. 
  [String] $IsLoaded = Get-PSProvider | Select-Object Name | Where-Object { $_.Name -like "Sql*" }

  if ($IsLoaded.Length -eq 0)
  { 
    # In this case we're only using the SQL Provider, so the code to load the
    # SMO assemblies has been commented out. I'm leaving it in, though, in case
    # you copy and paste from here and need it.
    <#
  # ----------------------------------------------------------------------------------------
    # Load the assemblies so we can use the SMO objects if we want. 
    # Note if all you need is the basic SMO functionality like was in 2005, you can get away
    # with loading only the first three assemblies. 
  # ----------------------------------------------------------------------------------------
    $assemblylist = 
    "Microsoft.SqlServer.ConnectionInfo ", 
    "Microsoft.SqlServer.SmoExtended ", 
    "Microsoft.SqlServer.Smo", 
    "Microsoft.SqlServer.Dmf ", 
    "Microsoft.SqlServer.SqlWmiManagement ", 
    "Microsoft.SqlServer.Management.RegisteredServers ", 
    "Microsoft.SqlServer.Management.Sdk.Sfc ", 
    "Microsoft.SqlServer.SqlEnum ", 
    "Microsoft.SqlServer.RegSvrEnum ", 
    "Microsoft.SqlServer.WmiEnum ", 
    "Microsoft.SqlServer.ServiceBrokerEnum ", 
    "Microsoft.SqlServer.ConnectionInfoExtended ", 
    "Microsoft.SqlServer.Management.Collector ", 
    "Microsoft.SqlServer.Management.CollectorEnum"
    foreach ($asm in $assemblylist) {[void][Reflection.Assembly]::LoadWithPartialName($asm)}
    #>

    # Set the global variables required by the SQL Provider    
    Set-Variable -scope Global -name SqlServerMaximumChildItems -Value 0
    Set-Variable -scope Global -name SqlServerConnectionTimeout -Value 30
    Set-Variable -scope Global -name SqlServerIncludeSystemObjects -Value $false
    Set-Variable -scope Global -name SqlServerMaximumTabCompletion -Value 1000

    # Load the actual providers  
    Add-PSSnapin SqlServerCmdletSnapin100
    Add-PSSnapin SqlServerProviderSnapin100
  
    # The Types file lets the SQL Provider recognize SQL specific type data.
    # The Format file tells the SQL Provider how to format output for the 
    # Format-* cmdlets.
    # First, set a path to the folder where the Type and format data should be
    $sqlpTypes = $SqlPsPath + "SQLProvider.Types.ps1xml"
    $sqlpFormat = $sqlpsPath + "SQLProvider.Format.ps1xml"
  
    # Now update the type and format data. 
    # Updating if its already loaded won't do any harm. 
    Update-TypeData -PrependPath $sqlpTypes
    Update-FormatData -prependpath $sqlpFormat
  }
  
  # Normally I wouldn't print out a message, but since this is a demo
  # it will give us a nice 'warm fuzzy' the provider is ready
  Write-Host "SQL Server Libraries are Loaded"

}

#------------------------------------------------------------------------------
# This function simply loads a series of SQL Commands into variables, then
# executes them. The idea is to simulate work being done on the server, so
# we can demo things like SQL Profiler. 
# 
# Parameters: $iterations - The number of times to repeat the loop
#------------------------------------------------------------------------------
function SimulateWork ($iterations) 
{

  $sqlSalesOrder = @"
  SELECT d.SalesOrderID
       , d.OrderQty
       , h.OrderDate
       , o.Description
       , o.StartDate
       , o.EndDate
    FROM Sales.SalesOrderDetail d
   INNER JOIN Sales.SalesOrderHeader h ON d.SalesOrderID = h.SalesOrderID
   INNER JOIN Sales.SpecialOffer o ON d.SpecialOfferID = o.SpecialOfferID
   WHERE d.SpecialOfferID <> 1
"@

  $sqlSalesTaxRate = @"
  SELECT TOP 5
         sp.Name
       , st.TaxRate
    FROM Sales.SalesTaxRate st
    JOIN Person.StateProvince sp 
         ON st.StateProvinceID = sp.StateProvinceID
   WHERE sp.CountryRegionCode = 'US'
   ORDER BY st.TaxRate desc ;
"@
  
  $sqlGetEmployeeManagers = @"
  EXECUTE dbo.uspGetEmployeeManagers 1
"@

  $sqlSalesPeople = @"
  SELECT h.SalesOrderID
       , h.OrderDate
       , h.SubTotal
       , p.SalesQuota
    FROM Sales.SalesPerson p
   INNER JOIN Sales.SalesOrderHeader h 
         ON p.BusinessEntityID = h.SalesPersonID ;
"@

  $sqlProductLine = @"
  SELECT Name
       , ProductNumber
       , ListPrice AS Price
    FROM Production.Product
   WHERE ProductLine = 'R'
     AND DaysToManufacture < 4
   ORDER BY Name ASC ;
"@

  $sqlHiringTrend = @"
  WITH HiringTrendCTE(TheYear, TotalHired)
    AS
    (SELECT YEAR(e.HireDate), COUNT(e.BusinessEntityID) 
     FROM HumanResources.Employee AS e
     GROUP BY YEAR(e.HireDate)
     )
   SELECT thisYear.*, prevYear.TotalHired AS HiredPrevYear, 
    (thisYear.TotalHired - prevYear.TotalHired) AS Diff,
    ((thisYear.TotalHired - prevYear.TotalHired) * 100) / 
                 prevYear.TotalHired AS DiffPerc
   FROM HiringTrendCTE AS thisYear 
      LEFT OUTER JOIN 
        HiringTrendCTE AS prevYear
   ON thisYear.TheYear =  prevYear.TheYear + 1;
"@

  
  $mi = $env:COMPUTERNAME + "\SQL2012"  
  Set-Location SQLSERVER:\sql\$mi\databases\AdventureWorks2012

  for ($i = 1; $i -le $iterations; $i++)   # -le so the loop runs exactly $iterations times
  {
    Write-Host "Loop $i"
    $outSalesOrder = Invoke-Sqlcmd -Query $sqlSalesOrder -ServerInstance $mi -SuppressProviderContextWarning
    $outSalesTaxRate = Invoke-Sqlcmd -Query $sqlSalesTaxRate -ServerInstance $mi -SuppressProviderContextWarning
    $outGetEmployeeManagers = Invoke-Sqlcmd -Query $sqlGetEmployeeManagers -ServerInstance $mi -SuppressProviderContextWarning
    $outSalesPeople = Invoke-Sqlcmd -Query $sqlSalesPeople -ServerInstance $mi -SuppressProviderContextWarning
    $outProductLine = Invoke-Sqlcmd -Query $sqlProductLine -ServerInstance $mi -SuppressProviderContextWarning
    $outHiringTrend = Invoke-Sqlcmd -Query $sqlHiringTrend -ServerInstance $mi -SuppressProviderContextWarning
  }

}

#------------------------------------------------------------------------------
# This is the code that executes the above functions. To change the workload
# simply change the number of iterations. 
#------------------------------------------------------------------------------
Load-Provider

# Number of times to repeat the work simulation. 
$iterations = 100 

Write-Host "Starting simulated work"
SimulateWork $iterations
Write-Host "Done Working"

SAPIEN PowerShell Studio 2012

Over the next few blog posts I thought I'd present some of the various PowerShell IDEs on the market. Yes, that's right, there's more out there than just the IDE that ships with PowerShell. I thought I'd kick things off with the "Cadillac" of tools, SAPIEN PowerShell Studio 2012.

This is by far the most comprehensive tool of any on the market. Its key selling point is the ability to quickly and easily create Windows Forms that can be called from, and raise events in, your PowerShell scripts.

OK, I can already hear you. “Hey Robert this is supposed to be scripts, what do we need Windows Forms for?” That’s a great question.

A very common task to do in scripting is the creation of virtual machines. You can imagine though all of the things you would need to enter in order to create the machine. What’s the name? What’s the activation key? What do you want installed? SQL Server? SharePoint?

You could, of course, have the script prompt you one question at a time. Or have a vast array of command line switches and parameters you need to enter just right in order to run the script. You could reduce complexity by having multiple scripts, but then you increase the workload. Then there’s the issue of who is going to run the script.

It would be far preferable to have a simple, single Windows dialog pop up and ask all these questions at once. The person running the script could enter the information in any order, and when they were done just click a big OK button to launch the script.

This also expands the sphere of people who can run the script. Now you will be able to let an experienced PowerShell developer create the complex script, then give it to someone who may not even know PowerShell. They just run a single command, enter in some information into an easy to understand Windows Form, and they are off and running.

But Windows Forms are just the tip of the iceberg. For example, PowerShell Studio makes it easy to package your scripts into executables. Yep, you can take all your proprietary code and keep it safe from prying eyes by compiling it into an easy-to-distribute EXE file.

It also has the ability to run your scripts in either 32 or 64 bit mode, all within the same editor. Very nice if you are having to support older systems as well as more modern ones.

From a development standpoint, it has the nice feature of organizing your scripts into projects. This makes management of them much easier.

It has an outstanding editor, with a great help system, and a snippet library loaded up with snippet goodness. The object browser is one of my favorite features. Using it you can drill down into not only PowerShell objects, but .Net, WMI, the file system, and even databases. I find this incredibly useful; I can quickly look up info without having to leave my IDE.

I will admit that at $349 (as I write this), it's not the cheapest of the PowerShell IDEs on the market. However, I'm a firm believer in "you get what you pay for". The price is low enough that it should be a no-brainer for any sized company, from a huge multinational corporation to a one-person consultancy. By taking advantage of the features in SAPIEN's PowerShell Studio 2012 you'll recoup it in a very short time.

Without a doubt this is the most feature rich PowerShell IDE I’ve seen. It seems to have everything but the kitchen sink. And I wouldn’t be surprised if that’s in there too and I just haven’t found it yet.

PowerShell v3 Auto Loading of Modules

Modules are the main way to extend functionality in PowerShell. In v2, if you wanted to access the functionality of a module you had to explicitly load it using the Import-Module cmdlet. Starting with v3, if PowerShell encounters a command it doesn't recognize, it will go through the list of valid modules looking for that command, and if found, load it automatically. Let's take a look at an example.

First, let’s see what modules are already loaded.

    Get-Module                      # Show modules loaded
ModuleType Name                                ExportedCommands                                                                                            
---------- ----                                ----------------                                                                                            
Manifest   Microsoft.PowerShell.Management     {Add-Computer, Add-Content, Checkpoint-Computer, Clear-Content...}                                          
Manifest   Microsoft.PowerShell.Utility        {Add-Member, Add-Type, Clear-Variable, Compare-Object...}                                                   

Next, let’s see what’s available.

    Get-Module -ListAvailable       # Show what's available to load
    Directory: C:\Users\rcain\Documents\WindowsPowerShell\Modules


ModuleType Name                                ExportedCommands                     
---------- ----                                ----------------                     
Script     adoLib                              {New-Connection, new-sqlcommand, i...
Script     Agent                               {Get-SqlConnection, Get-SqlServer,...
Script     ISECreamBasic                       {Add-IseMenu, Remove-IseMenu}        
Script     mySQLLib                            {New-MySQLConnection, new-MySqlCom...
Script     OracleClient                        {new-oracle_connection, invoke-ora...
Script     OracleIse                           {Connect-Oracle, Disconnect-Oracle...
Script     PBM                                 {Get-PolicyStore, Get-TargetServer...
Script     PerfCounters                        {Invoke-Sqlcmd2, Get-ProcessPerfco...
Script     Repl                                {Get-SqlConnection, Get-ReplServer...
Script     ShowMbrs                            {New-ShowMbrs, Set-ShowMbrs, Get-G...
Script     SQLIse                              {Test-SqlScript, Out-SqlScript, In...
Script     SQLMaint                            {Invoke-DBMaint, Get-SqlConnection...
Binary     SQLParser                           {Test-SqlScript, Out-SqlScript}      
Script     SQLProfiler                         {Invoke-Sqlcmd2, Save-InfoToSQLTab...
Script     SQLPSX                                                                   
Script     SQLServer                           {Get-SqlConnection, Get-SqlServer,...
Script     SSIS                                {New-ISApplication, Copy-ISItemSQL...
Script     WPK                                 {Add-CodeGenerationRule, ConvertFr...


    Directory: C:\Windows\system32\WindowsPowerShell\v1.0\Modules


ModuleType Name                                ExportedCommands                     
---------- ----                                ----------------                     
Manifest   AppLocker                           {Set-AppLockerPolicy, Get-AppLocke...
Manifest   BitsTransfer                        {Add-BitsFile, Remove-BitsTransfer...
Manifest   CimCmdlets                          {Get-CimAssociatedInstance, Get-Ci...
Manifest   Microsoft.PowerShell.Diagnostics    {Get-WinEvent, Get-Counter, Import...
Manifest   Microsoft.PowerShell.Host           {Start-Transcript, Stop-Transcript}  
Manifest   Microsoft.PowerShell.Management     {Add-Content, Clear-Content, Clear...
Manifest   Microsoft.PowerShell.Security       {Get-Acl, Set-Acl, Get-PfxCertific...
Manifest   Microsoft.PowerShell.Utility        {Format-List, Format-Custom, Forma...
Manifest   Microsoft.WSMan.Management          {Disable-WSManCredSSP, Enable-WSMa...
Manifest   PSDiagnostics                       {Disable-PSTrace, Disable-PSWSManC...
Binary     PSScheduledJob                      {New-JobTrigger, Add-JobTrigger, R...
Manifest   PSWorkflow                          New-PSWorkflowExecutionOption        
Manifest   TroubleshootingPack                 {Get-TroubleshootingPack, Invoke-T...

The CIM cmdlets are a new set that ships with v3. Looking at the output above, you'll note the CIM module (CimCmdlets, highlighted in the second set of modules) is not loaded. Let's run one of the cmdlets from the module.

    # Note the CimCmdlets module isn't loaded. 
    # Now run a command from that module
    Get-CimInstance win32_bios
SMBIOSBIOSVersion : 8DET50WW (1.20 )
Manufacturer      : LENOVO
Name              : Default System BIOS
SerialNumber      : XXXXXXXX
Version           : LENOVO - 1200

Now let’s run Get-Module again, and you’ll see that the module has now been loaded. Pretty slick.

    # Run again to show CimCmdlets is now loaded
    Get-Module                      
ModuleType Name                                ExportedCommands                     
---------- ----                                ----------------                     
Binary     CimCmdlets                          {Get-CimAssociatedInstance, Get-Ci...
Manifest   Microsoft.PowerShell.Management     {Add-Computer, Add-Content, Checkp...
Manifest   Microsoft.PowerShell.Utility        {Add-Member, Add-Type, Clear-Varia...

Of course this does have some drawbacks. First, it will take time for PowerShell to search through all the paths looking for the commands. Second, if the requested module isn't in one of the predefined paths, it will still fail. Finally, just for the sake of self-documentation, it's better to know what modules your script depends on. For that reason, when writing scripts or modules of my own I will continue to explicitly use Import-Module, and suggest you do too.

However, when working in interactive mode, i.e. running a PowerShell console and entering commands, auto loading can be a huge timesaver. No longer do I have to take time to issue an Import-Module cmdlet before I can use the commands I need.

Warning: This post was written using PowerShell v3 BETA. Changes between now and the final release may affect the validity of this post.

PowerShell v3 Online Help and Updateable Help

A good help system is important in any language, but especially when it comes to v3 of PowerShell. The number of cmdlets has jumped from around 260 to over 2,300. That’s a lot of new cmdlets to have to learn.

In version 2, Get-Help accessed the local install to get all of its help. And its syntax is of course still valid in v3.

  # Get help from the local help files (v2 way, still works in v3)
  Get-Help Get-CimClass

New with version 3, however, is the ability to use Get-Help with the new -Online switch to see the online version of help:

  # You can now pass in the -Online switch to launch a web browser and
  # see the most recent help in TechNet
  Get-Help Get-CimClass -Online

This will open the default web browser and take you to the MSDN/TechNet entry for the command you are requesting help for. (Note: as I write this v3 is still in Beta, so running the command just takes you to a placeholder web page.)

The other new feature is the ability to update all your local help files dynamically. No longer do you have to wait to install a new release just to get your help updated. The command is very simple:

  # Make sure you are running as an admin!
  Update-Help

As the comment says, make sure you are running the PowerShell window in Admin mode. After executing the command wait a few minutes while PowerShell goes online and updates all the local help files. Very nice improvements for keeping your help system up to date.

Warning: This post was written using PowerShell v3 BETA. Changes between now and the final release may affect the validity of this post.

PowerShell v3 -In and -NotIn Operators

Two new operators have been added to PowerShell v3: -In and -NotIn. These are great for folks like me who are used to having this functionality in T-SQL. The syntax is very simple.

    $value = 3
    if ($value -in (1,2,3)) 
      {"The number $value was here"}

Produces:

The number 3 was here

The -NotIn syntax is just as easy:

    $array = (3,4,5,7)
    if (6 -notin $array) 
      {"6 ain't there"}

Will yield:

6 ain't there

Very simple, but much needed additions to the PowerShell language.

Warning: This post was created using the PowerShell v3 BETA. Changes between now and the final release may affect the validity of this post.

PowerShell v3 Simplified Where-Object Syntax

A great improvement to the new version 3 of PowerShell is the simplification of the Where-Object syntax. Here is an example of the way a Where-Object cmdlet works in v2.

    Set-Location "C:\Windows"  # Just to give us an interesting place
  
    # Old Syntax
    Get-ChildItem | Where-Object {$_.Length -ge 100kb}

Will produce this output:

Mode                LastWriteTime     Length Name                                                                                                          
----                -------------     ------ ----                                                                                                          
-a---         2/25/2011  12:19 AM    2871808 explorer.exe                                                                                                  
-a---         7/13/2009   8:39 PM     733696 HelpPane.exe                                                                                                  
-a---          6/3/2012   1:10 PM  649849097 MEMORY.DMP                                                                                                    
-a---         7/13/2009   8:39 PM     193536 notepad.exe                                                                                                   
-----         3/15/2012   6:07 AM    2693696 PWMBTHLV.EXE                                                                                                  
-a---         7/13/2009   8:39 PM     427008 regedit.exe                                                                                                   
-a---        12/20/2010   5:11 PM     307712 UltraMon.scr                                                                                                  
-a---         3/21/1996   2:00 PM     284160 uninst.exe                                                                                                    
-a---          6/4/2012   2:04 PM    1134024 WindowsUpdate.log                                                                                             
-a---          3/8/2012   5:37 PM     302448 WLXPGSS.SCR                                                                                                   
-a---         6/10/2009   3:52 PM     316640 WMSysPr9.prx    

Version 3 now allows a simplified version of the cmdlet:

    Get-ChildItem | Where-Object Length -ge 100kb

With v3 you can now eliminate the script block notation (the {} curly braces), the $_ current object placeholder, as well as the . property notation. Now you can simply enter the name of the property and the condition. It produces the same output as what you see above. You can also use it with variables, or with the question mark as an alias for Where-Object.

    $size = 100kb
    Get-ChildItem | Where-Object Length -ge $size
    Get-ChildItem | ? Length -ge 100kb

Please note that this new syntax is only valid for simple where clauses. Anytime you have something more complex, such as multiple conditions, you will be required to revert to the v2 $_ syntax.

Warning: This post was created using the PowerShell v3 BETA. Changes between now and the final release may affect the validity of this post.

PowerShell v3 Ordered HashTables

In version 2 of PowerShell, you could create hashtables, but the order in which the data is stored in the hashtable is not necessarily the same order you entered it. Take this example:

    $hashTableV2 = @{a=1;b=2;c=3;d=4}  
    $hashTableV2 

Produces this output:

Name Value
---- -----
c    3
d    4
a    1
b    2

Version 3 of PowerShell now offers the ability to force the hashtable to hold the data in the same order it was created, using the new [ordered] tag.

    $hashTableV3 = [ordered]@{a=1;b=2;c=3;d=4}  
    $hashTableV3 

The new syntax produces this output:

Name Value
---- -----
a    1
b    2
c    3
d    4

Now if it's important that your hashtable stays in a specific order, you have the [ordered] tag to make that happen.

Warning: This post was created using PowerShell v3 BETA. Changes to the final version may alter the validity of this post.

Using PowerShell To Find A Value In A Set Of Files

I often present PowerShell at SQL Saturdays. One question I’m often asked is “Why learn PowerShell? I can do everything in T-SQL.” Well here’s a great example.

I was developing some SSIS packages that took a set of text files and loaded them into multiple dimension and fact tables. In the process I had one particular value that was creating duplicate values in one of the target tables. Obviously, I needed to find that value in the various files and see what’s different about it versus the other data.

The problem was there were close to 200 files to load. Each file has hundreds of lines, and each line is over 200 characters long. Yikes! That's a lot of data to look through. It would have been very time consuming to look in all of those files manually.

PowerShell to the rescue! In just a few minutes I created a PowerShell script to loop over the files, find the ones that had the value I was looking for, and list those file names. That made it easy to open just the files I needed and look at the rows. I could have gone further and actually printed out the data, or perhaps loaded it into an array where I could count occurrences, or more.

I decided to share the script, which you will see below. It actually took me longer to write up all the comments than it did to write the initial code. With the comments removed there are only nine lines of real code in the script.

Using PowerShell I was able to quickly find the rows I needed and find the issue. This is just one simple example of how PowerShell can be used to quickly solve a problem that might take a long time to accomplish in other languages.


#-------------------------------------------------------------------------
# Script: Search for a string in a group of files
# Author: Robert C. Cain
#
# Revision History
#   V01 - 05/28/2012 - Initial version
#-------------------------------------------------------------------------
# This PowerShell script will loop over a collection of text files 
# which are delimited somehow (comma, tab, etc) and look for a 
# specific value. It will then print out the name of the file. 
#-------------------------------------------------------------------------

# Here I've hardcoded a value to look for; you could also
# pass this in as a parameter if you convert this to a function
$ValueToLookFor = '123456789'

# Our data files have a header row; this should match the 
# names of those columns. This will allow us to address 
# the columns by name later in the routine
$header = "Column1", "Column2", "ColumnToSearch", "Column3", "Column4"

# Move to the location with all the files
Set-Location 'C:\MyData'

# Retrieve a list of files that we need to look through
# Note the use of the filter to only select some files
# If you are doing all files you can omit the filter
$files = Get-ChildItem -Filter "Data*.txt" | Select-Object Name

foreach($file in $files)
{
  # Load the data in the file into $data
  #
  # In this example the files are tab delimited, hence the need
  # to specify the tab character as a delimiter. If your file is
  # comma delimited you can omit the -Delimiter, otherwise specify
  # whatever yours is. 
  #
  # We also need to pass in the $header variable so PowerShell 
  # will be able to associate each column of data with a name
  $data = Import-Csv $file.Name -Delimiter "`t" -Header $header

    foreach($line in $data)
    {
      # Here's where the header comes in handy, I can now use the 
      # column name as the .Property to look in
      if ($line.ColumnToSearch -eq $ValueToLookFor)
      { 
        # Print out the name of the file. You could do other things
        # like print out the actual line, or load it into a variable,
        # just to name a few. 
        $file.Name
        
        # Use a break if you only want the file name listed once
        # I actually let it print out multiple times to indicate how
        # many times it's in the file

        # break  
      }
    } # foreach($line in $data)
} # foreach($file in $files)

Column Cut Copy Paste in VS SSMS and PowerShell

Did you know it's possible to do column-based cut, copy, and paste in Visual Studio, SQL Server Management Studio, and PowerShell v3? Not many people do. Even fewer people know that with VS 2010 and SSMS 2012 you get a little "extra" functionality. Watch the video to find out all the juicy details.