Arcane-SQL: A PowerShell Module for Generating SQL Code

Overview

There are many PowerShell modules available for assisting the busy DBA with managing their SQL Server environment. This isn’t one of them. This module is targeted toward SQL Developers, with special functionality for data warehouse developers. A common task for BI professionals, one that is performed on almost every project, is the creation of a staging area. This might be a set of tables in the data warehouse, perhaps in their own schema, or in an entirely separate database often called an operational data store (ODS).

The staging tables are typically similar in structure to the ones in the source database. Similar, but not identical, as there are some small modifications commonly made to the staging tables. First, large data types such as VARCHAR(MAX) are seldom useful in data analysis and thus can be removed. Next, even the most casual user of SSIS will quickly see that SSIS prefers to work with double-byte character types (WSTR in SSIS, which maps to NVARCHAR in T-SQL) as opposed to the single-byte (STR/VARCHAR) types. It can be helpful to convert these in the staging area.

This module can (optionally) do all of these things and more when it is used to generate CREATE TABLE or SELECT statements. Imagine, if you will, a source system with thousands of tables and the need to create a staging area for it in a new data warehouse. This quickly becomes a long, boring, tedious task. Now imagine being able to write a bit of PowerShell code and generate these tables in just a few minutes' time.

Before diving in, it is highly suggested you download and review the example script, Arcane-SQL-Example.ps1. This demonstrates the most commonly used functions and provides patterns for their use.

Functionality

While the module is full of functions, there are a few core ones that should be highlighted. Complete documentation can be found in the module itself, which has been fully documented using the native PowerShell Help system. In addition there is an example script file which demonstrates some of the most common tasks.

  • Enable-SqlAssemblies – This is the most important function; without calling it, nothing else works. Be aware the SQL Server assemblies (including SMO, the SQL Server Management Objects, and the SQL Provider) need to be on the machine where this script is run. This module has been tested on, and is intended for, the workstations of SQL developers with SQL Server Developer Edition installed.
  • Join-InvokeInstance and Join-ProviderInstance – Most of the interaction with the SQL Provider requires the server name and instance name, assembled in a path-like syntax. Invoke-Sqlcmd likewise requires this formatting; however, it has a little quirk. If the instance is "default", Invoke-Sqlcmd requires it to be omitted, while the provider requires it to be present. These two functions remove the confusion: simply pass in the server name and instance, and they will format things correctly.
  • Get-TablesCollection – When working with tables it is common to iterate over all the tables in a database. This function will generate a PowerShell array of table objects, each object being of type Microsoft.SqlServer.Management.Smo.Table. By having table objects the wide variety of properties for the table are available, such as Schema name, Table name, and Row Count.
  • Get-TableByName – Most commonly scripts will retrieve an array of tables using the above Get-TablesCollection, then iterate over them in a foreach loop. There are times, however, when only a single table from the collection is desired. For those cases Get-TableByName can be used to retrieve a specific table object based on the name of the table.
  • Remove-SchemasFromTableCollection and Select-SchemasInTableCollection – Get-TablesCollection will return an array of all the tables in a database. Often there is a need to work with only a subset of that table collection. These two functions will filter based on the schema and return a new array. The first, Remove-SchemasFromTableCollection, removes from the array all tables belonging to the schemas passed in. The second, Select-SchemasInTableCollection, retains only those tables in the schemas passed into the function.
  • Remove-TablesFromTableCollection and Select-TablesInTableCollection – These work as filters, similar to the functions above. Instead of the schema, however, they filter on table name. All tables whose names begin with the text passed in are either removed or, in the latter function, the only ones retained.
  • Get-PrimaryKeyIndex – Returns the primary key object for the passed in table object.
  • Get-PrimaryKeyColumnNames – Returns a comma-delimited list of the column names in the primary key.
  • Decode-IsPrimaryKeyColumn – Determines whether the passed-in column name is part of the primary key index.
  • Get-TruncateStatement – Will generate a SQL Truncate Table statement based on the table object passed in.
  • Get-DropTableStatement – Generates a Drop Table statement, including the check to see if the table exists, for the passed in table object.
  • Get-CreateStatement – To simply say this function generates a create table statement would do it a disservice. It will take a table object and reverse engineer it, generating a CREATE TABLE statement. Unlike other code generators, it has a suite of parameters which allow customization of the generated statement with an eye toward the needs of a data warehouse developer. A few are listed here; a usage sketch follows the list:
    • DataTypeAlignColumn – Sets the column number to line up the data type declarations on. Passing in a value of 1 will suppress alignment and simply place the data type after the column name. The default is column 50.
    • OverrideSchema – It is common to place staging tables in the data warehouse in their own schema, often named ‘Staging’ or ‘ETL’. Passing in a value here will include the new schema name in the create table declaration. If the table object passed in has a schema other than dbo, that schema is placed in front of the table name with an underscore; if it is dbo, the source schema is simply omitted.
    • PrependToTableName and AppendToTableName – Allows extra text to be placed before or after the table name. For example, it is common to create tables with _Delete, _Update, and _Insert in the staging area. This provides a simple way to do that.
    • AdditionalColumns – When creating tables in a data warehousing environment, there are often extra columns to hold metadata about the ETL process. A user of this function can create an array of additional columns using the Add-ColumnDefinition function and have them added to the create table statement.
    • Scrub – This is a very powerful switch. When added it will perform a cleanup to make the output suitable for data warehousing. Columns with large data types such as VARCHAR(MAX) are removed, and all single-byte character types in the source are converted to their double-byte equivalents.
    • SuppressIdentity – Source systems will sometimes use the IDENTITY clause in the primary key column. Using this switch will suppress that identity clause from being generated in the new create table statement.
    • SuppressNotNull – Often staging tables are not concerned with null versus not null values. Using this switch will create all columns as nullable, regardless of their setting in the source.
    • IncludeDropTable – Adding this switch will include a ‘if exists drop table’ style clause prior to the create table statement.
    • PrimaryKeysOnly – Will generate a create table statement that only has the primary keys found in the source system.

Finally, if a column in the source table object has a custom data type, the script will reverse engineer the data type back to its basic SQL data type.
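
To give a feel for how these functions combine, here is a minimal sketch that stages every table in one schema. The function names come from the list above, but the parameter names shown for them (such as -ServerName, -InstanceName, -ProviderPath, -DatabaseName, -SchemaNames, and -Table) and the folder paths are assumptions on my part for illustration only; consult the module's built-in help and Arcane-SQL-Example.ps1 for the actual signatures.

  # Minimal sketch only; parameter names below are assumed, not taken from the module's help
  Import-Module Arcane-SQL

  # Load the SMO assemblies; nothing else in the module works until this is called
  Enable-SqlAssemblies

  # Build the provider-style server\instance path, and the Invoke-Sqlcmd-friendly variant
  $providerPath = Join-ProviderInstance -ServerName "localhost" -InstanceName "default"
  $invokePath   = Join-InvokeInstance   -ServerName "localhost" -InstanceName "default"

  # Get every table in the database, then keep only the Sales schema
  $tables = Get-TablesCollection -ProviderPath $providerPath -DatabaseName "AdventureWorks"
  $tables = Select-SchemasInTableCollection -TableCollection $tables -SchemaNames @("Sales")

  # Reverse engineer each table into a staging-friendly CREATE TABLE statement
  # (the output folder is just a placeholder)
  foreach ($table in $tables)
  {
    Get-CreateStatement -Table $table `
                        -OverrideSchema "Staging" `
                        -Scrub `
                        -SuppressIdentity `
                        -IncludeDropTable |
      Out-File -FilePath "C:\Staging\Create_$($table.Name).sql"
  }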

  • Get-SelectStatement – Like its sister function Get-CreateStatement, under the covers this function provides a lot of power and flexibility in the statement it creates. Additional columns can be added, columns can be specified to order the output by, table aliases can be used, and most powerful of all is the ability to generate a HASHBYTES column, including the ability to remove specified columns from the hash calculation. Here are some of its parameters; a usage sketch follows the list:
    • AsColumn – The routine will line up the AS <column alias> at the column number passed in here. The default is 50. To disable alignment, set this to 1.
    • PrependToColumnName – Text to include before each column name.
    • AppendToColumnName – Text to place after each column name.
    • AdditionalColumns – A collection of additional columns to be added to the SELECT statement. Useful for adding metadata columns. All items in the AdditionalColumns collection should be generated using the Add-OutputColumn function.
    • OrderByColumns – A list of columns to add to the ORDER BY clause. All items in the OrderByColumns collection should be generated using the Add-OutputColumn function.
    • TableAlias – Allows user to specify an alias to use for the table. The alias is then put in front of each column name.
    • HashBytes – If included the select statement will include a HASHBYTES function with all columns except the primary keys and any columns included in the OmitFromHashBytes collection. The name passed in this parameter will be used for the name of the HashBytes column.
    • OmitFromHashBytes – A collection of column names that should be excluded from the HashBytes calculated column. Useful for excluding metadata columns. All items in the OmitFromHashBytes collection should be generated using the Add-OutputColumn function.
    • Scrub – When included this will remove certain data columns from the output, such as BINARY, NVARCHAR(MAX), XML, and other large types not normally used in data warehouses. Additionally VARCHAR/CHAR are converted to NVARCHAR/NCHAR, and DATETIME converted to DATETIME2(4).
    • Flatten – When included, returns the SELECT statement as one long string, without any carriage return / line feed characters. Any extra spacing (such as the alignment indicated with AsColumn) is also eliminated.
    • IncludeOrderByPK – When included the Primary Keys in the table object are included in the order by clause. If any columns are passed in the OrderByColumns parameter, the Primary Keys occur first in the Order By clause, followed by any columns in the OrderByColumns parameter.
    • IncludeNoLock – When included, a WITH NOLOCK clause is added to the SELECT statement.
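
As with Get-CreateStatement, a short sketch may help. The switch names (TableAlias, HashBytes, IncludeOrderByPK, IncludeNoLock, Scrub) come from the list above, while the -Table parameter name and the $tables array (from the earlier sketch) are assumptions on my part; verify the real signature with Get-Help Get-SelectStatement.

  # Minimal sketch; emits one SELECT per table with a HASHBYTES change-detection column
  foreach ($table in $tables)
  {
    Get-SelectStatement -Table $table `
                        -TableAlias "src" `
                        -HashBytes "RowHash" `
                        -IncludeOrderByPK `
                        -IncludeNoLock `
                        -Scrub
  }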

Construction

Those PowerShell experts who review the code may note that in many places it does not follow the most "PowerShelly" way of doing things. In some places, for example, a foreach loop was used rather than the pipeline. The intended audience for this module is T-SQL developers who may not be as comfortable in PowerShell as they are in T-SQL. Code that aligns more closely with T-SQL patterns should therefore be more useful to, and more easily modified by, SQL developers; the fragments below illustrate the difference.
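
The two fragments below do the same work; the module generally favors the first, loop-based style. (The -Table parameter name on Get-TruncateStatement and the $tables array are assumed here purely for illustration.)

  # Loop-based style used throughout the module
  $statements = @()
  foreach ($table in $tables)
  {
    $statements += Get-TruncateStatement -Table $table
  }

  # The equivalent, more pipeline-oriented PowerShell style
  $statements = $tables | ForEach-Object { Get-TruncateStatement -Table $_ }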

When development first started, attempts were made to use advanced functions, using the pipeline for input and output. At some point, however, this did not make sense for the majority of the functions, and time constraints further hampered the effort. Some future revision may migrate selected functions back to an advanced design, but for now they will have to stand as is.

Development Environment

This module is intended to be used on a developer workstation, not on a server, and especially not on a production server. As such, deployment is simple: just copy the Arcane-SQL folder to the developer's PowerShell module library. On a standard Windows 7 machine this is C:\Users\<username>\Documents\WindowsPowerShell\Modules. If the WindowsPowerShell folder and Modules subfolder do not exist, they will need to be created first.

To keep things simple, no attempt was made to sign the script. If this is an issue, the developer using this module can self-sign it on their PC. Check the execution policy on the workstation where the module is installed to ensure scripts are allowed to run.
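
For example, a first-time deployment on a workstation might look something like the following sketch; the download location shown is just a placeholder.

  # Copy the module into the current user's module folder
  $modulePath = Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'WindowsPowerShell\Modules'
  New-Item -ItemType Directory -Path $modulePath -Force | Out-Null
  Copy-Item -Path 'C:\Downloads\Arcane-SQL' -Destination $modulePath -Recurse -Force

  # The module is unsigned, so local scripts must be allowed to run
  Get-ExecutionPolicy
  # Set-ExecutionPolicy RemoteSigned -Scope CurrentUser   # uncomment if the policy is too restrictive

  # Confirm the module loads
  Import-Module Arcane-SQL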

This module was developed on machines with SQL Server (Developer Edition) 2008 R2 and 2012 installed (some machines with both) and worked without issue. One machine it was tested on had three versions of SQL Server: 2008 R2, 2012, and 2014. On that machine there were some errors in the functions that take SMO table objects as input; those are still being investigated.

SQL security is assumed to be handled with Windows Authentication. Thus the logged-in user needs rights, based on their Windows credentials, to the SQL Server they are targeting.

The development machines were running PowerShell v4; however, v3 should work as well.

This module was developed using SAPIEN PowerShell Studio 2014. To make life easy for other developers the PowerShell Studio files (Arcane-SQL.psproj, Arcane-SQL.psproj.build, and Arcane-SQL.psprojs) were included in the code. If you are using a different editor, such as the PowerShell ISE, simply discard these files.

Warranty

To put it succinctly, there is none. No guarantee is made for the code in this module, users of this module assume all risks. While I am happy to receive bug reports, I make no promises or guarantees if, or when, they will be fixed.

Contributions

No, not the money kind, code contributions. If anyone wishes to extend the functionality of this module I am happy to collaborate as long as the coding standards demonstrated in this module are adhered to, and the contributions are relevant to the goals of this module. Be aware though this is not a money making effort, so expect no monetary reimbursement for any contributions.

Download

You can download the module and its example at:

http://gallery.technet.microsoft.com/Arcane-SQL-A-PowerShell-185651ad

Exporting Data from SQL Server to CSV Files for Import to MongoDB Using PowerShell

I've been exploring other database systems in order to determine how to import data from them using SQL Server Integration Services (SSIS). My first step, though, was to create some test data. I wanted something familiar, so I decided to export the Adventure Works data warehouse sample database and import it into MongoDB. While I had many options, I decided the simplest way was to first export the data to CSV files, then use the MongoDB utility mongoimport. Naturally I turned to PowerShell to create an automated, reusable process.

First, if you need the Adventure Works DW database, you’ll find it at http://msftdbprodsamples.codeplex.com/. Second, I did my export from a special version of Adventure Works DW I created called AdventureWorksDW2014. This is optional, but if you want to have a version of Adventure Works DW updated with current dates, see my post at https://arcanecode.com/2013/12/08/updating-adventureworksdw2012-for-2014/. Third, I assume you are familiar with MongoDB, but if you want to learn more go to http://www.mongodb.org/.

Below is the PowerShell 3 script I created. The script is broken into four regions. The first, User Settings, contains the variables that you, the user, might need to change to get the script to run. It has things like the name of the SQL Server database, the path to MongoDB, and so on.

The second region, Common, establishes variables that are used by the remaining two regions. You shouldn’t need to change or alter these. The third region accesses SQL Server and exports each table in the selected database to a CSV format file.

The final region, “Generate MongoDB import commands”, creates a batch (.BAT) file which has all the commands needed to run mongoimport for each CSV file. I decided not to have the PowerShell script execute the .BAT file so it could be reviewed before it is run. There might be some tables you don’t want to import, etc.

It is also quite easy to adapt this script to just create CSV files from SQL Server without the MongoDB piece. Simply remove the fourth and final region, then in the Common and User Settings regions remove any variables that begin with the prefix “mongo”.

As the comments do a good job of explaining what happens, I'll let you review the included documentation for step-by-step instructions.

#==================================================================================================
# SQLtoCSVtoMongoDb.ps1
# Robert C. Cain | @ArcaneCode | http://arcanecode.com
#
# If you need a simple way to export data from SQL Server to MongoDb, here is one way to do it.
# The script starts by setting up some variables to the server environment (see the User Settings
# region)
#
# Next, it exports data from each table in the selected database to individual CSV files.
# Finally, it generates a batch file which executes mongoimport for each csv file to import
# into MongoDb.
#
# I broke this into four regions so if all that is desired is a simple export of data to CSVs,
# you can simply omit the final region along with any variables that begin with "mongo".
#
# While I could have gone ahead and run the batch file at the end, I chose not to in order to
# give you time to review the output prior to running the batch file.
#==================================================================================================

Clear-Host

#region User Settings

  # In this section, set the variables so they are appropriate for your project / environment
 
  # This is the spot where you want to store the generated CSVs.
  # Make sure it does NOT end in a \
  $csvPath = "C:\mongodb"

  # If you are running this on a computer other than the server, set the name of the server below
  $sqlServer = $env:COMPUTERNAME

  # If you have a named instance be sure replace "default" with the name of the instance
  $sqlInstance = "\default"

  # Enter the name of the database to export below
  $sqlDatabaseName = "AdventureWorksDW2014"

  # The settings below only apply to the MongoDB code generation
  # Assemble path to mongodb. This assumes utilities are stored in the default bin folder
  $mongoPath = "C:\mongodb"
  $mongoImport = "$mongoPath\bin\mongoimport"

  # Set the server name and port
  $mongoHost = "localhost"   # Leave blank to default to localhost
  $mongoPort = ""            # Leave blank to default to 27107
 
  # Set the user name and password, leave blank if it isn’t needed
  $mongoUser = ""
  $mongoPW = ""

  # Enter the name of the database to import to.
  $mongoDatabaseName = "AdventureWorksDW2014"

  # Upserts are REALLY slow, especially on large datasets. Setting this to $true will turn off
  # the upsert option. If set to true, you are responsible for either deleting all documents
  # in the collection before hand, or allowing the risk of duplicates.
  #
  # Setting to false will enable the upsert option for mongoimport, and attempt to determine the
  # keys and (if found) add them to the final mongoimport command.
  $mongoNoUpsert = $true

#endregion

#region Common ------------------------------------------------------------------------------------
 
  # This section sets variables used by both regions below. There is no need to alter anything
  # in this region.

  # Import the SQLPS provider (if it’s not already loaded)
  if (-not (Get-PSProvider SqlServer -ErrorAction SilentlyContinue))
    { Import-Module SQLPS -DisableNameChecking }

  # Assemble the full servername\instance ($sqlInstance already starts with a backslash)
  $sqlServerInstance = "$sqlServer$sqlInstance"

  # Assemble the full path for the SQL Provider to get to the database
  $sqlDatabaseLocation = "SQLSERVER:\sql\$sqlServerInstance\databases\$sqlDatabaseName"

  # Now tack on the Tables 'folder' to the SQL Provider path, then move there
  $sqlTablesLocation = $sqlDatabaseLocation + "\Tables"
  Set-Location $sqlTablesLocation

  # Get a list of tables in this database
  $sqlTables = Get-ChildItem

#endregion

#region Export SQL Data ----------------------------------------------------------------------------
  # In this section we will export data from each table in the database to a CSV file.
  # WARNING: If the CSV file exists, it will be overwritten.

  # These are just used to display informational messages during processing
  $sqlTableIterator = 0
  $sqlTableCount = $sqlTables.Count

  # Iterate over each table in the database
  foreach($sqlTable in $sqlTables)
  {
    $sqlTableName = $sqlTable.Schema + "." + $sqlTable.Name   

    # I’ll grant you the next little bit of formatting for the progress messages is a bit
    # OCD on my part, but I like my output formatted and easy to read.
    $sqlTableIterator++
    $padCount = " " * (1 + $sqlTableCount.ToString().Length – $sqlTableIterator.ToString().Length)
    $sqlTableIteratorFormatted = $padCount + $sqlTableIterator

    if( $sqlTableName.Length -gt 50 )
      { $padTable = " " }
    else
      { $padTable = " " * (50 – $sqlTableName.Length) }

    Write-Host -ForegroundColor White -NoNewline "Processing Table $sqlTableIteratorFormatted of $sqlTableCount : $sqlTableName $padTable"
   
    # If the instance is "default", we have to exclude it when we use Invoke-SqlCmd
    if($sqlInstance.ToLower() -eq "\default")
      { $sqlSI = $sqlServer }
    else
      { $sqlSI = $sqlServerInstance }

    # Load an object with all the data in the table
    # Note if you have especially large tables you may need to modify this
    # section to break things into smaller chunks.
    $sqlCmd = "SELECT * FROM " + $sqlTableName
    $sqlData = Invoke-Sqlcmd -Query $sqlCmd `
                             -ServerInstance $sqlSI `
                             -SuppressProviderContextWarning

    # Now write the data out.
    # Note utf8 encoding is important, as it is all mongoimport understands
    # Also need to omit the Type Info header PowerShell wants to write out
    Write-Host -ForegroundColor Yellow "    Writing to table $sqlTableName.csv"
    $sqlData | Export-Csv -NoTypeInformation -Encoding "utf8" -Path "$csvPath\$sqlTableName.csv"

  }

  # Just add a blank line after the processing ends
  Write-Host

#endregion

#region Generate MongoDB import commands -----------------------------------------------------------

  # In this region we will generate the commands to import our newly exported data
  # into an existing database in MongoDB. This is an example of our desired output (wrapped
  # onto multiple lines for readability, in the output it will be a single line):

  #  C:\mongodb>bin\mongoimport --host localhost --port 27017
  #                             --db AdventureWorksDW2014 --collection DimSalesReason
  #                             --username Me --password mySuperSecureP@ssW0rd!
  #                             --type csv --headerline --file DimSalesReason.csv
  #                             --upsert --upsertFields SalesReasonKey

  # Note several of these parameters are optional, and could use defaults, or be potentially
  # omitted from the final output, based on the choices at the very beginning of this script

  # Feel free to alter the $mongoCommand as needed for other circumstances

  # Final warning, the database must already exist in MongoDb in order to import the data. This
  # script will not generate the database for you.

  # Create the name for the batch file we will generate
  $mongoBat = $csvPath + "\Import_SQL_" + $sqlDatabaseName + "_to_MongoDb_" + $mongoDatabaseName + ".bat"

  # See if file exists, if so delete it
  if (Test-Path $mongoBat)
    { Remove-Item $mongoBat }

  # These are just used to display informational messages during processing
  $sqlTableIterator = 0
  $sqlTableCount = $sqlTables.Count

  # mongoimport allows us to do upserts, helping to eliminate duplicate rows on import.
  #
  # To make an upsert work there has to be a key column to match up on. Fortunately,
  # most tables in the SQL Server world have Primary Keys, so we can find out what
  # columns those are and add it to the command. Note if there is no PK in SQL Server,
  # no upsert will be attempted.
  #
  # Note though that upserts are REALLY slow, so the option to skip them is
  # built into the script and set at the top (mongoNoUpsert). The generated batch file
  # assumes that either a) you have deleted all data from the collection ahead of time,
  # or b) you are OK with the risk of duplicate data.

  # Iterate over each table in the database to build the mongoimport command
  foreach($sqlTable in $sqlTables)
  {
    $sqlTableName = $sqlTable.Schema + "." + $sqlTable.Name

    # A bit more OCD progress messages
    $sqlTableIterator++
    $padCount = " " * (1 + $sqlTableCount.ToString().Length – $sqlTableIterator.ToString().Length)
    $sqlTableIteratorFormatted = $padCount + $sqlTableIterator
    Write-Host -ForegroundColor Green "Building mongoimport command for table $sqlTableIteratorFormatted of $sqlTableCount : $sqlTableName"

    # Begin building the command
    $mongoCommand = "$mongoImport "
   
    if ($mongoHost.Length -ne 0)
      { $mongoCommand += "--host $mongoHost " }

    if ($mongoPort.Length -ne 0)
      { $mongoCommand += "--port $mongoPort " }

    $mongoCommand += "--db $mongoDatabaseName --collection $sqlTableName "

    if ($mongoUser.Length -ne 0)
      { $mongoCommand += " --username $mongoUser --password $mongoPW " }

    $mongoCommand += " --type csv --headerline --file $csvPath\$sqlTableName.csv "
       
    # Build the upsert clause, if the user has elected to use it.
    if ($mongoNoUpsert -eq $false)
    {
      $mongoPKs = ""
      foreach($sqlIndex in $sqlTable.Indexes)
      {
        if($sqlIndex.IndexKeyType -eq 'DriPrimaryKey')
        {
          foreach($sqlCol in $sqlIndex.IndexedColumns) #$sqlPKColumns)
          {
            if ($mongoPKs.Length -ne 0)
              { $mongoPKs += "," }
            # Note column names are returned with [ ] around them, and must be removed
            # Have to use -replace instead of .Replace() because $sqlCol is a column object, not a string
            $mongoPKs += ($sqlCol -replace "\[", "") -replace "\]", ""
          }
               
          $mongoCommand += " --upsert --upsertFields $mongoPKs"
        }           
      }
    }

    # Append the command to the batch file
    $mongoCommand | Out-File -FilePath $mongoBat -Encoding utf8 -Append

  }

  # Just add a blank line after the processing ends
  Write-Host

#endregion
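
Once the batch file has been reviewed (and any tables you don't want imported have been removed from it), it can be launched from the same PowerShell session. This short sketch assumes the $mongoBat variable from the script above is still in scope.

  # Open the generated batch file for review
  notepad $mongoBat

  # When satisfied with its contents, run it
  cmd /c $mongoBat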

 

Installing Ubuntu 8.04 under Microsoft Virtual PC 2007

Update: Nov. 10, 2008 – New blog post on installing Ubuntu 8.10 is now out: http://tinyurl.com/vpcubuntu810

I’m pleased to say that Ubuntu 8.04 is probably the easiest install I’ve had to do with VPC yet! One quick reminder before we begin: when working inside the VPC your mouse will get “trapped” or captured by the virtual computer, and you won’t be able to move outside of it. To get it released, just press the RIGHT side ALT key. The left side won’t work; it has to be the RIGHT side of your keyboard.

To start with, create a new Virtual PC. For a tutorial, see either my step by step tutorial or the video tutorial if you need more instructions. Since I had the space, I was using 768 meg of ram, and left the disk space at the default of 16 gig. If you can, try and use at least 512 meg of ram for good performance. Use the CD menu option to capture the desktop ISO you downloaded from Ubuntu, or if you have a real CD put it in the drive and capture that. When it launches, you’ll see this screen. (By the way, you can click on any of the screens to see the full size graphic, these have been resized slightly to fit in with most common browser sizes).

[image - Select Language]

Pick your language, I just took the default of English.

[image - Safe graphics mode]

Now press F4 to select an alternate starting mode. When it pops up, change to Safe graphics mode, as you see above, and press Enter. Now pick “Try Ubuntu…” (should already be selected) and press enter. Do NOT pick the Install Ubuntu option, I kept getting VPC errors when trying to install directly.

Additionally, don’t be alarmed if the screen goes black for a while, then you see some garbled graphics. This is perfectly normal; it is just passing through and will be OK when Ubuntu gets done doing its thing. It took me about 7 minutes to get from the previous screen to the next one.

[image - live environment]

After it boots you should be in the live session trial environment. Double click the Install icon to begin the install process.

[image - Installer welcome screen]

Screen 1 is just a welcome screen, although you can change your language here if you need to. Press Forward to continue.

[image - Installer Set Time Zone]

Next it wants to know where you are, at least time zone wise. I’m in the central time zone, but set yours appropriately and click Forward.

[image - Installer Pick your Keyboard]

Next you can specify your keyboard. Since I’m using a typical USA style keyboard, I just clicked Forward.

[image - Installer Prepare Disk Space]

Next it asks how you want your disk space partitioned. Since we’re in a virtual environment, it made the most sense to just take the defaults and click Forward.

Be aware, after clicking Forward my mouse went into the “I’m busy” mode, and there was a delay while the disks were prepared. Mine took about five minutes. Don’t be alarmed, just wait a few minutes and you’ll then proceed to the next screen.

[image - Installer Who Are You]

On this screen, first supply your name; this will be used in documents and the like. The next text box is the important one – it is for your Ubuntu user name. By default it uses your first name, now is your chance to change it. I rather like mine so will accept it. Next you’ll need to key in a good password and confirm, and finally name the computer. When you are happy, click Forward.

Now is where you may get confused. In the screen above, you are on step 5 of 7. When you click forward, you are suddenly on step 7 of 7. I’m not sure what happened to step 6, I even ran the installer yet one more time just to make sure it was gone. Perhaps it was kidnapped by space aliens?

[image - Installer is Ready]

Apparently even without the missing step 6, the installer has everything it needs. Just click Install to begin the install process. Kick back and wait. Don’t be alarmed if the screen goes black during the process, it’s just the screen saver kicking in. Just click in the VPC and wiggle your mouse and your display will return. I had it kick in several times during the 45 (or so) minutes it took to get everything installed.

[image - Install complete time to reboot]

Eventually Ubuntu will complete its install, then give you the above message. On the Virtual PC menu click CD, then Release the CD. Then click on the big Restart now button inside VPC.

This was the only real snag I hit in the whole install process, I waited a while and it never did restart on its own. I gave it about five minutes, then in the Virtual PC menu I clicked Action, Reset. I figured since it’s already installed, I wouldn’t lose anything, and I was right.

The boot process does take a few minutes; you’ll see some text then a black screen for about 90 seconds. Then it comes up to the big Ubuntu logo and the orange bar as it loads. You’ll see some garbled graphics for a few seconds, then the login screen finally appears. I gave it my user id and password, and minutes later I was in Ubuntu.

One last piece of business: fixing the networking. First make sure the network card is mapped to a real network card in your computer. For more instructions on this, see my video, Virtual PC Advanced Settings. After that, click on the network icon in the upper right side of the toolbar, as you see below.

[image - Fix Networking]

Then just pick Wired Network. Once connected you’ll be free to visit your favorite websites!

[image - Ubuntu open for business]

I haven’t had much time to check out other features, or to get the sound working, so if anyone has a quick fix for that, by all means leave a comment below.

The Developer Experience

In case you’re wondering why the slowdown in the blog this week, I’ve been spending all my free time getting ready for Alabama Code Camp 6. My first presentation of the day is “The Developer Experience”. It’s chock full of practical, low cost (or even free!) ways to make your life as a programmer more productive.

As promised in the session, here’s the complete PDF of my slides:  The Developer Experience

Just Code It

Jeff Atwood has an interesting post on his Coding Horror site entitled “Yes, But What Have You *Done*?” ( http://www.codinghorror.com/blog/archives/000809.html ). Programmers, Jeff says, tend to be natural introverts, and left to their own devices will migrate toward heads-down coding.

“But it is possible to go too far in the other direction, too. It’s much rarer, because it bucks the natural introversion of most software developers, but it does happen. Take me, for example. Sometimes I worry that I spend more time talking about programming than actually programming.”

I know how Jeff feels. In my role as a development lead I spend a lot of time in meetings, or talking to other developers about their projects. As a result I wind up spending a lot of time late at night doing coding, just to keep up with the latest and greatest techniques.

The need for code experience directly resulted in one of my “How To Be A Better Developer…” ( https://arcanecode.wordpress.com/2007/07/13/being-a-better-developer-in-6-months/ ) pledges.

I will work all the code samples in the book. Reading is one thing, but doing is even better. Personally, I find I get a better understanding when I actually type in the code samples and run them. And not just run what’s in the book, but tweak it, experiment with it.

I’m amazed at the number of times I meet some guy who comes across as a self-proclaimed expert on a subject. When I quiz the person or try to ask tough questions, it turns out said individual read a book, but never actually wrote any code. Book learning is great, and I certainly buy enough books every year to know, but there’s no substitute for doing.

When learning something new, start with the samples. Work it, tweak it, understand it. Then, if appropriate use it on your project at work. If it’s not appropriate, find someone else’s project that it would be a good fit for. Offer to work a few hours unpaid overtime and contribute some code to their project. They’d probably be grateful for the help, and might repay with some pointers and critiques.

Can’t find an appropriate project at work? There are thousands of open source projects out there, find one where you could contribute. Or look around the community; find a charity that needs some programming done.

It’s easier than you think to find some real world places to apply your coding skills. So what are you waiting for, just code it!

Microsoft Goes Open Source

For years critics have been blasting Microsoft over their proprietary standards and applications. Over the last few years, however, Microsoft has slowly been answering those critics by adopting internet standards instead of insisting on their own, and releasing more things to the community.

The ability to save Office 2007 documents as XPS comes to mind, as does the ability for CardSpace to use open standards like OpenID. Now, in their next step they are embracing the open source community through the addition of a new Open Source page within Microsoft.

http://www.microsoft.com/opensource/default.mspx

On this site you can find all sorts of information and resources for those wanting to do open source projects using Microsoft software. Links to articles, websites, and the Visual Studio Express editions can be found. I won’t try to reiterate the entire site here, but if you have an interest in Open Source it’s well worth your time to have a look.

In addition, there is another site called Port 25. It is the outreach site for Microsoft’s Open Source Software Lab. There is some really cool stuff here on Linux interoperability, as well as the new dynamic language support such as IronRuby and IronPython.

http://port25.technet.com/

I can tell right now I’m going to be spending a lot of time on Port 25.

Finally, I should mention a site that’s been around for a bit by the name of CodePlex. It’s Microsoft’s site to host open source projects done by both Microsoft folks and those of us in the community. (Well, I say us; one day I keep swearing I’ll find time to crank out some cool project and put it on CodePlex.)

http://www.codeplex.com/

Currently they show about 2,000 projects, so there should be a lot for you to check out.

No, I don’t foresee Vista going open source anytime soon. But I really have to hand it to Microsoft. Somewhere over the last few years they realized they weren’t the only game in town. Since then they have really made an effort to “play nice” with other communities, and embrace many new open standards. The creation of their Microsoft Open Source site is just another step in that journey.