Calling SSIS from .Net

In a recent DotNetRocks show, episode 483, Kent Tegels was discussing SQL Server Integration Services and how it can be useful to both the BI developer and the traditional application developer. While today I am a SQL Server BI guy, I come from a long development background and could not agree more. SSIS is a very powerful tool that could benefit many developers, even those not on Business Intelligence projects. It was a great episode, and I highly encourage everyone to listen.

There is one point though that was not made very clear, but I think is tremendously important. It is indeed possible to invoke an SSIS package from a .Net application if that SSIS package has been deployed to the SQL Server itself. This article will give an overview of how to do just that. All of the sample code here will also be made available in download form from the companion Code Gallery site, http://code.msdn.microsoft.com/ssisfromnet .

In this article, I do assume a few prerequisites. First, you have a SQL Server with SSIS installed, even if it’s just your local development box with SQL Server Developer Edition. Second, I don’t go into much detail on how SSIS works; the package is very easy to understand, but you may wish to have a reference handy. You may also need the assistance of your friendly neighborhood DBA in setting up the SQL job used in the process.

Summary

While the technique is straightforward, there are a fair number of detailed steps involved. For those of you just wanting the overview, we need to start with some tables (or other data) we want to work with. After that we’ll write the SSIS package to manipulate that data.

Once the package is created it must be deployed so SQL Server knows about it. The deployment can be to the file system or to SQL Server itself.

Once deployed, a SQL Server Job must be created that executes the deployed SSIS package.

Finally, you can execute the job from your .Net application via ADO.NET and a call to the sp_start_job stored procedure built into the msdb system database.

OK, let’s get to coding!

Setup the Tables

First we need some data to work with. What better than a listing of previous Dot Net Rocks episodes? I simply went to the Previous Shows page, highlighted the three columns of show number, show name, and date, and saved them to a text file. (Available on the Code Gallery site.)

Next we need a place to hold the data so SSIS can work with it. I created a database and named it ArcaneCode; however, any database should work. Next we’ll create a table to hold “staging” DNR Show data.

CREATE TABLE [dbo].[staging_DNRShows](
  [ShowData] [varchar](250) NOT NULL
) ON [PRIMARY]

This table will hold the raw data from the text file, each line in the text file becoming one row here. Next we want a table to hold the final results.

CREATE TABLE [dbo].[DNRShows](
  [ShowNumber] [int] NOT NULL,
  [ShowName] [varchar](250) NULL,
  [ShowDate] [datetime] NULL,
  CONSTRAINT [PK_DNRShows] PRIMARY KEY CLUSTERED
  (
  [ShowNumber] ASC
  )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
  ) ON [PRIMARY]

The job of the SSIS package will be to read each row in the staging table and split it into 3 columns, the show’s number, name, and date, then place those three columns into the DNRShows table above.

The SSIS Package

The next step is to create the SSIS package itself. Open up Visual Studio / BIDS and create a new Business Intelligence SQL Server Integration Services project. First let’s set up a shared Data Source to the local server, using the ArcaneCode database as our source.

The default package name of “Package.dtsx” isn’t very informative, so let’s rename it “LoadDNRShows.dtsx”. Start by adding a reference to the shared data source in the Connection Managers area, taking the default. Then on the Control Flow surface add three tasks, as seen here:

[Screenshot: the Control Flow surface with the three tasks]

The first task is an Execute SQL Task that simply runs a “DELETE FROM dbo.DNRShows” command to wipe out what was already there. Of course in a true application we’d be checking for existing records in the data flow and doing updates or inserts, but for simplicity in this example we’ll just wipe and reload each time.

The final task is also an Execute SQL Task. After we have processed the data we no longer need it in the staging table, so we’ll issue a “DELETE FROM dbo.staging_DNRShows” to remove it.

The middle item is our Data Flow Task. This is what does the heavy lifting of moving the staging data to the main table. Here is a snapshot of what it looks like:

[Screenshot: the Data Flow for moving the staging data to the DNRShows table]

The first item is our OLE DB Source; it references the staging_DNRShows table. Next is what’s called a Derived Column Transformation. This will allow you to add new calculated columns to the flow, or add columns from variables. In this case we want to add three new columns, based on the single column coming from the staging table.

[Screenshot: the Derived Column Transformation editor]

As you can see under Columns in the upper left, we have one column in our source, ShowData. In the lower half we need to add three new columns, ShowNumber, ShowDate, and ShowName. Here are the expressions for each:

ShowNumber
    (DT_I4)SUBSTRING(ShowData,1,FINDSTRING(ShowData,"\t",1))

ShowDate
    (DT_DBDATE)SUBSTRING(ShowData,FINDSTRING(ShowData,"\t",2) + 1,LEN(ShowData) - FINDSTRING(ShowData,"\t",2))

ShowName
    (DT_STR,250,1252)SUBSTRING(ShowData,FINDSTRING(ShowData,"\t",1) + 1,FINDSTRING(ShowData,"\t",2) - FINDSTRING(ShowData,"\t",1) - 1)

The syntax is an odd blend of VB and C#. Each expression starts with “(DT_”; these are type casts, converting the result of the rest of the expression to the type we need. For example, (DT_I4) converts to a four byte integer, which we need because in our database the ShowNumber column was defined as an integer. You will also see SUBSTRING and LEN, which work like their VB counterparts. FINDSTRING works like the old Pos function: it finds the location of the text and returns that position. The “\t” represents the tab character; here the C# fans win out, as the Expression editor uses C# style escapes for special characters: \t for tab, \b for backspace, and so on.
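If the expression syntax is hard to follow, here is the same split written as everyday C#. This is purely an illustration of the logic the Derived Column Transformation performs; the sample line and variable names are made up, and none of this code goes into the package.

      // A made-up sample line: show number <tab> show name <tab> show date
      string showData = "483\tSample Show Title\t1/1/2009";
      string[] parts = showData.Split('\t');

      int showNumber = int.Parse(parts[0]);         // the (DT_I4) cast
      string showName = parts[1];                   // the (DT_STR,250,1252) cast
      DateTime showDate = DateTime.Parse(parts[2]); // the (DT_DBDATE) cast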

Finally we need to write out the data. For this simply add an OLEDB Destination and set it to the target table of dbo.DNRShows. On the mappings tab make sure our three new columns map correctly to the columns in our target table.

Deploy the Package

This completes the coding for the package, but there are a few final steps before we can deploy it. First, in Solution Explorer right click on the project (not the solution, the project as highlighted below) and pick Properties.

[Screenshot: right clicking the project in Solution Explorer]

In the properties dialog, change the “CreateDeploymentUtility” option from false (the default) to True.

[Screenshot: the project properties dialog with CreateDeploymentUtility set to True]

Now click the Build, Build Solution menu item. If all went well you should see the build was successful. It’s now time to deploy the package to the server. Navigate to the folder where your project is stored, under it you will find a bin folder, and in it a Deployment folder. In there you should find a file with a “.SSISDeploymentManifest” extension. Double click on this file to launch the Package Installation Wizard.

When the wizard appears there are two choices: File system deployment and SQL Server deployment. For our purposes we can use either one; there are pros and cons to each, and many companies generally pick one or the other. In this example we’ll pick SQL Server deployment, but know that I’ve tested this both ways and either method will work.

Once you pick SQL Server deployment, just click Next. Now it asks for the server name; I’ve left it at (local) since I’m working on a development box, and likewise I’ve left “Use Windows Authentication”. Finally I need the package path, which I can select by clicking the ellipsis (the …) to the right of the text box. This brings up a dialog where I can select where to install.

[Screenshot: the package path selection dialog]

In a real world production scenario we’d likely have branches created for each of our projects, but for this simple demo we’ll just leave it in the root and click OK.

Once your form is filled out as below, click Next.

[Screenshot: the completed SQL Server deployment form]

Next we are asked what our installation folder should be. This is where SSIS will cache package dependencies. Your DBA may have a special spot set up for these; if not, just click Next to continue.

Finally we are asked to confirm we know what we are doing. Just click Next. If all went well, the install wizard shows us it’s happy with a report, and we can click Finish to exit.

Setup the SQL Server Job

We’ve come a long way and we’re almost to the finish line, just one last major step. We will need to setup a SQL Server Job which will launch the SSIS package for us. In SQL Server Management Studio, navigate to the “SQL Server Agent” in your Object Explorer. If it’s not running, right click and pick “Start”. Once it’s started, navigate to the Jobs branch. Right click and pick “New Job”.

When the dialog opens, start by giving your job a name. As you can see below I used LoadDNRShows. I also entered a description.

[Screenshot: the New Job dialog with the name and description filled in]

Now click on the Steps page over on the left “Select a page” menu. At the bottom click “New” to add a new job step.

In the job step properties dialog, let’s begin by naming the step “Run the SSIS package”. Change the Type to “SQL Server Integration Services Package”. When you do, the dialog will update to give options for SSIS. Note the Run As drop down; this specifies the account to run under. For this demo we’ll leave it as the SQL Server Agent Service Account, but check with your DBA, as he or she may have other instructions.

In the tabbed area the General tab first allows us to pick the package source. Since we deployed to SQL Server we’ll leave it at the default, however if you had deployed to the file system this is where you’d need to change it to pick your package.

At the bottom we can use the ellipsis to pick our package from a list. That done, your screen should look something like:

[Screenshot: the completed job step dialog]

That’s all we need to set for this demo, but I do want to take a second to encourage you to browse through the other tabs. Through these tabs you can set many options related to the package. For example you could alter the data sources, allowing you to use one package with multiple databases.

Click OK to close the job step, then OK again to close the Job Properties window. Your job is now setup!

Calling from .Net

The finish line is in sight! Our last step is to call the job from .Net. To make it a useful example, I also wanted the .Net application to upload the data the SSIS package will manipulate. For simplicity I created a WinForms app, but this could easily be done in any environment. I went with C#, but the VB.Net code would be almost identical.

I started by creating a simple WinForm with two buttons and one label. (Again the full project will be on the Code Gallery site).

[Screenshot: the WinForm with the two buttons and the label]

In the code, first be sure to add two using statements to the standard list:

using System.Data.SqlClient;
using System.IO;

Behind the top button we’ll put the code to copy the data from the text file we created from the DNR website to the staging table.

    private void btnLoadToStaging_Click(object sender, EventArgs e)
    {
      /* This method takes the data in the DNRShows.txt file and uploads it to a staging table. */
      /* The routine is nothing magical, standard stuff to read a text file and upload it to a  */
      /* table via ADO.NET.                                                                     */

      // Note, be sure to change to your correct path
      string filename = @"D:\Presentations\SQL Server\Calling SSIS From Stored Proc\DNRShows.txt";
      string line;

      // If you used a different db than ArcaneCode be sure to set it here
      string connect = "server=localhost;Initial Catalog=ArcaneCode;Integrated Security=SSPI;";
      SqlConnection connection = new SqlConnection(connect);
      connection.Open();

      SqlCommand cmd = connection.CreateCommand();

      // Wipe out previous data in case of a crash
      string sql = "DELETE FROM dbo.staging_DNRShows";
      cmd.CommandText = sql;
      cmd.ExecuteNonQuery();

      // Now setup for new inserts
      sql = "INSERT INTO dbo.staging_DNRShows (ShowData) VALUES (@myShowData)";
      cmd.CommandText = sql;
      cmd.Parameters.Add("@myShowData", SqlDbType.VarChar, 250);

      StreamReader sr = null;

      // Loop thru text file, insert each line to staging table
      try
      {
        sr = new StreamReader(filename);
        line = sr.ReadLine();
        while (line != null)
        {
          cmd.Parameters["@myShowData"].Value = line;
          cmd.ExecuteNonQuery();
          lblProgress.Text = line;
          line = sr.ReadLine();
        }
      }
      finally
      {
        if (sr != null)
          sr.Close();
        connection.Close();
        lblProgress.Text = "Data has been loaded";
      }
    }

Before you ask: yes, I could have used any number of data access technologies, such as LINQ. I went with ADO.NET for simplicity, and because most developers are familiar with it due to its longevity. Do be sure to update the database name and the path to the file in both this and the next example when you run the code.
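As a side note, if you’d rather not hard code those values, you could pull them from the application’s configuration file instead. A minimal sketch, assuming hypothetical appSettings keys named ShowFilePath and DbConnect and a project reference to System.Configuration:

      // Hypothetical appSettings keys; add them to app.config first.
      // Requires: using System.Configuration;
      string filename = ConfigurationManager.AppSettings["ShowFilePath"];
      string connect = ConfigurationManager.AppSettings["DbConnect"];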

This code really does nothing special; it just loops through the text file and uploads each line as a row in the staging table. It does however serve as a realistic example of something you’d do in this scenario: upload some data, then let SSIS manipulate it on the server.
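One optional optimization: if the text file were large, the row-by-row inserts above would get slow. SqlBulkCopy can stream all the rows to the staging table in one shot. This is just a sketch of the alternative, reusing the filename and connect variables from the routine above; the sample does not require it.

      // Sketch: bulk load the file into the staging table instead of
      // issuing one INSERT per line.
      DataTable staging = new DataTable();
      staging.Columns.Add("ShowData", typeof(string));

      foreach (string showLine in File.ReadAllLines(filename))
        staging.Rows.Add(showLine);

      using (SqlBulkCopy bulk = new SqlBulkCopy(connect))
      {
        bulk.DestinationTableName = "dbo.staging_DNRShows";
        bulk.WriteToServer(staging);
      }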

Once the data is there, it’s finally time for the grand finale. The code behind the second button, Execute SSIS, does just what it says; it calls the job, which invokes our SSIS package.

    private void btnRunSSIS_Click(object sender, EventArgs e)
    {
      string connect = "server=localhost;Initial Catalog=ArcaneCode;Integrated Security=SSPI;";
      SqlConnection connection = new SqlConnection(connect);
      connection.Open();

      SqlCommand cmd = connection.CreateCommand();

      // Start the SQL Server Agent job that runs our SSIS package
      string sql = "exec msdb.dbo.sp_start_job N'LoadDNRShows'";
      cmd.CommandText = sql;
      cmd.ExecuteNonQuery();
      connection.Close();
      lblProgress.Text = "SSIS Package has been executed";
    }

The key is this sql command:

exec msdb.dbo.sp_start_job N'LoadDNRShows'

“exec” is the T-SQL command to execute a stored procedure. sp_start_job is a stored procedure that ships with SQL Server in the msdb system database; it will start any job stored on the server. In this case it starts the job “LoadDNRShows”, which as we set up will run our SSIS package. One thing to be aware of: sp_start_job returns as soon as the job has been started, it does not wait for the job to finish, so the package may still be running after the call returns.
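If your application needs to know when the job has actually finished, one approach is to poll the msdb.dbo.sp_help_job stored procedure and watch its current_execution_status column, where 4 means idle. A rough sketch, assuming the connection is still open; for real use you would add a timeout and error handling:

      // Poll until the LoadDNRShows job reports idle (status 4).
      SqlCommand status = connection.CreateCommand();
      status.CommandText = "exec msdb.dbo.sp_help_job @job_name = N'LoadDNRShows'";

      bool running = true;
      while (running)
      {
        System.Threading.Thread.Sleep(2000); // check every two seconds
        using (SqlDataReader rdr = status.ExecuteReader())
        {
          if (rdr.Read() && (int)rdr["current_execution_status"] == 4)
            running = false;
        }
      }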

Launch the application, and click the first button. Now jump over to SQL Server Management Studio and run this query:

select * from dbo.staging_DNRShows;

select * from dbo.DNRShows;

You should see the first query bring back rows, while the second has nothing. Now return to the app and click the “Execute SSIS” button. If all went well running the query again should now show no rows in our first query, but many nicely processed rows in the second. Success!

A few thoughts about xp_cmdshell

In researching this article I saw many references suggesting writing a stored procedure that uses xp_cmdshell to invoke dtexec. DTEXEC is the command line utility that you can use to launch SSIS Packages. Through it you can override many settings in the package, such as connection strings or variables.

xp_cmdshell is a utility built into SQL Server. Through it you can invoke any “DOS” command. Thus you could dynamically generate a dtexec command, and invoke it via xp_cmdshell.

The problem with xp_cmdshell is that you can use it to invoke ANY “DOS” command. Any of them, including, say, “DEL *.*”. xp_cmdshell can be a security hole; for that reason it is turned off by default on SQL Server, and many DBAs leave it off and are not likely to turn it on.

The techniques I’ve demonstrated here do not rely on xp_cmdshell. In fact, all of my testing has been done on my server with xp_cmdshell turned off. Even though it can be a bit of extra work, setting up the job and so on, I still advise it over the xp_cmdshell method, both for security and for the ability to use it on any server regardless of that setting.

In Closing

That seemed like a lot of effort, but it can lead to some very powerful solutions. SSIS is a very powerful tool designed for processing and transforming large amounts of data. In addition, developing under SSIS can be very fast due to its declarative nature; the sample package from this article took me less than fifteen minutes to code and test.

When faced with a similar task, consider allowing SSIS to handle the bulk work and having your .Net application simply invoke your SSIS package. Once you do, there is no end to the uses you’ll find for SQL Server Integration Services.

Accessing FILESTREAM Data From A Client .NET Application – Part 2 Downloading a File

In the previous entry we covered how to upload a file to SQL Server using FILESTREAM, new to SQL Server 2008. In this post we will look at retrieving a file from SQL Server using FILESTREAM. If you missed yesterday’s installment, that is where the simple front end was created; the full project can be found at the Code Gallery site http://code.msdn.microsoft.com/FileStreamFTS .

The interface is very simple:

[Screenshot: the FileLoader form with the file grid and Get File button]

The grid is a DataGridView that shows the ID and Document information from the table we previously created. (If you want to see the code that populates the grid, see the project.) The user picks a row, then clicks on the Get File button.

    private void btnGetFile_Click(object sender, EventArgs e)
    {
      // Reset in case it was used previously
      lblStatus.Text = "";

      if (dgvFiles.CurrentRow != null)
      {
        // Grab the ID (Primary Key) for the current row
        int ID = (int)dgvFiles.CurrentRow.Cells[0].Value;
        // Now save the file to the folder passed in the second parameter
        FileTransfer.GetFile(ID, @"D:\Docs\Output\");
        // And let user know it's OK
        lblStatus.Text = "File Retrieved";
      }
    }

The code is very simple; the heart of it is the FileTransfer.GetFile static method. Two values are passed in: the integer ID, which is the primary key from the database, and the path to save the file to. Here I simply hard coded a path; in a real life application you would want to give the user the ability to enter one. Let’s take a look at the GetFile routine.

    public static void GetFile(int ID, string outputPath)
    {
      // Setup database connection
      SqlConnection sqlConnection = new SqlConnection(
                "Integrated Security=true;server=(local)");

      SqlCommand sqlCommand = new SqlCommand();
      sqlCommand.Connection = sqlConnection;

      try
      {
        sqlConnection.Open();

        // Everything we do with FILESTREAM must always be in
        // the context of a transaction, so we'll start by
        // creating one.
        SqlTransaction transaction
          = sqlConnection.BeginTransaction("mainTransaction");
        sqlCommand.Transaction = transaction;

        // The SQL gives us 3 values. First the PathName() method of
        // the Document field is called; we'll need it to use the API.
        // Second we call a special function that will tell us what
        // the context is for the current transaction, in this case
        // the "mainTransaction" we started above. Finally it gives
        // the name of the document, which the app will use when it
        // creates the document on disk but is not strictly required
        // as part of the FILESTREAM.
        sqlCommand.CommandText
          = "SELECT Document.PathName()"
          + ", GET_FILESTREAM_TRANSACTION_CONTEXT() "
          + ", DocumentName "
          + "FROM FileStreamFTS.dbo.DocumentRepository "
          + "WHERE ID=@theID ";

        sqlCommand.Parameters.Add(
          "@theID", SqlDbType.Int).Value = ID;

        SqlDataReader reader = sqlCommand.ExecuteReader();
        if (reader.Read() == false)
        {
          throw new Exception("Unable to get BLOB data");
        }

        // OK we have some data, pull it out of the reader into locals
        string path = (string)reader[0];
        byte[] context = (byte[])reader[1];
        string outputFilename = (string)reader[2];
        int length = context.Length;
        reader.Close();

        // Now we need to use the API we declared at the top of this class
        // in order to get a handle.
        SafeFileHandle handle = OpenSqlFilestream(
          path
          , DESIRED_ACCESS_READ
          , SQL_FILESTREAM_OPEN_NO_FLAGS
          , context
          , (UInt32)length, 0);

        // Using the handle we just got, we can open up a stream from
        // the database.
        FileStream databaseStream = new FileStream(
          handle, FileAccess.Read);

        // This file stream will be used to copy the data to disk
        FileStream outputStream
          = File.Create(outputPath + outputFilename);

        // Setup a buffer to hold the streamed data
        int blockSize = 1024 * 512;
        byte[] buffer = new byte[blockSize];

        // There are two ways we could get the data. The simplest way
        // is to read the data, then immediately feed it to the output
        // stream using its Write method (that version is shown later
        // in this post). The second way, shown here, is to load the
        // data into an array of bytes (implemented using the generic
        // List). This would let you manipulate the data in memory,
        // then write it out, reupload it to another data stream, or
        // do something else entirely. Note we only copy the bytesRead
        // portion of the buffer; the final read will usually fill it
        // only partially.
        List<byte> fileBytes = new List<byte>();
        int bytesRead = databaseStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
          for (int i = 0; i < bytesRead; i++)
            fileBytes.Add(buffer[i]);
          bytesRead = databaseStream.Read(buffer, 0, buffer.Length);
        }

        // Write out what is in the List to disk
        byte[] fileArray = fileBytes.ToArray();
        outputStream.Write(fileArray, 0, fileArray.Length);

        // Close the stream from the database
        databaseStream.Close();

        // Write out the file
        outputStream.Close();

        // Finally we should commit the transaction.
        sqlCommand.Transaction.Commit();
      }
      catch (System.Exception ex)
      {
        MessageBox.Show(ex.ToString());
      }
      finally
      {
        sqlConnection.Close();
      }
    }

The routine kicks off by opening a connection, then establishing a transaction. Remember from the previous lesson that every time you work with a FILESTREAM it has to be in a transaction. Next we basically duplicate the SQL used in the previous lesson, returning the path name, transaction context, and document name; the only difference is we pass in the ID as a parameter. With that, just like in the previous example, we call the OpenSqlFilestream API. Note one difference: in this example the second parameter is DESIRED_ACCESS_READ, as opposed to the write access we indicated previously.
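One small defensive addition you might make right after the OpenSqlFilestream call, since the API signals failure by returning an invalid handle rather than throwing. A minimal sketch:

        // If OpenSqlFilestream failed, surface the Win32 error rather
        // than letting the FileStream constructor fail mysteriously.
        if (handle.IsInvalid)
        {
          throw new System.ComponentModel.Win32Exception(
            System.Runtime.InteropServices.Marshal.GetLastWin32Error());
        }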

Once we have the handle we can create a FileStream for reading from the database. In this example I loop through the file stream, loading the data into a List of bytes. Once it’s in memory we are free to work with it as we need to; here I simply write the bytes back out to the file stream we opened on disk for writing. If all you are doing is writing, it is somewhat more efficient to write the code like so:

        int bytesRead = databaseStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
          // Write only the bytes actually read; the final read will
          // usually fill the buffer only partially.
          outputStream.Write(buffer, 0, bytesRead);
          bytesRead = databaseStream.Read(buffer, 0, buffer.Length);
        }

        // Close the stream from the database
        databaseStream.Close();

I simply eliminate the local List of bytes and write each buffer directly to disk. Either way, the remainder is simple: close all the streams, commit the transaction, and close the database connection.
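As an aside, if your project targets .NET 4 or later (this sample was built against an earlier framework), the Stream.CopyTo method collapses that whole loop into a single call:

        // .NET 4 and later only: copy the FILESTREAM data straight to disk.
        databaseStream.CopyTo(outputStream);
        databaseStream.Close();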

This concludes the series on how to use FILESTREAM. In future posts we’ll look into how to do Full Text Search on FILESTREAM stored objects.

Accessing FILESTREAM Data From A Client .NET Application – Part 1 Uploading a File

The best way to work with documents in a database is via a .Net application. I created a simple Windows Forms project to access the table I created in previous lessons. I named the application FileLoader; you can find the entire project at the Code Gallery site http://code.msdn.microsoft.com/FileStreamFTS .

The interface is very simple:

[Screenshot: the FileLoader form]

As you can see there are two main functions. The upper half uploads a file to the SQL Server. The lower half displays the files already in the table, lets the user pick one, and then saves it locally when the user clicks the Get File button. Today we’ll look at the Upload File functionality. Here is the code:

    private void btnUploadFile_Click(object sender, EventArgs e)
    {
      // Reset in case it was used previously
      lblStatus.Text = "";

      // Make sure user entered something
      if (txtFile.Text.Length == 0)
      {
        MessageBox.Show("Must supply a file name");
        lblStatus.Text = "Must supply file name";
        return;
      }

      // Make sure what user entered is valid
      FileInfo fi = new FileInfo(txtFile.Text);
      if (!fi.Exists)
      {
        MessageBox.Show("The file you entered does not exist.");
        lblStatus.Text = "The file you entered does not exist.";
        return;
      }

      // Upload the file to the database
      FileTransfer.UploadFile(txtFile.Text);

      // Refresh the datagrid to show the newly added file
      LoadDataGridView();

      // Let user know it was uploaded
      lblStatus.Text = fi.Name + " Uploaded";
    }

The real line of importance is the FileTransfer.UploadFile. This calls a static method in a class I named FileTransfer. In order to use FILESTREAM there is an API call we have to make, so at the top of FileTransfer we have a lot of declarations. These are pretty much a straight copy from the MSDN help files.

    //These constants are passed to the OpenSqlFilestream()
    //API DesiredAccess parameter. They define the type
    //of BLOB access that is needed by the application.

    const UInt32 DESIRED_ACCESS_READ = 0x00000000;
    const UInt32 DESIRED_ACCESS_WRITE = 0x00000001;
    const UInt32 DESIRED_ACCESS_READWRITE = 0x00000002;

    //These constants are passed to the OpenSqlFilestream()
    //API OpenOptions parameter. They allow you to specify
    //how the application will access the FILESTREAM BLOB
    //data. If you do not want this ability, you can pass in
    //the value 0. In this code sample, the value 0 has
    //been defined as SQL_FILESTREAM_OPEN_NO_FLAGS.

    const UInt32 SQL_FILESTREAM_OPEN_NO_FLAGS = 0x00000000;
    const UInt32 SQL_FILESTREAM_OPEN_FLAG_ASYNC = 0x00000001;
    const UInt32 SQL_FILESTREAM_OPEN_FLAG_NO_BUFFERING = 0x00000002;
    const UInt32 SQL_FILESTREAM_OPEN_FLAG_NO_WRITE_THROUGH = 0x00000004;
    const UInt32 SQL_FILESTREAM_OPEN_FLAG_SEQUENTIAL_SCAN = 0x00000008;
    const UInt32 SQL_FILESTREAM_OPEN_FLAG_RANDOM_ACCESS = 0x00000010;

    //This statement imports the OpenSqlFilestream API so that it
    //can be called in the methods below.
    [DllImport("sqlncli10.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle OpenSqlFilestream(
                string FilestreamPath,
                uint DesiredAccess,
                uint OpenOptions,
                byte[] FilestreamTransactionContext,
                uint FilestreamTransactionContextLength,
                Int64 AllocationSize);

    //This statement imports the Win32 API GetLastError().
    //This is necessary to check whether OpenSqlFilestream
    //succeeded in returning a valid handle.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern UInt32 GetLastError();

OK, with that out of the way, I’ve created a public, static method to upload the file. Here is the full routine:

    public static void UploadFile(string fileName)
    {
      // Establish db connection
      SqlConnection sqlConnection = new SqlConnection(
                "Integrated Security=true;server=(local)");
      SqlTransaction transaction = null;

      // Create a FileInfo object so you can easily get the
      // name and extension. As an alternative you could
      // choose to pass them in, or use some other way
      // to extract the extension and name.
      FileInfo fi = new FileInfo(fileName);

      try
      {
        // Open the file as a stream
        FileStream sourceFile = new FileStream(fileName
          , FileMode.Open, FileAccess.Read);

        // Create the row in the database. Parameters are used so a
        // file name containing a quote cannot break the SQL.
        sqlConnection.Open();

        SqlCommand cmd = new SqlCommand();
        cmd.Connection = sqlConnection;
        cmd.CommandText = "INSERT INTO "
          + "FileStreamFTS.dbo.DocumentRepository "
          + "(DocumentExtension, DocumentName) VALUES (@ext, @name)";
        cmd.Parameters.Add("@ext", SqlDbType.VarChar).Value = fi.Extension;
        cmd.Parameters.Add("@name", SqlDbType.VarChar).Value = fi.Name;
        cmd.ExecuteNonQuery();
        cmd.Parameters.Clear();

        // Now upload the file. It must be done inside a transaction.
        transaction = sqlConnection.BeginTransaction("mainTransaction");
        cmd.Transaction = transaction;
        cmd.CommandText = "SELECT Document.PathName(), "
         + "GET_FILESTREAM_TRANSACTION_CONTEXT() "
         + "FROM FileStreamFTS.dbo.DocumentRepository "
         + "WHERE ID=(select max(id) from FileStreamFTS.dbo.DocumentRepository)";
        SqlDataReader rdr = cmd.ExecuteReader();
        if (rdr.Read() == false)
        {
          throw new Exception("Could not get file stream context");
        }

        // Get the path
        string path = (string)rdr[0];
        // Get a file stream context
        byte[] context = (byte[])rdr[1];
        int length = context.Length;
        rdr.Close();

        // Now use the API to get a reference (handle) to the filestream
        SafeFileHandle handle = OpenSqlFilestream(path
          , DESIRED_ACCESS_WRITE
          , SQL_FILESTREAM_OPEN_NO_FLAGS
          , context, (UInt32)length, 0);

        // Now create a true .Net filestream to the database
        // using the handle we got in the step above
        FileStream dbStream = new FileStream(handle, FileAccess.Write);

        // Setup a buffer to hold the data we read from disk
        int blocksize = 1024 * 512;
        byte[] buffer = new byte[blocksize];

        // Read from file and write to DB. Note only the bytesRead
        // portion of the buffer is written; the final read will
        // usually fill the buffer only partially.
        int bytesRead = sourceFile.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
          dbStream.Write(buffer, 0, bytesRead);
          bytesRead = sourceFile.Read(buffer, 0, buffer.Length);
        }

        // Done reading, close all of our streams and commit the file
        dbStream.Close();
        sourceFile.Close();
        transaction.Commit();
      }
      catch (Exception)
      {
        if (transaction != null)
        {
          transaction.Rollback();
        }
        throw;
      }
      finally
      {
        sqlConnection.Close();
      }
    }

First we open a connection to the SQL Server, then create a FileInfo object to make it simple to extract the file name and extension. Next a record is inserted into the database that will act as a placeholder. It has the name of the file and the extension, but no file yet. I also went ahead and opened a FileStream to the source file on disk; we’ll need it later to upload the file.

Next you will see that I begin a transaction. Every time you work with a FILESTREAM it must always be in the context of a transaction. After that a SqlDataReader is created that returns two pieces of information. First, the PathName() method of the Document field is called; we’ll need it later when we call the API. Second, the GET_FILESTREAM_TRANSACTION_CONTEXT function returns the context for the current transaction. Note this is not the transaction’s name (in this example “mainTransaction”), but a special value that identifies it. These two values are then copied into local variables which will be used in calling the OpenSqlFilestream API.

Next you will see the call to the OpenSqlFilestream API, which returns a handle. This handle is then used to create a FileStream object. Using this newly created FileStream (here named dbStream) we can upload the file. Now the main work begins: after setting up a buffer, we simply read from the source file stream into the buffer, then write that same buffer to the database FileStream. The loop continues until there are no more bytes in the source.

At this point we are essentially done. We close the streams, commit the transaction, and in the finally block close the SQL database connection. The file should now be in the database. I do want to point out one thing. In the SQL to get the information for the row just uploaded, I use a subquery to get the max(id), essentially returning the last row inserted. This is fine for this simple example, where the database has just one user. In your production systems, however, where you are likely to have many users, you should use an alternate method to return the row you need, such as an OUTPUT clause or SCOPE_IDENTITY() on the INSERT. Otherwise two users could insert rows at the same time and both get back the same max(id). It will not happen often, but at some point it could, and it would be very hard to debug.
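For example, here is one hedged sketch of that idea: have the INSERT hand back the new row’s ID via an OUTPUT clause, then use that value in the later WHERE clause instead of the max(id) subquery. It reuses the parameterized command from the routine above and assumes the table’s ID column is an int identity, as the earlier lessons implied.

        // Sketch: let the INSERT return the new row's ID so concurrent
        // users can never collide on max(id).
        cmd.CommandText = "INSERT INTO FileStreamFTS.dbo.DocumentRepository "
          + "(DocumentExtension, DocumentName) "
          + "OUTPUT INSERTED.ID "
          + "VALUES (@ext, @name)";
        int newId = (int)cmd.ExecuteScalar();

        // ...and later: WHERE ID = @newId
        // instead of:   WHERE ID = (select max(id) ...)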

That handles the uploading of files to SQL Server via FILESTREAM. In the next installment we’ll look at how to retrieve the file we just uploaded.

Generating a PDF file from a Reporting Services Report Viewer Control

In yesterday’s post I demonstrated how to generate a SQL Server Reporting Services report without having a SQL Server Reporting Services server. The sample application used the Microsoft Report Viewer control to display the report. A common need in business is to generate documents, such as PDFs, that will later be archived. It turns out that if you are using a report viewer control, this is easy to do programmatically.

First, you need to add one using statement to the top of your class, in addition to the others that were added yesterday.

using System.IO;

Next, we only need a few lines of code to generate the PDF.

      Warning[] warnings;
      string[] streamids;
      string mimeType;
      string encoding;
      string extension;

      byte[] bytes = reportViewer1.LocalReport.Render(
        "PDF", null, out mimeType, out encoding, out extension,
        out streamids, out warnings);

      FileStream fs = new FileStream(@"D:\ReportOutput.pdf", FileMode.Create);
      fs.Write(bytes, 0, bytes.Length);
      fs.Close();

This code snippet came right from the MSDN Books Online, and is pretty simple. I could have selected another format by changing the first value passed into the Render method; for example, “EXCEL” would have rendered it as a Microsoft Excel document.

In the code samples I placed the above sample in its own button, but I could just as easily have placed it under one of the other demo buttons.

This ability brings up some interesting possibilities. For example, the report viewer control does not have to be visible to the user in order for this to work. Thus you could create an application that every night generated a series of reports and saved them as PDFs to some central location, such as a web server or document control server. All the user (assuming one was around) would have to see is a progress bar; the reports themselves never get displayed.
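Here is a minimal sketch of that nightly batch idea, assuming a hypothetical list of report paths, the usings from yesterday’s post plus System.IO, and that each report’s data sources get bound the same way they were for on-screen display (as shown in the next section below). The output file name borrows the extension the Render method hands back:

      // Hypothetical nightly batch: render each local report to PDF.
      string[] reportPaths = { @"D:\Reports\VendorList.rdlc",
                               @"D:\Reports\CustomerOrders.rdlc" };

      foreach (string reportPath in reportPaths)
      {
        reportViewer1.Reset();
        reportViewer1.ProcessingMode = ProcessingMode.Local;
        reportViewer1.LocalReport.ReportPath = reportPath;
        // Bind this report's data sources here, as shown in the next section.

        string mimeType, encoding, extension;
        string[] streamids;
        Warning[] warnings;
        byte[] bytes = reportViewer1.LocalReport.Render(
          "PDF", null, out mimeType, out encoding, out extension,
          out streamids, out warnings);

        // extension comes back as "pdf" for the PDF renderer
        File.WriteAllBytes(Path.ChangeExtension(reportPath, extension), bytes);
      }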

Using SQL Server Reporting Services in Client Mode

Recently I did a presentation at the March BSDA meeting, where I showed how to use SQL Server Reporting Services without a SQL Server, or more specifically without a SQL Server running Reporting Services. It got an enthusiastic response, so I thought I’d follow up here with some reminder documentation, as much for myself as for all of you wonderful readers.

Using Reporting Services in Client, or Local, mode is a four step process. First, you will need an XSD schema file to base the report on. Once you have the XSD you will be able to move to the second step, creating the report. Third, you will need to place a Report Viewer control on your Windows form, WPF form, or ASP.NET page. Finally you will need to write some code that generates an ADO.NET dataset, loads the report in the report viewer control, then binds it all together. Let’s look at this step by step.

Normally when you create a report you connect to a database, then base it off of some object like a query, view, or stored procedure. The report is then uploaded to a Reporting Services server, which takes care of hosting it, displaying it, and generating the data for it. With client mode you have no server available, so we have to instead create a surrogate. That’s where our XSD file comes in.

Right click in Solution Explorer, pick “Add New Item”, and from the list of goodies select “XML Schema”. Name it something appropriate, letting the default extension be XSD. For this example I will be getting customer order data, so I’ve given it the name CustomerOrders.xsd. Visual Studio will think about it, then add it to the project, and even helpfully open it for you. I don’t know about you, but hand typing XML schemas isn’t my idea of fun, so you should glance at it, go “that’s nice”, then close it.

Now right click on the XSD file in Solution Explorer, and pick “Open with….”. In the dialog that appears, select “Dataset Editor”. When you do, Visual Studio presents a big scary warning message letting you know that you could lose contents, and that this will forever be a dataset XSD file. We have nothing in the file, so we’re cool with this, just click OK.

You will now be presented with a big design surface. In the middle it tells you to drag items from the Server Explorer or right click. If you have a table, view, or stored procedure you are free to drag it in, but most of the time you’ll want to base this off of a SQL query. Right click on the surface and select Add, TableAdapter from the menu. The first screen asks you for the database connection; this is the only time you’ll actually need a connection. In this example I am using the good old Northwind database, so I pointed at Northwind and clicked Next.

[Screenshot: TableAdapter wizard, choosing the database connection]

Next we are asked how we are going to access the data. Since we have a SQL Statement just pick the default of “Use SQL statements” and click Next.

[Screenshot: TableAdapter wizard, choosing "Use SQL statements"]

Now take your SQL Statement and paste it in, and click Next.

[Screenshot: TableAdapter wizard, entering the SQL statement]

OK, click Finish to wrap up the addition of the XSD. By default the adapter has a generic name; we should give ours something more meaningful. Click in the top bar, then enter a new name. Since my example report is for customer order data, I’ll name it CustomerOrders. I then went to the bottom bar and renamed TableAdapter1 to CustomerOrderTableAdapter. You should now see something like:

[Screenshot: the renamed CustomerOrders dataset and TableAdapter]

Note that this will become your Data Source for the report. The data source will have the name of the XSD followed by the name of the source; in this case it will read CustomerOrders_CustomerOrders. OK, now it’s time to create the report.

Go back to Solution Explorer, right click and pick Add New Item. Navigate to the Reporting area and pick Report Wizard. Note the file extension should end in RDLC. If you have used Reporting Services before, you will know that reports typically end in RDL. However, client mode reports have just a slightly different syntax to them, thus the RDLC extension to differentiate the two. While you can modify an RDL to become an RDLC and vice versa, you have to do so by hacking the XML behind the report.

Note you can also choose just Report, but then you’ll have to setup everything manually. For this simple example though, we’ll just use the Report Wizard.

[Screenshot: Add New Item dialog with the Report Wizard selected]

Give your report a meaningful name and click Add. The report wizard then shows you a welcome screen if you’ve never run it before; just click Next.

Now we need to pick the data source. In this example, you want the CustomerOrders branch, so select it and click Next.

[Screenshot: Report Wizard, selecting the CustomerOrders data source]

The next screen asks if we want a Tabular or Matrix report. Select the one for you; in my example I picked Tabular and clicked Next. The next screen asks how we want to display the data. For my example, I opted to group by the customer’s company name and contact name, with the order data going into the details area. Fill it out as appropriate for your report and click Next.

[Screenshot: Report Wizard, choosing the grouping and details fields]

The next screen asks how we want things laid out. This affects the look and feel of the report. For my example I just took the default and clicked next, however you are free to play with this to experiment with the different looks and feels your reports might have.

Likewise the next screen is also a look and feel one, asking what colorings we want to apply. Pick one that makes you happy and click next. You can always change it later, many times I pick the Generic one (which adds no colors) then fix it up afterward.

The final screen is the wrap up. Give your report a meaningful name and click Finish.

[Screenshot: Report Wizard, final naming screen]

OK, you have a report, now you need a container. Open up the user interface you want to place the report viewer control on. In my example I went with a very simple Windows Forms application.

In my toolbox, I navigated to the Reporting section, where I only found one control, the MicrosoftReportViewer control. (Note I am using Visual Studio 2008 SP1, if you are on an earlier version your names may differ slightly). Grab it and drop it onto your design surface. I also added a Button control to the form to kick off the report display process.

[Screenshot: the form with the MicrosoftReportViewer control and Button]

Now it’s time for the last step, adding some code. In this example I’ve used a Windows Form. Opening it, the first thing we find in the form load area is:

      this.reportViewer1.RefreshReport();

(Note I left my report viewer control named reportViewer1.) Delete this line; we’ll refresh the report elsewhere.

Now we need to add some using statements to the top of our class.

//Add these to the standard list above
using System.Data.Sql;
using System.Data.SqlClient;
using Microsoft.Reporting.WinForms;

The first two will be used in accessing our Northwind database; you may need different libraries if you were going to another database. I’ve also included a reference to the Reporting.WinForms library so we can manipulate the report programmatically.

Now let’s go to the code for the button click event. First, we need to reset the report viewer in case we’d been using it to host another report.

      // Reset in case the report viewer was holding another report
      reportViewer1.Reset();

Next we need to set the report viewer to local mode. This tells it we’ll be supplying the report name from a local file, and binding the report to a local ADO.NET data source.

      // Set the processing mode for the ReportViewer to Local
      reportViewer1.ProcessingMode = ProcessingMode.Local;

Our third step is to create a local report variable and set its reference to the report viewer’s local report. This will make it easier to work with. Then we’ll set the location of the report we want to use.

      LocalReport localReport = reportViewer1.LocalReport;
      localReport.ReportPath = @"D:\Presentations\SQL Server\SSRS RDLC\SSRS_RDLC\Report2.rdlc";

Now we need to create an ADO.NET dataset and populate it. I implemented most of that functionality in a method called GetCustomerOrders, which I’ve appended at the bottom of these instructions. It’s very straightforward code.

      DataSet dataset = new DataSet("Northwind");

      // Get the sales order data
      GetCustomerOrders(ref dataset);

At this stage we have told it where our report is, and have created the dataset. Now we need to create a data source for the report itself, using the ReportDataSource object. For the name, we’ll use the name the report expects, CustomerOrders_CustomerOrders. Then for the value we give it the table from the dataset we created in code. It’s possible for a report to have multiple datasets; in the report we’d give each one its own name (based on the XSD), then here we’d bind each dataset table to the name used in the report. Once done we add the new ReportDataSource to the local report’s DataSources collection. Finally, we refresh the report viewer to make it generate the report.

      // Create a report data source for the sales order data
      ReportDataSource dsCustomers = new ReportDataSource();
      dsCustomers.Name = "CustomerOrders_CustomerOrders";
      dsCustomers.Value = dataset.Tables["CustomerOrders"];

      localReport.DataSources.Add(dsCustomers);

      // Refresh the report
      reportViewer1.RefreshReport();

You can download a copy of these instructions, along with the entire sample project including code and reports, at the Microsoft Code Gallery site http://code.msdn.microsoft.com/SqlServerRSClient . As promised, below is a copy of the GetCustomerOrders routine, for your reference.

    private void GetCustomerOrders(ref DataSet dsNorthwind)
    {
      string sqlCustomerOrders = "SELECT c.[CustomerID]"
        + " ,c.[CompanyName]"
        + " ,c.[ContactName]"
        + " ,c.[ContactTitle]"
        + " ,c.[Address]"
        + " ,c.[City]"
        + " ,c.[Region]"
        + " ,c.[PostalCode]"
        + " ,c.[Country]"
        + " ,c.[Phone]"
        + " ,c.[Fax]"
        + " ,o.[OrderID]"
        + " ,o.[CustomerID]"
        + " ,o.[EmployeeID]"
        + " ,o.[OrderDate]"
        + " ,o.[RequiredDate]"
        + " ,o.[ShippedDate]"
        + " ,o.[ShipVia]"
        + " ,o.[Freight]"
        + " ,o.[ShipName]"
        + " ,o.[ShipAddress]"
        + " ,o.[ShipCity]"
        + " ,o.[ShipRegion]"
        + " ,o.[ShipPostalCode]"
        + " ,o.[ShipCountry]"
        + "  FROM [Northwind].[dbo].[Customers] c"
        + "  join [Northwind].[dbo].[Orders] o on c.CustomerID = o.CustomerID";

      SqlConnection connection = new
        SqlConnection("Data Source=(local); " +
                      "Initial Catalog=Northwind; " +
                      "Integrated Security=SSPI");

      SqlCommand command =
          new SqlCommand(sqlCustomerOrders, connection);

      SqlDataAdapter customerOrdersAdapter = new
          SqlDataAdapter(command);

      customerOrdersAdapter.Fill(dsNorthwind, "CustomerOrders");
    }

Using a Local Reporting Services 2008 Report with an ADO.NET Data Set

SQL Server Reporting Services is an incredibly full featured reporting tool. An often asked question though is “How can I use Reporting Services without setting up a full SQL Server just to run Reporting Services?”

Fortunately the folks at Microsoft thought of this, and created a version of Reporting Services that runs in Local, or what Microsoft calls Client, mode. There are several ways to use client mode: you can bind the report right to the database if you wish, or to an object. However, if you have an application that’s been around for a bit, or maybe you’ve been around for a bit, chances are you have a lot of ADO.NET DataSets you’d love to use as the data for a report. Today we’ll look at how to bind those data sets to a SQL Server Reporting Services report.

Let’s say you have created an application to work with the AdventureWorks2008 database. Your user has now asked you for one last feature: they wish to display a list of Vendors in a report. They want to preview the report, print it, and be able to export it to PDF format.

Based on your experience you know that SQL Server Reporting Services would be a good choice. However your client does not have an instance of SQL Server Reporting Services running in their corporation. Thus the path is clear, use SQL Server Reporting Services in Client mode.

Preliminary Work

Prior to beginning our work, we’ll need to do two basic setup steps. First, if you don’t have it already, you will need to download the Adventure Works 2008 database from CodePlex. Install it following their instructions. Here is the current location for AdventureWorks2008:

http://www.codeplex.com/MSFTDBProdSamples/Release/ProjectReleases.aspx?ReleaseId=18407

Now open up Visual Studio 2008 and create a new C# WinForms application. Note that while the ReportViewer control we’ll be using works fine in WinForms, ASP.Net, or WPF, for simplicity we’ll use a WinForms application. Give your project a meaningful name (I used ReportingServicesLocal).

Create the DataSource

Normally Reporting Services knows what tables and columns are available because you have setup a connection to a database. In this scenario however, we are going to bind the report to an in memory ADO.NET DataTable.

At design time, then, Reporting Services does not know about the DataSet, so we must create a surrogate for it. Thus we’ll create a special type of XML schema definition to stand in for our not yet created DataSet.

To accomplish this, first we need to create the Data Source schema by following these steps:

1. Right click on the project in the Solution Explorer window.

2. Select Add, New Item.

3. Click on the Data leaf of the Visual C# Items branch in the Add New Item window.

4. Pick the DataSet item. Give it a meaningful name, such as VendorList.xsd.

[Screenshot: Add New Item window with the DataSet item selected]

Now we need to add a table to the DataSet.

1. In your toolbox, under DataSet find the Data Table tool and drag it onto the design surface.

2. Click on DataTable1 and rename it to Vendors.

The last step in the process is to add our columns to the Vendors DataTable we just created.

1. Right click on the name of your DataTable and pick Add, Column from the pop up menu.

2. For the first column type in VendorName. Note that if we needed to, we could now go to the properties window and change the DataType to something other than the default of System.String. For this example, everything we’ll use is a string so this won’t be needed.

3. Repeat step 2, adding these column names: AddressLine1, AddressLine2, City, StateProvinceName, PostalCode

When done it should look like:

[Screenshot: the Vendors DataTable with its columns]

Create the Report.

Now that we have a schema, we’re ready to create the report and add the components to it. To create the report, follow these basic steps.

1. Right click on the project and select Add, New Item.

2. In the pop up window, go to the Reporting leaf under the Visual C# branch.

3. Pick “Report”, and give the report a meaningful name such as VendorList.rdlc.

[Screenshot: Add New Item window, adding VendorList.rdlc]

Now that the report is created, we need to add the components and data columns to it.

1. With the blank report, drag a Table control onto the report body.

2. Open the Data Sources window by selecting Data, Show Data Sources from the Visual Studio menu.

3. You should see the VendorList DataSet, under it the Vendors DataTable, and under it the columns.

[Screenshot: the Data Sources window showing the Vendors DataTable]

4. Drag the VendorName to the first column of the table. Next, drag the City to the second column, and the StateProvinceName to the third.

5. Right click on the column header for StateProvinceName and pick “Insert Column to the Right”.

6. Drag the PostalCode to this newly inserted column. Your report should now look something like:

[Screenshot: the report layout with the four columns in place]

Adding the Report Viewer to the Windows Form

Now that the setup tasks are complete, it’s time to get the user interface ready. First we’ll do some basic setup of the form.

1. When you created the basic project Visual Studio created a default Windows Form, Form1.cs. Start by changing the Text property to read “Report Viewer”.

2. While we’re at it, let’s change the (Name) property to frmReportViewer.

Now add the Report Viewer control to the form.

1. In the toolbox, navigate to the Reporting area, and drag a MicrosoftReportViewer control onto the form. Resize it so it takes up the lower 90% or so of the form.

2. Change the name to rvwMain (or something meaningful).

Next add a button to the form. We’ll use it to trigger the report.

1. From the Common Controls area of the toolbox, drag a button control onto the form.

2. Change the (Name) property to btnDataSet.

3. Change the Text property to DataSet.

4. Double click on the button to open up its code behind.

We’ll be supplying the data to the ADO.NET dataset using SQL Server, so we need to go to the top of the form and add a reference to System.Data.SqlClient.

using System.Data.SqlClient;

Now let’s go into the btnDataSet_Click event and add some code to fill our dataset. This code snippet will bind to our local SQL Server, create a command to run a simple select statement against a view, and fill the dataset.

  /* Fill the Dataset ------------------------------------*/
  string qry = "select v.Name as VendorName "
                  + ", v.AddressLine1 "
                  + ", v.AddressLine2 "
                  + ", v.City "
                  + ", v.StateProvinceName "
                  + ", v.PostalCode "
               + "from Purchasing.vVendorWithAddresses v "
              + "order by v.Name ";

  string connectionstring = @"Server=(local);"
    + "Database=AdventureWorks2008;Trusted_Connection=True;";

  SqlConnection connection = new SqlConnection(connectionstring);
  SqlCommand cmd = new SqlCommand(qry, connection);

  SqlDataAdapter daVendor = new SqlDataAdapter();
  daVendor.SelectCommand = cmd;
  DataSet dsVendors = new DataSet();
  daVendor.Fill(dsVendors);

Note that in the first line of our select statement, we had to use v.Name as VendorName. The column names we return from our dataset must match the column names we entered in the Data Source schema back when we created the Vendors DataTable. Fortunately SQL’s easy to use “AS” syntax makes this simple.

Also, even though in this example we use SQL Server, the connection could be to any Data Source such as MySQL or Oracle. The important thing is we wind up with a DataSet to bind to.

In the same btnDataSet_Click method we now need to tell the report viewer control which report to run, then where to get its data. To tell the ReportViewer control to use a local report (as opposed to a report residing on a Reporting Services server) we need to set the ReportEmbeddedResource property.

  rvwMain.LocalReport.ReportEmbeddedResource = "ReportingServicesLocal.VendorList.rdlc";

Note the format of the string we pass in. It has the name of the project, then a dot, then the name of the report complete with its rdlc extension. You should also know this is case sensitive.

Now we need to tell the report where our data really is. To do this, we’ll tell it to bind the Vendor data table from the VendorList data source to the dataset we generated in the step above.

  rvwMain.LocalReport.DataSources.Add(
    new Microsoft.Reporting.WinForms.ReportDataSource(
    "VendorList_Vendors", dsVendors.Tables[0]));

We need to create a new ReportDataSource to pass into the Add method of the reports DataSources. In the constructor for the ReportDataSource we pass in two parameters. The first is the name of the DataTable we are binding to.

Note that its syntax is a bit odd: you have to address it with first the DataSet name, then use an underscore to append the name of the specific DataTable.

The second parameter is the specific table from the dataset to bind to. Since we only had 1 we can use the simple .Tables[0] syntax shown here. We could have also given it a specific name.

One final note: in this simple example we are only binding one data source. However, it’s possible for reports to have multiple data tables contained in them. To bind each one, we would simply create a data table for each in the XSD, add code to read each one in, then bind each table by repeating this line of code for each one, as sketched below.
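A minimal sketch of what that would look like, assuming a hypothetical second DataTable named Orders in the same VendorList XSD and a second dataset, dsOrders, filled in code:

  // Hypothetical: a report containing two data tables, each bound by name.
  rvwMain.LocalReport.DataSources.Add(
    new Microsoft.Reporting.WinForms.ReportDataSource(
      "VendorList_Vendors", dsVendors.Tables[0]));

  rvwMain.LocalReport.DataSources.Add(
    new Microsoft.Reporting.WinForms.ReportDataSource(
      "VendorList_Orders", dsOrders.Tables[0]));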

Finally we’re ready to display the report. Simply add this line to trigger the generation of the report:

      rvwMain.RefreshReport();

Test your application.

Everything is now set up, so you should be ready to run.

1. Launch the app from Visual Studio.

2. Once open, click on the DataSet button.

3. Your screen should look something like this:

[Screenshot: the Vendor List report rendered in the ReportViewer control]

And there you go, you too can now easily create nice-looking reports from your existing ADO.NET datasets.

Bug.Net meeting on SQL Server Compact Edition

A few months ago I did a presentation to the Birmingham .Net Users Group (BUG.NET). The subject was SQL Server Compact Edition; in it I discuss both the ‘traditional’ way of accessing an SSCE file as well as using LinqToSQL.

This is a recording of a live presentation, so the audio quality may not be as good as some of my previous presentations; it was my first experiment in recording a session live. Let me know how it works for you. I did make every attempt to clean up the audio where I could and to ensure the audience was understandable as well.

The WMV file is about 76 MB; you can download it directly or watch it streaming below.

SQL Server Compact Edition Connection Strings

In my recent presentation I talked about an important but subtle difference with connection strings when using SQL Server Compact Edition. It was so important I thought I’d make a special blog post out of it.

There are two methods for programmatically accessing data in SQL Server Compact Edition (SSCE). The first method is using the System.Data.SqlServerCe library. When you create an instance of the SqlCeEngine, you need to pass a connection string formatted like so:

DataSource="mydatabasename.sdf"; Password='mypassword'

This method is valid, by the way, for versions 3.1 and 3.5 of SSCE. The second method, available with Visual Studio 2008 and the 3.5 version of SSCE, is to use LINQ to SQL. When creating the DataContext object, you also need to pass a connection string, formatted like so:

Data Source=mydatabasename.sdf; Password='mypassword'

Note very carefully the two differences. First, the name of the sdf file lacks the double quote marks in the LINQ to SQL version. Second, note the Data Source phrase has a space between the words in the LINQ version, where the SqlCeEngine version lacks the space.

It’s a small distinction, but it’ll drive you nuts if you don’t catch it. I drove myself nuts for quite a while because I didn’t notice the extra space in Data Source when I began experimenting with LINQ to SQL! Hopefully my pain will save others some hair pulling.
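To see the two formats side by side in code, here’s a minimal sketch; the file name and password are placeholders, and the classes come from the System.Data.SqlServerCe and System.Data.Linq namespaces respectively:

  // SqlCeEngine: no space in "DataSource", file name wrapped in double quotes
  SqlCeEngine engine = new SqlCeEngine(
    "DataSource=\"mydatabasename.sdf\"; Password='mypassword'");

  // LINQ to SQL DataContext: a space in "Data Source", no quotes around the file name
  DataContext db = new DataContext(
    "Data Source=mydatabasename.sdf; Password='mypassword'");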

Getting Started with SQL Server Compact Edition 3.5

Tonight I will be presenting “Getting Started with SQL Server Compact Edition 3.5” at the Bug.Net users group. If you go to my Arcane Lessons page and scroll down just a little, you will find a section called “Getting Started with SQL Server 2005 Compact Edition”. The subjects referenced there are still valid under Visual Studio 2008 / SQL Server Compact Edition 3.5.

However, there are some new features that make it worth our while to give SSCE 3.5 a second look. For example, did you know you can access Compact Edition using LINQ to SQL? My updated presentation and code demo will show you how.
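To give you a quick taste of what the demo covers, a LINQ to SQL query against an SSCE file looks roughly like the sketch below. The Shows table, its mapped class, and the shows.sdf file are all hypothetical examples, not part of the demo download; you’ll need using statements for System.Data.Linq and System.Data.Linq.Mapping:

  // A hypothetical LINQ to SQL mapping for a table in the .sdf file
  [Table(Name = "Shows")]
  public class Show
  {
    [Column(IsPrimaryKey = true)] public int ShowNumber;
    [Column] public string ShowName;
  }

  // Query the compact database just like any other LINQ to SQL source.
  // Note the "Data Source" spacing discussed in my connection strings post.
  using (DataContext db = new DataContext("Data Source=shows.sdf"))
  {
    var shows = from s in db.GetTable<Show>()
                orderby s.ShowNumber
                select s;

    foreach (Show s in shows)
      Console.WriteLine("{0}: {1}", s.ShowNumber, s.ShowName);
  }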

Speaking of which, you can find my PowerPoint slides and code demos at the new Microsoft Code Gallery site I’ve set up:

http://code.msdn.microsoft.com/ssce

Over the coming days I will also be adding blog posts to talk about the new features in version 3.5, and how to access SSCE from LINQ to SQL.

Presenting Getting Started with SQL Server Compact Edition 3.5 at BUG.NET Meeting

Just wanted to let everyone know I’ll be doing a presentation this coming Tuesday night, August the 12th for the Birmingham .Net Users Group (BUG.NET). My topic, as you may have guessed from the title, will be using SQL Server Compact Edition.

While I will be using Visual Studio 2008, I will point out which pieces are 2005 compatible. I will also cover the use of both traditional coding techniques as well as how to use LinqToSQL to talk to the Compact Edition.

The meeting takes place at 6:30 pm at New Horizons Training Center in Homewood.

I also plan a new series of blog posts to start later this week on the subject, and will be creating a new Code Gallery site to hold my examples.

Also, don’t forget the regular BSDA meeting this coming Thursday night, the 14th, likewise starting at 6:30 pm at New Horizons. Shannon Brooks-Hamilton, a software usability expert, will be there to talk about user interface design. Lots of good thought material on how we can make better UIs for our users.

Doug Turnure MS Developer Evangelist To Speak at BSDA on Silverlight 2.0

Just thought I’d share some exciting news: Doug Turnure, the Microsoft Developer Evangelist for our Southeast area, will be in Birmingham on Thursday, March 13th. He will be at the Birmingham Software Developers Association meeting, telling us about Silverlight 2.0 and other cool stuff that was announced at Mix 08 this week. Afterward we’ll be having a geek dinner at Jim and Nicks on Oxmoor.

The BSDA meeting will take place at New Horizons in Homewood, beginning at 6:30 pm. I’d suggest getting there a bit early to get a good seat, then be sure to join us afterward for food and more geekery at Jim and Nicks.

Don’t Uninstall Visual Studio 2005 Yet!

One of the great benefits of Visual Studio 2008 is its ability to target multiple .Net Frameworks. This means, in theory, you could go ahead and begin using Visual Studio 2008 even though you still need to write apps that are 2005 / .Net 2.0 compliant. You might be tempted to go ahead and uninstall 2005, and that would be fine if you are only doing .Net development. But wait…

If you are still doing SQL Server BIDS (Business Intelligence Development Studio) work, then don’t uninstall Visual Studio 2005! Currently there’s no support in VS2008 for doing SQL Server 2005 BIDS development. If you uninstall VS2005 you won’t be able to do any more BIDS work. Trust me, I found out the hard way.

After uninstalling VS2005, I went to do a BIDS project and that’s when I got hit with the nasty surprise. The uninstall had also removed the Dev Environment that was shared with BIDS. I tried to rerun the install of my SQL Server Developer Edition, but for some reason it thought I wanted to upgrade. It kept giving me the message “You cannot upgrade a version of SQL Server from the GUI, you must use the command line.”

I finally had to reinstall VS2005, along with all its service packs. After that I was able to work on my BIDS projects again. So take it from me: if you are still doing SQL Server 2005 Business Intelligence projects, Visual Studio 2005 still has some life in it yet.

Alabama Code Camp

The sixth Alabama Code Camp is coming up February 23rd, 2008. Registration is now open, as is the call for speakers. Many, including myself, have submitted sessions; you can see them by going to the Alabama Code Camp site and clicking on the speakers link. The list of speakers is very impressive: no fewer than eight MVPs and at least two authors. I’m humbled to be amongst such distinguished company!

Here’s the synopsis for my two sessions, in case you are curious:

Introduction to SQL Server Integration Services

Whether you are creating a full-blown data warehouse, doing a data conversion from an old system to a new one, or integrating applications together, SQL Server Integration Services can help. Get an overview of this powerful tool built into SQL Server.

The Developer Experience

Learn about tips and tricks to enhance your experience as a developer both in the physical world and the virtual world. See hardware that can make your life easier, software additions for Windows and Visual Studio, even how just a few tweaks in the Visual Studio options can make your experience as a developer more pleasant and productive.

This is shaping up to be an impressive code camp, so don’t hesitate and get registered today!

Visual Studio 2008 Loadfest – Birmingham

On Thursday night, January 24th 2008, the BUG.NET group, in conjunction with Microsoft, is sponsoring a Visual Studio 2008 Loadfest. Bring your laptop or desktop and let’s load VS2008 on it! The first 75 people to register will get a free copy of VS2008. In addition there will be some fun and games: there will be several Xbox 360s and huge TVs to play games on. (Sorry, no giveaways of the Xboxes or TVs, but you still get to have fun with them.)

The event will begin at 6:30 pm across from the New Horizons training center, and will last about 2 hours. (See the link below for more details and directions.)

To attend and get your free copy of VS2008 you must register! Go to

https://www.clicktoattend.com/invitation.aspx?code=124185

No cost, but like I said you must register. See you there!

VS 2008 Technical Overview

Just thought I’d pass along an upcoming event: on Thursday, January 17th 2008, Doug Turnure will be presenting a technical overview of Visual Studio 2008 at the IPSA (Internet Professional Society of Alabama). Doug will be talking about new features and designers in VS2008, LINQ, and will even be giving away a couple of copies of VS2008. RSVP ahead of time at the IPSA site, especially if you want to purchase a lunch.

Enjoy!