Arcane Fun Fridays: Photographic Podcasts

Readers will know I’m a big fan of digital photography; it’s a hobby I like to relax with. For example, here’s a pic I took in downtown Birmingham (the one in Alabama, not England).

[My Picture!]

I’m also a big believer in podcasts; I listen to many to educate myself in the .NET world. It occurred to me there are probably some good photography podcasts as well, and sure enough I found some good ones. I thought I’d pass along some of the ones I’ve been listening to, for your listening delight.

The Candid Frame – The host interviews photographers to find out how they got started, their techniques, etc.

http://www.thecandidframe.com/

Martin Bailey Photography – Martin shares his techniques with us.

http://www.martinbaileyphotography.com/podcastmp3.php

Jeff Curto’s Camera Position – Jeff concentrates on the creative side of photography.

http://www.cameraposition.com/

Tips from the Top Floor – Each week Chris has a new technique for us. You should also check out the forums, they are very active.

http://www.tipsfromthetopfloor.com/

Photocast Network – This is a central site for many shows including the ones I’ve listed above. They also have a few other shows I want to check out but haven’t had the opportunity to as of yet.

You should definitely check out the “Focus Ring” episodes. These are shows where the hosts from several of the network’s shows get together on a single podcast to discuss various topics. By far these have been my favorite episodes so far.

http://www.photocastnetwork.com/

There you go, some podcasts to listen to while you’re out playing with that new camera of yours this weekend!

Getting Started with SQL Server 2005 Full Text Searching: Part 3 – Using SQL

OK, so you have this spiffy catalog, and you’ve populated it with full text indexes for your favorite tables. Now you’d like to actually use those indexes from within your SQL. There are four new commands you can use in SQL to get to your data. All are used as part of the where clause, and have similar syntax but different results.

The first is the one you’ll probably use the most: the contains command. You simply pass in the column name and what you want to search for.

select col1, col2 from myTable
where contains(fts_column, 'searchword')

For fts_column you can use the name of one of the columns that was indexed, or you can use * (an asterisk) to search all of the columns that were full text search indexed. Inside the single quotes you put the word or phrase you want to look for.

Contains searches for an exact match. It either finds it or it doesn’t, and it has to be an independent word. For example, if your text field contained "I love the Mythbusters every week." and you searched for 'Mythbuster', it would NOT return a match.
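That said, contains isn’t completely rigid. Here’s a quick sketch (my own example, not part of the original query above) showing how a prefix term lets contains match word beginnings:

```sql
-- A plain contains search demands the exact word, so this
-- misses a row containing "Mythbusters":
select col1, col2 from myTable
where contains(fts_column, 'Mythbuster')

-- A prefix term (note the embedded double quotes and asterisk)
-- matches "Mythbuster", "Mythbusters", and so on:
select col1, col2 from myTable
where contains(fts_column, '"Mythbuster*"')
```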

If you want your text searching to be a little more open minded, use the freetext command instead. The syntax is identical to contains, including the ability to use an asterisk.

select col1, col2 from myTable
where freetext(fts_column, 'searchword')

In this case, however, a search of our aforementioned text field for 'Mythbuster' would return a match, as freetext understands that Mythbuster and Mythbusters are essentially the same word.

In your application, you might consider offering a check box that says “exact match”. When the user checks it, use the contains keyword for the query; when the box is unchecked, use the freetext command.
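As a sketch of that idea, assuming the search text arrives in a @SearchWord parameter and the check box state in an @ExactMatch bit (both hypothetical names I’ve made up for illustration):

```sql
-- Hypothetical stored procedure body: @ExactMatch reflects the
-- "exact match" check box, @SearchWord is the user's search text.
if @ExactMatch = 1
  select col1, col2 from myTable
  where contains(fts_column, @SearchWord)   -- strict word match
else
  select col1, col2 from myTable
  where freetext(fts_column, @SearchWord)   -- looser, word-form aware
```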

It’s also possible to return a list of results that are sorted by a rank. The rank indicates the strength of the match to the search phrase passed in. To get a list of ranks, use either the containstable or freetexttable commands. Their syntax is like their cousins’, as is the method they use for searching (containstable is exact, freetexttable is more liberal). The only addition is that the first parameter must be the name of the table; then comes the column name and search condition.

Instead of rows from your table, what is returned is a table with two columns: KEY and RANK. The rank is a relative score from 0 to 1000 that indicates the strength of the match. A higher value means it’s a better match.

The key is the primary key from the table you’re searching. You can then use this key to pull back the data from the main table. Let’s do a simple example: you want an exact match for all employees who live in Alabama. Unfortunately the DBA who created the table had just come off a three-day drinking binge, and instead of separate street / city / state fields, just created a big text field called emp_address.

select ctt.rank, emp_id, emp_name from empTable
join containstable(empTable, emp_address, 'Alabama') ctt
on empTable.emp_id = ctt.[KEY]

This would return something like:

Rank emp_id emp_name
255 12345 Jamie Hyneman
128 45678 Adam Savage

And there you go, four ways you can have your SQL leverage the power of full text searching to return results.

Getting Started with SQL Server 2005 Full Text Searching: Part 2 – The Indexes

Yesterday I introduced you to full text searching, and covered the basics of creating catalogs to hold your full text indexes. A full text search index is a little different from a regular index. First, each table can have only one full text search index created for it. Next, the create syntax is slightly different. OK, in fact it’s a lot different. Let’s take a look:

create fulltext index on my_table_name_here
(column1, column2,…)
key index my_tables_unique_index_name
on my_catalog_name_here
with change_tracking {manual | auto | off}, no population


The first thing is also the most obvious: you need to supply the name of the table in the first line. Note we’re not supplying a name for the full text search index. Since there’s only one per table, SQL Server takes care of naming the full text search index for us.

Next we need to supply the name of the column or columns we want indexed. These can be any sort of text field. Just list them one after another, separated by commas.

The next item is also required, and sort of tricky. Each row in the table you are doing full text searching on must have a unique index. It makes sense when you think about it: for the text search to be efficient it must be able to quickly move to the row with the word you’re hunting for, and the way to do that is via the unique index.

So for this parameter you’ll need to supply a unique index name for “my_tables_unique_index_name”. Keep in mind this is not the name of a column from the table. Instead it is the name of a “normal” index (not a full text search index) that is unique for the table.

The “on” parameter is optional; you only need it if you set up multiple catalogs and don’t have a default. If you omit it, SQL Server will simply put the new index in the default catalog.

Next you will need to tell SQL Server how often to update the index. You do this through the with change_tracking parameter. OFF turns tracking off entirely; no updates will be done until you issue a rebuild via the alter syntax I’ll cover momentarily. You might want to use OFF when you have a table that gets updated very rarely.

AUTO, on the other hand, is for when you have a table that gets updated frequently. It will update the full text search index when the associated table is updated. The final option, MANUAL, will flag changes to the underlying table, but it won’t update the full text search index until you tell it to.

The final parameter, no population, only applies when you use OFF. It tells SQL Server not to populate the index when it’s created. If you omit it, or use AUTO or MANUAL, SQL Server will populate the full text search index when the index is created.
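Putting the pieces together, a create statement for an employee table might look something like this. The table, index, and catalog names here are all made up for illustration:

```sql
-- Hypothetical names throughout: empTable with a unique index
-- PK_empTable on emp_id, and a catalog named my_catalog.
create fulltext index on empTable
  (emp_address, emp_name)        -- the text columns to index
  key index PK_empTable          -- the table's unique "normal" index
  on my_catalog                  -- optional if a default catalog exists
  with change_tracking auto      -- keep the index updated automatically
```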

OK, so you’ve got this index created and need to change it, or perhaps you need to work with one that’s already in existence. For this there’s the alter command:

alter fulltext index on my_table_name_here
parameters here

There are quite a few parameters you can pass, so let’s look at them individually. Just know that when you see them below, they should go where you see “parameters here” above.

set change_tracking {off | auto | manual} – This works the same as with the create command, it lets you change the tracking mode.

disable – Disables the full text search index, it’s not used for searching nor is it updated. However the data is left intact, should you want to turn it back on.

enable – Enables the full text search index after a disable.

add ( column ) – Adds the passed in column to the full text search index.

drop ( column ) – Removes the passed in column from the full text search index.

start full population – This rebuilds the index from the ground up.

start incremental population – This updates the index with the changes made since the last time it was updated. Note you must have a timestamp column on your table for this to work.

start update population – Remember a moment ago when I talked about the change_tracking manual option? Well, this command is how you update an index with manual change tracking.
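A few concrete calls, again using the hypothetical empTable from earlier (emp_notes is likewise a made-up column name), show how the parameters slot in:

```sql
-- Switch to manual change tracking
alter fulltext index on empTable set change_tracking manual

-- Add another column to the index
alter fulltext index on empTable add (emp_notes)

-- Push the manually tracked changes into the index
alter fulltext index on empTable start update population
```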

And finally, you may decide one day you no longer need the full text search index. Since the readers of this blog are the smartest, most intelligent readers on the planet you’ve already figured out we’ll need to use a variant of the drop command:

drop fulltext index on my_table_name_here

And there you go, you now know how to create, change, or remove a full text search index. Now there’s one more piece, you need to know how to use them from within your SQL. But we’ll save that for tomorrow.

Getting Started with SQL Server 2005 Full Text Searching: Part 1 – The Catalog

One of the coolest features of SQL Server 2005 is the ease with which you can implement full text searching. True, it was available in previous versions but 2005 makes it very easy to implement and use.

Full Text Search is an offshoot of the Microsoft Index Server technology. It’s what you could call an “add-on”. By default it’s enabled for every database you create in 2005.

But just having it turned on is not enough; now you have to create a catalog to hold your full text data. The catalog is a separate file from your database, and holds all the key words it finds. The syntax to create a catalog is pretty simple:

create fulltext catalog my_catalog_name_here
in path 'c:\mysqldata\somesubdirectory'
as default

The ‘in path’ clause is optional; if you omit it your catalog is created in the same place as the data. For small databases this is fine; for large ones you might actually want to store the catalog on a separate hard disk in order to get a performance boost.

The ‘as default’ clause says this catalog will be the default one used for new full text search indexes, or for searching existing ones. Most times you’ll probably only need one catalog for a database, so you can add this and forget it.

Once you have a catalog created, you may need to tweak it. There’s not a lot of tweaking you can do: just three ways you can alter it, all implemented via the alter command.

alter fulltext catalog my_catalog_name_here rebuild

alter fulltext catalog my_catalog_name_here reorganize

alter fulltext catalog my_catalog_name_here as default

The first command, rebuild, does just what it says. Your old catalog goes to the great bit bucket in the sky (i.e. it’s deleted) and SQL Server recreates all of your full text search indexes. It should be obvious, but remember that during this time your full text search will not be available.

Reorganize is something like doing a disk defrag: it cleans up and reorganizes your full text search indexes. While it may not be as thorough as doing a complete rebuild, it does have the advantage of not taking the catalog offline while it does its work.

Finally ‘as default’ simply makes the catalog the default, in case you either forgot or were distracted by Mike Rowe doing something nauseating on “Dirty Jobs” (http://www.discovery.com/dirtyjobs).

OK, you now have a catalog. But the catalog is simply a space to hold your full text search indexes, and those we’ll create in the next post.

Collections in C#: NameValueCollection

In doing some reading I ran across a handy collection called the NameValueCollection. This collection, which resides in the System.Collections.Specialized namespace, allows you to use either a string or an integer index for the key. Further, it allows you to store more than one string value in a key.

Let’s start the code example by creating a simple Console application. I added using references to System.Collections and System.Collections.Specialized namespaces at the top. As a final bit of housekeeping, make sure to add a Console.ReadLine() as the last line of our code, so the console will wait on us to hit the enter key after we read the results. (If you don’t, the program will run so fast you won’t be able to appreciate your fine work.)

Now I’m going to load some data into a new collection called myCollection. For the data, I’ll use a website owner and the website or sites they own.

      System.Collections.Specialized.NameValueCollection myCollection
        = new System.Collections.Specialized.NameValueCollection();

      myCollection.Add("Arcane", "http://arcanecode.com");
      myCollection.Add("PWOP", "http://dotnetrocks.com");
      myCollection.Add("PWOP", "http://dnrtv.com");
      myCollection.Add("PWOP", "http://www.hanselminutes.com");
      myCollection.Add("TWIT", "http://www.twit.tv");
      myCollection.Add("TWIT", "http://www.twit.tv/SN");

Next, I’d like to get some data back out. I mentioned you could cycle through the collection using an integer index, so let’s see how that’s done:

      Console.WriteLine("Key / Value Pairs by Integer Index");

      for (int i = 0; i < myCollection.Count; i++)
      {
        Console.WriteLine(i.ToString() + " "
          + myCollection.GetKey(i) + ": "
          + myCollection.Get(i));
      }

 

[Picture of Key/Value pairs by Integer Index]

 

In the above output you can see how I use the GetKey and Get methods to retrieve the key name and value for that key using the loop’s index. Note that when multiple values are associated with a single key, they are returned as a list of comma separated values.

You can also use foreach logic to cycle through the collection. Here I am using the AllKeys property of our collection to get the list of keys. I can then print the key, and also use the key as the indexer into my collection as you can see below.

      Console.WriteLine();
      Console.WriteLine("Keys / Value Pairs via AllKeys Collection");

      foreach (string myKey in myCollection.AllKeys)
      {
        Console.WriteLine(myKey + ": " + myCollection[myKey]);
      }

 

[Picture of Key/Value pairs via AllKeys Collection]

 

Now I… what? Yes, you in the back row, what was your question? Ah, you say lists of comma separated values are OK, but you want to be able to access individual values? Fortunately some nested looping and the GetValues method will satisfy you demanding types.

 

      Console.WriteLine();
      Console.WriteLine("Keys / Individual Values");

      foreach (string myKey in myCollection.AllKeys)
      {
        foreach (string myValue in myCollection.GetValues(myKey))
        {
          Console.WriteLine(myKey + ": " + myValue);
        }
      }

 

[Picture of Keys/Individual Values]

 

This also works great if your data has commas within it. Let’s go back to the top of the program and add two lines to the collection.

      myCollection.Add("CommaTest", "Here is a , in a string");
      myCollection.Add("CommaTest", "Here is another , in a string");

 
Now run the application again, and let’s look at the results.
 
[Picture of Comma Test]

As you can see in the last area “Keys / Individual Values” the GetValues method correctly determined that the commas I had embedded were part of the data and not a delimiter between values.

Whenever you need a good string collection that has the ability to tie multiple values to a single key, the NameValueCollection would be a good class to take a look at.

 

Arcane Fun… Saturdays?

Sorry for missing my usual Friday post, I was having ISP issues (which are still unresolved, but I’ve done a workaround for now).

This weekend I’ll be participating in something very geeky, it’s called Field Day. Each year on the fourth full weekend in June amateur radio operators (you may have heard them called “Hams”) get together to practice their emergency response preparedness, fellowship and have a good time.

The idea behind Field Day is for the hams in a community to gather at a single location, set up radios and equipment, run off of emergency power, and generally practice what we would do in case of an emergency. At the same time my local clubs are gathered, other clubs will be gathering in their communities as well. We’ll then get on the air and communicate with each other, exchanging brief messages similar to what we would do in the event of a real emergency.

This preparedness has already paid off, several times. In the days after 9/11 amateur radio was the chief form of communication. More recently, the hurricanes that devastated Louisiana, Mississippi, and parts of Alabama provided a wide scale communications effort. For months it was amateur radio that provided the communications links between emergency responders as well as relief agencies like the Red Cross and United Way.

In this day and age you might be thinking “Is amateur radio still around? I thought cell phones and the internet got rid of it?” Not so. Most amateur radio equipment can be set up with a minimum of requirements. A decent 12 volt battery, the radio, and some wire in a tree and the radio operator is in business. The internet doesn’t work so well without power, and cell phones don’t seem to work too well after a hurricane knocks the cell towers onto the ground.

Community education is the other component of Field Day. Often we gather in public places like parks so that we can be seen by folks driving or walking by. This year my clubs, the Shelby County Amateur Radio Club and the Birmingham Amateur Radio Club, are joining forces and will be at Oak Mountain State Park near the fishing lake. I’m sure hams will be gathering in your community too.

If you happen to be out and about and see a bunch of guys bent over radios, wander up and say hello. They’ll be glad to show you around, maybe even let you get on the air. There’s nothing quite like the thrill of picking up a microphone and realizing the guy you are talking to is on the other side of the planet, then realizing the only thing making it happen is the little box in front of you and a piece of wire strung up in a tree! Who needs the internet anyway?

VirtualBox – USB Support

So far I haven’t had a lot of success getting USB devices working under VirtualBox with XP as the guest. Perhaps it has something to do with Vista being my host?

I’ve been testing using some USB keys, and while VirtualBox seems to know they are present, the message never seems to make it into my guest OS of XP. I intend to keep working with it, USB support would be one of the most compelling things to make me start using VirtualBox as my primary virtualization platform. However, as of right now USB support doesn’t seem quite up to prime time.

VirtualBox – Communicating to the Host OS via Networking

This evening I installed my old copy of XP (I’m now running Vista) into VirtualBox. The install was easy and straightforward, so much so that it’s not even worth doing step by step instructions. A simple wizard set up my base machine, and XP installed just like it would on a “real” machine.

Using the default of NAT (Network Address Translation) for networking seemed OK for getting to the internet, but I spent most of my evening trying to make the guest OS, in this case XP, talk to the hard disks of my host OS, Vista.

To save you a lot of grief and manual digging, here’s what I finally had to do. First, I set up a single folder on my host OS and right-clicked on it to bring up properties. I then picked the Sharing tab and told the OS to share it with others on the network. (Yes, I’m firewalled, both in hardware at the router and within the OS as well. I haven’t been listening to all those Security Now episodes for nothing!)

The folder I created was named “Z”, for no better reason than it’d be easy to find. I also named the share Z, for consistency. Once I had it shared, I went back into the guest OS of XP, which was running inside VirtualBox. I opened an explorer (aka My Computer) window, and picked Tools, Map Network Drive. OK, here comes the tricky part:

After picking the drive letter, for the Folder I had to use the IP address of the host OS, followed by the name of the share, as in \\192.168.1.1\Z . I could not browse my local network, and I couldn’t enter the machine name; only the combo of IP address followed by share name would work.

Digging into the documentation, I found that running VirtualBox’s network emulation in NAT mode causes this, and the docs gave the solution, but I wish they had mentioned it a bit more prominently in the software, since a lot of common techniques simply were not working.

A few notes: yes, I could have chosen to share my entire drive. However, being security conscious I prefer to set up a single folder and share it. That allows me a comfortable level of isolation, and allows me to quickly and easily scan the contents with antivirus / spyware applications before using the files. And, if anyone should “break in”, my exposure via shared networking will be limited to that single folder, which will be empty 99.9% of the time.

To find your machine’s IP, in the host box (outside VirtualBox) open a command window, type in IPCONFIG, and hit enter. In the list of adapters, find your network card and grab its IP address.

Also, the share name of “Z” was just because I was testing; for the longer term I’ll probably set up something more meaningful like “VirtualBox Shared Folder”.

Be aware that the moment you share a folder between your VirtualBox (or any virtual machine) and the host OS, you have a security vulnerability. That may be fine, and it is one of the better solutions for transferring data and application installs between the host and guest OS.

Many people, though, use virtual machines to test new software (especially “free” applications) for viruses / spyware / malware. If that’s your goal, make sure to disconnect your mapped network drive before testing these potentially harmful applications.

Hopefully I’ve saved you a bit of effort in establishing a connection between your guest and host OS’s hard disks when running VirtualBox.

Virtual Box

I’ve started playing with a new virtualization alternative, Virtual Box (http://www.virtualbox.org/ ). It’s an open source alternative to other virtual machine programs like VMWare or Microsoft’s Virtual PC. It runs on both Windows and several flavors of Linux, and has guest additions for Windows and Linux. It also has USB support, a feature lacking in Microsoft’s product.

I found the user interface very intuitive. Simply clicking New brings up a wizard and walks you through the steps to setup a new machine.

You can choose to use a physical CD/DVD or mount one off of an ISO file, access hard disk info, audio, etc all by clicking on the blue links you see above.

As an initial test I downloaded Damn Small Linux (http://www.damnsmalllinux.org/ ) as an ISO file, and ran it “Live” from the mounted image. I only gave it a quick run, but so far it seems to work OK. I plan further testing with XP as a test image, but would be interested in seeing comments with your experiences.

Arcane Reasons for Data Warehousing

I may have mentioned that recently I have been doing a lot of work in the Data Warehousing arena. Today I met with some IT folks from another branch of the company who are considering a reporting strategy for their area. One of the people I was meeting with asked me “With so much data available, how do you decide what data to put in the warehouse first, versus what data do you leave in the application, either permanently or until a later point?”

Great question, and I thought that you too might be interested in the answer.

Interapplication Reports. Historically, trying to combine data from multiple applications has been painful, to put it nicely. This turns out to be one of the most compelling reasons for data warehousing: to house data from multiple applications and allow users to easily combine that data into singular reports.

Phasing Out Historical or Ad-Hoc Systems. Accounting systems seem to have an existence all their own. For various reasons they live well beyond their normal lifespan. We have a system at work, written in an old DOS based reporting tool, that dates back to the late 1980s. Over the years it’s been used to do reporting from other systems. As it turns out it has some issues with Vista, and will need replacing. Rather than getting yet another system, we plan to replace its reports with ones from our data warehouse.

Friendlier Reporting. Often when I see databases, the field names are quite cryptic. Names like fklnam (foreign name last name) and accsbcd (account sub code) litter databases. It’s difficult enough for IT Professionals to decipher the field name mayhem, but asking users to do so just to create a few ad-hoc reports can be asking far too much. Not to mention the sometimes bizarre seeming relationships between tables.

Moving to a data warehouse allows you to give much saner, user friendly names to your data. In addition you can flatten out some of the tables, simplifying the relationship structures significantly.

Production Server Load Reduction. Production systems are usually optimized for dealing with a single record at a time. As a result, searching through and retrieving large quantities of data can be resource intensive on the production system. Shifting reporting to a warehouse means a reduced load for the production system. In addition you eliminate the chance that malformed SQL from some ad-hoc query can cripple your production system.

Ease of Offline Maintenance for Production Systems. Finally, having a warehouse makes it easier to take production systems offline for maintenance. If users know they can still get to their data via the warehouse, they will be less concerned about their production system going offline for work, which in turn makes it easier to schedule such work. If you have a system that requires frequent maintenance, your users will be less likely to give you grief if their data is available elsewhere.

Those are my primary ways in which we decide which data is targeted for inclusion into the warehouse. If you need to combine data from multiple applications, have older systems that need replacement, have cryptic field names or complex table relationships, need to reduce the load on your production server, or have systems that need frequent maintenance then consider those systems first for inclusion into a data warehouse.

I’d be curious to hear your comments on your strategies for determining inclusion into your own data warehouse.

Arcane Fun Fridays: Run As Radio

“Hi, my name is Arcane, and I’m a podcast addict. “ I tell the small room full of people.

“Hi.” A crowd of voices echoes back.

“Welcome to Podcast Addicts.” says the group leader. “Tell us about yourself.”

“It’s these podcasts. I just can’t seem to get enough of them. At first it was just listening on the way home from work. Then I started on the way to the office as well. Before long I was listening all the time, grocery shopping, cutting the grass, I’ve even quit watching TV, preferring to improve myself listening to these podcasts instead of frying my brain with yet another mindless sitcom.”

“So, what’s brought you here tonight?” the group leader prompts me.

“Well, it’s those jerks over at Pwop Productions ( http://www.pwop.com/ ). You know, the same guys who do Dot Net Rocks ( http://www.dotnetrocks.com/ ) and Hanselminutes ( http://www.hanselminutes.com/ ), just to name a few?”

The group leader nods, glancing between me and the crowd, and looking just a bit worried. But since he says nothing, I continue. “Well, they’ve gone and done it again. As if all those great shows weren’t enough, they’ve gone off and created yet another one, Run As Radio ( http://www.runasradio.com/ ).

“Each week Richard Campbell and Greg Hughes talk about things for system admins, hardware geeks, or savvy developers. It’s gotten to where I’ve got podcasts going all the time. My wife says I don’t listen to her anymore, or I think that’s what she’s saying, it’s sort of hard to hear her over the podcasts. She may be saying something about the space aliens trying to eat my meatloaf, but…”

I pause, realizing the crowd is no longer listening to me. In a flurry of headphones and USB connectors they are attacking the computers on the far side of the room, the Run As Radio site flickering as they download past and current episodes to their various media devices. Even the group leader is there, frantically trying to get his Zune to connect to someone.

I smile, and slip quietly out the back door. My work is done.

 

Arcane Tools: Cropper

Well, the uber cool Scott Hanselman has done it again, found another gem. OK, he’s been using it for a while, but in watching his GrokTalk ( see my post on Tuesday ) I learned about Cropper.

Cropper is a screen capture tool. As you can see below, it puts a translucent window on your screen. You can move and resize this window with the mouse or the keyboard.

[Pic of Cropper in action]

The arrow keys will move the cropper window in 1 pixel increments for fine tuning, or for quick moves combine the arrows with the CTRL key to make 10 pixel jumps. You can also resize, use ALT plus the arrows for 1 pixel resizes, or CTRL+ALT+arrow for 10 pixel resizing jumps.

You have the option to save in a variety of formats, including BMP, PNG, and JPG, and can even select a level of JPG compression. You can also save to the clipboard if you so desire.

To capture an image, simply double click on the translucent cropper window, or press ENTER. When you do, a file is written to your Documents folder in a subfolder called Cropper Captures (although this is user configurable). I like this, as it lets me quickly grab one screen shot after another without having to put a lot of thought into it.

The coolest thing about Cropper, though, is that it’s entirely written in C# and open source, so you can see all the code. It comes courtesy of Brian Scott; you can see his blog and download Cropper for yourself at http://blogs.geekdojo.net/brian/articles/Cropper.aspx .

The only negative I’ve found is the name. Apparently cropping is also a popular term in the scrapbooking world, so when I started talking about Cropper my wife ( http://southerntinkerbelle.com ) got all excited and thought I was getting into scrapbooking! I hated to disappoint her, but on the bright side the sofa really wasn’t all that uncomfortable.

Mr. Wizard

TV’s Mr. Wizard, a.k.a. Don Herbert passed away June 12th, 2007 at the tender young age of 89. For decades Mr. Wizard made science interesting and fun. I got into Mr. Wizard in the 80’s when he was on Nickelodeon. And yes even though the show was aimed at kids, I was already in my 20’s. That shows you how good he was, he made a show for kids interesting to young adult geeks.

One of his big themes was making science accessible to everyone, showing how anyone could do science at home. I guess in some way he was a real forerunner of the Web 2.0 movement, showing how anyone can do fun science with what they have on hand, without having to rely on some big company. Naturally Mr. Wizard’s on the web; you can view his official site and read more at http://www.mrwizardstudios.com/.

My condolences to the Herbert family, the science world is a little dimmer now without Mr. Wizard.

Arcane Surfaces

By now most folks have heard of the new Microsoft Surface ( http://www.microsoft.com/surface/ ). In case you’ve been busy organizing your Star Wars figures, Microsoft Surface is a technology that lets you interact with the top of a coffee table sized device, as if it were a touch screen. You can draw on it, write, move the windows around and resize them.

It also has the ability to interact with wireless devices. In one demo, a camera is placed on the Surface and the pictures appear to spill out onto the top of the table. Multiple people can “grab” these, spin them, resize them, and move them around to their delight.

What’s interesting though is the level at which people seem to want to take this. I’ve seen numerous blogs and websites exclaiming how they can finally have their “Tron Desk”.

[The Tron Desk]

In case you don’t recall, Tron (http://www.imdb.com/title/tt0084827/ ) was a 1982 movie in which Jeff Bridges gets sucked into a computer and has to play games to escape, and TRON is the program that can stop the bad guy. In the “real world” the bad guy (Ed Dillinger, played by David Warner (above)) has a really cool desk.

The monitor is built in, as is the keyboard. Dillinger types on the flat surface of his desk as lighted keys appear under his fingers. Of course in 1982 it was a mock-up, but today we do have the technology to do that kind of thing. I recall some early personal computers of that era having flat keyboards, where each key was just the slightest bump. Today, my PDA has a touch screen; I can use my finger to key in the password and manipulate the start menu.

So if this is so cool, and do-able, why isn’t everyone using flat desks like the Tron one today? Well for the same reason that I think the Surface is going to see limited use: Tactile Feedback.

Humans, at least for the time being, still like multisensory input. We like the feeling of the keys bouncing against our fingers, or the satisfying click of the mouse as we press it. It’s these tiny subtleties that we don’t think about that make our devices usable, and largely unchanged since their inception.

Sure, Surface will have its place. I can see it as a big conference room table, or on a conference room wall. Maybe in restaurants, to place orders or ask for drink refills.

But using the Surface as my desk? No thanks. Not unless it has a USB port for my keyboard!

— END OF LINE —

Thanks for coming!

I just wanted to thank everyone who took the effort to come to the presentation I did tonight on SQL Server Compact Edition at the Birmingham Dot Net Users Group ( http://www.bugdotnet.com ). It was a small crowd but very engaged, all in all a very enjoyable evening for everyone.

As promised, here is a link to the PowerPoint presentation (in PDF format) I used during the presentation:

SSCE presentation for BUG.Net group

The complete C# and VB.Net code samples were posted April 13th, 2007:

http://arcanecode.wordpress.com/2007/04/13/sql-server-compact-edition-with-c-and-vbnet/

And finally, the series of posts I mentioned on system Views started with this post on April 16th, 2007:

http://arcanecode.wordpress.com/2007/04/16/system-views-in-sql-server-compact-edition-tables/

If you want to see all of my SSCE posts, simply click the SQL Server Compact Edition tag over in the categories area, or use this link:

http://arcanecode.wordpress.com/tag/sql-server-compact-edition/
Thanks again to everyone. I had a great time and hope you came away with a better understanding of SQL Server Compact Edition.
