All posts by arcanecode

Fixing the Failed To Acquire Token Error When Logging Into Azure from PowerShell

When I haven’t used Azure from PowerShell in some time, I’ll get an error “failed to acquire token” when using cmdlets such as Get-AzSubscription or Get-AzContext.

To add to the confusion, sometimes Connect-AzAccount appears to work and log me in, but I’ll still get the error.

Under the hood, the Azure cmdlets cache connection tokens. Fixing this issue is pretty simple, although the solution is neither intuitive nor easy to find. All you have to do is issue the following cmdlet:

Clear-AzContext

This will remove the old cached tokens from your system.

After issuing the clear command, just end your current PowerShell session, then login again using Connect-AzAccount and you should be good to go.
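Putting the steps together, the full recovery sequence looks something like this (a sketch using cmdlets from the Az module; the -Force switch simply skips the confirmation prompt):

```powershell
# Remove the cached connection tokens
Clear-AzContext -Force

# Close this PowerShell session and start a fresh one, then log back in
Connect-AzAccount

# Verify you are connected again
Get-AzContext
```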

SQL Server 2016 Reporting Services Cookbook On Sale Until Jan 13, 2021

My last book, SQL Server 2016 Reporting Services Cookbook, is on sale until January 13th, 2021.

You can get the E-Book version for just $5 (US) when ordering direct from my publisher.

Now you may be thinking, “2016? That’s oooooold!” But in fact little has changed in Reporting Services since the 2016 release. All of the topics and techniques are still just as valid in SQL Server 2019.

To order your copy of the book, just jump on over to SQL Server 2016 Reporting Services Cookbook | Packt.

Hurry though, the offer is only good until January 13, 2021.

Iterate Over A Hashtable in PowerShell

Iterating over an array in PowerShell using a foreach loop is pretty simple. You might think you can do the same thing with a hashtable using the same syntax, but if you do, you’ll get nothing useful back; the loop treats the entire hashtable as a single object rather than visiting each entry.

It is possible to loop over a hash table though, using one of two methods.

First, let’s create a simple hashtable.

$hash = @{
  'About Arcane Code' = ''
  'ArcaneCode Blog' = ''
  'ArcaneCode RedGate Articles' = ''
  'ArcaneCode Github Repository' = ''
}

In the first method, the one that I prefer, you can use the GetEnumerator method of the hash table object.

foreach ($h in $hash.GetEnumerator())
{
  Write-Host "$($h.Name) : $($h.Value)"
}

Within the loop, you can use the Name property to get the key part of the hash, and the Value property to retrieve the value. Here is the output:

ArcaneCode Blog : 
ArcaneCode Github Repository : 
About Arcane Code : 
ArcaneCode RedGate Articles :

In the second method, instead of iterating over the hash table itself, we loop over the Keys of the hash table.

foreach ($h in $hash.Keys)
{
  Write-Host "$h : $($hash.Item($h))"
}

For each key, we retrieve its value from the hash table, using the key to indicate the item we want. We could have shortened this slightly by skipping the Item method and just referencing the value by the key, using this syntax:

foreach ($h in $hash.Keys)
{
  Write-Host "$h : $($hash.$h)"
}

Both of these methods produce the same output as our original version.

ArcaneCode Blog : 
ArcaneCode Github Repository : 
About Arcane Code : 
ArcaneCode RedGate Articles :

There you go, two simple ways in which you can iterate over a hash table in PowerShell. As I indicated, I prefer GetEnumerator because I have access to both the key and the value in a single variable within my loop. But feel free to use the method that works best for your situation.

Two New PowerShell Courses for Developers on Pluralsight

I’m proud to announce I have not one but two, yes TWO new PowerShell courses available on Pluralsight, targeted for developers. These two courses are designed to work in harmony, to show developers how they can leverage PowerShell to automate and assist in their daily work.

The first is “PowerShell 7 Quick Start for Developers on Linux, macOS, and Windows”. It leverages your knowledge as a developer to bring you up to speed on the PowerShell language. It doesn’t waste time explaining concepts you already know, like variables and loops. It simply shows you how to do things in PowerShell.

The second course is “Everyday PowerShell for Developers on Linux, macOS, and Windows”. It begins by showing how to combine PowerShell with Docker to create a PHP container and test a simple website. It then proceeds to create an Azure SQL database and load data into it.

In the second part of the course you are taught how to code your own classes and modules by creating a simple module that leverages a USPS website API to look up a zip code and return the city and state it belongs to.

In the final part of the course you are shown how to use the new DataFabricator module to generate realistic looking, but fake data for use in testing your applications.

While originally developed with PowerShell 7.0.3, all code in the course was tested and recorded using PowerShell 7.1.0.

Additionally, the code was tested on a variety of platforms, including Ubuntu 20.04 and 20.10, Windows 10 (both the H1 and H2 releases), macOS Catalina, and even macOS Big Sur.

If you don’t have a Pluralsight subscription, just go to the Pluralsight page and click the Try for Free link to get a free 10-day trial.

VSCode User Snippets for PowerShell and MarkDown

I’ve been working a lot on a new course for Pluralsight, Everyday PowerShell 7 for Developers. I’ve been cranking out a lot of code as a result, PowerShell, obviously, as well as Markdown for documentation.

I’m finding the use of VSCode’s User Snippets to be extremely useful for developing this course. Snippets have allowed me to store my most often used code as templates, letting me quickly generate new sections of code.

In PowerShell I’m finding these especially useful for writing Pester tests. Most tests fall into a few simple patterns; using a snippet, I can quickly create the bulk of my test, leaving me to fill in the few remaining pieces.

If you know Markdown, you know some of the syntax can be a bit arcane. Using snippets makes it easy to insert commonly used ones, such as being able to type in a snippet name of mdlink and get the correct syntax to appear for a hyperlink.

It also helps with consistency. A small example: for italics in Markdown, you can use a single underscore or a single asterisk; for bold, two underscores or two asterisks.

I decided (for no particular reason) to use underscores for italics and asterisks for bold. If it’s been a while since I wrote Markdown though, I may not always remember which I chose. So I set up mditalic and mdbold to insert the correct formatting for me.
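To give a concrete picture, here is a sketch of what such snippet definitions might look like in a VSCode markdown.json user snippets file. The names and bodies below are illustrations of the naming scheme, not the exact contents of my repository:

```json
{
  "Markdown Link": {
    "prefix": "mdlink",
    "body": ["[${1:link text}](${2:url})"],
    "description": "Insert a Markdown hyperlink"
  },
  "Markdown Italic": {
    "prefix": "mditalic",
    "body": ["_${1:text}_"],
    "description": "Italics, using underscores"
  },
  "Markdown Bold": {
    "prefix": "mdbold",
    "body": ["**${1:text}**"],
    "description": "Bold, using asterisks"
  }
}
```

Typing the prefix (such as mdlink) and accepting the suggestion expands the body, with tab stops for the pieces you fill in.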

I’ve placed my snippets in a repository on my GitHub site.

If you aren’t familiar with how to use User Snippets, I have a quick overview in a Markdown file in the repository.

Speaking of Markdown, there is one issue with VSCode: by default, IntelliSense is not enabled for Markdown. While snippets can be used without IntelliSense, it’s much easier to use them with IntelliSense turned on.

In the repository I have a Markdown file with information on how to turn IntelliSense on in VSCode for Markdown files.
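For reference, the setting involved looks something like this in VSCode’s settings.json (a sketch; see the file in the repository for the full details):

```json
"[markdown]": {
  "editor.quickSuggestions": {
    "other": true,
    "comments": false,
    "strings": false
  }
}
```

This scopes quick suggestions to Markdown files only, so the behavior of other languages is left alone.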

Be aware there are also user snippets available through the Extensions Marketplace in VSCode. Just go into Extensions and enter Snippet into the search bar. I just didn’t happen to find any that quite suited my needs, hence I created my own.

You’re welcome to copy and use the user snippets, some or all of them, as you need. Do be aware some of them contain my name and websites, such as the author info snippet I use at the bottom of all my Markdown files, or the header snippet for the top of my PowerShell files.

I hope you find user snippets as helpful as I do.

What Is SQL Server Blocking?

This is a guest post from my friend Kevin Kline. Kevin serves as Principal Program Manager at SentryOne. He is a founder and former president of PASS and the author of popular IT books like SQL in a Nutshell. Kevin is a renowned database expert, software industry veteran, Microsoft SQL Server MVP and long-time blogger at SentryOne. As a noted leader in the SQL Server community, he blogs about Microsoft Data Platform features and best practices, SQL Server trends as well as professional development for data professionals.

Understanding the way that SQL Server blocking works is key to ensuring that your database is able to run smoothly and optimally, yet this can be a tricky subject to unpick if you are new to it.

You can get an in-depth explanation of SQL Server blocking by reading this guide from SentryOne, but for a briefer introduction to what is involved, read on.


Image Source: Pixabay

Basics of blocking

In certain SQL Server systems, two or more processes may need to make use of the hardware and software resources available to them simultaneously. The nature of the process will determine whether or not it is granted a ‘lock’, which effectively allows it to gain priority over other processes and leverage the resources ahead of them.

Any processes which are competing with locking processes are referred to as blocked, since they are effectively made to wait in a queue until the prioritized, locking process has completed and the resources are freed up.
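You can see this for yourself with two query windows in SQL Server Management Studio. A minimal sketch, assuming a hypothetical dbo.Accounts table:

```sql
-- Session 1: start a transaction and update a row, taking an exclusive lock
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
-- No COMMIT yet, so the exclusive lock is still held

-- Session 2 (a separate query window): this statement is now blocked,
-- waiting in the queue until Session 1 releases its lock
UPDATE dbo.Accounts SET Balance = Balance + 50 WHERE AccountId = 1;

-- Back in Session 1: committing releases the lock and unblocks Session 2
COMMIT TRANSACTION;
```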

SQL Server blocking is vital to ensuring that the integrity of the data remains uncompromised and that the entire ecosystem is able to operate as usual.

Lock types

To appreciate why and when blocking occurs, it is necessary to look at the different types of lock modes that are available in the SQL Server environment.

The most potent of the pack is the Exclusive lock, which is generally applied to make sure that data is modified in a logical order, rather than allowing multiple processes to tinker with it at once. Update is another lock mode that, as the name suggests, is applied during updates so that they follow the proper sequence.

Locks can be applied at several levels, from row level to the page, table and the database, with this hierarchy being useful in determining where blocks might originate.
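If you want to see which locks are currently held at which level, SQL Server exposes this through a dynamic management view. A simple sketch:

```sql
-- List currently held and requested locks, with their granularity and mode
SELECT resource_type,       -- e.g. KEY (row), PAGE, OBJECT (table), DATABASE
       request_mode,        -- e.g. S (shared), U (update), X (exclusive)
       request_status,      -- GRANT for held locks, WAIT for blocked requests
       request_session_id
FROM sys.dm_tran_locks;
```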

Potential problems

While SQL server blocking can be considered an entirely routine aspect of database operation, there are circumstances in which problems can arise.

For example, blocking may cascade across many processes in a chain: one process is blocked by a locking process, and in turn blocks subsequent processes that are waiting to be executed.

Of course, working out whether a particular block should be considered an issue is not always straightforward, but in general, so long as the block does not last more than five seconds or so, it should not be too disruptive.


Deadlocks

SQL Server deadlocks are closely related to blocking, although they are not exactly the same. Rather than queuing processes through locks to allow for ordered execution, a deadlock occurs when two or more processes each hold an exclusive lock on a resource the other needs, meaning neither can complete.

In this instance, the server will terminate one of the conflicting processes (the so-called deadlock victim). Like blocking, a deadlock is not intrinsically bad, but it needs to be monitored and managed by the administrator.

Making use of monitoring

As you may have gathered, while SQL Server blocking is commonplace, acceptable, and ultimately necessary to keep a database in good order, it is also something it pays to keep tabs on to ensure seamless operation.

It is possible to do this manually, although it is a very labor-intensive procedure. Instead, using a modern monitoring solution to automatically track blocks and provide alerts if problems arise is much more efficient.
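Even without a full monitoring product, you can get a quick look at current blocking from SQL Server itself. A sketch using the built-in dynamic management views:

```sql
-- Show requests that are currently blocked and which session is blocking them
SELECT session_id,
       blocking_session_id,  -- the session at the head of the blocking chain
       wait_type,
       wait_time             -- how long (in ms) the request has been waiting
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```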

Furthermore, it may be helpful to use monitoring software which is capable of expressing blocking data visually, so that it can be digested quickly. As your experience with managing a SQL Server grows, your skill at weeding out worrisome blocks will improve, but even veterans need the right tools to make their lives easier.

If you found this post useful, please see SentryOne’s guide on SQL Server blocking.

Introduction to the Azure Data Migration Service

My latest course, Introduction to the Azure Data Migration Service, has just gone live on Pluralsight.

More and more companies are migrating their infrastructure, including their databases, to Azure. In this course, Introduction to the Azure Data Migration Service, you will learn foundational knowledge of Microsoft’s migration service.

First, you will learn how to use the Azure Data Migration Assistant to analyze your existing database for any compatibility issues with the Azure SQL platform.

Next, you will discover how to use the same tool to generate and deploy your database schema to an Azure SQL Database. Finally, you will explore how to migrate your data using the Data Migration Service.

When you are finished with this course, you will have the basic skills and knowledge of the Azure Data Migration Service needed to begin moving your databases into Azure.

See my course on Pluralsight.

Getting Started with PowerShell Core on Linux and macOS

My newest course, Getting Started with PowerShell Core on Linux and macOS, is now live on Pluralsight! This course is my eighteenth in a long line of Pluralsight courses.

I begin the course explaining the difference between PowerShell for Windows (version 5.1) and the all-new PowerShell Core (version 6.2 was used for this course), which works not only on Windows but on Linux and macOS as well. I then show how to install PowerShell Core, along with a few other key components such as Visual Studio Code, on both Linux and macOS.

Not familiar with PowerShell? No problem! I quickly cover the basics of PowerShell including cmdlets, the use of the pipeline, how to write functions, and how to put those functions in reusable scripts.

As if that weren’t enough, I show how to do some “cool things” with PowerShell Core, including working with Docker containers, SQL Server, and Azure.

For the course, I primarily used Ubuntu 19.04 and macOS Mojave. The code was also tested on Ubuntu 18.04 LTS and 18.10, as well as macOS High Sierra. In addition, I tested the Linux installs on a variety of distributions including CentOS, Manjaro, and more. The samples include markdown files with information on how to install on these other distributions.

All of the samples are included in the downloadable components of the course on Pluralsight. New with this course I have the samples also available on my GitHub site. As I move into the future the GitHub codebase will be updated with new samples and information.

Also included in the samples are several markdown files that have additional information not included in the course, such as setting VSCode on Windows to use PowerShell Core instead of Windows PowerShell 5.1 as the default terminal.

While you are up on my GitHub site, be sure to check out the full list of repositories; I have a lot of examples on it, including some from previous courses such as my recent Reporting Services course. (For a full list of my courses just check out the About ArcaneCode page on this site.)

Note the sample files on Pluralsight will remain static, so if someone watches the course, their samples will reflect what is in the course. For the latest updated samples, see the GitHub site referenced above.

What? You don’t have a Pluralsight subscription yet? Well, no worries dear reader, just email me and I can send you a code good for 30 days with which you can watch all 18 of my courses, plus anyone else’s course at Pluralsight.

Setting Your Ubuntu 18.10 Favorites Bar In A Script

Of late I’ve been setting up and tearing down a lot of Ubuntu virtual machines as part of a PowerShell Core on Linux and macOS course I’m working on for Pluralsight. I wanted to create a script to install everything I need in one fell swoop so I could start testing my PowerShell Core code on a new box.

The one thing that annoyed me though was the Ubuntu Favorites bar on the left. I wanted to be able to add and remove my favorites automatically, rather than manually setting them up each time.

I didn’t think it’d be that hard, but it took a surprising amount of web searching to find the correct answer.

From inside a bash terminal session, you can issue the following command:

/usr/bin/gsettings get org.gnome.shell favorite-apps

(If /usr/bin is in your path, which it likely is, you can omit that part, as we’ll see in a moment.) This will produce an array containing a list of your favorites.

['ubiquity.desktop', 'firefox.desktop', 'thunderbird.desktop', 'org.gnome.Nautilus.desktop', 'rhythmbox.desktop', 'libreoffice-writer.desktop', 'org.gnome.Software.desktop', 'yelp.desktop', 'ubuntu-amazon-default.desktop']

I’ve seen all sorts of suggestions on how to update the array: use Python, use Ruby, you could even use PowerShell to rearrange it if you wanted. To be honest though, I took the simple approach.

I just set my favorites manually, one last time. That way I could let Ubuntu tell me the correct application names to use in the background, without having to hunt them down. Once I had them set, I simply ran the gsettings get command (see above) again to get the list of apps, in the order I wanted them.

I then used gsettings again, this time in set mode.

gsettings set org.gnome.shell favorite-apps "['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.Nautilus.desktop', 'code_code.desktop', 'azuredatastudio.desktop', 'org.gnome.Software.desktop', 'yelp.desktop']"

Just add this to your setup bash script, or enter it at the terminal, and ta-da! Your favorites are now set up like you want.

Naturally all of this is entered as a single line; it is only wrapped here due to space. Also, these are the favorites I want for my situation. Rather than just copying and pasting the above, follow my suggestion to set things up manually, then use the output of gsettings get as input for the set.

I have tested this in Ubuntu 18.10; in theory it should work in 18.04 as well. I would imagine it will also work in 19.04 when it’s released, and I’ll come back and update this post once I’ve had time to test it.
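If you would rather modify the list in a script than rebuild it by hand, the array is just text, so you can edit it with ordinary shell string handling. A sketch (the .desktop names below are examples; in a real script the first value would come from gsettings get org.gnome.shell favorite-apps):

```shell
#!/bin/sh
# Simulated output of: gsettings get org.gnome.shell favorite-apps
current="['firefox.desktop', 'org.gnome.Nautilus.desktop']"
new_app="'code_code.desktop'"

# Strip the closing bracket, append the new entry, and close the array again
updated="${current%]}, ${new_app}]"
echo "$updated"
# -> ['firefox.desktop', 'org.gnome.Nautilus.desktop', 'code_code.desktop']

# Then apply it with:
# gsettings set org.gnome.shell favorite-apps "$updated"
```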

Programming Your Baofeng Radio with Chirp and Solving the Prolific Driver Issue

Ok, this post is a bit off what I normally talk about, but I wanted to add it so I’d have a spot to point people back to, and more importantly be able to refer back to it myself.

Some of you may know I am an amateur “ham” radio operator; my call sign is N4IXT. As I’m also a “techie,” I’m often asked to help people program their radios using the computer. There are a variety of handheld radios on the market. One of the most prolific manufacturers is a Chinese company that makes very inexpensive (some might even say cheap) radios marketed under the names Baofeng and BTech. Another popular maker along these lines is Wouxun.

This post describes not only the steps to program these radios, but also how to install the software you’ll need to program them.

Installing Software and Drivers

The first thing you’ll need is the programming software itself. The software most people use is called CHIRP. It has versions for Windows, Mac, and Linux, although for this article we are going to focus on Windows.

You’ll also need the cable for your model of radio. Sometimes the cable comes with the radio, other times you’ll need to purchase it separately. Now if you were like me, and installed CHIRP, plugged in the cable, and hooked it to your radio, you probably got a lot of error messages.

Inside the programming cable are some chips that help the computer talk to the radio. The company that originally created these chips is called Prolific. The Chinese manufacturers cloned the chips and their code; indeed, the cables even report to Windows that they are from Prolific.

Thus when you plug in the cable, it tells Windows “Hey, I’m a Prolific cable,” and Windows goes out and downloads and installs the latest Prolific drivers. Which is where the problems start.

Prolific was naturally a tad upset that someone had stolen their design, and who can blame them? So they started adding a special code to their newer chips, as well as an update to the drivers, so that if the chip isn’t an authentic chip from Prolific, the driver won’t work.

You can verify this for yourself. On Windows 10 just go to the Cortana search area (next to the Start menu) and type in Device Manager. (If you are on an older version of Windows you can open up Device Manager through the Administration panel.)


Scroll down the list to the Ports area and you’ll see the Prolific driver listed. Note that it has a big yellow exclamation mark.


Right click on the Prolific driver and pick Properties.


Right in the middle under Device Status we see the problem: This device cannot start. (Code 10). While rather cryptic, what this really translates into is “Hey, we’re trying to use the Prolific driver, but it’s telling us this isn’t a Prolific device.”

This puts us in a bind, as we need to use the driver, but the latest version doesn’t work. The solution then is to install the most recent version that works. Fortunately someone has made that very easy to do.

Head over to iFamilySoftware; they have a detailed article describing the issue (which I’ve just briefly recapped here). In addition, they have a tiny program you can download and install that will uninstall the nonfunctional version of the Prolific driver, and install the last version that actually worked.

Even more important, their app will prevent Windows from automatically updating the driver in the future. Windows frequently will try to be “helpful” and scan your drivers and will go “oh look, you are running an older driver (in this case for Prolific), here let me update that for you automatically and not tell you”.

Note they have versions of their program for both the 32- and 64-bit versions of Windows, and it will run on Windows XP, 7, 8, 8.1, and 10. Just download and run it, reboot, then return to Device Manager. Your Prolific driver should now appear without the exclamation mark, indicating it is working. Right clicking and picking Properties should now display “The device is working properly”.

Running CHIRP

OK, so now that you have CHIRP installed and the correct version of the Prolific driver, it’s time to load up your radio with frequencies. Open up CHIRP, plug the cable into the computer, and (with the radio off) plug it into your radio, then turn the radio on.

CHIRP opens up with a big blank screen. Your first step will be to download what is currently in the radio, even if it’s a brand new radio with just the default handful of frequencies. This lets CHIRP get a correct image of the radio type and what its current settings are.

Click on Radio in the menu, then from the dropdown pick Download From Radio.


Next it wants to know what COM port the cable is using, who made the radio, and what model. If you aren’t sure of the COM port, go back to Device Manager using the previous instructions, and expand the Ports. At the very end of the driver name is the COM port it’s using, in parenthesis.


The first time you use CHIRP you’ll probably have to key in the COM port; just type COM3, COM4, or whatever yours was.

I also want to add a quick note for eagle-eyed viewers. I’m using two computers to write this article: one where the Prolific drivers didn’t work, and a second where they did. If you look back at the original image from Device Manager, you’ll note it read COM4 at the end. On the computer where I took the above screenshot, the one where I have it working, Prolific is using COM3. Again, you can use Device Manager, like I did, to determine the correct COM port for your computer; if you use multiple computers, your COM port may be different on each.

So once you’ve entered the correct COM port, you’ll see a drop down list for the Vendor, and CHIRP supports a LOT of radio companies. The radio I’m plugging in here is a Baofeng.

Once you pick the vendor, the Model drop down will now be populated with a list of all the models CHIRP supports for that Vendor. Here, I’ve picked the UV-5R. My radio is actually a UV-5RA, but the UV-5R settings work for it too. Once done just click OK.

Next you are shown a warning message.


Basically it’s CHIRP’s way of saying “Use at your own risk”. Don’t worry about it, CHIRP has been used on hundreds of thousands, if not millions of these radios. I’ve personally programmed dozens of radios with CHIRP for myself and others, so you can safely click YES. You may want to click the “Do not show this next time” option first, so you won’t see this every time you update your radio.

Next you will see a reminder on how to hookup your radio.


Basically, turn the radio off, hook it to the computer using the cable, then turn it on. Once you click OK, it will start transferring the data. You’ll see a small message showing you the progress, generally takes around 20 to 30 seconds.


When it’s done, the list of frequencies will open up inside CHIRP.


The first thing you should do is save what is there by using the File, Save menu option. This will save it as an IMG file. I generally use the radio vendor, then model, then the area the frequencies are for. In this example it’s for Shelby County, AL. After that I put the date I created the file. If I later update it, I’ll do a Save As and update the date. This lets me track changes over time, or go back to an older version if I need to.

Now you are ready to start entering frequencies. You can of course do it manually, typing each row and supplying the data. You could also look up the frequencies online and cut/paste into CHIRP. Radio Reference is a good site to look things up, but there are many you can pick from.

Even easier though is to use CHIRP’s built-in functionality. In the Radio menu is an option “Import from data source”. There is a list of multiple sites you can import from; each will let you search on a variety of things like your state, county, or zip code to find frequencies near you. Just be sure your cursor is sitting on an empty row in the frequency list, or else you may overwrite some of your existing frequencies.

The final option is to copy from an IMG file a friend with a similar radio has sent you. CHIRP will let you open up multiple IMG files at the same time, and display them as tabs across the top. You can just open their file and copy/paste into yours. You can copy just some of the rows or their entire frequency list.

Now that you have your frequencies loaded you’re almost, but not quite, ready to upload to the radio. If you look to the very left, there are two tabs. The Memories tab is what you are currently on. Now click on the Settings tab.


The overwhelming majority of settings you should leave alone unless you have a specific need. However, there are a few you may wish to look at. Do note these are for the Baofeng; what you see on the Settings tab will change with each radio type.

On the Basic settings you’ll see Display Mode A and B. This indicates what is shown on the display. Your choices are Name, Frequency, and Channel. I tend to prefer Name as it shows the name (such as the repeater call sign), but you could alternately show the frequency of the repeater. I don’t find Channel very useful, but feel free to experiment and pick what you like.

As you will see, it’s no big deal to make a change and re-upload to the radio. Note that you don’t have to import from the radio before you upload back to it every time. You can make a change, upload, then if you don’t like it, make another change and just upload again. But more on uploading in a moment.

The other thing for this radio you might want to look at is the background color for the display. This Baofeng can have three different background colors, and you can set the Standby, Receive (RX), and Transmit (TX) colors independently. Just for fun I use the different colors, but use what works for you.

You can also enable or disable the Roger Beep using the check box at the bottom. Most hams find the beep annoying, so I suggest turning it off unless you are doing a lot of Simplex work.

Next click on the Advanced Settings.


The one thing I like to change here for the Baofengs is the voice. By default it is set to English. Every time you press a button or change to a different memory channel, she (it’s a female voice) starts talking to you. Not only do I find this annoying, it also slows down the operation of the radio. I use the drop down to change this to OFF, but again it should be your preference.

As a final step in my setup process I click on the Other Settings tab.


At every meeting I go to there are generally half a dozen people with Baofengs, so I change the Power-On Message to my call sign. This way when you turn on your radio, it displays the call sign of the owner.

There are several other tabs but these are really advanced items that I leave alone.

OK, now that you have your CHIRP image ready, it’s time to upload. A reminder though, make sure to do a File Save so your IMG file will be updated and should something catastrophic happen you won’t lose your hard work. (I’ve never had CHIRP crash, but I have had my home power go out on me. Oops.)

If your radio still isn’t hooked up, turn it off, then plug it into the cable, then turn it on. From the Radio menu, pick Upload To Radio.


Now it’s just like the download. You are first prompted for the COM port, the vendor, and the Model. Once you click OK, you are given the reminder to turn off the radio, hook it up, turn the radio on, and click OK. After clicking OK you may see the message about it being an experimental driver, just click Yes to proceed.

Once you proceed you’ll see the Cloning Radio progress box as it uploads the data to your radio. When it’s done you’ll see your radio reset, and then you’re done! You are ready to start using your radio, just remember to unhook it from the programming cable before you try to use it.


The steps in this article should work for most radios, especially the critical steps at the beginning for fixing your Prolific driver. I want to remind you the Memories layout and Settings pages will be different for each radio, even radios from the same vendor.

Some manufacturers, for example, allow additional information in the memory columns such as geographic location. Others may not have multiple LED colors for the display.

With the information in this article though you should be able to navigate setting up most of the radios that CHIRP supports.

Building Your First SSRS Report–Now on Pluralsight

My newest course, “Building Your First SSRS Report”, is now live on Pluralsight! This course teaches you how to use Visual Studio 2017 to create a professional report for SQL Server 2017 Reporting Services.

This course assumes you’ve never touched SSRS before. It shows you how to create a new project, and how to create reports using both the built-in wizard and manually. Not comfortable writing SQL queries? No problem, the course demonstrates the use of the built-in query builder. It even shows you how to deploy your reports to the SSRS Report Portal.

Check out the course today! And if you don’t have a Pluralsight subscription, see my About Arcane Code page for details on how to get a code good for 30 days access to Pluralsight, with which you can watch any of my courses (or anyone else’s for that matter!).

Why Do I Need An SSAS Cube?

A viewer of my Pluralsight courses wrote to me asking what I thought was an excellent question. “What is the advantage of having an SSAS Cube?” Or, at its root, why do we need cubes at all? What do cubes provide that we can’t get directly from the database?

It’s an excellent question because understanding why is the foundation for all other learning about cubes. If you know why you are doing something, you’ll better understand what you are doing as you do it. I think the answer can be best illustrated by an example.

Let’s say you are an analyst for a huge chain of international retail stores. The big box kind that sells everything: groceries, household items, clothes, and the like. You’ve been put in charge of toilet paper sales. When starting your analysis, the first thing you want to know is “How much toilet paper have we sold over the last five years?”

This may seem easy at first. A simple SELECT SUM(Amount) FROM Sales WHERE Product = 'Toilet Paper' would yield the result. But wait, is the query really that simple?

Think about it in terms of scale. On any given day you likely sell at least a million rolls of toilet paper. Multiply that by years, and now you are looking at billions of individual rows the database has to crawl over to get the results, if not more! Even a high-performance database could take hours to process such a query.

Of course you wouldn’t stop there. You’d then want to break down sales for each year. Then for each month in the year. For each month by state/province. Then by sales territories. Then by individual stores.
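As a sketch of what those breakdowns look like in practice, each one is just another, more expensive GROUP BY query. Assuming a hypothetical Sales table like the one in the query above (the table, columns, server, and database names here are illustrative, not from a real system), and the SqlServer PowerShell module, the year/month breakdown might look something like this:

```powershell
# Requires the SqlServer module: Install-Module SqlServer
Import-Module SqlServer

# Hypothetical schema: Sales(SaleDate, Product, Amount)
$query = @"
SELECT YEAR(SaleDate)  AS SaleYear,
       MONTH(SaleDate) AS SaleMonth,
       SUM(Amount)     AS TotalSales
FROM   Sales
WHERE  Product = 'Toilet Paper'
GROUP BY YEAR(SaleDate), MONTH(SaleDate)
ORDER BY SaleYear, SaleMonth;
"@

# Each new breakdown (state, territory, store) means another full
# scan over those billions of rows -- exactly the work a cube avoids.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'Retail' -Query $query
```

Every additional grouping dimension multiplies the combinations you’d want to query, which is what makes running these directly against the source database so painful.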

But wait! You might decide to analyze by state/province first, then by year/month. As you can see, your queries would soon be taking hours, if not days to run.

In addition, your queries would be bringing the database to its knees, your coworkers would be ready to spread nasty rumors about you on social media, and the DBA Police would be bursting into your cube, disconnecting your computer from the network and putting you in handcuffs.

By now you’ve probably guessed the answer to this is the cube, but it circles back to the original question, why a cube?

While the focus of the question was SSAS (SQL Server Analysis Services), the answer applies to pretty much any cube-based software. Analytic cubes work by aggregating values ahead of time. Whenever the cube is processed, the cube engine (here, SSAS) contacts the source database, reads in all the records, and sums them together. Those totals are what is stored in the cube.

During processing the cube engine will analyze the data, and determine the best way to organize the data into small pieces that can quickly be added up to answer most questions. Of course, during the design of the cube the developer can provide hints to the cube engine in various ways, such as creating hierarchies.

A hierarchy provides guidance to both the end user as well as the cube engine. For example, you may have one for Year / Month / Day. There may be another for Country / State / County / Store, and perhaps another for Sales Region / Sales Territory. Building these hierarchies gives the cube hints on how to store the data aggregations so they can quickly be combined at query time.

Note I said aggregations, as a cube can store many different types of totals. Of course there is the natural one of sum, but a cube can also contain averages, maximum and minimum values, and more. In addition, it can also create calculated values to hold things like year over year growth.

You might be thinking “well wait, wouldn’t it take forever to process the cube every day?” True, if you processed the entire cube daily. However, most cubes (and SSAS for sure) provide the ability to do incremental processing. This means the engine only has to read in new records from the source database, records that have appeared since the last time the cube was processed, and add those values to the stored totals.
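As a hedged illustration, the SqlServer PowerShell module ships Analysis Services cmdlets that can kick off that kind of refresh; the server and database names below are placeholders, not from a real deployment:

```powershell
# Requires the SqlServer module, which includes the Analysis Services cmdlets
Import-Module SqlServer

# RefreshType 'Add' asks the engine to read just the new source rows and
# fold them into the stored aggregations, rather than reprocessing the
# whole database the way 'Full' would.
Invoke-ProcessASDatabase -Server 'localhost' `
                         -DatabaseName 'SalesCube' `
                         -RefreshType Add
```

In a real environment a step like this would typically run from a scheduled job each night, after the source system’s daily load completes.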

All of this together helps us to answer our question, “Why do I need a cube?”


Being able to analyze data fast is the primary benefit of cubes. Being able to slice and dice data in an almost infinite combination is why cubes are so valuable to data analysts across the world.

Reducing the workload on the source systems (whether a relational database or a data warehouse) is an additional benefit. The aggregations are created once and stored in the cube, as opposed to having to query the data warehouse each and every time we need information.

Some may argue they can achieve similar goals with tools like PowerBI, and you could. But you need to understand that under the hood, PowerBI is using an SSAS (Tabular) cube to hold the data. So on the back end you are still using a form of SSAS for your analysis. In addition, using a cube provides a centralized data source for many reporting tools, including PowerBI.

By now you can see what benefits a cube provides, and why you need one!

Downloading Files with PowerShell and Invoke-WebRequest

Last weekend I was at the Atlanta Code Camp, giving a presentation on PowerShell for Developers. One of the attendees emailed me, asking for a good example of being able to download multiple files from a website.

To do so, we’ll use the Invoke-WebRequest cmdlet. But first, we’ll set up a directory to hold our output. For all my demos I have a root folder, C:\PowerShell. For this article I’ve created a subfolder called Invoke-WebRequest-Demo. The first thing my code does is set a variable to point to this folder, then change the current location to it.

$dir = 'C:\PowerShell\Invoke-WebRequest-Demo'
Set-Location $dir
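If you’re following along and haven’t created the folder yet, one way to create it first (a small addition for convenience, not part of the original demo code) is:

```powershell
$dir = 'C:\PowerShell\Invoke-WebRequest-Demo'

# Create the folder if it isn't there yet; Out-Null suppresses the
# directory listing New-Item would otherwise print
if (-not (Test-Path $dir))
{
    New-Item -Path $dir -ItemType Directory | Out-Null
}

Set-Location $dir
```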

Next, we’ll need a variable to point to the root URL of the website we wish to download from. This will be everything except for the actual file name. For this demo I will use files from the No Agenda Show podcast. Not only is it a cool podcast, but they have an unrestricted license for their material, meaning I can freely reuse it for demo purposes. Each episode of the podcast has its own image, so we’ll download images from a few recent episodes.

After I place the base URL into a variable, I create an array, each item in the array having the name of a file to download.

$baseUrl = ''
$files = '1537127586.656_na-1069-art-feed.png',

Now that everything is setup, we use a simple foreach loop to iterate over the array and download each file via Invoke-WebRequest.

foreach ($file in $files)
{
   Write-Host "Downloading $file"
   $dlUrl = "$($baseUrl)$file"
   $dlPath = Join-Path $dir $file
   Invoke-WebRequest $dlUrl -OutFile $dlPath
}
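If any one download fails, the loop stops with an error at that file. A hedged variant (my addition, not from the original post) that logs the failure and moves on to the rest of the list might look like this:

```powershell
foreach ($file in $files)
{
    $dlUrl  = "$($baseUrl)$file"
    $dlPath = Join-Path $dir $file

    try
    {
        Write-Host "Downloading $file"
        # -ErrorAction Stop turns download failures into terminating
        # errors so the catch block sees them
        Invoke-WebRequest $dlUrl -OutFile $dlPath -ErrorAction Stop
    }
    catch
    {
        # Warn and keep going rather than aborting the whole batch
        Write-Warning "Failed to download $file : $_"
    }
}
```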

And that’s all there is to it. If you want to learn more about downloading files via the web, this code was extracted from my Testing PowerShell with Pester course on Pluralsight. In it I test a module which gets the RSS feed, then downloads images and audio files for the No Agenda show. Just go to the About Me link at the top and you’ll find a complete list of all my Pluralsight courses.

Speaking of Pester, you might also appreciate the introduction to Pester series of articles I’m currently authoring for Red Gate’s Simple Talk website. You can find a link via the same About Me page, or just jump directly there to see my articles.

Kusto Query Language (KQL) from Scratch–Live and Free at Pluralsight

My latest course, Kusto Query Language (KQL) from Scratch, is now live at Pluralsight!

Even better, this course is totally free! You will need an account at Pluralsight, but those are free to set up and only take a moment. You do not have to have a paid plan. Once you have your account, you can begin watching.

What? You haven’t heard of Kusto or KQL?

Increasingly, Azure is becoming the infrastructure backbone for many corporations. It is imperative then, that you have the ability to query Azure to gain insights to the Azure services your company is using. The Kusto Query Language, or KQL for short, is the language you use to query these Azure services such as Azure Log Analytics, Azure Security Center, Azure Application Insights, and Windows Defender Advanced Threat Protection.

Kusto is the new database engine that stores data for all of these services. Kusto was designed from scratch to be a “big data” repository for Azure that was easy to query using KQL.

The course starts with the basics of KQL, the Kusto Query Language. In the first module alone you’ll learn the commands that cover 80% of the queries you’ll need.

Most, but not all, as KQL has many more commands to handle unique needs. After learning a variety of aggregation operators, you will progress to advanced KQL abilities such as machine learning and time series analysis.

Finally, you will explore how to export the results of your KQL queries to CSV files and PowerBI. When you’re finished with this course, you will have the skills and knowledge of the Kusto Query Language needed to gain valuable insights to your Azure services.
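As a tiny taste of the language, a KQL query reads left to right through a pipeline of operators. This sketch uses the StormEvents demo table from Microsoft’s Kusto sample data (the table and column names come from that sample, not from your own workspace):

```
// Top 5 states by number of storm events in the last year
StormEvents
| where StartTime >= ago(365d)
| summarize EventCount = count() by State
| top 5 by EventCount
```

Each line takes the output of the previous one, which is what makes KQL so quick to pick up if you’ve ever piped commands in PowerShell.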

Get Notified When Your Home Router IP Address Changes–With PowerShell!

Readers of my blog may recall I’m an amateur (ham) radio operator (N4IXT, that’s me!). One of my ham radio buddies has a cool radio that can be remotely controlled over the internet. With their software he just enters the IP address of his home (his home router, that is, after we set up some port forwarding), and he can connect to and operate his radio remotely, make contacts around the world, and more.

The tricky part of all this came in the phrase “enters the IP address of his home…”. His home router tends to change its address on the whim of his service provider. If the router reboots, it will definitely have a new address when it comes up. You may be thinking “well, just get a static IP address”. Unfortunately, his internet service provider can’t give him one; something about him using a bonded pair of 10 Mb lines to get 20 Mb prevents it.

So how could he get notified when his IP address changes? Well I was fairly certain I could solve his problem with some simple PowerShell scripting!

I’ve now published the solution on my GitHub site:

Rather than going more in-depth here, the ReadMe has the full details describing the scripts that were created. In addition, the code is heavily commented.

In addition to solving this particular problem, the code could be used as a template for:

  • Getting the IP Address of your home router
  • Sending emails via GMail
  • Setting up tasks in the Windows Task Scheduler
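The first bullet is the piece people ask about most. One common approach (a sketch under my own assumptions, not the exact code from the repository) is to ask a public echo service such as ipify, which returns your outward-facing address as plain text:

```powershell
# api.ipify.org is a public service that echoes back the public IP
# your request arrived from; variable names here are illustrative
$myIp = Invoke-RestMethod -Uri 'https://api.ipify.org'
Write-Host "Current public IP: $myIp"

# Save the address to disk and compare on the next run, and you have
# the core of a change-detection script to pair with an email alert.
```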

Hopefully other people with similar issues will find this little project as useful as my ham radio buddy has!