Using Task in ASP.NET MVC Today

I’ve been experimenting with the new async support coming in the next version of C# (and VB). I must say, I’m very impressed. Async is one of those things you know you’re supposed to be doing. However, traditionally it has taken a lot of code and been hard to get right. The new await keyword changes all that.

For example, here’s an async function to download the Twitter public timeline:

public async Task<XDocument> PublicTimelineAsync()
{
  var url = "http://api.twitter.com/1/statuses/public_timeline.xml";
  var xml = await new WebClient().DownloadStringTaskAsync(url);
  return XDocument.Parse(xml);
}

That’s not much more difficult than writing the synchronous version. By using the new async and await keywords, all the ugly async CPS code you’re supposed to write is generated for you automatically by the compiler. That’s a huge win.

The only downside to async is that support for it is spotty in the .NET Framework today. Each major release of .NET to date has introduced a new async API pattern. .NET 1.0 had the Asynchronous Programming Model (APM). .NET 2.0 introduced the Event-based Asynchronous Pattern (EAP). Finally, .NET 4.0 gave us the Task Parallel Library (TPL). The await keyword only works with APIs written using the TPL pattern. APIs using older async patterns have to be wrapped as TPL APIs to work with await. The Async CTP includes a bunch of extension methods that wrap common async APIs, such as the DownloadStringTaskAsync call in the code above.
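
If you need to await an API the CTP doesn’t wrap, you can do the wrapping yourself. Here’s a rough sketch of both flavors, using WebRequest and WebClient as stand-ins for whatever older API you’re stuck with: Task.Factory.FromAsync handles the APM case, and TaskCompletionSource handles the EAP case.

// APM (Begin/End) pattern: wrap the Begin/End pair with Task.Factory.FromAsync
var request = WebRequest.Create("http://api.twitter.com/1/statuses/public_timeline.xml");
Task<WebResponse> responseTask = Task<WebResponse>.Factory.FromAsync(
    request.BeginGetResponse, request.EndGetResponse, null);

// EAP (event + XxxAsync) pattern: complete a TaskCompletionSource from the event handler
var tcs = new TaskCompletionSource<string>();
var client = new WebClient();
client.DownloadStringCompleted += (s, e) =>
{
    if (e.Error != null) tcs.TrySetException(e.Error);
    else if (e.Cancelled) tcs.TrySetCanceled();
    else tcs.TrySetResult(e.Result);
};
client.DownloadStringAsync(new Uri("http://api.twitter.com/1/statuses/public_timeline.xml"));
Task<string> downloadTask = tcs.Task;

Either way, what comes back is a plain Task<T> that await is happy to consume.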

The async wrappers are nice, but there are a few places where we really need the TPL pattern plumbed deeper. For example, ASP.NET MVC supports AsyncControllers. AsyncControllers are used to avoid blocking IIS threads waiting on long running I/O operations – such as getting the public timeline from Twitter. Now that I’ve been bitten by the async zombie virus, I want to write my async controller methods using await:

public async Task<ActionResult> Index()
{
    var t = new Twitter();
    var timeline = await t.PublicTimelineAsync();
    var data = timeline.Root.Elements("status")
        .Elements("text").Select(e => e.Value);
    return View(data);
}

Unfortunately, neither the main trunk of MVC nor the MVC futures project has support for the TPL model [1]. Instead, I have to manually write some semblance of the async code that await would have emitted on my behalf. In particular, I have to manage the outstanding operations, implement a continuation method and map the parameters in my controller manually.

public void IndexAsync()
{
    var twitter = new Twitter();

    AsyncManager.OutstandingOperations.Increment();
    twitter
        .PublicTimelineAsync()
        .ContinueWith(task =>
        {
            AsyncManager.Parameters["timeline"] = task.Result;
            AsyncManager.OutstandingOperations.Decrement();
        });
}

public ActionResult IndexCompleted(XDocument timeline)
{
    var data = timeline.Root.Elements("status")
        .Elements("text").Select(e => e.Value);
    return View(data);
}

I promise you, writing that boilerplate code over and over gets old pretty darn quick. So I wrote the following helper function to eliminate as much boilerplate code as I could.

public static void RegisterTask<T>(
    this AsyncManager asyncManager,
    Task<T> task,
    Func<T, object> func)
{
    asyncManager.OutstandingOperations.Increment();
    task.ContinueWith(task2 =>
    {
        //invoke the provided function with the
        //result of running the task
        var o = func(task2.Result);

        //use reflection to set asyncManager.Parameters
        //for the returned object's fields and properties
        var ty = o.GetType();
        foreach (var f in ty.GetFields())
        {
            asyncManager.Parameters[f.Name] = f.GetValue(o);
        }
        foreach (var p in ty.GetProperties())
        {
            var v = p.GetGetMethod().Invoke(o, null);
            asyncManager.Parameters[p.Name] = v;
        }

        asyncManager.OutstandingOperations.Decrement();
    });
}

With this helper function, you pass in the Task<T> that you are waiting on as well as a delegate to invoke when the task completes. RegisterTask takes care of incrementing and decrementing the outstanding operations count as appropriate. It also registers a continuation that reflects over the object returned from the invoked delegate to populate the Parameters collection.

With this helper function, you can write the async controller method like this:

public void IndexAsync()
{
    var twitter = new Twitter();

    AsyncManager.RegisterTask(
        twitter.PublicTimelineAsync(),
        data => new { timeline = data });
}

//IndexCompleted hasn't changed
public ActionResult IndexCompleted(XDocument timeline)
{
    var data = timeline.Root.Elements("status")
        .Elements("text").Select(e => e.Value);
    return View(data);
}

It’s not as clean as the purely TPL-based version. In particular, you still need to write separate Async and Completed methods for each controller method. You also need to build an object to map values from the completed tasks into parameters in the completed method. Mapping parameters is a pain, but the anonymous object syntax is terser than setting values in the AsyncManager Parameters collection directly.

It’s not full TPL support, but it’ll do for now. Here’s hoping that the MVC team has async controller methods with TPL on their backlog.


[1] I’m familiar with Craig Cavalier’s Async MVC with TPL post, but a fork of the MVC Futures project is a bit too bleeding edge for my needs at this point.

Build Your Own WDS Discovery Image

Given that I work on the Windows team, it shouldn’t come as a surprise that we use Windows Deployment Services to distribute Windows images internally. For most machines, it’s really convenient. You trigger a network boot (on my Lenovo, you press the “ThinkVantage” button during start up), select the image to install and what partition to install it to, wait a while, answer the installation finalization questions (machine name, user name, etc) and you’re done.

However, I have a Dell Inspiron Duo netbook (the one with the cool flip screen) that lacks a built-in network port. No network port, no network boot. I’ve got a USB network dongle, but it doesn’t support network boot either. No network boot, no ultra-convenient WDS installation, sad DevHawk.

I was able to work around this by building a custom WDS Discover image that I loaded onto a USB flash drive. Now, I plug in the USB drive, select it as the boot device and I’m off and running…err, off and installing at any rate. Building the image was kind of tricky, so I figured it would be a good idea to write it down and share.

Step One: Install the Windows Automated Installation Kit (AIK)
The AIK is a set of tools for customizing Windows images and deployment. In particular, it includes the Windows Preinstallation Environment (aka WinPE), which is the minimal OS environment that Windows Setup runs in. We’ll be building a custom WinPE image to launch the WDS discovery and setup from.

Step Two: Create a new PE image
The AIK includes a command line tool for creating a blank PE image. Step 1 of this walkthru shows you how to use it. It’s pretty easy. Open the Deployment Tools Command Prompt as an administrator and run the following commands:

copype.cmd x86 C:\winpe_x86
copy C:\winpe_x86\winpe.wim C:\winpe_x86\ISO\sources\boot.wim

The copype.cmd batch file creates a new PE image of the specified architecture in the specified location. The Inspiron Duo has an Atom processor, so I chose an x86 PE image.

Note, in several steps below I assume you’ve created your PE image in c:\winpe_x86. If you’ve created it somewhere else, make sure to swap in the correct path when executing the steps below.

Step Three: Mount the PE Boot image with DISM
Now that we have our basic PE boot image, we need to update it with custom drivers and the setup experience that can load WDS images across the network. Before we can update boot.wim, we need to mount it on the file system.

The AIK includes the Deployment Image Servicing and Management (DISM) tool for working with WIM files. To mount the boot.wim file, execute the following command:

dism /Mount-WIM /WimFile:C:\winpe_x86\ISO\sources\boot.wim /index:1 /MountDir:c:\winpe_x86\mount

Copype.cmd created an empty mount directory specifically for DISM to mount WIM images in.

Step Four: Add Custom Device Driver
The driver for my USB network dongle is not included in the standard Windows driver package, so it needs to be manually added to the PE image. Again, we use DISM to do this.

dism /image:c:\winpe_x86\mount /add-driver /driver:"PATHTODRIVERDIRECTORY"

Step Five: Add Setup packages
The PE image does not include the Windows Setup program by default. There are several optional packages that you can add to your PE image. For WDS discovery, you need to add the setup and setup-client packages. Again, we use DISM to update the image.

dism /image:c:\winpe_x86\mount /add-package /packagepath:"c:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe-setup.cab"
dism /image:c:\winpe_x86\mount /add-package /packagepath:"c:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\winpe-setup-client.cab"

Step Six: Add winpeshl.ini file
Now that we’ve added the setup program to the image, we need to tell setup to run in WDS discovery mode on startup. This is accomplished by adding a winpeshl.ini file to the \Windows\System32 folder of the PE image.

Note, the official instructions on TechNet have a bug. The path to setup.exe should be %SYSTEMDRIVE%\sources, not %SYSTEMROOT%\sources. Here’s the contents of my winpeshl.ini file:

[LaunchApps]
%SYSTEMDRIVE%\sources\setup.exe, "/wds /wdsdiscover"

You can also add /wdsserver:<server> to the command line if you want to hard code the WDS Server to use in your image.
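
For example, a winpeshl.ini that points at a specific server would look something like this (MYWDSSERVER is just a placeholder for your server’s name):

[LaunchApps]
%SYSTEMDRIVE%\sources\setup.exe, "/wds /wdsdiscover /wdsserver:MYWDSSERVER"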

Step Seven: Add Lang.ini file
If you do all the above steps and try to boot the resulting image, you’ll get a nasty “Windows could not determine the language to use for Setup” error. Turns out there’s another bug in the official docs – you need a lang.ini file in your sources directory alongside setup.exe in order to run. I just grabbed the lang.ini file off the normal Win7 boot image and copied it to the sources directory of my mounted boot image.
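
With the boot image still mounted, that copy is a one-liner. Here I’m assuming your Windows 7 install media (or an extracted copy of it) is available at D:; adjust the source path to wherever your lang.ini actually lives:

copy D:\sources\lang.ini c:\winpe_x86\mount\sources\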

Step Eight: Commit and Unmount the PE Boot image
We’re now done updating the boot image, so it’s time to close and unmount it. This is accomplished with DISM:

dism /unmount-wim /mountdir:c:\winpe_x86\mount /commit

At this point, the contents of the ISO folder are ready to be transferred to a USB stick for booting.

Step Nine: Prepare the USB Flash Drive
To enable your USB flash drive to be bootable, it needs to have a single FAT32 partition spanning the entire drive. Instructions in this walkthru show you how to configure and format your USB drive.
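
In case that walkthru moves, the diskpart sequence looks roughly like this. I’m assuming the flash drive shows up as disk 1; run list disk first and double check, because clean wipes whatever disk is selected:

diskpart
list disk
select disk 1
clean
create partition primary
select partition 1
active
format fs=fat32 quick
assign
exit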

Note, not all USB drives are created equal. I have one USB drive where the Duo just comes up with a blank screen when I try to use it for USB Boot. If you follow these steps and can’t boot, try a different USB drive.

Step Ten: Copy the image contents to the Flash Drive
I just did this with xcopy. In this case, my flash drive is E:, but obviously you should swap in the drive letter for your flash drive.

xcopy c:\winpe_x86\ISO\*.* /e e:

Step Eleven: Boot your Netbook from the USB drive
With the USB drive containing the image + the network dongle both plugged in, boot the machine and trigger USB boot. For the Duo, you can hit F12 during boot to manually select your boot source. Your custom image will be booted, and it will then look out on the network to find the WDS server to load images from. Select the image you want and where you want to install it and away you go.

One thing to remember is that you’re adding the USB network dongle driver to the WDS discovery boot image, but not to the image that gets installed via WDS. So chances are you’ll need the driver again once you get the image installed. I put that driver on the same USB key that holds the boot image. That way I can easily install the driver once Windows is installed.

Playing With The Lead

Ovechkin Celebrates the Capitals' First Goal in Game 5

Obviously, the Capitals’ win Saturday was huge. It put them through to the second round for only the second time since their trip to the Stanley Cup Finals in 1998. It was also the first playoff series in the Ovechkin/Boudreau era to be settled without having to go the full seven games. The Capitals have played four seven-game playoff series in the past three years. It’ll be nice for the Caps to have the extra time off to rest and heal for a change.

As we wait to see who the Capitals will face in the Conference Semifinals, I want to highlight what I think is a huge change from the series of the past three years: The Capitals went 3-1 against the Rangers when they held the series lead. Over the four series in 2008-2010, the Capitals went 2-5 in games where they held the series lead. That’s pretty bad. It gets even worse when you realize that both of those wins came early in their respective series. The Caps won game #2 against the Penguins in ’09 to take a 2-0 series lead. Last year, they won game #3 against the Canadiens to take a 3-1 series lead. In both of those series, the Capitals proceeded to lose the next three games. They eventually lost both series.

So when the Caps lost game 3 and were down 3-0 at the start of the 3rd period in game 4, it certainly seemed as if the Capitals were going to choke away another series lead like they had the past two years. Instead, they came out for the third period and played like their backs were against the wall. And while the Capitals have sucked at defending a series lead, they have played very well when facing elimination – 6-3 in the past three years, to be exact.

If the new-and-improved Caps can combine their traditional talent for playing from behind in a series with the ability to drive nails into coffins…err, win games when they have the series lead, the Capitals will be a very hard team to beat this year.

Shocker at Staples

My passion for the Washington Capitals is well documented. What you don’t know is that I was actually a Los Angeles Kings fan before I was a Capitals fan.

I wasn’t into hockey growing up, but I caught hockey fever when I was going to college in southern California. That was the Gretzky era – he led them to the Stanley Cup finals the year after I graduated from USC – and the Kings were the hottest ticket in town. But that era faded with the 1994 lockout, bankruptcy, trading Gretzky to the Blues in 1996 and missing the playoffs four years in a row. But unlike most of my then-fellow Angelenos, I stayed on the Kings bandwagon.

In 1998, the Kings finally made it back to the playoffs, facing the St. Louis Blues (Gretzky had moved on to the Rangers by then). The Kings had lost the first two games in St. Louis, but held a 3-0 lead in the 3rd period of Game #3. Then this happened:

In a game that will be talked about for years to come, the Kings saw a 3-0 lead wiped out by four St. Louis power-play goals within a 3:07 span after defenseman Sean O’Donnell received a fighting major for beating down the Blues’ Geoff Courtnall, who had knocked down goaltender Jamie Storr.

Pascal Rheaume, Brett Hull and Pierre Turgeon scored goals to tie the score and then Terry Yake knocked in the game-winner as the Blues rallied for a 4-3 victory Monday night to take a commanding 3-0 lead in their best-of-seven playoff series before a sellout crowd of 16,005 at the Great Western Forum.

Meltdown on Manchester
Los Angeles Times, April 28 1998

I was one of those 16,005. It was the ugliest feeling I have ever had walking out of a hockey game.

I imagine the fans at the Staples Center last night are familiar with it.

"I’ve never seen anything like it," defenseman Matt Greene said after the Kings squandered a 4-0 lead and gave up a season-high five goals in the second period.

San Jose winger Devin Setoguchi finished off a three-on-two break with a deadly wrist shot past Jonathan Quick 3 minutes and 9 seconds into sudden-death play, stunning a Staples Center crowd that had been taken for a long and wild ride all night. What seemed like a chance for the Kings to take control of the series instead became a potentially devastating defeat that left the Sharks leading the first-round series two games to one with Game 4 scheduled for Staples Center on Thursday.

Kings turn four-goal lead into 6-5 overtime loss to Sharks in Game 3 
Los Angeles Times, April 20 2011

I watched the 2nd period last night at first with jubilation (Kings go up 4-0 less than a minute into the period), then slight concern (Sharks finally get on the board), then increasing concern (Sharks close the game to 4-3), then relief (Kings score 15 seconds later to make it 5-3) and finally horror (Sharks score twice in the last 90 seconds to tie the game 5-5).

I couldn’t watch any more after that. I saw that it had gone to overtime, but I didn’t know who won until I looked it up online this morning.

Frozen Royalty calls it the “Flop on Figueroa”. Purple Crushed Velvet has a broken heart. Hockeywood calls it an “epic meltdown” but then suggests Kings fans need to “Keep Calm and Carry On” because “One game a playoff series does not make”.

Technically, that’s true – the Kings are only down 2-1 and have shown they can win in San Jose. But after the momentum shift of blowing a four-goal lead, I don’t see how the Kings win this series. I’d like to be wrong, but I don’t see them winning another game this year, much less the series.

DevHawk Has A Brand New Blog (Engine)

So it would make a crappy song, but the title of this post is still true. This is my first post on the new-and-improved DevHawk running on WordPress.

I decided a while back that it was time to modernize my blog engine – DasBlog is getting a little long in the tooth and there hasn’t been a new release in over two years. I spent some time looking at different options, but settled on WordPress for much the same reasons Windows Live did: “host of impressive capabilities”, scalable and widely used. Also, it’s very extensible, has about a billion available themes and has a very active development community. I was able to find plugins to replicate DasBlog’s archive page as well as an archive widget that replicated custom functionality I had added to DasBlog via custom macros.

Of course, moving eight years’ worth of posts to a new engine took quite a bit of effort and planning. I wanted to make sure that I maintained all my posts and comments as well as take advantage of some of the new features available to me in WordPress. For example, I took the opportunity to flatten my list of categories and move most of them to be tags. I also went thru and converted all of my old code snippets to use SyntaxHighlighter instead of CodeHTMLer or Pygments for WL Writer. Of course, I automated almost all of the conversion process. For anyone interested in following in my footsteps, I published my PowerShell scripts for converting DasBlog to the WordPress WXR import/export format up on BitBucket.

Not only did I want to save all my data, I also wanted to make sure I saved my search engine mojo (if I have any left after blogging a paltry six times in the past sixteen months). So I hacked up a WordPress plugin to redirect my old DasBlog links to the new WordPress URLs. That’s up on BitBucket as well for anyone who wants it. It’s got some DevHawk-specific bits in there (like the category cleanup), but if you tore those parts out it would be usable for any DasBlog-to-WordPress conversion. If there’s interest, maybe I’ll write up how the conversion scripts and redirect plugin work.

The plan is that now that I’m finally done moving my blog over to the new back end, I will actually start writing on a more regular basis again. We’ll see how that works out.