Hawk Notes, Volume 1

This is the first in a series of blog posts about Hawk, the engine that powers this site. My plan is to make a post like this for every significant update to the site. We’ll see how well that plan works.

  • I just pushed out a new version of Hawk on my website. The primary feature of this release is support for ASP.NET 5 Beta 7. I also published the source code up on GitHub. Feedback welcome!
  • As I mentioned in my post on Edge.js, the publishing tools for Hawk are little more than duct tape and baling wire at this point. Eventually, I’d like to have a dedicated tool, but for now it’s a manual three-step process:
    1. Run the PublishDraft command to publish a post from my draft directory to a local git repo of all my content. As part of this step, I update some of the metadata and render the Markdown to HTML.
    2. Run my WritePostsToAzure Custom Command to publish posts from my local git repo to Azure. I have a blog post on my custom command infrastructure in the works.
    3. Trigger a content refresh via an unpublished URL.
  • I need to trigger a content refresh because Hawk loads all of the post metadata from Azure on startup. The combined metadata for all my posts is pretty small - about 2/3 of a megabyte stored on disk as JSON. Having the data in memory makes it easy to query and also lets me support multiple post repositories (Azure storage and the file system).
  • I felt comfortable publishing the Hawk source code now because I added a secret key to the data refresh URL. Previously, the refresh URL was unsecured. I didn’t think giving random, anonymous people on the Internet the ability to kick off a data refresh was a good idea, so I held off publishing source until I secured that endpoint.
  • Hawk caches blog post content and legacy comments in memory. This release also adds cache invalidation logic so that everything gets reloaded from storage on data refresh, not just the blog post metadata.
  • I don’t understand what the ASP.NET team is doing with the BufferedHtmlContent class. In beta 7, it’s been moved to the Common repo and published as source. However, I couldn’t get it to compile because it depends on an internal [NotNull] attribute. I decided to scrap my use of BufferedHtmlContent and build out several classes that implement IHtmlContent directly instead. For example, the links at the bottom of my master layout are rendered by the SocialLink class (a simplified sketch of the pattern follows this list). Frankly, I’m not sure if rolling your own IHtmlContent class for a snippet of HTML you want to automate is a best practice. It seems harder than it should be. It feels like ASP.NET needs a built-in class like BufferedHtmlContent, so I’m not sure why it’s been removed.
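Here’s a minimal sketch of the “implement IHtmlContent directly” pattern. It’s not the actual SocialLink code, and since the exact types and namespaces kept moving around between the ASP.NET 5 betas and what eventually shipped, this sketch uses the later WriteTo(TextWriter, HtmlEncoder) shape and namespaces. Treat it as the general pattern rather than code that compiles against beta 7 as-is:

using System.IO;
using System.Text.Encodings.Web;
using Microsoft.AspNetCore.Html;

// Illustrative stand-in for a SocialLink-style class: renders a single
// anchor tag directly, with no BufferedHtmlContent involved.
public class SocialLink : IHtmlContent
{
    private readonly string _name;
    private readonly string _url;

    public SocialLink(string name, string url)
    {
        _name = name;
        _url = url;
    }

    public void WriteTo(TextWriter writer, HtmlEncoder encoder)
    {
        writer.Write("<a href=\"");
        encoder.Encode(writer, _url);   // encode the attribute value
        writer.Write("\">");
        encoder.Encode(writer, _name);  // encode the link text
        writer.Write("</a>");
    }
}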

The Brilliant Magic of Edge.js

In my post relaunching DevHawk, I mentioned that the site is written entirely in C# except for about 30 lines of JavaScript. Like many modern web content systems, Hawk uses Markdown. I write blog posts in Markdown and then my publishing “tool” (frankly, little more than duct tape and baling wire at this point) converts the Markdown to HTML and uploads it to Azure.

However, as I went thru and converted all my old content to Markdown, I discovered that I needed some features that aren’t supported by either the original Markdown implementation or the new CommonMark project. Luckily, I discovered the markdown-it project, which implements the CommonMark spec but also supports syntax extensions. Markdown-it already had extensions for all of the extra features I needed - things like syntax highlighting, footnotes and custom containers.

The only problem with using markdown-it in Hawk is that it’s written in JavaScript. JavaScript is a fine language with lots of great libraries, but I find it a chore to write significant amounts of code in JavaScript - especially async code. I did try to rewrite my blog post upload tool in JavaScript, but it was much more difficult than the equivalent C# code. Maybe once promises become more widely used and async/await is available, JavaScript will feel like it has a reasonable developer experience to me. Until then, C# remains my weapon of choice.

I wasn’t willing to use JavaScript for the entire publishing tool, but I still needed to use markdown-it [1]. So I started looking for a way to integrate the small amount of JavaScript code that renders Markdown into HTML with the rest of my C# code base. I was expecting to have to set up some kind of local web service with Node.js to host the markdown-it code and call out to it from C# with HttpClient.

But then I discovered Edge.js. Holy frak, Edge.js blew my mind.

Edge.js provides nearly seamless interop between .NET and Node.js. I was able to drop the 30 lines of JavaScript code into my C# app and call it directly. It took all of about 15 minutes to prototype and it’s less than 5 lines of C# code.

Seriously, I think Tomasz Janczuk must be some kind of a wizard.

To demonstrate how simple Edge.js is to use, let me show you how I integrated markdown-it into my publishing tool. Here is a somewhat simplified version of the JavaScript code I use to render markdown in my tool using markdown-it, including syntax highlighting and some other extensions.

// highlight.js integration lifted unchanged from 
// https://github.com/markdown-it/markdown-it#syntax-highlighting
var hljs  = require('highlight.js');
var md = require('markdown-it')({
  highlight: function (str, lang) {
    if (lang && hljs.getLanguage(lang)) {
      try { 
        return hljs.highlight(lang, str).value;
      } catch (__) {}
    }

    try {
      return hljs.highlightAuto(str).value;
    } catch (__) {}

    return ''; 
  }
});

// I use a few more extensions in my publishing tool, but you get the idea
md.use(require('markdown-it-footnote'));
md.use(require('markdown-it-sup'));

var html = md.render(markdown);

As you can see, most of the code is just setting up markdown-it and its extensions. Actually rendering the markdown is just a single line of code.

In order to call this code from C#, we need to wrap the call to md.render with a JavaScript function that follows the Node.js callback style. We pass this wrapper function back to Edge.js by returning it from the JavaScript code.

// Ain't first-class functions grand? 
return function (markdown, callback) {
    var html = md.render(markdown);
    callback(null, html);
}

Note, I have to use the callback style in this case even though my code is synchronous. I suspect I’m the outlier here; there’s a lot more async Node.js code out in the wild than synchronous.

To make this code available to C#, all you have to do is pass the JavaScript code into the Edge.js Func function. Edge.js includes an embedded copy of Node.js as a DLL. The Func function executes the JavaScript and wraps the returned Node.js callback function in a .NET async delegate. The .NET delegate takes an object input parameter and returns a Task<object>. The delegate input parameter is passed as the first parameter to the JavaScript function. The second parameter passed to the callback function becomes the return value from the delegate (wrapped in a Task, of course). I haven’t tested it, but I assume Edge.js will convert the callback function’s first parameter to a C# exception if you pass a value other than null.

It sounds complex, but it’s a trivial amount of code:

// markdown-it setup code omitted for brevity
Func<object, Task<object>> _markdownItFunc = EdgeJs.Edge.Func(@"
var md = require('markdown-it')() 

return function (markdown, callback) {
    var html = md.render(markdown);
    callback(null, html);
}");
  
async Task<string> MarkdownItAsync(string markdown)
{
    return (string)await _markdownItFunc(markdown);
}

To make it easier to use from the rest of my C# code, I wrapped the Edge.js delegate in a statically typed C# function. This handles type checking and casting as well as providing IntelliSense for the rest of my app.
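From there, rendering Markdown looks like any other awaitable call in C#. A quick usage sketch (the surrounding PublishAsync method is hypothetical, just to show the call site):

// Hypothetical call site for the MarkdownItAsync wrapper above
async Task PublishAsync()
{
    var html = await MarkdownItAsync("# Hello, *world*");
    // markdown-it renders this as roughly "<h1>Hello, <em>world</em></h1>"
    Console.WriteLine(html);
}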

The only remotely negative thing I can say about Edge.js is that it doesn’t support .NET Core yet. I had to build my markdown rendering tool as a “traditional” C# console app instead of a DNX Custom Command like the rest of Hawk’s command line utilities. However, Luke Stratman is working on .NET Core support for Edge.js. So maybe I’ll be able to migrate my markdown rendering tool to DNX sooner rather than later.

Rarely have I ever discovered such an elegant solution to a problem I was having. Edge.js simply rocks. As I said on Twitter, I owe Tomasz a beer or five. Drop me a line, Tomasz, and let me know when you want to collect.


  1. I also investigated what it would take to update an existing .NET Markdown implementation like CommonMark.NET or F# Formatting to support custom syntax extensions. That would have been dramatically more code than simply biting the bullet and rewriting the post upload tool in JavaScript. ↩︎

Go Ahead, Call It a Comeback

It’s been a looooong time, but I finally got around to getting DevHawk back online. It’s hard to believe that it’s been over a year since my last post. Lots has happened in that time!

First off, I’ve changed jobs (again). Last year, I made the switch from program manager to dev. Unfortunately, the project I was working on was cancelled. After several months in limbo, I was reorganized into the .NET Core framework team back over in DevDiv. I’ve got lots of friends in DevDiv and love the open source work they are doing. But I really missed being in Windows. Earlier this year, I joined the team that builds the platform plumbing for SmartGlass. Not much to talk about publicly right now, but that will change sometime soon.

In addition to my day job in SmartGlass, I’m also pitching in to help the Microsoft Services Disaster Response team. I knew Microsoft had a long history of corporate giving, but until recently I was unaware of the work we do helping communities affected by natural disasters. My good friend Lewis Curtis took over as Director of Microsoft Services Disaster Response last year. I’m currently helping out on some of the missions for Nepal in response to the devastating earthquake that hit there earlier this year.

Finally, I decided that I was tired of running Other People’s Code™ on my website. So I built out a new blog engine called Hawk. It’s written in C# (plus about 30 lines of JavaScript), uses ASP.NET 5 and runs on Azure. It’s specifically designed for my needs - for example, it automatically redirects old DasBlog-style links like http://devhawk.net/2005/10/05/code+is+model.aspx. But I’m happy to let other people use it and would welcome contributions. When I get a chance, I’ll push the code up to GitHub.
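Hawk’s actual redirect code isn’t shown here, but the idea is simple enough to sketch as ASP.NET 5-style middleware. The URL pattern and the LookupNewSlug helper below are placeholders, not the real implementation:

// Hypothetical sketch, registered in Startup.Configure(IApplicationBuilder app);
// requires System.Text.RegularExpressions. LookupNewSlug is a placeholder.
app.Use(async (context, next) =>
{
    var path = context.Request.Path.Value;

    // Old DasBlog links look like /2005/10/05/code+is+model.aspx
    var match = Regex.Match(path, @"^/\d{4}/\d{2}/\d{2}/(?<title>.+)\.aspx$");
    if (match.Success)
    {
        var slug = LookupNewSlug(match.Groups["title"].Value);
        context.Response.Redirect("/" + slug, permanent: true);
        return;
    }

    await next();
});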

Yet More Change for the Capitals

Six years ago, I was pretty excited about the future for the Washington Capitals. They had just lost their first round match up with the Flyers – which was a bummer – but they had made the playoffs for the first time in 3 seasons. I wrote at the time:

Furthermore, even though they lost, these playoffs are a promise of future success. I tell my kids all the time that the only way to get good at something is to work hard while you’re bad at it. Playoff hockey is no different. Most of the Caps had little or no playoff experience going into this series and it really showed thru the first three games. But they kept at it and played much better over the last four games of the series. They went 2-2 in those games, but the two losses went to overtime. A little more luck (or better officiating) and the Caps are headed to Pittsburgh instead of the golf course.

What a difference six seasons makes. Sure, they won the Presidents’ Trophy in 2010. But the promise of future playoff success has been broken, badly. The Caps have been on a pretty steep decline since getting beat by the eighth-seeded Canadiens in the first round of the playoffs in 2010. Since then, they’ve switched systems three times and head coaches twice. This year, they missed the playoffs entirely, even with Alex Ovechkin racking up a league-leading 51 goals.

Today, the word came down that both the coach and general manager have been let go. As a Caps fan, I’m really torn about this. I mean, I totally agree that the coach and GM had to go – frankly, I was surprised it didn’t happen 7-10 days earlier. But now what do you do? The draft is two months and one day away; free agency starts two days after that. The search for a GM is going to have to be fast. Then the GM will have to make some really important decisions about players at the draft, free agency and compliance buyouts with limited knowledge of the players in our system. Plus, he’ll need to hire a new head coach – preferably before the draft as well.

The one positive note is that the salary cap situation for the Capitals looks pretty good for next year. The Capitals currently have the second-largest amount of cap space per open roster slot in the league. (The Islanders are first with $14.5 million per open roster slot. The Caps have just over $7 million per open roster slot.) They have only a handful of unrestricted free agents to re-sign – with arguably only one “must sign” (Mikhail Grabovski) in the bunch. Of course, this could also be a bug rather than a feature – having that many players under contract may make it harder for the new GM to shape the team in his image.

Whoever the Capitals hire as GM and coach, I’m not expecting a promising start. It feels like next season is already a wash, and we’re not even finished with the first round of this year’s playoffs yet.

I guess it could be worse.

I could be a Toronto Leafs fan.

Brokered WinRT Components Step Three

So far, we’ve created two projects, written all of about two lines of code, and we have both our brokered component and its proxy/stub ready to go. Now it’s time to build the Windows Runtime app that uses the component. Things have been pretty easy so far – the only really tricky and/or manual step has been registering the proxy/stub, and that’s only tricky if you don’t want to run VS as admin. Unfortunately, tying this all together in the app requires a few more manual steps.

But before we get to the manual steps, let’s create the WinRT client app. Again, we’re going to create a new project, but this time we’re going to select “Blank App (Windows)” from the Visual C# -> Store Apps -> Windows App node of the Add New Project dialog. Note, I’m not using “Blank App (Universal)” or “Blank App (Windows Phone)” because the brokered WinRT component feature is not supported on Windows Phone. Call the client app project whatever you like; I’m calling mine “HelloWorldBRT.Client”.

Before we start writing code, we need to reference the brokered component. We can’t reference the brokered component directly or it will load in the sandboxed app process. Instead, the app needs to reference a reference assembly version of the .winmd that gets generated automatically by the proxy/stub project. Remember in the last step when I said Kieran Mockford is an MSBuild wizard? The proxy/stub template project includes a custom target that automatically publishes the reference assembly winmd file used by the client app. When he showed me that, I was stunned – as I said, the man is a wizard. This means all you need to do is right-click the References node of the WinRT client app project and select Add Reference. In the Reference Manager dialog, add a reference to the proxy/stub project you created in step two.

Now I can add the following code to the top of my App.OnLaunched function. Since this is a simple Hello World walkthru, I’m not going to bother to build any UI; I’m just going to inspect variables in the debugger. Believe me, the less UI I write, the better for everyone involved. Note, I’ve also added the P/Invoke signatures for GetCurrentProcessId and GetCurrentThreadId to the client app, like I did in the brokered component in step one.
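Step one isn’t reproduced in this post, but those P/Invoke signatures are just the standard declarations for the two kernel32 APIs, something along these lines:

// Requires: using System.Runtime.InteropServices;
[DllImport("kernel32.dll")]
static extern uint GetCurrentProcessId();

[DllImport("kernel32.dll")]
static extern uint GetCurrentThreadId();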

var pid = GetCurrentProcessId();
var tid = GetCurrentThreadId();

var c = new HelloWorldBRT.Class();
var bpid = c.CurrentProcessId;
var btid = c.CurrentThreadId;

At this point the app will compile, but if I run it, the app will throw a TypeLoadException when it tries to create an instance of HelloWorldBRT.Class. The type can’t be loaded because we’re using the reference assembly .winmd published by the proxy/stub project – it has no implementation details, so it can’t load. In order to be able to load the type, we need to declare HelloWorldBRT.Class as a brokered component in the app’s package.appxmanifest file. For non-brokered components, Visual Studio does this for you automatically. Unfortunately, for brokered components we have to do it manually. Every activatable class (i.e. a class you can construct via “new”) needs to be registered in the appx manifest this way.

To register HelloWorldBRT.Class, right-click the Package.appxmanifest file in the client project, select “Open With” from the context menu and then select “XML (Text) editor” from the Open With dialog. Then you need to insert an inProcessServer extension that includes an ActivatableClass element for each class you can activate (aka has a public constructor). Each ActivatableClass element contains an ActivatableClassAttribute element that points to the folder where the brokered component is installed. Here’s what I added to the Package.appxmanifest of my HelloWorldBRT.Client app.

<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>clrhost.dll</Path>
      <ActivatableClass ActivatableClassId="HelloWorldBRT.Class"
                        ThreadingModel="both">
        <ActivatableClassAttribute
             Name="DesktopApplicationPath"
             Type="string"
             Value="D:\dev\HelloWorldBRT\Debug\HelloWorldBRT.PS"/>
      </ActivatableClass>
    </InProcessServer>
  </Extension>
</Extensions>

The key thing here is the addition of the DesktopApplicationPath ActivatableClassAttribute. This tells the WinRT activation logic that HelloWorldBRT.Class is a brokered component and where the managed .winmd file with the implementation details is located on the device. Note, you can use multiple brokered components in your side-loaded app, but they all share the same DesktopApplicationPath.

Speaking of DesktopApplicationPath, the path I’m using here is the path to the final location of the proxy/stub components generated by the compiler. Frankly, this isn’t a good choice for a production deployment, but for the purposes of this walk thru, it’ll be fine.

[Screenshot: client app watch window showing the app and broker process/thread IDs]

Now when we run the app, we can load a HelloWorldBRT.Class instance and access its properties. We’re definitely seeing different process IDs when comparing the result of calling GetCurrentProcessId directly in App.OnLaunched vs. the result of calling GetCurrentProcessId in the brokered component. Of course, each run of the app will have different ID values, but this proves that we are loading our brokered component into a different process from the one where our app code is running.

Now you’re ready to go build your own brokered components! Here’s hoping you’ll find more interesting uses for them than comparing the process IDs of the app and broker processes in the debugger! 😄