Tonight’s Caps Game Live on the Web

This just came across my news reader:

The Washington Capitals will face the Philadelphia Flyers in a preseason exhibition tonight. Bummed that the game is not being televised? Never fear: WashingtonCaps.com is broadcasting the game via broadband from their website. However, it is restricted to the Washington area only.
[Caps Live Via Web | Puckhead’s Thoughts]

Wow! I mean, I still can’t watch the game <grumble grumble> and it is still pre-season, but this is pretty significant. More details in the press release:

The Washington Capitals exhibition game against the Philadelphia Flyers on Tuesday, Sept. 26 at 7 p.m. will be broadcast live on WashingtonCaps.com through the use of B2 Networks. This will mark the first time in NHL history that a game can be viewed exclusively via broadband.

The game can be seen free of charge on the Capitals’ website, WashingtonCaps.com. Fans will need a high-speed internet connection and Windows Media Player 9 or higher. Capitals’ radio network play-by-play announcer Steve Kolbe will call the game with Mike Vogel, senior writer from WashingtonCaps.com, providing analysis. Due to NHL broadcast restrictions, the game can only be viewed by fans living within the Capitals broadcast area.

B2 Networks is a provider of secure international television and video broadcasting systems, pay-per-view and billing systems. During the past 12 months, B2 has broadcast more than 3,000 hockey games from all levels including the championships from four leagues. B2 is the digital distribution rightsholder for the United Hockey League, American Hockey League, ECHL and USHL, along with baseball’s Northern League. B2 recently broadcast the National Lacrosse League championship to fans on four continents.
[Capitals Preseason Game to be Broadcast on WashingtonCaps.com via B2Networks]

The Caps and their owner are aggressively pursuing avenues outside of the mainstream media (which for the most part ignores hockey). First they came up with Guidelines For Issuing Press Credentials To Bloggers, and now this.

Of all the major sports, hockey seems to have the most to gain from both the HD revolution and media decentralization. Hockey is so fast and the puck is so small that you spend most of your time tracking the puck when watching in standard definition. In HD, you don’t have to watch the puck, you can watch the play. This isn’t to say that other sports aren’t gorgeous in HD, but the difference in the experience between SD and HD just isn’t as significant for other sports. As for media decentralization, the reason hockey has the most to gain is because it has the least coverage in the mainstream media today. So there’s nowhere to go but up.

However, the NHL broadcast restrictions stuff has got to go. Come on Ted, you’re a “pioneer of the Internet and new media”. Get those guys at the NHL to wake up and embrace the new media! How come EVERY game isn’t available this way?

Revisiting the AJAX Ecosystem

Seven months and one job ago, I wrote this about AJAX toolkits:

The network effect that Dion doesn’t consider is the component ecosystem phenomenon that Microsoft has a ton of experience with. Old school VB, COM/ActiveX and .NET have all had large ecosystems of components and controls evolve that extend the functionality of the baseline development platform. There’s no reason to believe that won’t happen with Atlas. I think it’s wrong to describe Atlas as a monolith or self-contained or enclosing. It’s an extensible baseline platform – i.e. the baseline functionality is set down once at the development platform and the ecosystem can extend it from there. Sure, overlapping extensions happen (how many rich text editor components are there for ASP.NET?) but at least they all have basic compatibility.

I bring this up now because I saw on Shawn Burke’s blog that they’ve shipped the September release of the Atlas Control Toolkit. There are now 25 different controls (they had 10 in their first release). But there’s something more significant than the 15 new controls themselves:

Slider is just a super-useful little control.  There are so many times when you want to let users use this type of UI.  Another great thing about Slider is that it’s a 3rd party contribution, from Garbin, who did a great job on it. (emphasis added)
[Atlas Control Toolkit September Release]

I just wanted to brag that I called this 7 months ago.

Stateless != Stateless

A while back, I blogged that Services Aren’t Stateless, in response to some stuff in Thomas Erl’s latest book. At the time, I mentioned that I was looking forward to discussing my opinions with Erl when I attended his workshop. I’ve spent the last two days at said workshop. I’ll have a full write-up on the workshop later this week, but I wanted to blog the resolution to this stateless issue right away.

At the time, I wrote “I assume Erl means that service should be stateless in the same way HTTP is stateless.” Turns out, my assumption was way wrong. When he addressed this topic in his workshop, he started by talking about dealing with concurrency and scalability, which got me confused at first. Apparently, when Erl says stateless, he’s referring to minimizing memory usage. That is, don’t keep service state in memory longer than you need to. So all the stuff about activity data, that’s all fine as per Erl’s principles, as long as you write it out to a database instead of keeping it in memory. In his book, he talks about the service being “temporarily stateful” while processing a message. When I read that, I didn’t get it – because I was thinking of the HTTP definition of stateless & stateful. But if we’re just talking about raw memory usage, it suddenly makes sense.
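Erl’s memory-footprint sense of “stateless” is easy to sketch in code. In the hypothetical handler below (all names are mine, and a plain dict stands in for the database), activity state is loaded from durable storage when a message arrives and written back out before returning, so the service is only “temporarily stateful” while processing – nothing lives in service memory between messages:

```python
# Sketch of Erl's "stateless" service: state is persisted between
# messages rather than held in memory. The dict simulates a database
# table keyed by activity id; names are hypothetical.

state_store = {}  # stand-in for a durable store (e.g. a database table)

def handle_message(activity_id, payload):
    # Load any prior state for this long-running activity from the store.
    state = state_store.get(activity_id, {"messages_seen": 0})

    # The service is only "temporarily stateful" while processing:
    state["messages_seen"] += 1
    state["last_payload"] = payload

    # Write the state back out before returning, so nothing stays in memory.
    state_store[activity_id] = state
    return state["messages_seen"]

print(handle_message("order-42", "created"))   # 1
print(handle_message("order-42", "shipped"))  # 2
```

In the HTTP sense this service is stateful (the second message sees the effects of the first); in Erl’s sense it is stateless, because the state lives in the store, not in the service instance.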

On the one hand, I’m happy to agree 100% with Erl on another of his principles. Frankly, much of what he talked about in his workshop seems predicated on unrealistic organizational behavior and offers at best unproven promises of cost and time savings in the long run via black box reuse. So to be in complete agreement with him on something was a nice change of pace. Thomas is much more interested in long-running and async services than I originally expected when I first flipped through his book.

On the other hand, calling this out as a “principle of service orientation” hardly seems warranted. I mean, large-scale websites have been doing this for a long time, and SQL Session State support has been part of ASP.NET since v1. Furthermore, using the term “stateless” in this way is fundamentally different from the way HTTP and the industry at large use it, which was the source of my confusion. So while I agree with the concept, I really wish Erl hadn’t chosen an overloaded term to refer to it.

Feasible Service Reuse

Yesterday, I posted about services and reuse. More to the point, I posted why I don’t believe that business services will be reusable, any more than business objects were reusable. However, “can’t reuse business services” isn’t the whole story, because I believe in different kinds of reuse.

The kind of reuse I was writing about yesterday is typically referred to as “black box reuse”. The idea being that I can reuse the item (object or service) with little or no understanding of how it works. Thomas Beck wrote about colored boxes on his blog yesterday. Context impacts reuse – the environments in which you plan to reuse an item must be compatible with what the item expects. However, those contextual requirements aren’t written down anywhere – at least, they’re not encoded anywhere in the item’s interface. Those contextual requirements are buried in the code – the code you’re not supposed to look at because we’re striving for black box reuse. Opaque Requirements == No Possibility of Reuse.

As I wrote yesterday, David Chappell tears this type of reuse apart fairly adeptly. Money quote: “Creating services that can be reused requires predicting the future”. But black box reuse isn’t the only kind of reuse. It’s attractive, since it’s simple. At least it would be, if it actually worked. So what kind of reuse doesn’t require predicting the future?

Refactoring.

I realize most people probably don’t consider refactoring to be reuse. But let’s take a look at the official definition from refactoring.com:

Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. Its heart is a series of small behavior preserving transformations. Each transformation (called a ‘refactoring’) does little, but a sequence of transformations can produce a significant restructuring. Since each refactoring is small, it’s less likely to go wrong. The system is also kept fully working after each small refactoring, reducing the chances that a system can get seriously broken during the restructuring.

Two things about this definition imply reuse. First, refactoring is “restructuring an existing body of code”. It’s not rewriting that existing body of code. You may be making changes to the code – this certainly isn’t black box reuse – but you’re not scrapping the code completely and starting over. Second, refactoring is making changes to the code “without changing its external behavior”. You care about the code’s external behavior because somewhere, some other code is calling the code you’re refactoring. Some other existing piece of code that you don’t want to change – i.e. that you want to reuse.

When you refactor, you still reuse a significant amount of the code, but you’re not having to predict the future to do it. Refactoring **is** the kind of reuse I believe in.

In his article, David talks about types of reuse such as business agility, adaptability and easily changeable orchestration. These look a lot more like refactoring than black box reuse to me. Unfortunately, David waves these away, saying “Still, isn’t this just another form of reuse?” Reconfiguration hardly qualifies as the “predict the future” style of reuse that he spends the rest of the article arguing against. It’s just one paragraph in an otherwise splendid article, so I’ll give him a pass this time. (I’m sure he’s relieved.)

Hard on Hardware

I guess being “Harry’s Computer” is a rough gig. Right before I went on vacation a few weeks ago, the power connector on my laptop started acting up. I’d plug it in, but it wouldn’t charge. Typically, replugging it would solve the issue. When I got back from vacation, the help desk tech took one look at it and realized I needed a new power connector. OK, how long will that take? Supposedly a day or two, but in the end it took a week and a half. It arrived Monday afternoon, right after I left for a two day SOA workshop (more on that later). To make matters worse, the power connector has now completely broken off, so I’m having to lug my docking station around if I want to charge my laptop.

Then, on top of that, my power cable stopped working. Luckily, my buddy Dale is up here in Vancouver with me at this workshop, so I’ve been able to borrow his. But seriously, a broken power cable? How does that happen? I mean, it’s not cut or anything, but if you shake the transformer box, you can hear something broken rattling around inside. That’s not good.

So I have a busted power cable that I can’t connect to my laptop anyway because of a broken power connector. Frankly, I’m a little worried about what will go wrong with this machine next. But since it only seems to happen when I’m on the road, and I’m not scheduled to go on the road again anytime soon, I guess I’ll survive.