Durable Services with Fake Persistence

I’ve been investigating the new WCF/WF integration in .NET Framework 3.5. I want to understand how the new context features work. Unfortunately, there’s not much info out there (that I could find at any rate). You’re pretty much stuck with the samples and Jesus Rodriguez’s overview of durable services. So I sat down to dig a little deeper.

Note, since there are no docs on this stuff as I write this, many of the links below are Reflector code links.

I started by lifting the DurableCalculator sample contract and service implementation and dumping it into a new WCF Service Library project. I did this for two reasons. First, VS08 has added a WCF Service Host much like VS05 added the ASP.NET Development Server. Very cool. But the existing sample is still written to be hosted in IIS, so I wanted to change that. Second, and much more important, I wanted to start with a vanilla config file. I knew it wouldn’t work out of the box, but the point of this exercise was to learn how this works under the covers.

When you fire up the durable service with the vanilla config file, you get an error (as expected). Services marked with the DurableServiceAttribute require a binding that supports the context protocol. wsHttpBinding, the default binding when you create a new service, doesn’t. However, it’s easy to fix by switching to wsHttpContextBinding instead. Via Reflector, we see that wsHttpContextBinding inherits from wsHttpBinding and adds a ContextBindingElement to the binding element collection created by the base class. basicHttpContextBinding and netTcpContextBinding work the same way.
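For reference, here’s roughly what the endpoint config looks like after the switch. The service and contract names below are placeholders for whatever your project uses, not the sample’s actual names:

```xml
<system.serviceModel>
  <services>
    <service name="DurableCalculatorService">
      <!-- wsHttpContextBinding layers the context protocol on top of wsHttpBinding -->
      <endpoint address=""
                binding="wsHttpContextBinding"
                contract="IDurableCalculator" />
    </service>
  </services>
</system.serviceModel>
```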

Even after changing to wsHttpContextBinding, we’re still getting an error on service start. But it’s a new error, so we’re making progress. Now, we’re told that services marked with DurableServiceAttribute need a persistence provider to be specified. If we look in the original sample’s web.config file, we find a persistenceProvider element in the service behavior. This element references the SqlPersistenceProviderFactory type. Obviously, the point here is to persist durable service instances to the database between calls, much as WF can do.
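The relevant bit of the sample’s config looks something like the snippet below. I’m quoting from memory, so the behavior name and connection string name are placeholders; the important part is the persistenceProvider element pointing at SqlPersistenceProviderFactory:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="DurableServiceBehavior">
      <!-- persist durable service instances to SQL between calls -->
      <persistenceProvider
          type="System.ServiceModel.Persistence.SqlPersistenceProviderFactory, System.WorkflowServices, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
          connectionStringName="DurableServiceStore" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```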

However, merely configuring the existing SQL persistence provider doesn’t really tell you what’s going on under the hood. Besides, when you’re experimenting you often don’t want the headache of setting up a SQL store just to persist instances. Somewhere along the line, I implemented a fake persistence service for WF that stored the serialized instances in memory, so I decided to do the same for WCF durable services.

Building a WCF Persistence Provider requires building two classes: a factory and the provider itself. Factories inherit from PersistenceProviderFactory, which exposes only one non-CommunicationObject method: CreateProvider. It appears that the service host creates a single persistence provider factory and calls CreateProvider whenever it needs a persistence provider. Providers themselves inherit from PersistenceProvider, which exposes methods to Load, Save and Delete durable service instances.

My FakePersistenceProvider (and factory) are brain-dead simple, though the ratio of “real” code to factory and CommunicationObject scaffolding is quite low (about 40 lines out of 258). The factory keeps a dictionary of serialized service instances, keyed by GUID. When providers are created, this key GUID is passed as a parameter to CreateProvider. The provider instances delegate Load, Save and Delete back to internal methods on the factory class. Those methods use the NetDataContractSerializer to serialize the service instance out to a byte array and deserialize it back again. I chose NetDataContractSerializer because that’s what the SQL persistence provider uses under the hood.
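To give a flavor of the “real” code, here’s a minimal sketch of the idea behind the factory’s in-memory store. The class and method names are mine rather than the actual sample code, and all of the PersistenceProviderFactory / PersistenceProvider / CommunicationObject scaffolding is left out:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;

// Hypothetical sketch: serialized service instances kept in memory, keyed by
// instance GUID. The real factory/provider classes wrap something like this.
class InMemoryInstanceStore
{
    readonly Dictionary<Guid, byte[]> instances = new Dictionary<Guid, byte[]>();
    readonly NetDataContractSerializer serializer = new NetDataContractSerializer();

    public void Save(Guid id, object instance)
    {
        // serialize the durable service instance out to a byte array
        using (MemoryStream stream = new MemoryStream())
        {
            serializer.WriteObject(stream, instance);
            instances[id] = stream.ToArray();
        }
    }

    public object Load(Guid id)
    {
        // deserialize it back again on the next call
        using (MemoryStream stream = new MemoryStream(instances[id]))
        {
            return serializer.ReadObject(stream);
        }
    }

    public void Delete(Guid id)
    {
        instances.Remove(id);
    }
}
```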

PersistenceProvider supports async versions of Load, Save and Delete, but I didn’t implement them. Also, there’s a LockingPersistenceProvider abstract class that adds (you guessed it) instance-locking semantics. However, my fake provider doesn’t span machines, much less require locking semantics, so I skipped it.

So it looks like DurableServiceAttribute, context-supporting bindings and persistence providers are all interrelated. Certainly, you can’t use the attribute without the binding and persistence provider. As I continue to dig, I want to see how context interrelates with WF, as well as its possible usage outside of DurableServiceAttribute-based scenarios.

If you’re interested in the code, I’ve stuck it up on my SkyDrive. In addition to the FakePersistenceProvider implementation and the simple Durable Calculator service, it includes a simple client to test the service and persistence provider. The WCF Service Host includes a test client, but it doesn’t appear to support the context protocol, so I had to build a simple test app instead. Enjoy.

Morning Coffee 115

  • Scott Guthrie has two new posts in his series on LINQ to SQL. The first covers updating the database using stored procs instead of dynamic SQL. I was somewhat surprised that there wasn’t the capability to auto-generate vanilla Insert, Update and Delete procs, but I guess DBAs probably hate that anyway. The second shows how to use ExecuteQuery to execute arbitrary SQL instead of using the cool LINQ query syntax. I’m doing a bunch of loosely-typed SQL work right now, so I’m going to take a deeper look at this.
  • Speaking of LINQ, I just discovered this great series on IQueryable by Bart De Smet. It’s four months old, but it takes an incredibly detailed look at what happens under the hood with LINQ. Bart also has a reference implementation of LINQ’s standard query operators as well as LINQ to SharePoint.
  • Dan Maharry has pulled together what looks like the definitive guide for really slimming down and speeding up your VPC. It’s XP specific, but I’d bet most of the guidance would also apply to WS03, which is what I mostly use in my VPCs. (via Larkware)
  • Jimmy Nilsson thinks it’s the operations department that holds the power in today’s IT world. I agree 100%. That’s why I value Dale’s input so much.
  • Nick Malik wonders if it’s time to translate the Federal Enterprise Architecture for use in the commercial sector. My dad just retired from 5 years in the FAA and he thinks FEA is too high level to be particularly useful.
  • The 2007 edition of Scott Hanselman’s ultimate tool list is now available.
  • A bunch of XNA Gamefest sessions are now available for on-demand viewing.

Morning Coffee 114 – MoMAAB Edition

  • We spent all day yesterday discussing four topics: SaaS, Tools for Scrum, Web 2.0 and Domain Specific Languages. Even though it was just a day, my brain is full. These were deep and challenging discussions. I need to let the discussions stew a bit before posting anything about them here. But I will.
  • Next time we do one of these, I’m bringing a video camera. I took notes, but looking over them the next morning they seem woefully incomplete. OneNote’s integrated audio/video recording capabilities would nicely augment my notes.
  • We ran this meeting using Open Space, and it worked very well. Of course, we only had 8 people, so we didn’t need a lot of process to self-organize. However, it did whet my appetite for having a larger Open Space-style unconference for architects. Is that something other folks might be interested in?
  • Major thanks to the folks at Clarity Consulting who graciously gave us space to meet and fed us yesterday. Their CTO Jon Rauschenberger sat in on most of our meeting, and drove our Web 2.0 discussion. I said I wanted to stew a bit on the discussions, but Jon’s slides are available online if you’re interested.
  • Scott Colestock showed me Diigo, a social annotation tool. Where del.icio.us lets you tag and annotate individual pages, Diigo lets you annotate and highlight specific parts of the page. They also have blogging tools, where these annotations and highlights become blog posts, but they don’t support dasBlog. However, since FeedBurner doesn’t support Diigo for link splicing, I’m afraid my use of it will be limited.
  • Jim Wilt introduced me to Virtual PC’s command line. He recommends using “-pc <vpc name> -launch -singlepc”, which launches a single virtual environment without the VPC console. I rarely run more than one VPC at a time and I hate stuff cluttering up my taskbar and notification area, so I like this a lot.
  • Loren Goodman demonstrated the SharePoint Explorer Client. SharePoint & MOSS came up several times in all of our topics, so this is going to get a second look. I always thought it was strange that MSFT ships a smart client for editing WSS & MOSS, but not viewing it. SP Explorer looks like it fills that gap nicely.
  • Shannon Braun sent us all a link to the 50/70 rule, which seems like a good rule of thumb. Of course, assuming that things won’t progress linearly is almost always a good rule of thumb. But the 50/70 rule has reasoning behind the assumption.
  • Chicago is nice, but the weather has been a little freaky. It’s either been hot & humid, thunderstorm downpours, or tornadoes. Keith Powell showed me FlightAware, which shows you flight departure and arrival history. My flight hasn’t left within an hour of scheduled departure in a week. I’m going to try and grab an earlier flight, but I have a feeling it’s going to be a long trip home.

The One Business Case for Integration

Nick Malik lays out what he thinks are the four business cases for integration:

Assume we succeeded, and our systems are all optimally integrated.  What has changed?

  1. We have better business intelligence.  We have better understanding of our customers, our partners, our products, and our business.  And from that understanding, we make better decisions.  Those decisions are made in a federated manner using self-apparent information.
  2. We have end-to-end business processes that cross multiple systems, multiple roles, multiple geographies, and multiple data stores, all aware of and supporting the needs of the customer.
  3. We have end-to-end awareness of the metrics that drive both dissatisfaction and cost, and we can take that knowledge and apply it to making our business better.
  4. We have a more efficient enterprise, more able to grow to a larger size, at an accelerated rate, and still respond with agility to changing business opportunities.

I put to you that, in fact, we only have one business case for integration: better business intelligence. The other reasons Nick lists are either redundant or not as important to the business – at least in the general case – as you might think.

First off, #3 from Nick’s list sounds suspiciously like #1. If there’s a difference between “better understanding driving better decisions” and “applying awareness of metrics to making our business better”, I don’t know what it is. We’ll send one of them off to the Dept. of Redundancy Dept. and be done with it.

Second, I don’t think the business cares that IT has multiple systems or multiple data stores. If the business could run on one big centralized system that could meet the needs of the customer (aka the ERP fantasy), they’d be fine with that. The fact that the realities of modern enterprise IT require splitting up capabilities across many systems is an implementation detail that frankly isn’t a concern of the business.

Besides, what’s the business benefit here? News flash: the enterprise already has end-to-end business processes that cross multiple systems, multiple roles, blah blah blah. They’re just not automated end-to-end. Does the business care that they’re not automated? Not a bit. Sure, they care that processes are slow, costly and error-prone, which manual processes tend to be. But it’s those negative characteristics that the business cares about, not integration. Besides, making processes quick, cheap and error-free sounds a lot like making them efficient. In other words, more work for the Dept. of Redundancy Dept.

Finally, I don’t think efficiency and agility are as important to the enterprise as Nick makes them out to be. I mean, the enterprise will say it cares about efficiency – especially in front of the stockholders. But when it comes to putting its money where its mouth is, the enterprise doesn’t, more often than not. Think about how success is measured in the typical IT project. Is efficiency one of the criteria for judging success? Not really. Will your project stakeholders let you run over budget and ship a few months late, just to improve efficiency? Probably not, unless that efficiency gain is both demonstrable and dramatic.

Of course, there are certainly specific cases where an automation or efficiency business case for integration can be made. For example, if replacing a specific manual process with an automated one has a large and measurable ROI, the business will likely be interested in making that investment. If you have a certain process that you do over and over that’s core to the business, the business will probably be interested in optimizing the frak out of it. I would guess a delivery company like UPS or FedEx has spent a lot of time and money optimizing their delivery processes.

But what it sounds like Nick’s talking about here is making a general case for making all our systems “optimally integrated”. Given that our current systems aren’t, this would take significant time and money. Yet the tangible benefit to the business is at best nebulous. Nick thinks improved integration will allow the business to “respond with agility to changing business opportunities.” He’s probably right. But how do you quantify this agility? How much will we save in the future for what we’re spending today? For the general case, the answer is “it depends”. It’s really hard to fund a project when its projected ROI is “it depends”.

However, business intelligence is a no-brainer for the enterprise to invest in. Giving decision makers better and more up-to-date information is a tangible benefit that the organization can quantify now. If you can quantify the value of a project, you’ve got the start of a budget. Of course, all that juicy data is smeared across a variety of systems, which means integration. Again, the enterprise doesn’t really care about said multiple systems or integration, but they care about the outcome.

Nick recommends to SOA folks that “if you aren’t already working with your BI team, pick up the phone. Their mature processes and practices are able to address many of your issues, and the natural synergy between BI and SOA can make them a strong ally in the fight for a better, faster, cheaper, and more intelligent enterprise.” Good advice. Otherwise, selling integration to the business isn’t much different than selling them SOA. In other words, don’t sell it – just do it.

Morning Coffee 113

  • I’m in Chicago today and tomorrow for a reunion of sorts. In my last job, I managed a group of external architects called the Microsoft Architecture Advisory Board (aka the MAAB). We discontinued the program a while back, but the core of the group found the program valuable enough they have continued to meet anyway. I found the MAAB meetings incredibly valuable and insightful, so I’m really excited to be invited to continue my involvement with the group.
  • I picked up Bioshock Tuesday (Circuit City had it on sale) on my way to my bi-weekly campus excursion. My meetings were over around 2pm so I headed home early, expecting to surprise the kids. But Jules had decided to skip naps and go shopping with them. Her cell phone was dead, so I ended up at home with a couple of hours to myself and a brand new copy of Bioshock. Wow, is that a good game. Certainly deserving of the amazingly good reviews it’s garnered.
  • Speaking of reviews, this transparently biased review of Bioshock over at Sony Defense Farce Force is frakking hilarious. Somehow, I doubt their dubious review will stem the tidal wave of Bioshock’s well-deserved hype. Can’t wait to read their Halo 3 review.
  • Pat Helland writes at length on master-master replication. I reformatted it into PDF so I could read it – the large images were messing up the text flow on my system. As usual for Pat, there’s gold in that thar post. His thoughts on DAGs of versions and vector clocks as identifiers are very exciting. However, I think he glosses over the importance of declarative merging. I would think programmatic merge would likely be non-deterministic across nodes. If so, wouldn’t you end up with two documents with the same vector-clock identifier but different data?
  • Joe McKendrick points to a few people who predict the term “service-oriented” will eventually be subsumed under the general heading of “architecture”. Not to brag, but I made that exact same prediction almost three years ago.
  • Erik Johnson thinks that SOA 2.0 centers on transformational patterns. The idea (I think) is that if systems “understand each other more deeply”, then we can build a “smarter stack” and design apps via new constructs to promote agility and simplicity. Personally, I’m skeptical that we can unambiguously define system semantics except in the simplest scenarios, but Erik talks about using “graph transformation mathematics” to encode semantics. I don’t know anything about graph transformation mathematics, but at least Erik has progressed beyond hand waving to describing the “what”. Here’s looking forward to the “how”.
  • New dad Clemens Vasters somehow finds time to implement an XML-RPC binding for WCF 3.5. I was encouraged that it didn’t require any custom attributes or extensions at the programmer level. Of course, XML-RPC fits semantically into WCF’s interface based service model, so it shouldn’t be a huge surprise that it didn’t require any custom extensions. But did it need WCF 3.5? Would this work if recompiled against the 3.0 assemblies?
  • Phil Haack writes a long post on Duck Typing. VB9 originally supported duck typing – the feature was called Dynamic Interfaces – when it was first announced, but it was subsequently cut. I was really looking forward to that feature. Between it and XML Literals, VB9 was really stepping out of C#’s shadow. I guess it still is, even without dynamic interfaces.
  • Since I’ve been doing some LINQ to XML work lately, I decided to go back and re-write my code in VB9 using XML literals. While XML literals are nice, I don’t think they’re a must-have. First, LINQ to XML has a nice fluent interface, so the literals don’t give you that much cleaner code (though you do avoid writing XElement and XAttribute over and over – see the sketch below). Second, I find VB9’s template syntax (like ASP’s <%= expression %>) clunky to work with, especially in nested templates. Finally, I like the namespace support of XNames better. As far as I can tell, VB9 defines namespaces with xmlns attributes just like XML does. So I’m not dying for literal XML support in a future version of C#. How about you?
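Here’s the kind of C# fluent-style code I mean. It’s a trivial made-up example, but it shows how XNamespace handles the namespace plumbing that VB9 pushes into xmlns attributes:

```csharp
using System;
using System.Xml.Linq;

class XmlFluentSketch
{
    static void Main()
    {
        // XNamespace + XName carry the namespace; no xmlns attribute juggling
        XNamespace ns = "urn:example:orders";

        XElement order =
            new XElement(ns + "order",
                new XAttribute("id", 42),
                new XElement(ns + "customer", "Contoso"),
                new XElement(ns + "total", 19.99m));

        Console.WriteLine(order);
    }
}
```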