The Integration Business Case, continued

Nick responds to my visceral thoughts on the integration business case. There’s no point in excerpting it – go read the whole thing. I’ll wait.

It looks like for case #2, he added the ability to “change readily and inexpensively”, which is to say he made it overlap even further with #4 than it used to. He also changed #3 to make it clear that he was collecting metrics to give us “awareness of process efficiency”. That makes #3 overlap with #4 on efficiency instead of #1 on BI, but either way it’s still redundant.

So we’re still left with the business cases of Business Intelligence, Efficiency and Agility. Nick conflates Efficiency and Agility both in his original post and his follow-up, but I think it makes sense to separate them. I still stand by my original point that the business is only interested in directly funding Business Intelligence.

Nick is willing to bet a nice lunch that MSFT has invested more in improving operational efficiency than we have in BI over the past four years. He's probably right, but he missed the point I was making. The business will readily invest in improving a specific process when it can measure the ROI of the improvement. MSFT has lots of processes, and I'm sure most of them have significant room for improvement.

But Nick's list isn't about specific improvements. He explicitly wrote that he's describing a scenario where "our systems are all optimally integrated". Selling the business on generally improving efficiency is very different from selling the business on improving the efficiency of a specific process. I'd bet the same nice lunch that the vast majority – if not all – of the integration infrastructure running at MSFT was originally deployed as part of a specific business scenario that needed to be solved.

My point here is that most businesses are better at funding projects to meet specific business needs than they are at funding pure infrastructure projects.

As for agility: Martin Fowler once pointed out that adding flexibility means adding complexity. But chances are, you'll be wrong about the flexibility you think you'll need. So you end up with the additional complexity but none of the flexibility benefit. Martin recommends that "since most of the time we get it wrong, just don't put the flexibility in there". Instead, you should strive for simplicity, since simpler systems are easier to understand and thus easier to change.

Does the same philosophy apply to process? I think so, though there is one thing I’d be willing to risk being wrong on. We all expect the steps in a process to change over time, so moving to a declarative model for process definition sounds like a good idea. Luckily, there’s existing platform infrastructure that helps you out here. But beyond that, I can’t think of a flexibility requirement that I’m so sure of that I’m willing to take on the additional complexity.

Again, I'm not saying efficiency or agility (or integration, for that matter) are bad things. I'm saying they're a tough sell to the business in the absence of specific scenarios. Selling the business on automating the order process is feasible. Selling the business on building out integration infrastructure because some future project will leverage it is much tougher. If you can sell them on it, either because the company is particularly forward thinking or because you can sell ice to Eskimos, then more power to you. But for the rest of us, it's better to focus on specific scenarios that the business will value and keep the integration details under wraps.

Durable Services with Fake Persistence

I’ve been investigating the new WCF/WF integration in .NET Framework 3.5. I want to understand how the new context features work. Unfortunately, there’s not much info out there (that I could find at any rate). You’re pretty much stuck with the samples and Jesus Rodriguez’s overview of durable services. So I sat down to dig a little deeper.

Note: since there are no docs on this stuff as I write this, many of the links below are Reflector code links.

I started by lifting the DurableCalculator sample contract and service implementation and dumping it into a new WCF Service Library project. I did this for two reasons. First, VS08 has added a WCF Service Host much like VS05 added the ASP.NET Development Server. Very cool. But the existing sample is still written to be hosted in IIS, so I wanted to change that. Second, and much more important, I wanted to start with a vanilla config file. I knew it wouldn’t work out of the box, but the point of this exercise was to learn how this works under the covers.
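
For reference, here's roughly the shape of the service I ended up with. This is an illustrative sketch, not a verbatim copy of the sample – the contract and operation names are mine, and I'm recalling the attribute names and namespaces from memory – but it shows the pieces that matter: the class is serializable so its state can be persisted, it's marked with DurableServiceAttribute, and DurableOperationAttribute marks which operations can create and complete a durable instance.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        int Add(int value);

        [OperationContract]
        int Multiply(int value);

        [OperationContract]
        void EndSession();
    }

    // [Serializable] lets the persistence provider serialize the instance state between calls
    [Serializable]
    [DurableService]
    public class DurableCalculatorService : ICalculator
    {
        int currentValue;   // survives across calls (and host restarts) via persistence

        // a call arriving without an existing context may create a new durable instance
        [DurableOperation(CanCreateInstance = true)]
        public int Add(int value) { return currentValue += value; }

        public int Multiply(int value) { return currentValue *= value; }

        // completing the instance removes its persisted state
        [DurableOperation(CompletesInstance = true)]
        public void EndSession() { }
    }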

When you fire up the durable service with the vanilla config file, you get an error (as expected): services marked with DurableServiceAttribute require a binding that supports the context protocol, and wsHttpBinding, the default binding when you create a new service, doesn't. However, it's easy to fix by switching to wsHttpContextBinding instead. Via Reflector, we see that WSHttpContextBinding inherits from WSHttpBinding and adds a ContextBindingElement to the binding element collection created by the base class. BasicHttpContextBinding and NetTcpContextBinding work the same way.
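
Concretely, that's a one-attribute change on the endpoint in the service library's app.config. The service and contract names below are just placeholders for whatever your project uses:

    <system.serviceModel>
      <services>
        <service name="DurableCalculatorService">
          <!-- was binding="wsHttpBinding"; the context binding layers the context
               protocol (which carries the instance id) on top of wsHttpBinding -->
          <endpoint address="" binding="wsHttpContextBinding" contract="ICalculator" />
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        </service>
      </services>
    </system.serviceModel>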

Even after changing to wsHttpContextBinding, we’re still getting an error on service start. But it’s a new error, so we’re making progress. Now, we’re told that services marked with DurableServiceAttribute need a persistence provider to be specified. If we look in the original sample’s web.config file, we find a persistenceProvider element in the service behavior. This element references the SqlPersistenceProviderFactory type. Obviously, the point here is to persist durable service instances to the database between calls, much as WF can do.
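
For reference, the relevant piece of the sample's web.config looks something like this – I'm quoting the shape from memory, so treat the attribute set and the connection string name as approximate:

    <behaviors>
      <serviceBehaviors>
        <behavior name="DurableServiceBehavior">
          <!-- tells the durable service where to save instance state between calls -->
          <persistenceProvider
            type="System.ServiceModel.Persistence.SqlPersistenceProviderFactory, System.WorkflowServices, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
            connectionStringName="DurableServiceStore" />
        </behavior>
      </serviceBehaviors>
    </behaviors>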

However, merely configuring the existing SQL persistence provider doesn't really tell you what's going on under the hood. Besides, when you're experimenting, you often don't want to go through the headache of setting up a SQL store just to persist instances. Somewhere along the line, I implemented a fake persistence service for WF that stored the serialized instances in memory, so I decided to do the same for WCF durable services.

Building a WCF Persistence Provider requires building two classes: a factory and the provider itself. Factories inherit from PersistenceProviderFactory, which exposes only one non-CommunicationObject method: CreateProvider. It appears that the service host creates a single persistence provider factory and calls CreateProvider whenever it needs a persistence provider. Providers themselves inherit from PersistenceProvider, which exposes methods to Load, Save and Delete durable service instances.

My FakePersistenceProvider (and factory) are brain-dead simple, though the ratio of "real" code to factory and CommunicationObject scaffolding is quite low (about 40 lines out of 258). The factory keeps a dictionary of serialized service instances, keyed by guid. When providers are created, this key guid is passed as a parameter to CreateProvider. The provider instances delegate Load, Save and Delete back to internal methods on the factory class. Those methods use the NetDataContractSerializer to serialize the service instance out to a byte array and deserialize it back again. I chose NetDataContractSerializer because that's what the SQL persistence provider uses under the hood.
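
Here's the core of the idea, stripped of the CommunicationObject plumbing (the OnOpen/OnClose/OnAbort/timeout overrides that account for most of those 258 lines), so it won't compile as pasted. It's a sketch rather than the exact code on my SkyDrive, and I'm reciting the PersistenceProvider base members from memory – note that the base class actually splits "save" into Create and Update, both of which I route to the same factory method:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.Serialization;
    using System.ServiceModel.Persistence;

    public class FakePersistenceProviderFactory : PersistenceProviderFactory
    {
        // serialized service instances, keyed by durable instance id
        readonly Dictionary<Guid, byte[]> instances = new Dictionary<Guid, byte[]>();

        public override PersistenceProvider CreateProvider(Guid id)
        {
            return new FakePersistenceProvider(this, id);
        }

        internal void Save(Guid id, object instance)
        {
            NetDataContractSerializer serializer = new NetDataContractSerializer();
            using (MemoryStream stream = new MemoryStream())
            {
                serializer.WriteObject(stream, instance);
                instances[id] = stream.ToArray();
            }
        }

        internal object Load(Guid id)
        {
            NetDataContractSerializer serializer = new NetDataContractSerializer();
            using (MemoryStream stream = new MemoryStream(instances[id]))
            {
                return serializer.ReadObject(stream);
            }
        }

        internal void Delete(Guid id)
        {
            instances.Remove(id);
        }

        // CommunicationObject overrides (OnOpen, OnClose, OnAbort, default timeouts, ...) omitted
    }

    public class FakePersistenceProvider : PersistenceProvider
    {
        readonly FakePersistenceProviderFactory factory;

        public FakePersistenceProvider(FakePersistenceProviderFactory factory, Guid id)
            : base(id)
        {
            this.factory = factory;
        }

        // "save" in the prose above = Create on first persist, Update thereafter
        public override object Create(object instance, TimeSpan timeout)
        {
            factory.Save(Id, instance);
            return instance;
        }

        public override object Update(object instance, TimeSpan timeout)
        {
            factory.Save(Id, instance);
            return instance;
        }

        public override object Load(TimeSpan timeout)
        {
            return factory.Load(Id);
        }

        public override void Delete(object instance, TimeSpan timeout)
        {
            factory.Delete(Id);
        }

        // async Begin/End variants and CommunicationObject overrides omitted
    }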

PersistenceProvider supports async versions of Load, Save and Delete, but I didn't implement them. There's also a LockingPersistenceProvider abstract class which adds (you guessed it) instance locking semantics. However, my fake provider doesn't span machines, much less require locking semantics, so I skipped it.

So it looks like DurableServiceAttribute, context-supporting bindings and persistence providers are all inter-related. Certainly, you can't use the attribute without the binding and persistence provider. As I continue to dig, I want to see how context inter-relates with WF, as well as its possible usage outside of DurableServiceAttribute-based scenarios.

If you’re interested in the code, I’ve stuck it up on my SkyDrive. In addition to the FakePersistenceProvider implementation and the simple Durable Calculator service, it includes a simple client to test the service and persistence provider. The WCF Service Host includes a test client, but it doesn’t appear to support the context protocol, so I had to build a simple test app instead. Enjoy.

Morning Coffee 115

  • Scott Guthrie has two new posts in his series on LINQ to SQL. The first covers updating the database using stored procs instead of dynamic SQL. I was somewhat surprised that there isn't a capability to auto-generate vanilla Insert, Update and Delete procs, but I guess DBAs probably hate that anyway. The second shows how to use ExecuteQuery to execute arbitrary SQL instead of using the cool LINQ query syntax. I'm doing a bunch of loosely-typed SQL work right now, so I'm going to take a deeper look at this (there's a quick ExecuteQuery sketch at the end of this post).
  • Speaking of LINQ, I just discovered this great series on IQueryable by Bart De Smet. It's four months old, but it takes an incredibly detailed look at what happens under the hood with LINQ. Bart also has a reference implementation of LINQ's standard query operators as well as LINQ to SharePoint.
  • Dan Maharry has pulled together what looks like the definitive guide for really slimming down and speeding up your VPC. It’s XP specific, but I’d bet most of the guidance would also apply to WS03, which is what I mostly use in my VPCs. (via Larkware)
  • Jimmy Nilsson thinks it's the operations department that holds the power in today's IT world. I agree 100%. That's why I value Dale's input so much.
  • Nick Malik wonders if it’s time to translate the Federal Enterprise Architecture for use in the commercial sector. My dad just retired from 5 years in the FAA and he thinks FEA is too high level to be particularly useful.
  • The 2007 edition of Scott Hanselman's ultimate tool list is now available.
  • A bunch of XNA Gamefest sessions are now available for on-demand viewing.
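
Since I mentioned ExecuteQuery above, here's a tiny sketch of the loosely-typed style ScottGu describes. The connection string, the SQL and the CustomerSummary shape are all made up for illustration – the point is just that DataContext.ExecuteQuery<T> takes raw SQL (with curly-brace placeholders that are passed as parameters) and maps the result columns onto a type by name:

    using System;
    using System.Data.Linq;

    // hypothetical result shape; ExecuteQuery maps columns to members by name
    public class CustomerSummary
    {
        public string CompanyName { get; set; }
        public int OrderCount { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // connection string and SQL are illustrative (Northwind-style schema)
            using (DataContext db = new DataContext(
                "Data Source=.;Initial Catalog=Northwind;Integrated Security=True"))
            {
                var results = db.ExecuteQuery<CustomerSummary>(
                    @"SELECT c.CompanyName, COUNT(o.OrderID) AS OrderCount
                      FROM Customers c JOIN Orders o ON o.CustomerID = c.CustomerID
                      WHERE o.OrderDate >= {0}
                      GROUP BY c.CompanyName",
                    new DateTime(1997, 1, 1));

                foreach (var row in results)
                    Console.WriteLine("{0}: {1}", row.CompanyName, row.OrderCount);
            }
        }
    }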

Morning Coffee 114 – MoMAAB Edition

  • We spent all day yesterday discussing four topics: SaaS, Tools for Scrum, Web 2.0 and Domain Specific Languages. Even though it was just a day, my brain is full. These were deep and challenging discussions. I need to let them stew a bit before posting anything about them here. But I will.
  • Next time we do one of these, I’m bringing a video camera. I took notes, but looking over them the next morning they seem woefully incomplete. OneNote’s integrated audio/video recording capabilities would nicely augment my notes.
  • We ran this meeting using Open Space, and it worked very well. Of course, we only had 8 people, so we didn't need a lot of process to self-organize. However, it did whet my appetite for having a larger Open Space-style un-conference for architects. Is that something other folks might be interested in?
  • Major thanks to the folks at Clarity Consulting who graciously gave us space to meet and fed us yesterday. Their CTO Jon Rauschenberger sat in on most of our meeting and drove our Web 2.0 discussion. I said I wanted to stew a bit on the discussions, but Jon's slides are available online if you're interested.
  • Scott Colestock showed me Diigo, a social annotation tool. Where del.icio.us lets you tag and annotate individual pages, Diigo lets you annotate and highlight specific parts of the page. They also have blogging tools, where these annotations and highlights become blog posts, but they don’t support dasBlog. However, since FeedBurner doesn’t support Diigo for link splicing, I’m afraid my use of it will be limited.
  • Jim Wilt introduced me to Virtual PC's command line. He recommends using "-pc <vpc name> -launch -singlepc", which launches a single virtual environment without the VPC console. I rarely run more than one VPC at a time and I hate stuff cluttering up my taskbar and notification area, so I like this a lot.
  • Loren Goodman demonstrated the SharePoint Explorer Client. SharePoint & MOSS came up several times in all of our topics, so this is going to get a second look. I always thought it was strange that MSFT ships a smart client for editing WSS & MOSS, but not viewing it. SP Explorer looks like it fills that gap nicely.
  • Shannon Braun sent us all a link to the 50/70 rule, which seems like a good rule of thumb. Of course, assuming that things won’t progress linearly is almost always a good rule of thumb. But the 50/70 rule has reasoning behind the assumption.
  • Chicago is nice, but the weather has been a little freaky. It's either been hot & humid, downpouring thunderstorms or tornadoes. Keith Powell showed me FlightAware, which shows you flight departure and arrival history. My flight hasn't left within an hour of scheduled departure in a week. I'm going to try to grab an earlier flight, but I have a feeling it's going to be a long trip home.

The One Business Case for Integration

Nick Malik lays out what he thinks are the four business cases for integration:

Assume we succeeded, and our systems are all optimally integrated.  What has changed?

  1. We have better business intelligence.  We have better understanding of our customers, our partners, our products, and our business.  And from that understanding, we make better decisions.  Those decisions are made in a federated manner using self-apparent information.
  2. We have end-to-end business processes that cross multiple systems, multiple roles, multiple geographies, and multiple data stores, all aware of and supporting the needs of the customer.
  3. We have end-to-end awareness of the metrics that drive both dissatisfaction and cost, and we can take that knowledge and apply it to making our business better.
  4. We have a more efficient enterprise, more able to grow to a larger size, at an accelerated rate, and still respond with agility to changing business opportunities.

I put to you that, in fact, we only have one business case for integration: better business intelligence. The other reasons Nick lists are either redundant or not as important to the business – at least in the general case – as you might think.

First off, #3 from Nick’s list sounds suspiciously like #1. If there’s a difference between “better understanding driving better decisions” and “applying awareness of metrics to making our business better”, I don’t know what it is. We’ll send one of them off to the Dept. of Redundancy Dept. and be done with it.

Second, I don’t think the business cares that IT has multiple systems or multiple data stores. If the business could run on one big centralized system that could meet the needs of the customer (aka the ERP fantasy), they’d be fine with that. The fact that realities of modern enterprise IT require splitting up capabilities across many systems is an implementation detail that frankly isn’t a concern of the business.

Besides, what's the business benefit here? News flash: the enterprise already has end-to-end business processes that cross multiple systems, multiple roles, blah blah blah. They're just not automated end-to-end. Does the business care that they're not automated? Not a bit. Sure, they care that processes are slow, costly and error-prone, which manual processes tend to be. But it's those negative characteristics that the business cares about, not integration. Besides, making processes quick, cheap and error-free sounds a lot like making them efficient. In other words, more work for the Dept. of Redundancy Dept.

Finally, I don't think efficiency and agility are as important to the enterprise as Nick makes them out to be. I mean, the enterprise will say it cares about efficiency – especially in front of the stockholders. But when it comes to putting its money where its mouth is, the enterprise doesn't, more often than not. Think about how success is measured on the typical IT project. Is efficiency one of the criteria for judging success? Not really. Will your project stakeholders let you run over budget and ship a few months late, just to improve efficiency? Probably not, unless that efficiency gain is both demonstrable and dramatic.

Of course, there are certainly specific cases where an automation or efficiency business case for integration can be made. For example, if replacing a specific manual process with an automated one has a large and measurable ROI, the business will likely be interested in making that investment. And if you have a certain process that you do over and over that's core to the business, the business will probably be interested in optimizing the frak out of it. I would guess a delivery company like UPS or FedEx has spent a lot of time and money optimizing its delivery processes.

But what it sounds like Nick's talking about here is making a general case for getting all our systems "optimally integrated". Given that our current systems aren't, this would take significant time and money. Yet the tangible benefit to the business is at best nebulous. Nick thinks improved integration will allow the business to "respond with agility to changing business opportunities." He's probably right. But how do you quantify this agility? How much will we save in the future for what we're spending today? For the general case, the answer is "it depends". It's really hard to fund a project when its projected ROI is "it depends".

However, business intelligence is a no-brainer for the enterprise to invest in. Giving decision makers better, more up-to-date information is a tangible benefit that the organization can quantify now. If you can quantify the value of a project, you've got the start of a budget. Of course, all that juicy data is smeared across a variety of systems, which means integration. Again, the enterprise doesn't really care about said multiple systems or integration, but they care about the outcome.

Nick recommends to SOA folks that “if you aren’t already working with your BI team, pick up the phone. Their mature processes and practices are able to address many of your issues, and the natural synergy between BI and SOA can make them a strong ally in the fight for a better, faster, cheaper, and more intelligent enterprise.” Good advice. Otherwise, selling integration to the business isn’t much different than selling them SOA. In other words, don’t sell it – just do it.