Morning Coffee 114 – MoMAAB Edition

  • We spent all day yesterday discussing four topics: SaaS, Tools for Scrum, Web 2.0 and Domain Specific Languages. Even though it was just a day, my brain is full. These were deep and challenging discussions. I need to let them stew a bit before posting anything about them here. But I will.
  • Next time we do one of these, I’m bringing a video camera. I took notes, but looking over them the next morning they seem woefully incomplete. OneNote’s integrated audio/video recording capabilities would nicely augment my notes.
  • We ran this meeting using Open Space, and it worked very well. Of course, we only had 8 people, so we didn’t need a lot of process to self-organize. However, it did whet my appetite for holding a larger Open Space-style unconference for architects. Is that something other folks might be interested in?
  • Major thanks to the folks at Clarity Consulting who graciously gave us space to meet and fed us yesterday. Their CTO Jon Rauschenberger sat in on most of our meeting, and drove our Web 2.0 discussion. I said I wanted to stew a bit on the discussions, but Jon’s slides are available online if you’re interested.
  • Scott Colestock showed me Diigo, a social annotation tool. Where del.icio.us lets you tag and annotate individual pages, Diigo lets you annotate and highlight specific parts of the page. They also have blogging tools, where these annotations and highlights become blog posts, but they don’t support dasBlog. However, since FeedBurner doesn’t support Diigo for link splicing, I’m afraid my use of it will be limited.
  • Jim Wilt introduced me to Virtual PC’s command line. He recommends using “-pc <vpc name> -launch -singlepc” which launches a single virtual environment without the VPC console (there’s an example after this list). I rarely run more than one VPC at a time and I hate stuff cluttering up my taskbar and notification area, so I like this a lot.
  • Loren Goodman demonstrated the SharePoint Explorer Client. SharePoint & MOSS came up several times in all of our topics, so this is going to get a second look. I always thought it was strange that MSFT ships a smart client for editing WSS & MOSS, but not viewing it. SP Explorer looks like it fills that gap nicely.
  • Shannon Braun sent us all a link to the 50/70 rule, which seems like a good rule of thumb. Of course, assuming that things won’t progress linearly is almost always a good rule of thumb. But the 50/70 rule has reasoning behind the assumption.
  • Chicago is nice, but the weather has been a little freaky. It’s either been hot and humid, downpouring thunderstorms, or tornadoes. Keith Powell showed me FlightAware, which shows you flight departure and arrival history. My flight hasn’t left within an hour of scheduled departure in a week. I’m going to try to grab an earlier flight, but I have a feeling it’s going to be a long trip home.
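
For reference, here’s roughly what Jim’s Virtual PC invocation looks like end to end – a sketch, since the install path assumes a default Virtual PC 2007 setup and “Dev Server” is a hypothetical VPC name:

"C:\Program Files\Microsoft Virtual PC\Virtual PC.exe" -pc "Dev Server" -launch -singlepc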

The One Business Case for Integration

Nick Malik lays out what he thinks are the four business cases for integration:

Assume we succeeded, and our systems are all optimally integrated.  What has changed?

  1. We have better business intelligence.  We have better understanding of our customers, our partners, our products, and our business.  And from that understanding, we make better decisions.  Those decisions are made in a federated manner using self-apparent information.
  2. We have end-to-end business processes that cross multiple systems, multiple roles, multiple geographies, and multiple data stores, all aware of and supporting the needs of the customer.
  3. We have end-to-end awareness of the metrics that drive both dissatisfaction and cost, and we can take that knowledge and apply it to making our business better.
  4. We have a more efficient enterprise, more able to grow to a larger size, at an accelerated rate, and still respond with agility to changing business opportunities.

I put to you that, in fact, we only have one business case for integration: better business intelligence. The other reasons Nick lists are either redundant or not as important to the business – at least in the general case – as you might think.

First off, #3 from Nick’s list sounds suspiciously like #1. If there’s a difference between “better understanding driving better decisions” and “applying awareness of metrics to making our business better”, I don’t know what it is. We’ll send one of them off to the Dept. of Redundancy Dept. and be done with it.

Second, I don’t think the business cares that IT has multiple systems or multiple data stores. If the business could run on one big centralized system that could meet the needs of the customer (aka the ERP fantasy), they’d be fine with that. The fact that realities of modern enterprise IT require splitting up capabilities across many systems is an implementation detail that frankly isn’t a concern of the business.

Besides, what’s the business benefit here? News flash: the enterprise already has end-to-end business processes that cross multiple systems, multiple roles, blah blah blah. They’re just not automated end-to-end. Does the business care that they’re not automated? Not a bit. Sure, they care that processes are slow, costly and error-prone, which manual processes tend to be. But it’s those negative characteristics that the business cares about, not integration. Besides, making processes quick, cheap and error-free sounds a lot like making them efficient. In other words, more work for the Dept. of Redundancy Dept.

Finally, I don’t think efficiency and agility are as important to the enterprise as Nick makes them out to be. I mean, the enterprise will say it cares about efficiency – especially in front of the stockholders. But when it comes to putting its money where its mouth is, the enterprise doesn’t, more often than not. Think about how success is measured in the typical IT project. Is efficiency one of the criteria for judging success? Not really. Will your project stakeholders let you run over budget and ship a few months late, just to improve efficiency? Probably not, unless that efficiency gain is both demonstrable and dramatic.

Of course, there are certainly specific examples where an automation or efficiency business case for integration can be made. For example, if replacing a specific manual process with an automated one has a large and measurable ROI, the business will likely be interested in making that investment. If you have a certain process that you do over and over that’s core to the business, the business will probably be interested in optimizing the frak out of it. I would guess, for instance, that a delivery company like UPS or FedEx has spent a lot of time and money optimizing their delivery processes.

But what it sounds like Nick’s talking about here is making a general case for making all our systems “optimally integrated”. Given that our current systems aren’t, this would take significant time and money. Yet the tangible benefit to the business is at best nebulous. Nick thinks improved integration will allow the business to “respond with agility to changing business opportunities.” He’s probably right. But how do you quantify this agility? How much will we save in the future for what we’re spending today? For the general case, the answer is “it depends”. It’s really hard to fund a project when its projected ROI is “it depends”.

However, business intelligence is a no-brainer for the enterprise to invest in. Giving decision makers better, more up-to-date information is a tangible benefit that the organization can quantify now. If you can quantify the value of a project, you’ve got the start of a budget. Of course, all that juicy data is smeared across a variety of systems, which means integration. Again, the enterprise doesn’t really care about said multiple systems or integration, but they care about the outcome.

Nick recommends to SOA folks that “if you aren’t already working with your BI team, pick up the phone. Their mature processes and practices are able to address many of your issues, and the natural synergy between BI and SOA can make them a strong ally in the fight for a better, faster, cheaper, and more intelligent enterprise.” Good advice. Otherwise, selling integration to the business isn’t much different than selling them SOA. In other words, don’t sell it – just do it.

Morning Coffee 113

  • I’m in Chicago today and tomorrow for a reunion of sorts. In my last job, I managed a group of external architects called the Microsoft Architecture Advisory Board (aka the MAAB). We discontinued the program a while back, but the core of the group found the program valuable enough they have continued to meet anyway. I found the MAAB meetings incredibly valuable and insightful, so I’m really excited to be invited to continue my involvement with the group.
  • I picked up Bioshock Tuesday (Circuit City had it on sale) on my way to my bi-weekly campus excursion. My meetings were over around 2pm so I headed home early, expecting to surprise the kids. But Jules had decided to skip naps and go shopping with them. Her cell phone was dead, so I ended up at home with a couple of hours to myself and a brand new copy of Bioshock. Wow, is that a good game. Certainly deserving of the amazingly good reviews it’s garnered.
  • Speaking of reviews, this transparently biased review of Bioshock over at the Sony Defense Force (make that Farce) is frakking hilarious. Somehow, I doubt their dubious review will stem the tidal wave of Bioshock’s well-deserved hype. Can’t wait to read their Halo 3 review.
  • Pat Helland writes at length on master-master replication. I reformatted it into PDF so I could read it – the large images were messing up the text flow on my system. As usual for Pat, there’s gold in that thar post. His thoughts on DAGs of versions and vector clocks as identifiers are very exciting. However, I think he glosses over the importance of declarative merging. I would think programmatic merge would likely be non-deterministic across nodes. If so, wouldn’t you end up with two documents with the same vector-clock identifier but different data? (There’s a rough sketch of the idea after this list.)
  • Joe McKendrick points to a few people who predict the term “service-oriented” will eventually be subsumed under the general heading of “architecture”. Not to brag, but I made that exact same prediction almost three years ago.
  • Erik Johnson thinks that SOA 2.0 centers on transformational patterns. The idea (I think) is that if systems “understand each other more deeply”, then we can build a “smarter stack” and design apps via new constructs to promote agility and simplicity. Personally, I’m skeptical that we can define system semantics unambiguously except in the simplest scenarios, but Erik talks about using “graph transformation mathematics” to encode semantics. I don’t know anything about graph transformation mathematics, but at least Erik has progressed beyond hand waving to describing the “what”. Here’s looking forward to the “how”.
  • New dad Clemens Vasters somehow finds time to implement an XML-RPC binding for WCF 3.5. I was encouraged that it didn’t require any custom attributes or extensions at the programmer level. Of course, XML-RPC fits semantically into WCF’s interface based service model, so it shouldn’t be a huge surprise that it didn’t require any custom extensions. But did it need WCF 3.5? Would this work if recompiled against the 3.0 assemblies?
  • Phil Haack writes a long post on Duck Typing. VB9 originally supported duck typing – the feature was called Dynamic Interfaces – when it was first announced, but it was subsequently cut. I was really looking forward to that feature. Between it and XML Literals, VB9 was really stepping out of C#’s shadow. I guess it still is, even without dynamic interfaces.
  • Since I’ve been doing some LINQ to XML work lately, I decided to go back and re-write my code in VB9 using XML literals. While XML literals are nice, I don’t think they’re a must have. First, LINQ to XML has a nice fluent interface, so the literals don’t give you that much cleaner code (though you do avoid writing XElement and XAttribute over and over.) Second, I find VB9’s template syntax (like ASP <%= expression %>) clunky to work with, especially in nested templates. Finally, I like the namespace support of XNames better (see the example after this list). As far as I can tell, VB9 defines namespaces with xmlns attributes just like XML does. So I’m not dying for literal XML support in a future version of C#. How about you?
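
Since vector clocks come up above, here’s a minimal sketch of the idea – my own illustration of the textbook structure, not Pat’s design (assumes using System, System.Collections.Generic and System.Linq). The thing to notice: merging two clocks is deterministic, but merging the data is a separate step, so a non-deterministic data merge could leave two replicas with the same merged clock but different contents.

class VectorClock
{
    private readonly Dictionary<string, int> counts = new Dictionary<string, int>();

    // Record a local update on the given node.
    public void Increment(string node)
    {
        int current;
        counts.TryGetValue(node, out current);
        counts[node] = current + 1;
    }

    // Combine two clocks by taking the max of each node's counter.
    // This part is deterministic - unlike, potentially, the merge
    // of the documents themselves.
    public VectorClock Merge(VectorClock other)
    {
        VectorClock merged = new VectorClock();
        foreach (string node in counts.Keys.Union(other.counts.Keys))
        {
            int mine, theirs;
            counts.TryGetValue(node, out mine);
            other.counts.TryGetValue(node, out theirs);
            merged.counts[node] = Math.Max(mine, theirs);
        }
        return merged;
    }
}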
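
And to illustrate the XName point: in C#, an XNamespace combines with a local name to form a fully qualified XName, so there’s no xmlns attribute juggling in the query itself. A minimal sketch (the namespace URI is made up):

XNamespace ns = "http://example.org/schema";

// ns + "customer" produces the XName {http://example.org/schema}customer;
// the xmlns declaration shows up automatically when the XML is serialized.
XElement customer = new XElement(ns + "customer",
    new XElement(ns + "name", "Harry"));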

DataReaders, LINQ to XML and Range Generation

I’m doing a bunch of database / XML stuff @ work, so I decided to use VS08 beta 2 so I can use LINQ. For reasons I don’t want to get into, I needed a way to convert arbitrary database rows, read using a SqlDataReader, into XML. LINQ to SQL was out, since the code has to work against arbitrary tables (i.e. I have no compile-time schema knowledge). But LINQ to XML (née XLinq) helped me out a ton. Check out this example:

// Namespace in XName's {namespace}localname format, so the string
// concatenation below produces a valid expanded name.
const string ns = "{http://some.sample.namespace.schema}";

while (dr.Read())
{
    // One element per row, named after the table, with a child
    // element per column holding that column's value.
    XElement rowXml = new XElement(ns + tableName,
        from i in GetRange(0, dr.FieldCount)
        select new XElement(ns + dr.GetName(i), dr.GetValue(i)));
}

That’s pretty cool. The only strange thing in there is the GetRange method. I needed an easy way to build a range of integers from zero to the number of fields in the data reader. I wasn’t sure of any standard way to do it, so I wrote this little two-line function:

// Yields the half-open range [min, max) - max itself is excluded.
IEnumerable<int> GetRange(int min, int max)
{
    for (int i = min; i < max; i++)
        yield return i;
}

It’s simple enough, but I found it strange that I couldn’t find a standard way to generate a range with a more elegant syntax. Ruby has standard range syntax that looks like (1..10), but I couldn’t find the C# equivalent. Did I miss something, or am I really on my own to write a GetRange function?

Update: As expected, I missed something. John Lewicki pointed me to the static Enumerable.Range method that does exactly what I needed.
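
For completeness, here’s the row-to-XML query from above rewritten with Enumerable.Range (it lives in System.Linq and takes a start and a count rather than a min and a max, so the call reads the same for a zero-based range):

XElement rowXml = new XElement(ns + tableName,
    from i in Enumerable.Range(0, dr.FieldCount)
    select new XElement(ns + dr.GetName(i), dr.GetValue(i)));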

Morning Coffee 112

  • Lee Holmes over at the PowerShell Team Blog writes about alternatives to the “decades-old” Windows console host. PowerShell Plus looks awesome. PoshConsole also looks pretty cool (though far from finished yet) and is free.
  • The WL ID Web Authentication SDK has been released. Details on the WL ID team blog. It looks like what the Passport SDK has provided for quite some time, but now it’s free. There’s also a client auth SDK in development. (via Dare Obasanjo)
  • Libor Soucek leaps to the wrong conclusion about not differentiating enterprise & support systems. Of course, different systems will have different availability requirements. But what happens when we connect them together? We can’t let the support system affect the availability of the enterprise system, right? To me, that implies either a) the support system now needs to conform to the enterprise system’s availability requirements or b) we need some other mechanism (like async durable messaging) to act as a buffer between them (there’s a tiny sketch at the end of this post). Personally, I like “b”.
  • Nick Carr points to The Trouble with Enterprise Software, an article by Cynthia Rettig. Cynthia writes that the massive complexity of enterprise software, especially large-scale ERP systems like SAP, significantly hinders its value. It’s a must read. Choice quotes:
    • “It is estimated that for every 25% increase in complexity in the tasks to be automated, the complexity of the software solution itself rises by 100%.”
    • “The notion of reusable software works on a small scale. Programmers have successfully built and reused subroutines of standard functions. But as software grows more complex, reusability becomes a difficult or impossible task.”
    • “Hope, unfortunately, has never been a very effective strategy.”
    • “Is enterprise software just too complex to deliver on its promises? After all, enterprise systems were supposed to streamline and simplify business processes. Instead, they have brought high risks, uncertainty and a deeply worrying level of complexity. Rather than agility they have produced rigidity and unexpected barriers to change, a veritable glut of information containing myriad hidden errors, and a cloud of questions regarding their overall benefits.”
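
Coming back to the Libor Soucek item above: here’s a minimal sketch of option “b”, using a durable MSMQ queue via System.Messaging as the buffer. The queue path and class name are made up for illustration. The support system keeps accepting work even while the enterprise system is down; the enterprise system drains the queue whenever it’s available, so neither side’s availability constrains the other’s.

using System.Messaging;

// Support-system side: durably enqueue work for the enterprise system.
class SupportSystemSender
{
    private const string Path = @".\private$\enterprise-orders";  // hypothetical queue

    public void Submit(string orderXml)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path, true);  // true = transactional, so sends are durable

        using (MessageQueue queue = new MessageQueue(Path))
        using (MessageQueueTransaction tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(orderXml, tx);  // persisted locally; delivered when the receiver is up
            tx.Commit();
        }
    }
}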