Probably Wrong Info Is Worse Than No Info At All

Like many geeks, I love Dilbert. However, I rarely identify with it as strongly as I did with Sunday’s strip.

I kid you not, I’ve had almost exactly this conversation back when I worked in MS IT. They have this big repository of information about deployed applications. Technically, you’re not supposed to deploy an application without listing it in the application repository. Like Dilbert, I never really understood what people were going to do with this information, but the projects I was on dutifully collected the relevant information and put it into the repository.

And never thought of it again. Ever.

And therein lies the problem. Populating the application repository was an artificial step on the critical path of the deployment process. Writing the software, acquiring the physical hardware to run it on – stuff like that really is on the critical path. Populating the application repository was extra busy work legislated by someone (I forget if it was the central architecture team or management) that didn’t benefit the project in the slightest. As such, it was given the minimal amount of attention and effort, meaning there was little quality or consistency in the data. Worse yet, when the application changed or was decommissioned, updating the application repository just didn’t happen. I mean, it was supposed to, but it rarely did.

So you ended up with a repository of information that was worse than useless. I had a colleague who insisted that the repository had some value because “not all of the data was wrong”. Of course, he couldn’t tell me with any consistency which data was accurate and therefore valuable and which was not. Hence, my argument that it was “worse than useless”.

The only way an application repository is going to be of any value at all is if you can collect the data automatically. My old teammate Buzz coined a phrase we used often: “The Truth Is On The Edge”. You should always regard any central repository of information with a very critical eye since it’s rarely going to be the truth.
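If you were going to build that kind of automatic collection, a minimal sketch might look like the crawler below. To be clear, this is my illustration, not anything MS IT actually had: the /meta/appinfo endpoint, the host list and the field names are all invented.

```python
# A minimal sketch of edge-driven inventory collection: rather than asking
# project teams to hand-maintain repository entries, a scheduled job asks
# each deployed application to describe itself. The repository is then just
# the output of last night's crawl -- the truth stays on the edge.
import json
import urllib.request

# In practice this seed list would come from something already on the
# critical path: DNS, load balancer config, the deployment system itself.
DEPLOYED_HOSTS = [
    "https://app1.corp.example.com",
    "https://app2.corp.example.com",
]

def collect_inventory(hosts):
    inventory = []
    for host in hosts:
        try:
            # Hypothetical self-describing endpoint each application exposes.
            with urllib.request.urlopen(host + "/meta/appinfo", timeout=5) as resp:
                inventory.append(json.load(resp))
        except OSError:
            # An unreachable host is useful data too: the app may have been
            # decommissioned without anyone updating a repository.
            inventory.append({"host": host, "status": "unreachable"})
    return inventory

if __name__ == "__main__":
    print(json.dumps(collect_inventory(DEPLOYED_HOSTS), indent=2))
```

The point isn’t the plumbing – it’s that nobody has to remember to update anything. Stale entries age out on the next crawl, which is exactly what the manual repository could never manage.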

(Ed. Note – Man, it’s been a long time since I’ve written about Architecture. My last Architecture post was almost a year ago. I don’t miss the job but I do miss my old teammates – in particular Buzz, Rick, Dale and of course Nick Malik.)

Kitchen Sink Variability

Nick Malik forwarded the last ZapFlash newsletter to me. I gave up on analyst newsletters like this long ago, but Nick shared it with me because it “hit directly on what [Nick] thinks an ESB is and does, and why an ESB is not a hub.” I’m not a fan of the whole ESB concept and frankly this article didn’t do much to change my opinion. However, this passage did sorta jump out at me.

[T]he main concept of SOA is that we want to deal with frequent and unpredictable change by constructing an architectural model, discipline, and abstraction that loosely-couples the providers of capability from the consumers of capability in an environment of continuous heterogeneity. This means that we have to somehow develop capabilities without the knowledge of how they might be used…[T]his means that we need to have significant variability at a variety of aspects including the implementation, infrastructure, contract, process, policy, data schema, and semantic aspects. Having this variability allows companies to have stability in their business and IT even though the IT and business continue to change. Agility, predictability, reliability, visibility, and cost-effectiveness all become that much more realistic when we can achieve that sort of abstraction.

My reading of this is that the author, Ronald Schmelzer, is advising organizations to introduce “significant variability at a variety of aspects” in their services in order to deal with what he openly admits is “unpredictable change”.

This sounds like a mind-bogglingly awful idea to me.

At its heart, any practical design – including a service-oriented one – needs to be an exercise in tradeoff analysis. You can’t add “significant variability” without also adding significant complexity, effort, time and cost. So the real question is: is the significant variability Ronald describes worth the inevitable tradeoff in significant time, effort, cost and complexity?

I seriously doubt it.

Since unpredictable change is – by definition – unpredictable, you have no way of knowing if you will actually need any specific aspect of variability down the road. Ronald’s strategy – if you can call it that – seems to be to let everything he can think of vary except the kitchen sink. That way, when said unpredictable change happens, you might get lucky and have already enabled the variability you need to handle the change with a minimum of effort.

Getting lucky is not a strategy.

Chances are, a specific aspect of variability won’t ever be needed. In other words, most of the time, effort and money you spent building these aspects of variability will be wasted. And remember, this isn’t just a one-time cost – the increased complexity from including this significant variability means you’ll pay the price in additional time, effort and money every time you have to change the system.
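To make that complexity tax concrete, here’s a contrived contrast – mine, not Ronald’s, and every class and extension point in it is invented. The simplest thing that meets today’s requirement sits next to its kitchen-sink cousin, where each extra axis of variability is one more interface somebody has to design, document, test and keep backward compatible, forever.

```python
# The simple version: does exactly what today's requirement needs.
def get_customer(customer_id: str) -> dict:
    # Stand-in for a real lookup against today's one-and-only data source.
    return {"id": customer_id, "name": "Contoso"}


# The kitchen-sink version: transport, schema, policy and semantics all
# vary "just in case". Four collaborators to design, build and maintain
# before a single customer record ever comes back.
class KitchenSinkCustomerService:
    def __init__(self, transport, schema_mapper, policy_engine, semantic_resolver):
        self.transport = transport
        self.schema_mapper = schema_mapper
        self.policy_engine = policy_engine
        self.semantic_resolver = semantic_resolver

    def get_customer(self, customer_id: str) -> dict:
        self.policy_engine.check("get_customer", customer_id)
        raw = self.transport.call("get_customer", customer_id)
        return self.schema_mapper.map(self.semantic_resolver.resolve(raw))
```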

I wonder if Ronald is familiar with the term “You Aren’t Gonna Need It”. He talks about increasing business agility, but he eschews many of the principles of agile development. I realize they aren’t the same thing, but I have a hard time believing that they are so diametrically opposed that a core principle of agile development should be readily violated in order to enable business agility.

Maybe it’s a cliché, but I try to always come back to “What’s the simplest thing that could possibly work?” I would think that building a ton of currently-unnecessary variability into your system, on the off chance that someday you’ll need it, fails the “simplest thing that could possibly work” test spectacularly.

Personally, given the choice of taking advice from Ward Cunningham or pretty much any enterprise analyst on the planet, I’ll pick Ward every time.

What is the ROI on EA?

Nick Malik took me to task for my suggestion that Enterprise Architecture provides no value.

You implied that I could not answer the question, “How does EA demonstrate value.” That is not true. I can readily answer the question, from my viewpoint, but I chose instead to *ask* the question to see if my answer matches the various answers that I may hear back. I got a lot of valuable input, both on the blog and on the forum on Shared Insights where I asked the same question.

You are the ONLY person to reply and say that EA provides no value.

Perhaps you should read about the role and value of Enterprise Architecture from established sources before you bash the entire profession.

EA is real, my friend. It is as real as city planning. The only major city in the US without city planning is Houston. I have visited a few times, and I can honestly say that without city planning, they are a mess.

Nick also provided a link to an article in Architecture and Governance magazine. I was going to read it; however, their web site is down as I write this. I feel comfortable interpreting that as a sign that I’m right… 😄

Actually, Nick’s got a point. It was wholly unfair of me to say that EA provides no value. However, I do believe the return on investment of enterprise architecture is fairly low, perhaps even negative. In other words, I shouldn’t have argued that EA doesn’t deliver any value, but instead that I don’t think it delivers enough value, given what we spend on it.

Architecture ROI is hard enough to calculate on a project-by-project basis. I would argue that measuring it at the EA level is probably impossible, but I think that’s both a blessing and a curse. It’s a curse because EA can’t justify its existence in terms the business can understand. It’s a blessing because, if that ROI could actually be measured and EA is running as deep in the red as I suspect it might be, the company would cut EA entirely in a heartbeat.

Since Nick started by asking a question about value, let me turn it around and ask some questions of my own:

  1. How much do you think your organization spends on EA per year?
  2. What do you think your organization’s EA ROI is?
  3. What can you do to improve your organization’s EA ROI?
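To see why question 2 is so slippery, try a back-of-the-envelope calculation – every number below is invented. The cost side is easy to total up; the benefit side is pure guesswork, and the ROI swings from deeply negative to glowing depending entirely on the guess.

```python
# Back-of-the-envelope EA ROI: costs are knowable, benefits are a guess,
# and the conclusion is whatever the guess says it is.
ea_headcount = 10
fully_loaded_cost = 250_000          # per architect per year (assumed)
annual_cost = ea_headcount * fully_loaded_cost

for assumed_benefit in (1_000_000, 2_500_000, 5_000_000):
    roi = (assumed_benefit - annual_cost) / annual_cost
    print(f"assumed benefit ${assumed_benefit:>9,}: ROI = {roi:+.0%}")
```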

Nick’s Flawed Vision of a Shared Integration Model

Of all the things you might say about Nick Malik, “thinks small” is not one of them. He takes on a significant percentage of the .NET developer community over the definition of Mort. He wants to get IT out of the applications business. He invents his own architecture TLA: SDA (aka Solution Domain Architecture). He’s a man on a mission, no doubt. And for the most part, I’m with him 110% on his ideas.

However, when he starts going on about a shared global integration model, I start to wonder if he has both hands on the steering wheel, as it were.

Nick’s been talking about this for over a year. As he points out, the SaaS integration layer is the new vendor lock-in. One of the attractions of SaaS is that you could – theoretically, anyway – switch SaaS application providers easily, which would drive said SaaS companies to constantly innovate. However, if the integration layers aren’t compatible, the cost to switch goes up dramatically, leaving the customer locked in to whatever SaaS company they initially bet on – even if that bet turns out to be bad.

OK, I’m with him so far. Not exactly breaking news here – we’ve seen the same integration issues inside the enterprise for decades. SaaS adds new wrinkles – if your on-premises ERP vendor goes belly-up, they can’t take your data with them or, worse, sell it to your competition – but otherwise it sounds like the same old story to me.

However, where Nick loses me is when he recommends this solution:

“To overcome this conflict, it is imperative that we begin, now, to embark on a new approach. We need a single canonical mechanism for all enterprise app modules to integrate with each other. If done correctly, each enterprise will be able to pick and choose modules from different vendors and the integration will be smooth and relatively painless.”

Yeah, and if a frog had wings, it wouldn’t bump its ass when it hopped.1 There are so many things wrong with this approach, I’m not sure I can get them all into a single web post.

First off, it won’t, in fact, be done correctly – at least, not the first time. I realize everyone knocks MSFT for never getting an application right before version 3.0, but I believe it’s actually systemic to the industry. Whatever you think you know about the problem to be solved, it’s at best woefully incomplete and at worst wrong on all counts. So getting it right the first time is simply not possible. Getting it right the second time is very unlikely. It isn’t until the third time that you really start to get a handle on the problem you’re really trying to solve. Getting an effort like this off the ground in the first place would be a Herculean task. Keeping it together thru a couple of bad spec revisions would be impossible.
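As a contrived illustration of that “woefully incomplete” problem – the vendors, fields and canonical model below are all invented – consider version 1 of a canonical customer record, drafted against the first vendor anyone looked at:

```python
# Two vendors' views of the same concept...
vendor_a = {"cust_no": "C-1001", "bill_to": "1 Main St", "ship_to": "2 Dock Rd"}
vendor_b = {
    "party_id": 9001,
    "addresses": [
        {"kind": "billing", "line": "1 Main St"},
        {"kind": "delivery", "line": "2 Dock Rd"},
        {"kind": "legal", "line": "1 Main St, Suite 400"},
    ],
}

# ...and canonical model v1, which assumed one customer, one address.
# Vendor A squeezes in by dropping ship_to; vendor B throws away two of
# its three addresses. Cue spec revision 2, then 3...
def to_canonical_v1(record: dict) -> dict:
    if "cust_no" in record:
        return {"id": record["cust_no"], "address": record["bill_to"]}
    return {"id": str(record["party_id"]), "address": record["addresses"][0]["line"]}
```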

Meanwhile, the vendors aren’t going to sit around twiddling their thumbs waiting for the specs to be done. We’ve seen efforts to unify multiple competing vendors around a single interoperable specification before. By and large, those efforts (UNIX, CORBA, Java) have been failures. The technologies themselves haven’t been failures, but the promised “relatively painless” portability or interoperability among different vendors never really materialized. If it didn’t work for UNIX, CORBA or Java, what makes Nick think it will work for the significantly more complex concept of a shared global integration model? More complex not only in terms of spec density, but also in the mind-boggling number of vendors in this space.

Nick is worried that either “we do this as a community or one vendor will do it and force it on the rest of us.” But if you look at how specifications evolve, retroactive realization of de facto standards is the way the best standards get created. For example, I could argue that TCP was forced on us by the US Military, but I don’t hear anyone complaining. I realize there’s a big difference between having a vendor force a spec down our throats vs. a single big customer, but either way it’s not designed by committee. Besides, if we do get an enterprise integration standard forced on us, I don’t believe it will be the vendors doing the forcing. If I were a betting man, I’d bet on Wal-Mart. Business leverage trumps IT leverage, and Wal-Mart has more business leverage than anyone in this space these days.

BTW, would design-by-committee be an extreme example of BDUF? Do we really want to develop this critical integration model using the same process that produced the XSD spec?

Finally, Nick thinks that this model will improve innovation, where I think it will have the exact opposite effect. Once you lay a standard in place, the way you innovate is to build proprietary extensions on top of that standard. However, by definition, those extensions aren’t going to be interoperable. If someone has a good idea, others will copy it and eventually it will become a de facto standard.

A recent example of the process of de facto standardization is XMLHttpRequest. Microsoft created it in 1999 for IE 5, Mozilla copied it for their browser a couple of years later, followed by the other major browser vendors. Google innovated with it, Jesse James Garrett coined the term AJAX, everyone else started doing it, and then finally – nearly a decade later and still counting – a standards body is getting around to putting its stamp of approval on it.

However, if you’re worried about painless integration and not having something forced on you by some vendor, then you’re not going to embrace these innovations – which means you won’t embrace any innovation. Well, there may be some innovation in the systems themselves that doesn’t affect the interface, but once that interface is cast in stone, the amount of innovation will go way down. How do vendors differentiate themselves? There are only two ways: price and innovation. Take away innovation with standardization, and you’re left with a race to rock-bottom prices and no incentive to actually improve the products.

I get where Nick is going with this. He looks around our enterprise and sees duplication of effort and massive resources being spent on hooking shit together. It sure would be nice to spend those resources on something more useful to the bottom line. But standardizing – or worse legislating – the problem out of existence isn’t going to work. What will? IMO, applying Nick’s ideas of Free Code to interop code. If we’re going to get IT out of the app business, can’t we get out of the integration business at the same time?


  1. It’s exceedingly rare that you get to quote Wayne’s World or Raising Arizona in a discussion about Enterprise Architecture, much less both at the same time. Savor it.

ADO.NET Data Services and Idempotence

I was reading thru the ADO.NET Data Services Quickstart because I wanted to understand how it supports data updates. The quickstart uses the Customers table from the Northwind sample database, which, unlike most of the other tables, uses an nchar(5) value as its primary key. Categories, Employees, Orders, Products, Shippers and Suppliers all use an auto-increment integer field for their primary keys.

I only point this out because inserting into Customers is idempotent – the client supplies the CustomerID, so replaying the same insert can’t create a second row – while inserting into those other listed tables is not: the database mints a new key, and therefore a new row, on every attempt. Is it a coincidence that the ADO.NET Data Services team chose the Customers table for their quickstart? Maybe, but I doubt it. Regardless, making a non-idempotent insert call from the browser is a bad idea if you care about Exactly Once.
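Here’s a small demonstration of the difference, using sqlite3 as a stand-in – this isn’t the actual Northwind database or the ADO.NET Data Services wire protocol, just the two key shapes in question. Replay an insert against a client-supplied key and you get an error instead of a second row; replay one against an auto-increment key and you get a silent duplicate.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Customers (CustomerID TEXT PRIMARY KEY, Name TEXT)")
db.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY AUTOINCREMENT, Item TEXT)")

# Client-supplied key: retrying the same insert cannot create a second row.
db.execute("INSERT INTO Customers VALUES ('ALFKI', 'Alfreds Futterkiste')")
try:
    db.execute("INSERT INTO Customers VALUES ('ALFKI', 'Alfreds Futterkiste')")  # the retry
except sqlite3.IntegrityError:
    print("Customers: retry rejected, still exactly one row")

# Auto-increment key: the identical request sent twice -- say, after a
# timeout where the browser never saw the first response -- makes two rows.
db.execute("INSERT INTO Orders (Item) VALUES ('Chai')")
db.execute("INSERT INTO Orders (Item) VALUES ('Chai')")  # the retry
print("Orders rows:", db.execute("SELECT COUNT(*) FROM Orders").fetchone()[0])  # prints 2
```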