The SOA Manifesto – Pointlessness Manifested


You know what the Agile Manifesto doesn’t have? Video of a “very formal ceremony” announcing said manifesto. Instead, we just have an artistically rendered picture of what sure looks like Martin Fowler pointing at a whiteboard while some of the other original signatories look on. Sure, it’s a cool picture, but wouldn’t it have been much cooler if they had captured that moment on video instead? Especially if it was video of them all standing around looking vaguely uncomfortable while photographers took their picture and someone gravely read the manifesto to give it artificially inflated importance.

I only watched ten seconds of the SOA Manifesto announcement video before I realized there’s nothing to see here, move along, just a bunch of navel gazing from the usual SOA suspects.

Seriously, if you have to stage a big announcement about how cool, earth-shattering, significant or, hell, even interesting your manifesto is, then it’s not any of those things. It’s a waste of my time.

Then I noticed that my previous manager and personal friend John deVadoss is one of the signatories. I have metric tons of respect for John, so I gave the SOA Manifesto a second chance.

It lost me at the second sentence.

“Service orientation is a paradigm that frames what you do. Service-oriented architecture (SOA) is a type of architecture that results from applying service orientation. We have been applying service orientation to help organizations consistently deliver sustainable business value, with increased agility and cost effectiveness, in line with changing business needs.”

Are you frakking kidding me? “Service-oriented architecture (SOA) is a type of architecture that results from applying service orientation.” Who the hell came up with that? I didn’t realize the International SOA Symposium had a Department of Redundancy Department.

When you define SOA in terms of SO, then you can’t possibly score well on the practicality quotient.

The rest of the manifesto isn’t much better. What made the Agile Manifesto great, IMO, was that it ran counter to “common” beliefs like “changing requirements late in the process is bad” and “software release cycles should take years”. Sure, we all realize how right those Agile Manifesto guys were now, but at the time it was the next best thing to heresy for many organizations. Those guys were agents of change. And I mean real change, not “this will help me sell books” change.

The SOA Manifesto, on the other hand, is basically repackaged common sense. Stuff like “Recognize that SOA ultimately demands change on many levels” and “The scope of SOA adoption can vary. Keep efforts manageable and within meaningful boundaries.” Any help figuring out what that change is or what the scope should be? Nope. Thanks for the advice, guys, but I had your so-called “Guiding Principles” figured out long ago.

But hey, I’m sure the manifesto will help Thomas Erl et al. sell more SOA books. So, I guess from that perspective it’s mission accomplished.

Nick’s Flawed Vision of a Shared Integration Model

Of all the things you might say about Nick Malik, “thinks small” is not one of them. He takes on a significant percentage of the .NET developer community over the definition of Mort. He wants to get IT out of the applications business. He invents his own architecture TLA: SDA (aka Solution Domain Architecture). He’s a man on a mission, no doubt. And for the most part, I’m with him 110% on his ideas.

However, when he starts going on about a shared global integration model, I start to wonder if he has both hands on the steering wheel, as it were.

Nick’s been talking about this for over a year. As he points out, the SaaS integration layer is the new vendor lock-in. One of the attractions of SaaS is that you could – theoretically, anyway – switch SaaS application providers easily, which would drive said SaaS companies to constantly innovate. However, if the integration layers aren’t compatible, the cost to switch goes up dramatically, leaving the customer locked in to whatever SaaS company they initially bet on – even if that bet turns out to be bad.

OK, I’m with him so far. Not exactly breaking news here – we’ve seen the same integration issues inside the enterprise for decades. SaaS adds new wrinkles – if your on-premises ERP vendor goes belly-up, they can’t take your data with them or, worse, sell it to your competition; a SaaS vendor could – but otherwise it sounds like the same old story to me.

However, where Nick loses me is when he recommends this solution:

“To overcome this conflict, it is imperative that we begin, now, to embark on a new approach. We need a single canonical mechanism for all enterprise app modules to integrate with each other. If done correctly, each enterprise will be able to pick and choose modules from different vendors and the integration will be smooth and relatively painless.”

Yeah, and if a frog had wings, it wouldn’t bump its ass when it hopped.1 There are so many things wrong with this approach, I’m not sure I can get them all into a single web post.

First off, it won’t, in fact, be done correctly – at least, not the first time. I realize everyone knocks MSFT for never getting an application right before version 3.0, but I believe it’s actually systemic to the industry. Whatever you think you know about the problem to be solved, it’s at best woefully incomplete and at worst wrong on all counts. So getting it right the first time is simply not possible. Getting it right the second time is very unlikely. It isn’t until the third time that you really start to get a handle on the problem you’re actually trying to solve. Getting an effort like this off the ground in the first place would be a Herculean task. Keeping it together through a couple of bad spec revisions would be impossible.

Meanwhile, the vendors aren’t going to sit around twiddling their thumbs while the specs get written. We’ve seen efforts to unify multiple competing vendors around a single interoperable specification before. By and large, those efforts (UNIX, CORBA, Java) have been failures. The technologies themselves haven’t been failures, but the idea that there was going to be “relatively painless” portability or interoperability among different vendors never really materialized. If it didn’t work for UNIX, CORBA or Java, what makes Nick think it will work for the significantly more complex concept of a shared global integration model? Not only more complex in terms of spec density, but also in the mind-boggling number of vendors in this space.

Nick is worried that either “we do this as a community or one vendor will do it and force it on the rest of us.” But if you look at how specifications evolve, retroactive recognition of de facto standards is the way the best standards get created. For example, I could argue that TCP was forced on us by the US Military, but I don’t hear anyone complaining. I realize there’s a big difference between having a vendor force a spec down our throats vs. a single big customer, but either way it’s not designed by committee. Besides, if we do get an enterprise integration standard forced on us, I don’t believe it will be the vendors doing the forcing. If I were a betting man, I’d bet on Wal-Mart. Business leverage trumps IT leverage, and Wal-Mart has more business leverage than anyone in this space these days.

BTW, would design-by-committee be an extreme example of BDUF? Do we really want to develop this critical integration model using the same process that produced the XSD spec?

Finally, Nick thinks that this model will improve innovation, whereas I think it will have the exact opposite effect. Once you lay a standard in place, the way you innovate is to build proprietary extensions on top of that standard. However, by definition, those extensions aren’t going to be interoperable. If someone has a good idea, others will copy it and eventually it will become a de facto standard.

A recent example of the process of de facto standardization is XMLHttpRequest. Microsoft created it in 1999 for IE 5, Mozilla copied it for their browser a couple of years later, followed by the other major browser vendors. Google innovated with it, Jesse James Garrett coined the term AJAX, everyone else started doing it, and then finally – nearly a decade later and still counting – a standards body is getting around to putting its stamp of approval on it.

However, if you’re worried about painless integration and not having something forced on you by some vendor, then you’re not going to embrace these innovations – which means you won’t embrace any innovation. Well, there may be some innovation in the systems themselves that doesn’t affect the interface, but once that interface is cast in stone, the amount of innovation will go way down. How do vendors differentiate themselves? There are only two ways: price and innovation. Take away innovation with standardization, and you’re left with a race to rock-bottom prices with no incentive to actually improve the products.

I get where Nick is going with this. He looks around our enterprise and sees duplication of effort and massive resources being spent on hooking shit together. It sure would be nice to spend those resources on something more useful to the bottom line. But standardizing – or worse legislating – the problem out of existence isn’t going to work. What will? IMO, applying Nick’s ideas of Free Code to interop code. If we’re going to get IT out of the app business, can’t we get out of the integration business at the same time?


  1. It’s exceedingly rare that you get to quote Wayne’s World or Raising Arizona in a discussion about Enterprise Architecture, much less both at the same time. Savor it.

Morning Coffee 134

  • Bill de Hora responds to a few of my Durable and RESTful ideas. He points out that relying on a client-generated ID can be troublesome, and recommends using multiple identifiers – one created by the sender, one by the receiver, and one representing the message exchange itself. However, the sender ID is vulnerable to client bugs & tampering as Bill points out, and neither the receiver ID nor the exchange ID can be used to determine if a given message is a duplicate. If you don’t trust the sender, is it even possible to determine if a given message is a duplicate? (There’s a sketch of the problem after this list.)
  • Pablo Castro confirms that there are “practical limits” to what ADO.NET Data Services can do with respect to idempotence. Nothing in his post was surprising, though I hope it will be more explicitly called out in the final docs. Developers used to the comforting protection of a transaction may be in for a rude awakening.
  • Dare Obasanjo has a great post comparing the new features in C# 3.0 to dynamic languages like IronPython. I believe many of the productivity aspects of dynamic languages have little to do with being dynamic.
  • Pat Helland noodles on durability and messaging, two topics near and dear to my heart (probably from working with him for a couple of years). I’m not sure where he’s going with this – his conclusion that “Basically, big, complex, and distributed system are big, complex, and distributed” isn’t exactly ground-breaking. But his point that “durable” isn’t a binary concept is worth more consideration. Also, his description of IMS only looking at the effects of a committed transaction is very similar to how web sites work, though obviously HTTP isn’t durable so you can’t make event horizon optimizations like IMS did.
  • Tangentially related, Werner Vogels discusses the idea of eventually consistent distributed databases. Today, that’s a problem mostly only Internet-scale sites like Amazon deal with. In the near future of continued data explosion + manycore, we’ll all have to deal with it.
  • Nick Malik argues that categorizing enterprise applications by lifecycle is much less useful than categorization based on organizational impact. He might also need a new chair.
  • Jesus Rodriguez digs into one of SSB’s new features in SQL 2008: conversation priorities.
  • Arnon Rotem-Gal-Oz and Sam Gentile are mixing it up over the definition of SOA. Sam thinks SOA has to include business drivers and Arnon doesn’t. I’m with Sam on this, defining “SOA” independently from “Applying SOA” seems pointless. Then again, rigorously defining SOA – much less arguing about said definition – seems like a waste of time in the first place IMHO.
  • Wow, this guy Zed is mad at the Ruby community.
  • Andrew Baron has 8 Reasons Why The TV Studios Will Die. Personally, I think reason #2 – Expendable Middle-Person – is the most important. If content producers can reach consumers directly, what value-add will the networks provide? (via United Hollywood)
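
Chewing on that duplicate-detection question a bit more: any receiver-side check pretty much has to key off the sender-generated ID, which is exactly the ID you can’t trust. A toy C# sketch of the bind (nothing here comes from Bill’s post; the class is mine):

```csharp
using System;
using System.Collections.Generic;

// Toy duplicate detector. The receiver ID and exchange ID are minted
// fresh on every arrival, so they can never match an earlier message;
// only the sender-supplied ID can. But a buggy or malicious sender can
// reuse or randomize that ID, and the receiver has no way to tell.
public class DuplicateDetector
{
    private readonly HashSet<Guid> seenSenderIds = new HashSet<Guid>();

    // Returns true if this sender-supplied ID has been seen before.
    public bool IsDuplicate(Guid senderMessageId)
    {
        return !seenSenderIds.Add(senderMessageId);
    }
}
```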

Afternoon Coffee 123

  • Morning Coffee is late today because we went for our Christmas portrait this morning and it took forever. The pictures turned out great, though.
  • Nick Malik finishes up his series on business operation models by covering the diversification model. Also, Nick’s points about the synergy between the diversification model and the coordination model are spot on. I happen to be a big fan of those models (aka the models with low standardization), which probably drives some of my more “unique” perspectives on SOA.
  • Scott Guthrie starts a new series on a future technology; this time it’s the ASP.NET MVC Framework that gets the series treatment. The first entry in the series is a general overview. I wonder why there’s no cool code name for the MVC framework? Whatever it’s named, I like the auto routing and action rules – they seem very Rails-inspired.
  • Over the weekend, Don Box points out that the REST authentication story “blows chunks”. I’ve recently given up on the reliable part of the original “Secure, Reliable, Transacted Web Services” vision – and I never believed the transacted part. Security, on the other hand, is the one part of that original vision that has worked out IMO. My experience with the WS-* security stack has been pretty good, though Dare Obasanjo thinks that OpenID and OAuth are the final nail in the WS-* coffin.
  • Speaking of Dare, he goes on to say WS-* is to REST as Theory is to Practice. He makes the point that “The only times I encounter someone with good things to say about WS-* is if it is their job to pimp these technologies or they have already “invested” in WS-* and want to defend that investment.” I gave up pimping (er, evangelizing) technology a while back and I don’t want to be in the position of defending a bad investment, so I’m spending lots of time looking at REST.
  • Jesus Rodriguez takes a look at the Managed Services Engine and comes away excited. Jesus is a self-described “strong believer” in SOA governance. I’m a self-described strong disbeliever in SOA governance, so MSE sounds like more of the Worst of Both Worlds to me.
  • A little light reading: I pulled Applied Cryptography and A New Kind of Science out of my garage last weekend. Plus my copies of RESTful Web Services and Programming Erlang just arrived yesterday.

The Importance of Idempotence

Every organization has some operations or processes that have to happen Exactly Once. Your employer needs to make sure they issue your paycheck exactly once. Your bank needs to make sure that paycheck is deposited in your account exactly once. Exactly Once isn’t something that just “traditional” enterprises like banks care about. Google needs to make sure your AdSense check is issued exactly once. Amazon needs to make sure your credit card is charged exactly once. Especially when there’s money involved, the company wants to make sure it gets handled correctly – Exactly Once.

In application (aka siloed) development, transactions are often used to ensure stuff happens Exactly Once, to good effect. But how do we guarantee Exactly Once now that we’re connecting systems together? Given how well transactions work inside applications, it’s not surprising that early attempts to guarantee Exactly Once between systems relied on distributed transactions, this time to not-so-good effect. Pat Helland summarized the problems with distributed transactions this way:

“The two-phase commit protocol will ensure perfect consistency given infinite time. I say that because it will wait and wait and wait until the transaction is resolved and then provide perfect consistency. Of course, while partitioned and waiting, arbitrary swaths of the application’s database may be locked up rendering the application unusable. For this reason, I’ve frequently referred to the two phase commit protocol as the “Anti-Availability Protocol”.”
Pat Helland, SOA and Newton’s Universe

So now we’re faced with a dilemma. Transactions are, for all practical purposes, unusable to ensure Exactly Once processing between connected systems. And yet, the business requirement to ensure Exactly Once hasn’t gone away. We need another way.

The first fallacy of distributed computing is that the network is reliable. It usually works, but “usually” isn’t a guarantee. If I send a message to a remote system but don’t get an acknowledgement, which got lost: the original message or the ack? There’s no way to know, so I have to send the message again. But if I resend it and it’s the ack that got lost, then the target system will receive the message multiple times.
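
Here’s that dilemma in code. A minimal sketch of a sender-side retry loop – the AtLeastOnceSender class and its Send/WaitForAck methods are hypothetical stand-ins for whatever transport you’re actually using:

```csharp
using System;

// Hypothetical sender. Send and WaitForAck stand in for an unreliable
// transport; nothing here is a real messaging API.
public abstract class AtLeastOnceSender
{
    protected abstract void Send(string messageId, byte[] body);            // may be lost in transit
    protected abstract bool WaitForAck(string messageId, TimeSpan timeout); // ack may be lost too

    public void SendAtLeastOnce(string messageId, byte[] body)
    {
        while (true)
        {
            Send(messageId, body);
            if (WaitForAck(messageId, TimeSpan.FromSeconds(30)))
                return; // ack received, so the message arrived at least once

            // No ack. Was it the message or the ack that got lost?
            // We can't know, so we resend. If it was the ack, the target
            // system is about to receive this message a second time.
        }
    }
}
```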

Since the network is not reliable, there’s no way to guarantee that a message will be delivered exactly once. The best we can do is ensure a message will be delivered at least once. However, that implies the target system will receive some messages multiple times. If we need to ensure Exactly Once, we need to make sure the target system won’t duplicate the work if it receives duplicate messages. In other words, we need the target system to be idempotent.

“In computer science, the term idempotent is used to describe method or subroutine calls which can safely be called multiple times, as invoking the procedure a single time or multiple times results in the system maintaining the same state i.e. after the method call all variables have the same value as they did before.

Example: Looking up some customer’s name and address are typically idempotent, since the system will not change state based on this. However, placing an order for a car for the customer is not, since running the method/call several times will lead to several orders being placed, and therefore the state of the system being changed to reflect this.”
Wikipedia, Idempotence (Computer Science)

Or more succinctly:

“Idempotent Means It’s OK to Arrive Multiple Times”
Pat Helland (again)

I can’t overstate the importance of designing your cross-system communication to be idempotent. If you care about ensuring Exactly Once, each step of your process has to be either transactional or idempotent, or you’ll be screwed. It’s interesting to note that you have to be transactional *OR* idempotent, but not both. You can chain together multiple steps in a long business process, across multiple disparate systems, but as long as each step is either transactional or idempotent, you can guarantee Exactly Once across the entire process. In other words:

Transactional/Exactly Once == Idempotent/At Least Once

This implies that you can substitute an idempotent operation for a transactional operation, and still ensure Exactly Once.
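
To make Wikipedia’s car-order example concrete, here’s a minimal sketch of that substitution – everything in it (the OrderService class, the in-memory “table”) is hypothetical:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical order service; the dictionary stands in for a database
// table with OrderId as its primary key.
public class OrderService
{
    private readonly Dictionary<Guid, string> orders = new Dictionary<Guid, string>();

    // Not idempotent: every call places another order.
    public void PlaceOrder(string customer, string car)
    {
        orders.Add(Guid.NewGuid(), customer + " ordered a " + car);
    }

    // Idempotent: the caller supplies the unique order ID, so a retry
    // with the same ID finds the "row" already there and does nothing.
    // One call or five calls, the system ends up in the same state.
    public void PlaceOrder(Guid orderId, string customer, string car)
    {
        if (!orders.ContainsKey(orderId))
            orders.Add(orderId, customer + " ordered a " + car);
    }
}
```

The key design choice is that uniqueness comes from the caller, not the callee – the same trick the MSMQ example below uses with the message ID.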

Let’s look at a more realistic example. Typically you ensure Exactly Once processing with MSMQ by receiving messages within the scope of a transaction along with whatever other work you’re doing. But what if you can’t use a transactional receive, say because it’s a remote queue? What would an idempotent equivalent of a transactional receive look like?

How about:

  1. Peek a message from the remote queue
  2. Insert the message into the target system database, using the unique MSMQ Message ID as the primary key
  3. Remove the message from the queue by ID

Each of those steps is idempotent. Peek is a read, which is naturally idempotent. Inserting the message into the database is idempotent, since we use the message ID as the primary key. As long as that ID is unique, we can never insert it into the database more than once. Finally, removing a message based on its unique ID is also naturally idempotent. Once the message is in the target system database, we can use traditional transactions to ensure it gets processed Exactly Once.
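
Here’s a rough sketch of those three steps using System.Messaging and SQL Server. The MessageLog table is my own invention, and error handling is pared to the bone:

```csharp
using System.Data.SqlClient;
using System.IO;
using System.Messaging;

// Sketch of an idempotent receive from a remote, non-transactional queue.
// MessageLog is a hypothetical table whose primary key is the MSMQ message ID.
void ReceiveIdempotent(MessageQueue queue, SqlConnection conn)
{
    // 1. Peek: a read, naturally idempotent.
    Message msg = queue.Peek();
    string body = new StreamReader(msg.BodyStream).ReadToEnd();

    // 2. Insert keyed on the unique MSMQ message ID. If we've processed
    //    this message before, the insert fails with a primary key
    //    violation, which we swallow; that's what makes this idempotent.
    SqlCommand cmd = new SqlCommand(
        "INSERT INTO MessageLog (MessageId, Body) VALUES (@id, @body)", conn);
    cmd.Parameters.AddWithValue("@id", msg.Id);
    cmd.Parameters.AddWithValue("@body", body);
    try
    {
        cmd.ExecuteNonQuery();
    }
    catch (SqlException ex)
    {
        if (ex.Number != 2627) throw; // 2627 = primary key violation
    }

    // 3. Remove by ID: if the message is already gone, there's nothing
    //    left to remove, so running this step twice is harmless.
    try
    {
        queue.ReceiveById(msg.Id);
    }
    catch (InvalidOperationException)
    {
        // Already removed on a previous attempt; the work is done.
    }
}
```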

So we took a single transactional operation and turned it into a series of idempotent steps. Both ensure each message is processed Exactly Once. Given the choice, I’d rather write the transactional operation – it’s much less code, since we can use existing infrastructure – aka the distributed transaction coordinator. But if the transactional infrastructure isn’t available, I’d rather write multiple idempotent steps and ensure Exactly Once than risk losing or duplicating messages.
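
For comparison, here’s what the transactional version looks like – assuming a transactional queue and the DTC are available (ProcessMessage is a hypothetical stand-in for the database work):

```csharp
using System.Messaging;
using System.Transactions;

// Transactional receive: the queue read and the database work enlist in
// the same ambient transaction, so they commit or roll back as a unit.
void ReceiveTransactional(MessageQueue queue)
{
    using (TransactionScope scope = new TransactionScope())
    {
        Message msg = queue.Receive(MessageQueueTransactionType.Automatic);
        ProcessMessage(msg); // hypothetical: your DB work goes here
        scope.Complete();    // receive + processing commit together
    }
}
```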

I’ve got more on this topic, but in the meantime think about this: How do you think durable messaging infrastructure like MSMQ ensures exactly once delivery? You can use that pattern, even if you’re not using durable messaging infrastructure.