Morning Coffee 134

  • Bill de Hora responds to a few of my Durable and RESTful ideas. He points out that relying on a client-generated ID can be troublesome and recommends using multiple identifiers – one created by the sender, one by the receiver, and one representing the message exchange itself. However, as Bill points out, the sender ID is vulnerable to client bugs and tampering, and neither the receiver ID nor the exchange ID can be used to determine if a given message is a duplicate. If you don’t trust the sender, is it even possible to determine if a given message is a duplicate?
  • Pablo Castro confirms that there are “practical limits” to what ADO.NET Data Services can do with respect to idempotence. Nothing in his post was surprising, though I hope it will be more explicitly called out in the final docs. Developers used to the comforting protection of a transaction may be in for a rude awakening.
  • Dare Obasanjo has a great post comparing the new features in C# 3.0 to dynamic languages like IronPython. I believe many of the productivity aspects of dynamic languages have little to do with being dynamic.
  • Pat Helland noodles on durability and messaging, two topics near and dear to my heart (probably from working with him for a couple of years). I’m not sure where he’s going with this – his conclusion that “Basically, big, complex, and distributed system are big, complex, and distributed” isn’t exactly ground-breaking. But his point that “durable” isn’t a binary concept is worth more consideration. Also, his description of IMS only looking at the effects of a committed transaction is very similar to how web sites work, though obviously HTTP isn’t durable so you can’t make event horizon optimizations like IMS did.
  • Tangentially related, Werner Vogels discusses the idea of eventually consistent distributed databases. Today, that’s a problem that mostly Internet-scale sites like Amazon have to deal with. In the near future of continued data explosion + manycore, we’ll all have to deal with it.
  • Nick Malik argues that categorizing enterprise applications by lifecycle is much less useful than categorization based on organizational impact. He might also need a new chair.
  • Jesus Rodriguez digs into one of SSB’s new features in SQL 2008: conversation priorities.
  • Arnon Rotem-Gal-Oz and Sam Gentile are mixing it up over the definition of SOA. Sam thinks SOA has to include business drivers and Arnon doesn’t. I’m with Sam on this, defining “SOA” independently from “Applying SOA” seems pointless. Then again, rigorously defining SOA – much less arguing about said definition – seems like a waste of time in the first place IMHO.
  • Wow, this guy Zed is mad at the Ruby community.
  • Andrew Baron has 8 Reasons Why The TV Studios Will Die. Personally, I think reason #2 – Expendable Middle-Person – is the most important. If content producers can reach consumers directly, what value-add will the networks provide? (via United Hollywood)

Durable and RESTful

A while back I wondered if it’s still REST if you don’t use HTTP. The reason I wondered is that, like many, I’ve become disillusioned with the WS-* stack over time and see REST as a viable alternative to all that spec-driven complexity. However, just because I’m looking to REST doesn’t mean I’m willing to give up on durable messaging. So I shouldn’t be asking “can I do REST without HTTP?” I should be asking “what protocol can I use to do durable messaging with REST?”

It turns out HTTP is just fine for RESTful durable messaging, if you take the time to make your POSTs idempotent. There’s even an IETF RFC that builds on HTTP and specifies a mechanism to do it.

As I wrote last month, idempotence is critically important to ensuring “things” happen exactly once when connecting disparate systems together. At the end of that post, I asked you, dear reader, to contemplate just how durable messaging systems ensure exactly once delivery. They do it by assigning each message a unique identifier. Any non-idempotent operation can be made idempotent with a unique identifier and a message ID log.

“Not Idempotent:
Withdrawing $1 Billion.
Idempotent:
If Haven’t Yet Done Withdrawal #XYZ for $1 Billion,
Then Withdraw $1 Billion and Label as #XYZ”
Pat Helland

For example, when you send a message in MSMQ, it’s assigned a 20 byte identifier which is “unique within your enterprise.” 1 If the destination system receives multiple messages with the same message ID, it knows they’re duplicates and can safely toss all but one. Exactly once, no transactions.
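
To make that pattern concrete, here’s a minimal sketch in C# of the receiving side’s duplicate check. The MessageLog table, its schema, and the TryLogMessage name are all my inventions, not anything MSMQ defines – the trick is just that the primary key makes the insert itself the duplicate check:

    // Minimal sketch of receiver-side duplicate detection. MessageLog is a
    // hypothetical table whose primary key is the message ID, so inserting
    // the ID doubles as the "have I seen this before?" check.
    using System.Data.SqlClient;

    static bool TryLogMessage(string connectionString, string messageId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "INSERT INTO MessageLog (MessageId) VALUES (@id)", conn);
            cmd.Parameters.AddWithValue("@id", messageId);
            try
            {
                cmd.ExecuteNonQuery();
                return true;   // first delivery - process the message
            }
            catch (SqlException ex) when (ex.Number == 2627) // PK violation
            {
                return false;  // duplicate - safe to toss
            }
        }
    }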

While many operations in REST are naturally idempotent, using REST doesn’t magically make all your operations idempotent, contrary to popular belief. Have you ever seen a message like “please don’t press submit order twice” on the checkout page of an e-commerce website? It’s there because POST is not naturally idempotent and the site hasn’t taken any extra steps to identify duplicate POSTs. If the site embedded a unique ID in a hidden form field, it could use that to identify duplicate orders.
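
Here’s a rough sketch of that hidden form field fix on a hypothetical WebForms checkout page. OrderToken, PlaceOrder, ShowConfirmation, and ConnectionString are all made-up names; TryLogMessage is the duplicate-check sketch from above:

    // Hypothetical checkout page. The .aspx markup is assumed to declare
    // <asp:HiddenField ID="OrderToken" runat="server" />; the GUID stamped
    // into it on first render becomes the order's idempotence identifier.
    using System;
    using System.Web.UI;

    public partial class Checkout : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
                OrderToken.Value = Guid.NewGuid().ToString(); // one ID per order form
        }

        protected void Submit_Click(object sender, EventArgs e)
        {
            // A double-click re-POSTs the same token, so TryLogMessage
            // returns false and the order is only placed once
            if (TryLogMessage(ConnectionString, OrderToken.Value))
                PlaceOrder(OrderToken.Value); // hypothetical order-processing helper
            ShowConfirmation(); // same confirmation page either way
        }
    }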

If you’re a RESTifarian, haven’t you seen this approach somewhere before?

Given that POST isn’t naturally idempotent, I think it’s kinda surprising that new resources are created in AtomPub by POSTing them to a collection rather than PUTting them to a specific URL. RESTful Web Services specifically points out that PUT is idempotent, so I wonder why AtomPub uses POST. I’d guess most AtomPub implementations (aka blogs) aren’t much concerned about ensuring Exactly Once. If a blog entry gets posted twice, you delete one and go on with your life.

However, if you want to use AtomPub and ensure Exactly Once, you can use the fact that Atom entries must contain exactly one ID element, which per the spec must be universally unique. From reading the Atom spec, the ID element seems primarily designed for Atom feed consumers, but AtomPub servers could also use it as an “idempotence identifier”, similar to how MSMQ uses the message ID. If you end up with multiple entries with the same entry ID, discard all but one.
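
A sketch of what that might look like on the server side – SaveEntry and the surrounding plumbing are hypothetical, and TryLogMessage is the duplicate-check sketch from earlier, but the atom:id requirement is straight out of RFC 4287:

    // Sketch of an AtomPub server using atom:id as an idempotence identifier.
    using System.Xml.Linq;

    static readonly XNamespace Atom = "http://www.w3.org/2005/Atom";

    static void OnCreateEntry(string entryXml)
    {
        // Atom entries MUST contain exactly one atom:id (RFC 4287)
        XDocument doc = XDocument.Parse(entryXml);
        string id = doc.Root.Element(Atom + "id").Value;

        if (TryLogMessage(ConnectionString, id))
            SaveEntry(doc);  // first POST of this entry - create the resource
        // else: duplicate POST of the same entry - discard, report success
    }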

So by creating a unique identifier on the client side and logging that identifier on the server side, we can make any REST service idempotent. We can make it a durable service if we write the outgoing message – with the message ID we generate – to a durable store before trying to send it. If you write it to a durable store within the scope of a local transaction, you’re even closer to duplicating MSMQ’s functionality, yet the only protocol requirement beyond vanilla HTTP is having a unique message ID.

The one problem with the Atom entry ID approach is that it requires cracking the message open in order to see if we should process it. For REST services, I would think we’d want to stick the idempotence identifier in an HTTP header. We already use headers to implement conditional GET, so why not a header for what amounts to conditional POST?

Turns out such a header exists in the AS2 spec, i.e. “MIME-Based Secure Peer-to-Peer Business Data Interchange Using HTTP”. AS2 defines a Message-Id HTTP header which “SHOULD be globally unique”. In the case of an HTTP error, AS2 specifies the “POST operation with identical content, including same Message-ID, SHOULD be repeated” and that “Servers SHOULD be prepared to receive a POST with a repeated Message-ID.” I assume this implies a server shouldn’t process a message with the same ID twice.
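
Stamping that header on the sending side is trivial. Here’s a sketch with HttpWebRequest – the Message-Id header name comes from AS2, everything else is my plumbing:

    // Sketch of a sender that stamps each POST with a Message-Id header,
    // AS2-style, so the receiver can dedup without cracking the body open.
    using System;
    using System.IO;
    using System.Net;

    static HttpStatusCode PostMessage(Uri destination, string messageId, byte[] body)
    {
        var request = (HttpWebRequest)WebRequest.Create(destination);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("Message-Id", messageId); // the idempotence identifier

        using (Stream stream = request.GetRequestStream())
            stream.Write(body, 0, body.Length);

        // note: GetResponse throws a WebException on 4xx/5xx responses
        using (var response = (HttpWebResponse)request.GetResponse())
            return response.StatusCode;
    }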

So what would a durable REST service look like? I think like this (there’s a code sketch of the sending side after the list):

  1. Sending system records the intent to send a message by saving it to a local durable store, potentially in the scope of a local transaction. As part of saving the message, a unique message id is generated (I’d use a GUID, but as long as it’s unique it doesn’t matter.)
  2. A background thread in the sending system monitors the durable message store. When a new to-be-sent message arrives, the thread POSTs it to the destination, setting the Message-Id HTTP header to the unique identifier generated in step 1.
  3. The receiving system stores the Message-Id header value in a log table and processes the received message, potentially in the scope of a local transaction. Optionally, it can store the return message (if there is one) in the durable store as well.
  4. If the sending system doesn’t receive a 2xx status code, it re-POSTs the message to the receiving system until it does.
  5. If the receiving system receives a message that’s already listed in the log table, it ignores it and returns a success status code. Optionally, if the return message has been saved, the receiving system can resend the return message as long as it doesn’t redo the work.
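
Here’s a rough sketch of the sending side (steps 1, 2, and 4). The OutboundMessages table, the polling interval, and all the names are my assumptions, not a spec; the receiving side (steps 3 and 5) is just the Message-Id log described above, and real code would add backoff, logging, and a local transaction around the save:

    // Sketch of the sending half of the durable REST service. Step 1 is
    // Save; steps 2 and 4 are the SendLoop. Everything here (table,
    // schema, names) is hypothetical.
    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Net;
    using System.Threading;

    class DurableSender
    {
        readonly string connectionString;
        readonly Uri destination;

        public DurableSender(string connectionString, Uri destination)
        {
            this.connectionString = connectionString;
            this.destination = destination;
        }

        // Step 1: record the intent to send, generating the message ID
        public Guid Save(byte[] body)
        {
            Guid messageId = Guid.NewGuid();
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "INSERT INTO OutboundMessages (MessageId, Body, Sent) " +
                    "VALUES (@id, @body, 0)", conn);
                cmd.Parameters.AddWithValue("@id", messageId);
                cmd.Parameters.AddWithValue("@body", body);
                cmd.ExecuteNonQuery();
            }
            return messageId;
        }

        // Steps 2 and 4: POST each unsent message; anything that doesn't
        // come back 2xx stays unsent and gets re-POSTed on the next pass.
        // Re-sending is safe because the receiver dedups on Message-Id.
        public void SendLoop()
        {
            while (true)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    var select = new SqlCommand(
                        "SELECT MessageId, Body FROM OutboundMessages WHERE Sent = 0",
                        conn);
                    using (SqlDataReader reader = select.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Guid id = reader.GetGuid(0);
                            byte[] body = (byte[])reader[1];
                            if (TryPost(id, body))
                                MarkSent(id);
                        }
                    }
                }
                Thread.Sleep(TimeSpan.FromSeconds(5)); // crude polling interval
            }
        }

        bool TryPost(Guid messageId, byte[] body)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(destination);
                request.Method = "POST";
                request.Headers.Add("Message-Id", messageId.ToString());
                using (Stream stream = request.GetRequestStream())
                    stream.Write(body, 0, body.Length);
                using (var response = (HttpWebResponse)request.GetResponse())
                    return (int)response.StatusCode / 100 == 2;
            }
            catch (WebException)
            {
                return false; // network error or non-2xx - retry next pass
            }
        }

        void MarkSent(Guid id)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "UPDATE OutboundMessages SET Sent = 1 WHERE MessageId = @id", conn);
                cmd.Parameters.AddWithValue("@id", id);
                cmd.ExecuteNonQuery();
            }
        }
    }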

This seems like a better approach than my original direction of doing REST over a durable protocol like MSMQ or SSB. What do you think?

Update: Erik Johnson points out that an HTTP POST’s idempotency is “left unsaid”. So my statement that “POST isn’t idempotent” isn’t quite correct. POST isn’t naturally idempotent. I’ve updated the post accordingly.


  1. Technically, the MSMQ message ID isn’t universally unique, as it’s a 16 byte GUID representing the source system plus a 4 byte sequence number. The sequence number can roll over after sending 2^32 messages. In practice, rolling over the message ID after 4 billion messages is rarely an issue.

The Importance of Idempotence

Every organization has some operations or processes that have to happen Exactly Once. Your employer needs to make sure they issue your paycheck exactly once. Your bank needs to make sure that paycheck is deposited in your account exactly once. Exactly Once isn’t something that just “traditional” enterprises like banks care about. Google needs to make sure your AdSense check is issued exactly once. Amazon needs to make sure your credit card is charged exactly once. Especially when there’s money involved, the company wants to make sure it gets handled correctly – Exactly Once.

In application (aka siloed) development, transactions are often used to ensure stuff happens Exactly Once, to good effect. But how do we guarantee Exactly Once now that we’re connecting systems together? Given how well transactions work inside applications, it’s not surprising that early attempts to guarantee Exactly Once between systems relied on distributed transactions, this time to not-so-good effect. Pat Helland summarized the problems with distributed transactions this way:

“The two-phase commit protocol will ensure perfect consistency given infinite time. I say that because it will wait and wait and wait until the transaction is resolved and then provide perfect consistency. Of course, while partitioned and waiting, arbitrary swaths of the application’s database may be locked up rendering the application unusable. For this reason, I’ve frequently referred to the two phase commit protocol as the “Anti-Availability Protocol”.”
Pat Helland, SOA and Newton’s Universe

So now we’re faced with a dilemma. Transactions are, for all practical purposes, unusable to ensure Exactly Once processing between connected systems. And yet, the business requirement to ensure Exactly Once hasn’t gone away. We need another way.

The first fallacy of distributed computing is that the network is reliable. It usually works, but “usually” isn’t a guarantee. If I send a message to a remote system but don’t get an acknowledgement, which got lost: the original message or the ack? There’s no way to know, so I have to send the message again. But if I resend it and it’s the ack that got lost, then the target system will receive the message multiple times.

Since the network is not reliable, there’s no way to guarantee that a message will be delivered exactly once. The best we can do is ensure a message will be delivered at least once. However, that implies the target system may receive some messages multiple times. If we need to ensure Exactly Once, we need to make sure the target system won’t duplicate the work if it receives duplicate messages. In other words, we need the target system to be idempotent.

“In computer science, the term idempotent is used to describe method or subroutine calls which can safely be called multiple times, as invoking the procedure a single time or multiple times results in the system maintaining the same state i.e. after the method call all variables have the same value as they did before.

Example: Looking up some customer’s name and address are typically idempotent, since the system will not change state based on this. However, placing an order for a car for the customer is not, since running the method/call several times will lead to several orders being placed, and therefore the state of the system being changed to reflect this.”
Wikipedia, Idempotence (Computer Science)

Or more succinctly:

“Idempotent Means It’s OK to Arrive Multiple Times”
Pat Helland (again)

I can’t overstate the importance of designing your cross-system communication to be idempotent. If you care about ensuring Exactly Once, each step of your process has to be either transactional or idempotent, or you’ll be screwed. It’s interesting to note that each step has to be transactional *OR* idempotent – it doesn’t have to be both. You can chain together multiple steps in a long business process, across multiple disparate systems, and as long as each step is either transactional or idempotent, you can guarantee Exactly Once across the entire process. In other words:

Transactional/Exactly Once == Idempotent/At Least Once

This implies that you can substitute an idempotent operation for a transactional operation, and still ensure Exactly Once.

Let’s look at an example. Typically you ensure Exactly Once processing with MSMQ by receiving messages within the scope of a transaction along with whatever other work you’re doing. But what if you can’t use a transactional receive, say because it’s a remote queue? What would an idempotent equivalent for transactional receive look like?

How about:

  1. Peek a message from the remote queue
  2. Insert the message into the target system database, using the unique MSMQ Message ID as the primary key
  3. Remove the message from the queue by ID

Each of those steps is idempotent. Peek is a read, which is naturally idempotent. Inserting the message into the database is idempotent, since we use the message ID as the primary key. As long as that ID is unique, we can never insert it into the database more than once. Finally, removing a message based on its unique ID is also naturally idempotent. Once the message is in the target system database, we can use traditional transactions to ensure it gets processed Exactly Once.
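
In code, those three steps might look something like this. System.Messaging is real; InsertMessage is a hypothetical helper that, like the dedup log idea above, swallows the primary key violation on a duplicate:

    // Sketch of an idempotent receive from a remote (non-transactional)
    // queue. Each step can be safely repeated if we crash part-way through.
    using System.Messaging;

    static void IdempotentReceive(MessageQueue remoteQueue, string connectionString)
    {
        // 1. Peek: a read, naturally idempotent
        Message msg = remoteQueue.Peek();

        // 2. Insert keyed on the MSMQ message ID; re-running this after a
        //    crash hits the primary key and becomes a no-op (InsertMessage
        //    is hypothetical and swallows the PK violation)
        InsertMessage(connectionString, msg.Id, msg.BodyStream);

        // 3. Remove by ID; if the message is already gone, ReceiveById
        //    throws, which a real implementation would catch and ignore
        remoteQueue.ReceiveById(msg.Id);
    }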

So we took a single transactional operation and turned it into a series of idempotent steps. Both ensure each message is processed Exactly Once. Given the choice, I’d rather write the transactional operation – it’s much less code, since we can use existing infrastructure – aka the distributed transaction coordinator. But if the transactional infrastructure isn’t available, I’d rather write multiple idempotent steps and ensure Exactly Once than risk losing or duplicating messages.

I’ve got more on this topic, but in the meantime think about this: How do you think durable messaging infrastructure like MSMQ ensures exactly once delivery? You can use that pattern, even if you’re not using durable messaging infrastructure.

The Durable Messaging Debate Continues

Last week, Nick Malik responded to Libor Soucek’s advice to avoid durable messaging. Nick points out that while both durable and non-durable messaging require some type of compensation logic (nothing is 100% foolproof because fools are so ingenious), the durable messaging compensation logic is significantly simpler.

This led to a very long conversation over on Libor’s blog. Libor started by clarifying his original point, and then the two of them went back and forth chatting in the comments. It’s been very respectful; Libor calls both Nick and me “clever and influential”, though he also thinks we’re wrong on this durable messaging thing. In my private emails with Libor, he’s been equally respectful and his opinion is very well thought out, though obviously I think he’s the one who’s wrong. 😄

I’m not sure how much is clear from Libor’s public posts, but it looks like most of his recent experience comes from building trading exchanges. According to his about page, he’s been building electronic trading systems since 2002. While I have very little experience in that domain, I can see very clearly how the highly redundant, reliable multicast approach he describes would be a very good, if not the best, solution in that space.

But there is no system inside Microsoft IT that looks even remotely like a trading exchange. Furthermore, I don’t think approaches for building a trading exchange generalize well. So that means Nick and I have very different priorities than Libor, something that seems to materialize as a significant amount of talking past each other. As much as I respect Libor, I can’t shake the feeling that he doesn’t “get” my priorities and I wouldn’t be at all surprised if he felt the same way about me.

The biggest problem with his highly redundant approach is the sheer cost when you consider the large number of systems involved. According to Nick, MSIT has “over 2700 applications in 82 solution domains”. When you consider the cost for taking a highly redundant approach across that many applications, the cost gets out of control very quickly. Nick estimates that the support staff cost alone for tripling our hardware infrastructure to make it highly redundant would be around half a billion dollars a year. And that doesn’t include hardware acquisition costs, electricity costs, real-estate costs (gotta put all those servers somewhere) or any other costs. The impact to Microsoft’s bottom line would be enormous, for what Nick calls “negligible or invisible” benefit.

There’s no question that high availability costs big money. I just asked Dale about it, and he said that in his opinion going above 99.9% availability increases costs “nearly exponentially”. He estimates just going from 99% to 99.9% doubles the cost. 99% availability is almost 15 minutes of downtime per day (on average). 99.9% is about 90 seconds downtime per day (again, on average).

How much is that 13 extra minutes of uptime per day worth? I would say “depends on the application”. How many of the 2700 applications Nick mentions need even 99% availability? Certainly some do, but I would guess that less than 10% of those systems need better than 99% availability. What pretty much all of them actually need is high reliability, which is to say they need to work even in the face of “hostile or unexpected circumstances” (like system failures and downtime).

High availability implies high reliability. However, the reverse is not true. You can build systems to gracefully handle failures without the cost overhead of highly redundant infrastructure intended to avoid failures. Personally, I think the best way to build such highly reliable yet not highly available systems is to use durable messaging, though I’m sure there are other ways.

This is probably the biggest difference between Libor and me. I am actively looking to trade away availability (not reliability) in return for lowering the cost of building and running a system. To someone who builds electronic trading systems like Libor, that probably sounds completely wrongheaded. But an electronic trading system would fall into the minority of systems that need high availability (ultra high five nines of availability in this case). For the systems that actually do need high availability, you have to invest in redundancy to get it. But for the rest of the systems, there’s a less costly way to get the reliability you need: Durable Messaging.

Morning Coffee 99

  • Mladen Prajdic has a great post on handling a database in your unit tests. He mentions NDbUnit but seems mostly to favor SQL 2005’s database snapshot feature. He’s got sample code for creating and restoring a snapshot. (via DNK)
  • Microsoft Robotics Studio 1.5 released yesterday. Tandy Trower – GM of the Robotics group – has the details on what’s new.
  • Herb Sutter has a new column in Dr. Dobbs on concurrency. First up, “building a consistent mental model for reasoning about concurrency”. Sounds like a must read column. (via LtU)
  • Scott Hanselman describes “Sez You Architecture”. I wonder, do architecture ninjas get to wear a Shinobi shozoku?
  • From the Not Everyone Agrees With DevHawk Dept.: Libor Soucek disagrees with me and thinks that durable messaging should be avoided. I had a hard time following Libor’s logic but needless to say, I disagree with his disagreement. He writes that one of the reasons to use DM is for “Cooperating on transaction with external system”. While multiple systems may be cooperating on a business transaction, in no way do I believe they are going to cooperate on a database transaction. But since he started talking about the DTC, I suspect we’re talking past each other. Libor, drop me a line and we can discuss further.