Morning Coffee 113

  • I’m in Chicago today and tomorrow for a reunion of sorts. In my last job, I managed a group of external architects called the Microsoft Architecture Advisory Board (aka the MAAB). We discontinued the program a while back, but the core of the group found it valuable enough that they’ve continued to meet anyway. I always found the MAAB meetings incredibly valuable and insightful, so I’m really excited to be invited to continue my involvement with the group.
  • I picked up Bioshock Tuesday (Circuit City had it on sale) on my way to my bi-weekly campus excursion. My meetings were over around 2pm so I headed home early, expecting to surprise the kids. But Jules had decided to skip naps and go shopping with them. Her cell phone was dead, so I ended up at home with a couple of hours to myself and a brand new copy of Bioshock. Wow, is that a good game. Certainly deserving of the amazingly good reviews it’s garnered.
  • Speaking of reviews, this transparently biased review of Bioshock over at the Sony Defense Farce (er, Force) is frakking hilarious. Somehow, I doubt their dubious review will stem the tidal wave of Bioshock’s well-deserved hype. Can’t wait to read their Halo 3 review.
  • Pat Helland writes at length on master-master replication. I reformatted it into PDF so I could read it – the large images were messing up the text flow on my system. As usual for Pat, there’s gold in that thar post. His thoughts on DAGs of versions and vector clocks as identifiers are very exciting. However, I think he glosses over the importance of declarative merging. I would think a programmatic merge would likely be non-deterministic across nodes. If so, wouldn’t you end up with two documents with the same vector-clock identifier but different data? (There’s a rough sketch of that scenario at the end of this list.)
  • Joe McKendrick points to a few people who predict the term “service-oriented” will eventually be subsumed under the general heading of “architecture”. Not to brag, but I made that exact same prediction almost three years ago.
  • Erik Johnson thinks that SOA 2.0 centers on transformational patterns. The idea (I think) is that if systems “understand each other more deeply”, then we can build a “smarter stack” and design apps via new constructs to promote agility and simplicity. Personally, I’m skeptical that we can define system semantics unambiguously except in the simplest scenarios, but Erik talks about using “graph transformation mathematics” to encode semantics. I don’t know anything about graph transformation mathematics, but at least Erik has progressed beyond hand waving to describing the “what”. Here’s looking forward to the “how”.
  • New dad Clemens Vasters somehow finds time to implement an XML-RPC binding for WCF 3.5. I was encouraged that it didn’t require any custom attributes or extensions at the programmer level. Of course, XML-RPC fits semantically into WCF’s interface-based service model, so that shouldn’t be a huge surprise. But did it need WCF 3.5? Would this work if recompiled against the 3.0 assemblies?
  • Phil Haack writes a long post on Duck Typing. When VB9 was first announced, it supported duck typing – a feature called Dynamic Interfaces – but the feature was subsequently cut. I was really looking forward to it. Between dynamic interfaces and XML Literals, VB9 was really stepping out of C#’s shadow. I guess it still is, even without dynamic interfaces.
  • Since I’ve been doing some LINQ to XML work lately, I decided to go back and re-write my code in VB9 using XML literals. While XML literals are nice, I don’t think they’re a must-have. First, LINQ to XML has a nice fluent interface, so the literals don’t give you that much cleaner code (though you do avoid writing XElement and XAttribute over and over). Second, I find VB9’s template syntax (like ASP <%= expression %>) clunky to work with, especially in nested templates. Finally, I like the namespace support of XNames better. As far as I can tell, VB9 defines namespaces with xmlns attributes just like XML does. So I’m not dying for literal XML support in a future version of C#. How about you? (There’s a quick C# sketch of the fluent style right after this list.)
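For the curious, here’s roughly what the fluent style looks like in C#. This isn’t my actual code, just a sketch with a made-up namespace and data:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class FluentXmlSketch
{
    static void Main()
    {
        // Hypothetical namespace and data, just to show the shape of the code.
        XNamespace ns = "http://example.com/orders";
        var items = new[]
        {
            new { Sku = "A-100", Quantity = 2 },
            new { Sku = "B-200", Quantity = 1 }
        };

        // The fluent style: a little noisy with XElement/XAttribute repeated,
        // but namespaces are first-class XName values (ns + "order") instead
        // of xmlns attributes embedded in literal markup.
        var order = new XElement(ns + "order",
            new XAttribute("id", 42),
            new XElement(ns + "customer", "Contoso"),
            from item in items
            select new XElement(ns + "item",
                new XAttribute("sku", item.Sku),
                new XAttribute("qty", item.Quantity)));

        Console.WriteLine(order);
    }
}
```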
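And to put the vector-clock worry above in concrete terms, here’s a rough sketch. This is my own illustration, not Pat’s model, and all the names are invented. The point is that two replicas merging the same pair of concurrent versions compute identical clocks (the element-wise max), so a non-deterministic programmatic merge leaves them with the same identifier but different data:

```csharp
using System;
using System.Collections.Generic;

// My own illustration of the vector-clock question, not Pat's model.
class VersionedDoc
{
    public Dictionary<string, int> Clock = new Dictionary<string, int>();
    public string Data;
}

static class Replica
{
    public static VersionedDoc Merge(VersionedDoc a, VersionedDoc b,
                                     Func<string, string, string> mergeData)
    {
        // The merged clock is the element-wise max of the two input clocks,
        // so every node that merges the same two versions computes the same clock.
        var clock = new Dictionary<string, int>(a.Clock);
        foreach (var entry in b.Clock)
        {
            int existing;
            clock.TryGetValue(entry.Key, out existing);
            clock[entry.Key] = Math.Max(existing, entry.Value);
        }

        // The data, though, comes from the programmatic merge function. If that
        // function isn't deterministic across nodes, two replicas end up with
        // the same vector-clock identifier but different data.
        return new VersionedDoc { Clock = clock, Data = mergeData(a.Data, b.Data) };
    }
}

static class Demo
{
    static void Main()
    {
        // Two concurrent versions of the same document.
        var v1 = new VersionedDoc
        {
            Clock = new Dictionary<string, int> { { "NodeA", 2 }, { "NodeB", 1 } },
            Data = "red"
        };
        var v2 = new VersionedDoc
        {
            Clock = new Dictionary<string, int> { { "NodeA", 1 }, { "NodeB", 2 } },
            Data = "blue"
        };

        // Each node merges with its own (non-deterministic) choice:
        // identical clocks, different data.
        var onNodeA = Replica.Merge(v1, v2, (x, y) => x);   // keeps "red"
        var onNodeB = Replica.Merge(v1, v2, (x, y) => y);   // keeps "blue"
        Console.WriteLine(onNodeA.Data + " vs " + onNodeB.Data);
    }
}
```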

Morning Coffee 110

  • Monday @ Gamefest, the XNA team announced XNA Game Studio 2.0. The two big new things are support for the entire VS product line (1.0 only works on VC# Express) and the addition of networking APIs. Let’s Kill Dave has a good wrapup of the announcements from Gamefest Day One.
  • Speaking of Xbox 360, I played thru the demos of Stranglehold and Bioshock. Two thumbs up on both. It’s gonna be an expensive year for Xbox gamers.
  • Mark Cuban noodles on taking your house public. “Why not create a market or exchange where homeowners can sell equity in their homes?” I’ve thought about this myself from time to time. However, Mark thinks making it happen would “probably take the country’s biggest banks working together”. I wonder if there’s a more Web 2.0 social lending approach that would work better.
  • Jeff Atwood calls virtualization “the next great frontier for computer security”. I agree 100%. But I don’t think the action is going to be in “full-machine” virtualization like Virtual PC. Rather, it’s going to be in sandbox virtualization. Jeff mentions GreenBorder (now part of Google), but it’s not the only solution. Some time ago, Microsoft acquired SoftGrid, which uses sandbox virtualization (its SystemGuard environment) for application deployment; using SystemGuard for security sandboxing seems like a logical next step.
  • The WCF LOB Adapter SDK has shipped. Sonu Arora has the details. As part of the Integration team @ MSIT, I have a feeling we’re going to become fairly familiar with this technology. (via Jesus Rodriguez).
  • Speaking of Jesus, he thinks the six new SCA4SOA committees are “going to help”. Why? Because inventing technology in committee has turned out so well in the past?
  • John deVadoss cements BPM’s fad du jour status by contrasting “big” BPM and “little” BPM. It’s fairly obvious to me that big *anything* just doesn’t work in the enterprise. But I worry that little *anything* doesn’t work that well either. So how long until someone (probably Nick) starts arguing for “middle out” BPM?
  • David Bressler wonders “What is it about registries that everyone thinks is a panacea for all things SOA?” Amen, Brother! Joe McKendrick claims it’s required for governance, but then gets to what I think is the *real* reason for focus on registries: the “registry is a tangible offering” that vendors can sell. Just because it’s productizable doesn’t mean you need it.
  • Hartmut Wilms responds to my retire the tenets post, but he seems to contradict himself. On the one hand, he suggests that “the four tenets just expressed, what “almost” everybody outside the MS world knew already”. But then he goes on to dispute that the SO paradigm shift has even occurred! Hartmut, I’ll grant you that WCF (among other similar stacks) is way too focused on “you write the classes, we’ll handle the contracts and messages”. On the other hand, if you don’t provide a productive interface that most everyone can pick up and run with, the technology won’t get adopted in the first place.

Retire the Tenets

John Heintz and I continue to be in mostly violent agreement. It’s kinda like me saying “You da architect! Look at my massive scale EAI Mashup!” and having him respond “No, you da architect! The SOA tenets drive me bonkers!” Makes you wonder what would happen after a few beers. What’s the architect version of Tastes Great, Less Filling? (Not that you would catch me drinking Miller Lite. Ever.)

Speaking of the tenets, John gives them a good shredding:

Tenet 1: Boundaries are Explicit
(Sure, but isn’t everything? Ok, so SQL based integration strategies don’t fall into this category. How do I build a good boundary? What will version better? What has a lower barrier to mashup/integration?)

Tenet 2: Services are Autonomous
(Right. This is a great goal, but provides no guidance or boundaries to achieve it.)

Tenet 3: Services share schema and contract, not class
(So do all of my OO programs with interfaces and classes. What is different from OO design that makes SOA something else?)

Tenet 4: Service compatibility is based upon policy
(This is a good start: the types and scope of policy can shape an architecture. The policies are the constraints in a system. They’re not really defined though, just a statement that they should be there.)

Ah, I feel better getting that out.

As John points out, the four tenets aren’t particularly useful as guidance. They’re too high level (like Mt. Rainier high) to be really actionable. They’re like knowing a pattern’s name but not understanding how and when to use the actual pattern. However, I don’t think the tenets were ever intended to be guidance. Instead, they were used to shift the conversation on how to build distributed applications just as Microsoft was introducing the new distributed application stack @ PDC03.

John’s response to the first tenet makes it sound like having explicit boundaries is obvious. And today, maybe it is. But back in 2003, mainstream platforms typically used a distributed object approach to building distributed apps. Distributed objects were widely implemented and fairly well understood. You’d create an object like normal, but the underlying platform would create the actual object on a remote machine. You’d call functions on your local proxy, and the platform would marshal the call across the network to the real object. The network hop would still be there, but the platform abstracted away the mechanics of making it. Examples of distributed object platforms include CORBA via IOR, Java RMI, COM via DCOM, and .NET Remoting.
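If you never lived through it, here’s a rough sketch of that style in .NET Remoting terms. The type and URL are made up, and I’ve omitted channel registration and server-side hosting; the point is simply that the client codes against what looks like a local object:

```csharp
using System;

// Server-side type: deriving from MarshalByRefObject makes instances
// remotable by reference -- callers get a proxy instead of a copy.
public class OrderService : MarshalByRefObject
{
    public decimal GetOrderTotal(int orderId)
    {
        // Runs on the server; the caller never sees the network hop.
        return 42.00m;
    }
}

public static class Client
{
    public static void Main()
    {
        // Activator.GetObject hands back a transparent proxy bound to the
        // remote object's URL. To the caller it looks like any other object.
        var orders = (OrderService)Activator.GetObject(
            typeof(OrderService),
            "tcp://some-server:8080/OrderService");

        // Looks like a local method call, but it's really a network round trip.
        decimal total = orders.GetOrderTotal(1001);
        Console.WriteLine(total);
    }
}
```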

The (now well documented and understood) problem with this approach is that distributed objects can’t be designed like other objects. For performance reasons, distributed objects have to have what Martin Fowler called a “coarse-grained interface”, a design which sacrifices flexibility and extensibility in return for minimizing the number of cross-network calls. Because the network overhead can’t be abstracted away, distributed objects are a very leaky abstraction.
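A hypothetical sketch makes Fowler’s point clearer. The first interface is a perfectly natural object design; the second is what you’re pushed toward once every call is a network round trip:

```csharp
// Chatty interface: a perfectly natural OO design, but over a network
// every one of these calls is a separate round trip.
public interface ICustomerChatty
{
    string GetName(int customerId);
    string GetAddress(int customerId);
    string GetPhone(int customerId);    // three properties = three round trips
}

// Coarse-grained interface: one call returns everything the client needs,
// trading flexibility and extensibility for fewer cross-network calls.
public class CustomerSummary
{
    public string Name;
    public string Address;
    public string Phone;
}

public interface ICustomerCoarse
{
    CustomerSummary GetCustomerSummary(int customerId);    // one round trip
}
```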

So in 2003, the Indigo folks came along and basically said “You know the distributed object paradigm? The one we’ve been shipping in our platform since 1996? Yeah, turns out we think that’s the wrong approach.” Go back and check out this interview with Don Box from early 2004. The interviewer asks Don if WCF will “declare the death of distributed objects”. Don hems and haws at first, saying “that’s probably too strong of a statement”, but then later says that the “contract, protocol, messaging oriented style will win out” over distributed objects because of natural selection.

The tenets, IMHO, were really designed to help the Windows developer community wrap their heads around some of the implications of messaging and service orientation. These ideas weren’t really new – the four tenets apply to EDI, which has been around for decades. But for a generation of Windows developers who had cut their teeth on DCOM, MTS and VB, it was a significant paradigm shift.

These days, with the tenets going on four years old, the conversation has shifted. Platform vendors are falling over themselves to ship service/messaging stacks like WCF and most developers are looking to these stacks for the next systems they build. Did the tenets do that? In part, I think. Mainstream adoption of RSS was probably the single biggest driver of this paradigm shift, but the tenets certainly helped. Either way, now that service orientation is mainstream, I would say that the tenets’ job is done and it’s time to retire them. Once you accept the service-oriented paradigm, what further guidance do the tenets provide? Not much, if any.

Morning Coffee

  • Libor Soucek continues our conversation about durable messaging. We still don’t agree, but he says he’s “fine” with durable messaging. He does go out of his way to differentiate between enterprise and supporting systems. But when you’re building connected systems, does that differentiation still matter?
  • After taking a few months off, John deVadoss is back at the blog. Check out his Big SOA/Little SOA post. I especially like his snowball analogy (“How do you build a big snowball? You start with a small snowball.”), though he’s also on this “middle out” bandwagon. Do we really believe “middle out” works, or are we just saying it because we know top down and bottom up don’t? And John: You’re welcome!
  • Anyone coming to the Microsoft SOA & Business Process Conference this fall? Maybe we can have a shindig / blogger dinner / unconference / something?
  • Remus Rusanu writes about SQL Service Broker’s dynamic routing. One of the (many) cool things about SSB is that all the addressing is logical, not physical. Routing is what binds logical addresses to physical addresses, and it’s extensible.
  • Martin Fowler discusses the value of sticking to one language. I agree with his points about large frameworks being as difficult to learn as a new language. I’ve said for a long time, “If you build a framework, build tools to make it easy to use your framework”. Language is obviously a core example of such a tool. Another interesting point Martin makes is about the traditional “intimate relationship” between scripting languages and C, and how the rise of the JVM and CLR makes those managed platforms impossible to ignore. Does the need to play well in a managed environment hinder a C-based language like Ruby when compared to a natively managed scripting language like PowerShell? Finally, Martin’s “jigger of 80 proof ugliness” quote made me laugh.
  • Politics 2.0 Watch: EJ Dionne says that DailyKos is doing for Democrats what Rush Limbaugh did for Republicans almost twenty years ago: mobilization. Josh Marshall points out that “what’s happening today is vastly more participatory and distributed…than anything happening back then.”

Where Have All the SOA Mashups Gone?

John Heintz responded to my serendipitous reuse post. Nice to see I misunderstood his opinions about how easy RESTful systems are to integrate:

I didn’t mean to imply that building RESTful system would lead to magical integration without any hard work. I can see how that came across in my post, and I guess I got the reaction I asked for 😉

I get the feeling that John would be a good guy to have a beer with.

John spends most of his post writing about the SOA in the Real World book. I’ve flipped thru it and I’m familiar with the model (it is my old team, after all), but I haven’t read it, so I don’t really want to comment on the book specifically. But there were two things John mentioned that I did want to comment on.

First, at the end of his post John writes:

Can some of the constraints of REST be applied to SOA? Absolutely. I think an asynchronous, message-passing architecture with a uniform interface would be astoundingly interesting! I’m not the only one: see MEST, AMQP, and Erlang.

This goes back to a REST question I asked two months ago: is it still REST if you don’t use HTTP? I’m guessing John would say yes.

I might be going out on a limb here, but I’ll bet the core of John’s problem with SOA is how toolkits like WCF all but force you to build RPC style services that can easily be modeled as method calls. That’s certainly one of my problems with SOA. Tim Ewald said it best:

It’s depressing to think that SOAP started just about 10 years ago and that now that everything is said and done, we built RPC again. I know SOAP is really an XML messaging protocol, you can do oneway async stuff, etc, etc, but let’s face it. The tools make the technology and the tools (and the examples and the advice you get) point at RPC. And we know what the problems with RPC are. If you want to build something that is genuinely loosely-coupled, RPC is a pretty hard path to take.

If SOA == RPC and REST == loosely coupled messages, then I’ll start growing dreadlocks right now. Frankly, as Tim says, I think it’s a problem with the tools (I’m looking at you, WCF) and not the underlying architecture, but how many people can distinguish the architecture from the tools? Not many, I’m afraid.
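To make the tools complaint concrete, here’s a hypothetical sketch of the two styles in WCF terms (all the contract and type names are invented). The first is what the tooling nudges you toward; the second is the messaging style WCF supports but that the samples and wizards rarely lead you to:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Channels;

[DataContract] public class Order { /* fields elided */ }
[DataContract] public class OrderConfirmation { /* fields elided */ }

// RPC style: request in, response out, easily modeled as a method call --
// and just as easily coupled tightly to the caller.
[ServiceContract]
public interface IOrderServiceRpc
{
    [OperationContract]
    OrderConfirmation SubmitOrder(Order order);
}

// Message-oriented style: one-way operations over untyped messages,
// which pushes you toward loosely coupled, asynchronous designs.
[ServiceContract]
public interface IOrderMessageChannel
{
    [OperationContract(IsOneWay = true, Action = "*")]
    void ProcessMessage(Message message);
}
```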

Second, John asks an interesting question:

Where are the SOA mashups?

That’s easy! They’re inside the firewall where you can’t see them! 😉

Seriously, I’m not sure about “SOA” mashups, but I’m working with what you might call a huge “enterprise” mashup system inside Microsoft. Our Enterprise Data Integration Services push around massive amounts of data to downstream systems. There are over fifty datasets in production, each with scores of tables, millions of rows and hundreds of subscribing systems. One example, our Products dataset, has over 100 tables and nearly 300 subscribing systems.

Is it “service oriented”? No, but then again it was originally developed ten years ago on SQL 6.5. But is it a mashup? Is it an “application that combines content from more than one source into an integrated experience“? Yep. Is it easy to work with? No, but guess why I’m involved? We’re looking at ways to “modernize” the system. Am I going to build RPC style services as part of this modernization? Hell, no.

So John, am I right or wrong about that beer?