Portability without Productivity

I said this was going to be a slow week, so I dug out something I wrote while I was on paternity leave. I was saving it for a rainy day, which is pretty silly since, if it were raining, I wouldn’t mind staying indoors and writing long posts about levels of abstraction, portability and productivity.


My father and I have a running debate about the value of platform independence in the system design and development process. As you might guess, as an employee of a platform company I stand firmly on the “platform neutrality is irrelevant” side of the debate. Having been around Bell Labs during the development of Unix and C, as well as having done a stint at an ISV, my father is firmly on the “platform neutrality is important” side. Typically, these discussions turn into a childish argument where my father continuously says “what if” and I continuously say “that never happens,” and neither of us gains any ground convincing the other of the error of their ways.

So herein is yet another salvo in the discussion, for your entertainment. It’s long and drawn out, but since it’s written it means he can’t interrupt me to say “what if” 😄

As mentioned above, my father was at Bell Labs in the early ’70s when Unix and C were developed. I guess it’s no surprise that he harps so much on portability: going back and reading some of the papers that came out of Bell Labs at the time, it’s obvious that their culture heavily valued portability. On Dennis Ritchie’s site (the ‘R’ in ‘K&R’) there is a wide variety of relevant material, including The Evolution of the Unix Time-sharing System, The Development of the C Language, and Portability of C Programs and the UNIX System. Given the drastic evolution in computing at the time, it’s not surprising that both Unix and C had portability as a primary goal. While I jokingly refer to C’s portability as “write once, compile everywhere,” the reality is that the portability of C and Unix was a key to Unix’s success. According to the portability paper, 95% of the C portion of the Unix kernel required no changes when they ported Unix from the PDP-11 to the Interdata 8/32. Only the assembly-language portion of the kernel had to be completely rewritten. Even a significant portion of the device drivers carried over to the new machine.

However, C isn’t just portable. It also provides significant abstraction above assembly code. Anyone who has done assembly work knows how low-level it is and how significant the jump up to C really is. For example, the simple C statement “a = b + c” takes three lines of assembly code (there’s a concrete example of this a couple of paragraphs down). Here’s how Ritchie describes C’s level of abstraction:

BCPL, B, and C all fit firmly in the traditional procedural family typified by Fortran and Algol 60. They are particularly oriented towards system programming, are small and compactly described, and are amenable to translation by simple compilers. They are ‘close to the machine’ in that the abstractions they introduce are readily grounded in the concrete data types and operations supplied by conventional computers, and they rely on library routines for input-output and other interactions with an operating system. With less success, they also use library procedures to specify interesting control constructs such as coroutines and procedure closures. At the same time, their abstractions lie at a sufficiently high level that, with care, portability between machines can be achieved. (emphasis added) [Dennis M. Ritchie, The Development of the C Language]

That last sentence is key. C’s portability derives from raising the level of abstraction. But raising the level of abstraction also had an important productivity impact. Even if you’re only building for a single platform and don’t care about portability, you’d still rather code in C than in assembly, because the raised level of abstraction makes the developer much more productive. However, raising the level of abstraction comes with a performance cost. C is pretty ‘close to the machine,’ as Ritchie put it, but there is still a small penalty. If you’re writing ultra-performance-sensitive code, sometimes writing in assembly is necessary. There’s a reason why books on the topic keep getting written and why the Visual C++ compiler supports inline assembly code.
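To make that concrete, here’s a minimal sketch of my own (it’s not from any of the papers above) showing “a = b + c” written by hand using the Visual C++ __asm extension. Note that __asm is x86-only, and the exact instructions vary by compiler and architecture:

    #include <stdio.h>

    int main(void)
    {
        int a, b = 2, c = 3;

        /* a = b + c, hand-written with the x86-only __asm extension */
        __asm {
            mov eax, b    ; load b into a register
            add eax, c    ; add c to it
            mov a, eax    ; store the result back into a
        }

        printf("a = %d\n", a);   /* prints "a = 5" */
        return 0;
    }

Three instructions for one C statement. Multiply that across a real program and the productivity argument pretty much makes itself.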

So here’s my point: Raising the level of abstraction is powerful because it can enable both portability and productivity. However, it also typically carries a performance penalty. As long as the portability and productivity benefits outweigh the performance penalty, everything’s cool. The problem is that productivity is typically WAY WAY WAY more important than portability. So abstractions that enable portability without significant positive productivity benefits will not offset the performance penalty associated with raising the level of abstraction.

The canonical example of a “portability without productivity” abstraction that leaps to mind today is the Java platform. Certainly, Java has been pretty successful, though I would argue that its success has been extremely localized. Java on the client has never had mass adoption (the only non-toy Java client app I can think of off the top of my head is Eclipse), and many parts of Java on the server bear a striking resemblance to Microsoft technology (ODBC vs. JDBC, ASP vs. JSP, Session beans vs. MTS, etc.). Either way, Java adoption has fallen below .NET adoption even though Java had the promise of platform neutrality as well as a several-year head start. 1

I would argue that one of the main reasons Java has had only limited success is that while Java is portable, it doesn’t provide much in the way of productivity improvements. Sure, garbage collection is much easier to program against than C++’s explicit memory management model. But Java’s primary competitor (at least in hindsight) was Visual Basic, not C++. While Java focused on portability, VB focused on productivity, and in the end it was productivity that drove VB’s massive market share. If you were a Java developer, you had worse tools than VB and worse performance than C++, and the only advantage you had was portability, which turned out to be more problematic and less important than advertised. Server apps are rarely re-platformed, and Java’s UI implementation caused as many problems as it solved.
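To be fair to the garbage collection point, here’s a tiny C sketch of my own (the function is purely illustrative) showing the manual ownership bookkeeping that a collector makes disappear:

    #include <stdlib.h>
    #include <string.h>

    /* The caller owns the returned buffer and must free() it on every
       code path; that bookkeeping is exactly what a garbage collector
       takes off your plate. */
    char *duplicate(const char *src)
    {
        char *copy = malloc(strlen(src) + 1);
        if (copy == NULL)
            return NULL;    /* allocation failed; the caller must check */
        strcpy(copy, src);
        return copy;
    }

In Java, the collector reclaims the copy whenever the last reference goes away; in C and C++, every exit path is a chance to leak or double-free. That’s a real productivity win, just not a big enough one to overcome VB’s tooling advantage.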

From a geek aesthetic perspective, you would have guessed that Java, with its clean language and programming model, would crush VB and COM in the marketplace, but it just didn’t happen. VB, with its software factory-esque approach, was easier to use and thus attracted more developers who got more apps written. I’d guess that ease of use / developer productivity is the key indicator of success for programming environments. If Java had focused on productivity as well as portability, I might be working for Sun today.


  1. Obviously, there are varying opinions on this point. SteveB said @ TechEd that .NET is the weapon of choice (my words, not his) for 43% of all developers. Java was second with 35%, and non-.NET Windows development was third (no percentage given). Even if you want to nit-pick the numbers, you’d be hard-pressed to argue that Java hasn’t been losing ground dramatically to .NET in the three years since VS 2002 RTMed.

Blah Blah Architecture

If you heard my PDC05 Buzzcast but just can’t get enough of my voice, check out Architect MVP Mario Cardinal‘s Blah Blah Architecture podcast. I spent some time with Mario at TechEd talking about the Architecture Resource Center: why we designed it the way we did, why we have a site on both MSDN and MS.com, how our taxonomy works, etc. Mario then takes my basic explanation and fills in more of the details, such as the relevance of the Architecture Resource Center to folks not using Microsoft technology. We also talked about the Architecture Journal as well as why Microsoft invests in architecture.

BTW, I make some of the same points about architecture that I’ve made here on this blog this past week: how it differs from engineering, how it occupies the space between business and technology, and the effect of title inflation.

I guess I need some new material. 😄

Architecture Innovation?

So if architecture is the connection between business and technology, where does innovation fit? Microsoft has been pushing the idea of “integrated innovation” for several years now, but that’s primarily about technology innovation:

[T]he mission [of] Integrated Innovation is about ensuring that the value of the Microsoft platform is greater than the sum of its components. It’s the coordination of software products, the way entire systems can be made to work together better. It’s a strategy to add customer-driven features and functionality to achieve specific business results while reducing cost and complexity.
[‘Integrated Innovation’ Provides Partners with Roadmap to Success]

But what about business innovation? How often do we talk about that? Not enough, IMO. Especially since business innovation is much more likely to make or break a company than technology innovation. For example, how did Dell become the dominant company in the PC business? To quote from the Amazon review of Michael Dell’s book: “What makes Dell Computer unique is not what it sells, but rather how it sells it”. Usually, people note Dell’s direct-selling model as the catalyst for their success. Direct selling might have been absent from the PC industry before Dell came along, but it’s not like direct selling is a particularly innovative business model. However, what made the direct selling of highly configurable computers a reality was the innovative approach Dell took to managing its supply chain. Quoting from an Accenture case study on Dell’s supply chain:

Explains Dick Hunter, vice president, supply-chain management: “We now schedule every line in every factory around the world every two hours, and we only bring into the factory two hours’ worth of materials. We typically run a factory with about five or six hours’ worth of inventory on hand, including work in progress. This has decreased the cycle time at our factories and reduced warehouse space—space that has been replaced by more manufacturing lines.”

Not surprisingly, the project has produced more than just enhanced supply chain efficiencies and accelerated, highly reliable order fulfillment. At any given time, there is less than four days of inventory in the entire Dell operation, while many competitors routinely carry 30 days or more. In addition, automation has helped Dell react more quickly to correct potentially out-of-balance situations, made it much easier to prevent components from becoming obsolete and improved response times across the supply chain by providing a global view of supply and demand at any specific Dell location at any time.

Certainly, there is technology innovation involved in the management of Dell’s supply chain (and w/ RFID on the horizon, significantly more technology innovation in this space is still to come), but the primary innovation here was business-oriented. I wonder where the next business innovations are going to be.

John asked me over lunch whether the people who read this blog would think I’m an architect or an engineer. Personally, using the definitions I’ve laid out this week, my heart’s in engineering but I’m getting more interested in architecture. Maybe I’m wrong, but it feels to me that pure engineering problems are giving way to architectural problems as Moore’s law and its corollaries in network speed and storage space keep pushing out the limits of computing power. Jim Gray & Charles Levine wrote a funny article pointing out that Jim’s two-year-old TabletPC with a 1.6 GHz Centrino processor can handle over 8,000 transactions per second. To put that in perspective, in 1976, Bank of America’s DebitCredit system reached 100 transactions per second. It took a decade to build a system that could handle over 200 transactions per second. Now, most of us are walking around with a machine that can easily handle 40 times that performance.

As Moore’s law continues to solve technical challenges, I think it is creating new business challenges. And you know me…I like a challenge.

PDC05 Architecture Symposium Buzzcast

Michael cornered me on Tuesday to record a PDC05 Buzzcast with Mike Burner about the Architecture Symposium. At PDC03, the Architecture Symposium was one of the more popular and successful aspects of the overall conference (though it was marred by a major room-change snafu that forced literally hundreds of attendees to watch from the hallway as the room overflowed), and we’re looking to do something engaging again this year.

Like last time, the Architecture Symposium will be held on the last day of the conference, Friday, from 8:30 until noon (with breaks, of course). After lunch, we’ll have a panel discussion featuring Gregor Hohpe, David Ing, Tony Redmond, and Steve Swartz.

Here’s the full symposium abstract:

You’ve had a tantalizing week of cool technology, but now you need to transition back to your real job: making all of the pieces work together. The PDC Architecture Symposium will zoom you through the solutions lifecycle – from requirements to modeling to requirements to iterative development to requirements to operational feedback (which you might look at as another set of requirements) – showing you how traditional best practices and recent innovations can be used together to build robust solutions that accelerate business value creation.

Topics include:

The Architecture of Connected Systems
In the beginning there are the models – from the thing you scrawl on a napkin at lunch to that enormously complex diagram that your network architect carries around in a cardboard tube. What models are worth creating, and how do they relate to each other? Who are the key stakeholders for each, and how can you help them talk to each other? This session explores how to decompose value chains into your key models – your process and work flows, the information at the heart of your processes, and the access, deployment, and other operational models that you need to stay trustworthy and compliant. We will then map these models into a collection of services, orchestrations, and policies that define a highly integrated solution.

Building Connected Systems
With so much complexity and so many stakeholders, how do you build the right thing on time? This session explores the techniques to iterate agilely through a connected system project, including the patterns and practices for building solutions that combine messaging, workflow, structured information, and human interaction across platforms and across organizational boundaries. How can we give the right access to everyone in the value chain, respecting the very real boundaries around information and process control? How do we keep our models current, and use them to communicate with all of the stakeholders throughout the development lifecycle?

Managing the Connected Systems Lifecycle
As each iteration of your connected system is deployed and used, new requirements and system refinements emerge. How do we design in the operational hooks that give us the insight to learn from our deployed solutions? How do we re-factor and version our services and orchestrations to improve service reuse, scalability and operational efficiency? The key theme is driving collaboration between development and operations groups from the earliest design phase through the ongoing maintenance of the system.

As Michael said on the buzzcast, you just can’t miss the Architecture Symposium. See you there.

Believe Me, You’re Not Architecting

So I said yesterday that I’m talking about architecture, not architects. However, today I am going to discuss the word “architect” and the dramatic misuse of the word I see pretty regularly. Or at least, a misuse I notice now because Paul Preiss of IASA pointed it out when we were hanging out at TechEd.

“Architect” is a noun, not a verb.

Usually, when I hear the term “architect” used as a verb, it’s being used as a synonym for design. In fact, the term “architecture” is also often used as a synonym for design. For example, Arnon wrote this:

Architecture is design (but not all design is architecture)
[Arnon Rotem-Gal-Oz, What’s “Software Architecture”?]

As with Fowler’s description, which I talked about on Monday, I don’t like this one much either. Architecture isn’t just “good design,” which is what Arnon’s description makes it sound like. This gets into the title inflation that Alan Cooper wrote about a few years ago. Speaking of Alan, here’s his definition of an architect:

The panoply of software construction includes three vital roles: the programmer, the engineer, and the architect. The architect is responsible for determining who the user is, what he or she is trying to accomplish, and what behavior the software must exhibit to satisfy these human goals. The engineer’s responsibilities are comparable but focused on technology. A good engineer can and should ignore human issues, confident that the architect will cover the human side.

That definition of an architect (i.e. what Alan says an architect does) dovetails pretty nicely with my definition of architecture. The architect and architecture are the link between the users (i.e. the business) and the software (i.e. IT). The only thing I would change is that as we get deeper into service orientation, interop, and connected systems, we have lots of processes that don’t have direct user involvement. So the “human goals” Alan mentions may not be end-user goals so much as business / organization goals. Either way, they certainly aren’t IT goals.

As Dave Welsh said and my dad pointed out in my comments, Business is from Mars, IT is from Venus. But that doesn’t mean they can’t get together. In fact, they have to get together. You show me a system with no business drivers or impact and I’ll show you a failed architecture.