Two new architect bloggers to note. Jim Clark is a business architect on the Architecture Strategy Team. Jim spends a lot of time with what he calls “Red River” – identification and definition of business architectures, ontologies and environments that promote trusted business solutions. His first post is about Familiarity and Trust. Steve Cook is a contributor to Software Factories and works for Keith. Steve is looking forward to OOPSLA. So am I.
The Most Popular Modeling Environment Ever (So Far)
Steve’s post on “the modeling problem” hits the nail on the head. We’re all familiar with the concept of “fast, good, cheap – pick two”. Steve breaks down modeling into “general, precise, efficient – pick two (and favor one)”. Furthermore, you can’t have a language that is both general and precise. UML takes what Steve calls the “Favor efficiency, accept generality and compromise precision” approach:
The UML metamodel is flexible enough to allow it to describe virtually any system out there. However, from a formal semantic perspective, the resultant model is gooey and formless which makes it very difficult to compile into anything useful. At best, we can get some approximation of the underlying system via codegen, but even the best UML tools only generate a fraction of the code required to fully realize the model. The lack of precision within the model itself requires operating in both the model domain and the system domain, and implies that some facility exists to synchronize the two. Thus, the imprecision of UML forces us to solve the round-tripping/decompilation problem with 100% fidelity, which is generally difficult to do.
Software Factories, on the other hand, takes what he calls the “Favor efficiency, accept precision, and compromise generality” approach:
This, I think, is the sweet spot for Microsoft’s vision of Software Factories. Here’s why: the classic problem faced by modeling languages is Turing equivalency. How do you model a language that is Turing-complete in one that’s not without sacrificing something? The answer is: you don’t. You can either make the modeling language itself Turing-complete (which sacrifices efficiency) or you can limit the scope of the problem by confining yourself to modeling only a specific subset of the things that can be expressed in the underlying system domain. Within that subset, it might be possible to model things extremely precisely, but that precision can only be gained by first throwing out the idea that you’re going to be able to efficiently and precisely model everything.
When describing Software Factories, I have two analogies that I use to explain the idea. The first is the “houses in my neighborhood” example I blogged before. That does a good job describing economies of scope, but doesn’t really cover the modeling aspect of software factories. Talking about how you model cars or skyscrapers doesn’t really capture the essence of software modeling – you don’t generate the construction plans from a scale model of a skyscraper. However, it turns out that all developers have at least a passing familiarity with my second analogy: Visual Basic, the most popular DSL and modeling tool of all time (so far).
The original Visual Basic was a rudimentary software factory for building “form-based windows apps”. (Today, VB.net has been generalized to support more problem domains.) Like the factory approach that Steve describes, VB was very efficient, sufficiently precise, yet not particularly general (especially in the early years). There were entire domains of problems that you couldn’t build VB apps to solve. Yet, within those target problem domains, VB was massively productive, because it provided both a domain specific language (DSL) as well as a modeling environment for that domain.
A DSL incorporates higher-order abstractions from a specific problem domain. In the case of VB, abstractions such as Form, Control and Event were incorporated directly into the language. This allowed developers to directly manipulate the relevant abstractions of the problem domain. Abstractions extraneous to the problem domain, such as pointers and objects in this case, got excluded, simplifying the language immensely. Both of these led directly to productivity improvements while limiting the scope of the DSL to a particular problem domain.
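To give a feel for what “higher-order abstractions incorporated directly into the language” means, here’s a minimal sketch in Python of the Form/Control/Event trio described above. The names and API are purely illustrative, not VB’s actual object model – the point is that the developer wires up forms, controls and events without ever touching pointers or window-handle plumbing:

```python
# Illustrative sketch of the Form/Control/Event abstractions a
# form-building DSL exposes. Names are hypothetical, not VB's API.

class Event:
    """A named event that controls raise and handlers subscribe to."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def fire(self, sender):
        for handler in self.handlers:
            handler(sender)

class Control:
    """A UI element; position and events are the domain abstractions."""
    def __init__(self, name, left=0, top=0):
        self.name, self.left, self.top = name, left, top
        self.click = Event()

class Form:
    """A window that owns controls; no pointers or layout math needed."""
    def __init__(self, title):
        self.title = title
        self.controls = []

    def add(self, control):
        self.controls.append(control)
        return control

# "Double-clicking a button in the designer" amounts to this wiring:
form = Form("Hello")
button = form.add(Control("btnGreet", left=10, top=20))
clicks = []
button.click.subscribe(lambda sender: clicks.append(sender.name))
button.click.fire(button)  # simulate a user clicking the button
```

The productivity win is that everything above the comment line is what the tool vendor builds once; everything below it is all the application developer ever writes.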
In his post, Steve makes the point that it’s pointless to distinguish between modeling and programming languages. VB certainly blurred that line to the point of indistinguishability. Regardless, graphical languages are typically more compelling and productive than textual ones. It’s hard to argue with the productivity that the VB form designer brought to the industry. Dragging and dropping controls to position them, double clicking on them to associate event handlers, changing properties in drop down boxes – these idioms have been so widely implemented that essentially all UI platforms provide a drag-and-drop based modeler. It’s such a great design that 10 years later, UI modelers are essentially unchanged.
Once you realize that VB’s DSL and modeling environment was a rudimentary software factory, you realize that the Software Factories methodology is about generalizing what VB accomplished – building tools that achieve large gains in efficiency by limiting generality. Since each of these tools focuses on a limited problem domain, you need different tools for different problem domains. The problem is that while building apps with VB may be easy, building VB itself was not. Most enterprises have the expertise to develop abstractions in their domain of expertise and to codify those abstractions in frameworks, but very few can develop tools and DSLs for manipulating those frameworks. One of the goals of Software Factories (and VSTS Architect for that matter) is to make it easier to build tools that are really good at building a narrow range of applications.
It’s important to note that the term “narrow range” is relative. Darrell seems to think narrow range only means vertical market applications that don’t “solve new and interesting problems”. It’s true that the narrower the range, the more productive the tool can be. But VB shows us that you can achieve large productivity gains while solving new and interesting problems even in broad scope problem domains.
No Time To Experiment, So I’m Reading About Cw
COmega (otherwise known as Cw since most people don’t have an omega key on their keyboard) is on a long list of stuff for me to look at. But instead of actually coding with it, so far I can just read Steve Maine’s blog. He’s got interesting posts on synchronization and streams, the two big features of Cw over C# (come to think of it, we use the “#” symbol as most people don’t have an actual sharp key on their keyboard). I also learned from Steve that Cw comes with basic VS integration – you get project support, syntax highlighting and some Intellisense. Now I just need a few extra hours in the day.
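For readers who haven’t seen Cw’s streams, a rough analogy: a Cw method whose return type is a stream (e.g. `int*`) produces its values lazily, and callers iterate over them on demand. Python generators give roughly the same feel – this is an analogy sketched in Python, not Cw syntax or semantics:

```python
# Rough Python analogy for Cw streams: a method that lazily yields a
# (potentially infinite) sequence of values, consumed on demand.
import itertools

def fib():
    """Infinite lazy stream of Fibonacci numbers."""
    a, b = 0, 1
    while True:
        yield a          # hand one value to the consumer, then pause
        a, b = b, a + b  # resume here when the next value is requested

# The consumer pulls only as much of the stream as it needs:
first_ten = list(itertools.islice(fib(), 10))
```

The synchronization side of Cw (chords/join patterns) has no such direct one-liner analogy, which is part of why Steve’s posts on it are worth reading.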
Another Team Blogger
Actually, I don’t think he’s “officially” part of the team ’til next week, but Josh Lee has already started a blog about his new job on the Architecture Strategy team. Josh is “The FinServ Guy”, and is a member of the IFX Forum Board of Directors. Nothing really meaty on his blog yet, just a Hello World post, but I hear great things about him.
Speaking of the Architecture Strategy team, I finally took 5 minutes to term serv into the machine that hosts DevHawk to update the theme. I keep mentioning the Architecture Strategy extended team OPML file, but I wanted to add a blogroll to the site theme. Now, I have a Team Blogroll on the left hand side of my website featuring all of my blogging teammates as well as all the blogging architect evangelists. Enjoy.
More MSFT Architect Bloggers + a Standard Rant
We keep getting more and more field architects and architecture strategy team members blogging. Remember, I keep a list (I am becoming the Scoble of Microsoft Architecture). Anna Liu is a field architect evangelist who presented at TechEd Australia (but we didn’t get a chance to hang out). Anna’s also been thinking about software development as an engineering discipline.
In addition to Anna, two of my teammates are blogging: Chris Keyser and Dave Welsh. Chris is a solution architect who’s doing some awesome next gen SOA work. He’s been blogging about using WSE2 to manage Security Context Tokens. Chris, like John deVadoss (who has relapsed into silence), is very pragmatic so it’s great to run radical ideas past him.
Earlier this year, our team “inherited” a group of awesome vertical architects – I’ve blogged about John Evdemon before who’s from that group. Dave is also from that group. Like many of our vertical architects, Dave is heavily involved in standards bodies – in Dave’s case it’s UN/CEFACT. He’s got a great article on how Standards Development Organizations traditionally work and another on how MSFT (and our specification partners) is improving on that process. He’s shining a light on the dark corners of the standards process, which is a good thing since so many people act like standards are a silver bullet solution. I love Dave’s description of the traditional standards process:
[L]aunch a committee, politically pick a chair, generate lots of hype and expectation on how this spec will solve world hunger, stack the new committee with people who may be able to contribute, host conference calls and arm wrestle the original idea down to some compromise that seems to make sense, then hope someone’s got a number of free weekends over to write up a draft of the new spec.
You want an example of the results of a traditional standards process? How about XSD? I think XSD is the ugliest widely-used spec around. Don agrees, according to his comments from last year’s SellsCon:
Nothing illustrates [the cost of standardization] more than XML schema. XML schema is the quintessential example of what happens with a design by standards body specification. Rather than taking something that worked and something that was done and that there was experience with and effectively dotting the i’s and crossing the t’s you had two from every company off doing wanton innovation and invention without implementation experience. It was a train wreck in the making, especially when you consider the fact that you had people who vehemently disagreed about what they were building. Some people thought they were bringing object orientation to XML. Some people thought they were bringing database schema concepts to XML. Some people thought they were just, you know, reliving the SGML dream. So what do we get? We get a Frankenstein’s monster that is dumber than the dumbest person in the committee. No one person on that committee could have produced something this bad. It took an army of people to work hard day and night to build something that is not good. It’s not terrible – can we make it work? Yes. But it’s going to take a lot of work from a lot of plumbers and a lot of tool vendors to make XML schema palatable to the average developer.
A great example of the opposite approach is RELAX NG. It is widely believed at this point in time that RELAX NG is a better schema language for XML than XML schema. Why? Because two guys who were really smart said why don’t we go do this and let’s get it working and let’s build it while we do it and let’s iterate it and see what works and what doesn’t work. And then when we’re done we will take it to the rubber stamp – I’m sorry, Oasis – where they will carefully vet every decision and bless it and give it UN status.
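To make the contrast concrete, here’s a trivial schema in RELAX NG’s compact syntax. The element names are invented for illustration, but the syntax is real – and the equivalent XSD would take several times the markup to say the same thing:

```
# A hypothetical book-catalog schema in RELAX NG compact syntax.
element catalog {
  element book {
    attribute isbn { text },
    element title { text },
    element author { text }+,      # one or more authors
    element published { xsd:date }?  # optional publication date
  }*
}
```

Whether or not you prefer the compact syntax, it’s easy to see why “two guys who were really smart” could iterate on a design like this far faster than a committee could iterate on XSD.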
I’m with Don and Tim: I want RelaxNG. More importantly, I want standards that are built like WS-* and RelaxNG.