Blog Posts from November 23, 2005 (page 1 of 1)

JavaScript Utilities

Can you tell it’s a slow day in the office? 😄

Speaking of raising the level of abstraction as well as browser based applications, check out Nikhil’s JavaScript Utilities project:

The project introduces the notion of .jsx (extended JavaScript) and .jsa (JavaScript assembly) files. JSX files provide the ability to include conditional code via familiar preprocessor directives such as #if, #else, #endif and so on…The tool processes these directives in order to produce a standard .js file. JSA files build on top of .jsx files. While they can include normal JavaScript and preprocessor directives, they are primarily there for including individual .jsx and .js files via #include directives. This allows you to split your script into more manageable individual chunks.
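
To make the directive idea concrete, here's a minimal sketch of how such a preprocessor might work. This is purely illustrative, not Nikhil's actual implementation: given source text and a list of defined symbols, it keeps or drops the lines between `#if`/`#else`/`#endif` and emits plain JavaScript.

```javascript
// Minimal sketch of a .jsx-style preprocessor (hypothetical, not the
// actual JavaScript Utilities code). Lines inside inactive #if branches
// are dropped; everything else passes through as standard JavaScript.
function preprocess(source, defines) {
  const output = [];
  const stack = []; // tracks whether each nested #if branch is active
  for (const line of source.split("\n")) {
    const trimmed = line.trim();
    if (trimmed.startsWith("#if ")) {
      const symbol = trimmed.slice(4).trim();
      stack.push(defines.includes(symbol));
    } else if (trimmed === "#else") {
      stack.push(!stack.pop()); // flip the current branch
    } else if (trimmed === "#endif") {
      stack.pop();
    } else if (stack.every(active => active)) {
      output.push(line); // emit only lines inside active branches
    }
  }
  return output.join("\n");
}

// preprocess("#if IE\nieCode();\n#else\notherCode();\n#endif", ["IE"])
// keeps only the ieCode(); line
```

The per-browser conditional case Nikhil describes falls out naturally: define `IE` (or not) at conversion time and ship different .js output to different browsers.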

Now, that’s not raising the level of abstraction much, but here’s another example of working in a higher abstracted environment (jsx and jsa) and then compiling down to something the underlying platform can execute (js). Nikhil provides three ways of doing this compilation:

  1. A set of standalone tools that output standard .js files that you can then deploy to your web site. Command line parameters are used to control the behavior of the tools.
  2. A couple of IHttpHandler implementations that allow you to dynamically convert your code into standard .js files. This is the mode where you can benefit from implementing per-browser code in a conditional manner. AppSetting values in configuration are used to control the conversion behavior.
  3. As a script project in VS using an msbuild project along with an msbuild task that comes along with the project. MSBuild project properties are used to control the conversion behavior.

If you’re going to raise the level of abstraction to implement a preprocessor, you could also go all the way and implement an entirely new language that gets compiled down to JavaScript for execution in the browser. For example, I’m not as familiar or comfortable with JavaScript’s prototype based approach. But I could imagine a more class based language that compiles to JavaScript. That’s the same way early C++ compilers worked – they were a preprocessor pass that converted the C++ into C, which could then be compiled with traditional C compilers.
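
To give a sense of what that compilation might look like, here's a sketch of a hypothetical class-based "JavaScript++" source and the prototype-based JavaScript it could emit, much as early C++ compilers emitted plain C. The class syntax is invented for illustration.

```javascript
// Hypothetical "JavaScript++" source:
//
//   class Point {
//     constructor(x, y) { this.x = x; this.y = y; }
//     length() { return Math.sqrt(this.x * this.x + this.y * this.y); }
//   }
//
// could be compiled down to the prototype-based JavaScript below:

function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Methods become functions attached to the constructor's prototype,
// so all instances share a single copy of each method.
Point.prototype.length = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

var p = new Point(3, 4);
// p.length() === 5
```

The class author never sees the prototype machinery, the same way a C++ programmer never saw the structs-and-function-pointers C that the early cfront-style translators produced.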

I wonder what JavaScript++ would look like.

As Simple as Possible, But No Simpler

Chris Bilson left the following comment to my Thoughts on CAB post:

I hesitate to agree that raising the abstraction level of tools is a good idea. That’s just hiding the complexity that’s already there inside of more complexity. If you ever need to look under the hood, it’s even harder to grok. I think it would be better to go the other way. Try removing stuff to combat complexity.

Given that the software industry has been raising the level of abstraction of tools since the start, I found this comment surprising. Assuming Chris doesn’t write in assembly code, he’s leveraging something at a higher level of abstraction that’s “just hiding the complexity that’s already there”. As I wrote in Code is Model:

The only code that the CPU can understand is machine code. But nobody wants to write and debug all their code using 0’s and 1’s. So we move up a level of abstraction and use a language that humans can read more easily and that can be automatically translated (i.e. compiled) into machine code the CPU can execute. The simplest step above machine code is assembly language. But ASM isn’t particularly productive to work with, so the industry has continuously raised the level of abstraction in the languages they use. C is a higher level of abstraction than ASM, adding concepts like types and functions. C++ is a higher level of abstraction than C, adding concepts like classes and inheritance. Each of these levels of abstraction presents a new model of the execution environment with new features that make programming more productive (and sometimes more portable).
[emphasis added]

I feel like Chris has mischaracterized what I wrote. Here it is again:

If you can’t lower the complexity of your framework, it’s time to raise the abstraction of your tools.

Note I’m not advocating raising the level of abstraction for abstraction’s sake. Believe me, I hate overly complex code. The project I’ve been on (and can blog about soon I hope) had a ton of overly complex code that wasn’t particularly germane to solving the problem. I yanked all that code and have reduced the framework to a quarter of its original size while adding functionality. But the reality is, simplification can’t always be achieved by removing code. To paraphrase Albert Einstein, solutions to problems should be as simple as possible, but no simpler.

CAB is the simplest solution to the problem it addresses, and no simpler. Since we can’t make CAB simpler and still solve the problem, the only alternative we have is to have the tools hide that complexity. Given how well this has worked in the past, I see no reason why it can’t work again in the future.

He’s Bad at Taking Advice, but Man is He Smart

He in this case is Tim Mallalieu from the data programmability team. He and I used to work on the .NET Adoption Team in the field together before we both moved to campus. Tim, if I recall correctly, was the first outside hire we made on that team. We used to constantly argue about who was the second smartest on the team. He thought it was me and I thought it was him. We both agreed that Jeromy was the smartest.

Anyway, after literally years of haranguing, Tim finally started a blog. But he’s still not really listening to me. I told him he should start with a series of smaller posts and he didn’t.

I’m back working with Tim in a fashion as both my team and his team try and get a handle on how data is going to evolve as we move to service orientation. Most of the existing data paradigms and semantics such as table joins and transactions assume everything is together in one database. Obviously, as we break down large systems into smaller services, having everything in one database just isn’t practical.

Browser Based Applications in the Enterprise

I asked Scoble for his thoughts on my post about the term mashup, and he decided simply to link to it. While I appreciate the traffic, I’d appreciate opinions even more. Luckily, in the same post he linked to Sam Ramji writing about what he called “enterprise mashups”. Again, love the idea but hate the name mashup.

According to Sam, Scoble said:

[W]hat’s to prevent mash-ups from being the main way that departmental apps are built in most enterprises 3-4 years from now?

Again, it depends on the definition of the term “mashup”. Are we talking about browser component based apps that leverage what Scoble calls ICC’s? If so, Scoble is only wrong about the timeframe. My team has a browser based application leveraging Virtual Earth running on our intranet server right now.

Quick terminology sidebar: I use the term browser based application to refer to these AJAX and mashup style apps, since they download code and run in the browser itself. By comparison, I use the term web based application for the more traditional browser apps that did all the processing on the server and used the browser essentially as a dumb terminal.

While I think these browser based applications will go mainstream in the enterprise soon, there are a couple of things holding up adoption: availability of tools and of the components themselves.

On the tools side, the AJAX programming model is just too difficult for most programmers. Rudimentary debugging support, late bound script languages, no compiler to help you find errors. Sounds like web development circa 1997, doesn’t it? I’m hoping Atlas will be to AJAX what ASP.NET was to ASP.

As for existing components, most of the ones Scoble calls out have little relevance to the enterprise. There’s little point to a Feedmap or a Flickr bar in my enterprise app. And I’m not sure I even want to go into the implications of serving up ads in an enterprise app. That pretty much leaves mapping components like VE and Google Maps as the only browser components of interest to the enterprise developer. So far anyway.

I wonder if Scoble knows of any other enterprise relevant browser components out there?

Thoughts on CAB

As I wrote earlier, I’m really impressed with p&p‘s latest release – the Composite UI Application Block. I had to fly to Atlanta for a customer meeting last week and I spent most of my spare time (flying, in hotel, etc) digging thru CAB. Well, digging through CAB and reading State of Fear by Michael Crichton. IMO, it’s his best book in quite a while, perhaps second best he’s ever written after Jurassic Park.

Back to CAB. First off, I want more information about ObjectBuilder. p&p’s dependency injection framework is very impressive. However, with the exception of the code, the code reference in the help files, the unit tests, and a single article on how it was designed there just isn’t much info on it. For example, I know I can use it to inject a concrete type where a class is expecting an abstract type. But I couldn’t figure out how so I went and asked Peter and Brad. (Short answer – use TypeMappingPolicy). Do me a favor, go contact Peter and David and tell them you want more info on ObjectBuilder. Sample code especially. Go on, drop them a line right now while you’re thinking about it. I’ll wait.

Back already? Cool. One thing you can’t help but notice about CAB (and OB for that matter) is the heavy use of attributes, which I feel is an extremely elegant solution. Remember the first time you looked at NUnit? How sensible it seemed to use attributes like [TestFixture], [Test] and [ExpectedException] compared to what other xUnit frameworks provide? Get ready to experience that all over again when you look at OB and CAB. Now you’re looking at attributes like [CreateNew], [EventPublication] and [CommandHandler]. There’s a reason why Sun cloned attributes for J2SE 5.0 – it’s powerful (in the right hands). The attributes both drive human readability – when you’re looking at a property adorned with [CreateNew] or [Dependency] it’s obvious that these are injected – as well as the implementation. Win-win as far as I’m concerned.

CAB does a great job of codifying standard patterns in smart client design, such as events, commands and controllers, as well as implementing completely new ideas such as workspaces, work items and smart parts. And they’ve done it with an eye on the future. I love the isolation between the windows forms specific parts of CAB and the underlying infrastructure of CAB. Check out this picture of the layers of CAB. By isolating all the WinForms specific stuff in a separate assembly, they are well set up to support WPF while minimizing redundant effort. I can’t wait to see a Microsoft.Practices.CompositeUI.Avalon project (because Microsoft.Practices.CompositeUI.WindowsPresentationFoundation is just too long a namespace).

However, I can’t shake the feeling of how complex this stuff is. Yeah, you use CAB for solving complex problems, but there is a significant learning curve here. I imagine debugging a CAB app will be significantly non-trivial. Take a look at the final step of the walkthru app. Here’s the sequence that gets “hello world” painted on the screen:

  1. Main entrypoint creates and runs the ShellApplication.
  2. ShellApplication creates a ShellForm window, which in turn contains the tabWorkspace1 workspace.
  3. ShellApplication dynamically loads MyModule because it’s listed in the ProfileCatalog.xml file.
  4. CAB creates an instance of the MyModuleInit class from the MyModule assembly.
  5. MyModuleInit creates and runs an instance of MyWorkItem.
  6. MyWorkItem creates a MyView and MyPresenter.
  7. MyWorkItem adds the MyView instance to the tabWorkspace1 workspace, hosted in the ShellForm
  8. MyPresenter handles the MyView load event and sets the message property of MyView to “Hello, World”

Of course, the point of a demo app like the walkthru is comprehension. You would never use CAB to build this Hello World app. But I worry that the level of complexity will put CAB beyond the reach or inclination of many potential users. I imagine this single line of code will scare off many would-be CAB developers:

MyWorkItem myWorkItem = parentWorkItem.WorkItems.AddNew<MyWorkItem>();

Given that most people are used to writing new MyWorkItem(), the line above represents a significant rise in complexity. Of course, CAB is trying to solve a complex problem. I’m sure CAB’s designers would rather the solution wasn’t so complex, but that’s the reality of the problem space they’re dealing with.
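
To see what that extra complexity buys, here's a rough sketch in JavaScript of what container-managed creation does over plain new: the container reads declared dependencies and injects them for you. All names here are illustrative analogues, not CAB's or ObjectBuilder's actual API.

```javascript
// A toy analogue of AddNew<T>-style container-managed creation.
// The "dependencies" list on the constructor plays the role that
// [Dependency]-style attributes play in ObjectBuilder.
function Container() {
  this.services = {};
}
Container.prototype.register = function (name, instance) {
  this.services[name] = instance;
};
Container.prototype.addNew = function (ctor) {
  var instance = new ctor();
  var deps = ctor.dependencies || [];
  for (var i = 0; i < deps.length; i++) {
    // Inject each declared dependency from the container's services.
    instance[deps[i]] = this.services[deps[i]];
  }
  return instance;
};

function MyWorkItem() {}
MyWorkItem.dependencies = ["logger"]; // declarative, attribute-like

var container = new Container();
container.register("logger", { log: function (msg) { /* ... */ } });

var workItem = container.addNew(MyWorkItem);
// workItem.logger was injected by the container, not wired up by hand
```

With plain new MyWorkItem(), the caller would have to know about and construct every dependency itself; the container call looks scarier but centralizes all of that wiring.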

If you can’t lower the complexity of your framework, it’s time to raise the abstraction of your tools.

I wonder what CAB specific tooling would look like? At a minimum, it would include built-in support for the primary concepts in CAB – WorkItem, SmartPart, Workspace and Service to name a few. Another opportunity would be to move from embedded strings, used to identify events, commands, UIExtensionSites and the like, to true variable-style names that could be validated at compile time, sort of like how LINQ is extending C# to get rid of embedded database query commands. There are lots of possibilities and the more I work with CAB the better idea I’ll have about them.