The Next Mainstream Programming Language

Terra Nova is not my usual source for programming language news. But they linked to a great presentation from POPL 2006 by Tim Sweeney of Epic Games. Tim’s talk is called The Next Mainstream Programming Language: A Game Developer’s Perspective, and it discusses at great length the major issues facing game developers today. As Nate Combs at Terra Nova remarked, most of these issues are not specific to the game industry, but will likely be seen there first.

Most interesting (to me) was the issue of concurrency. Tim uses Gears of War for all his examples. Of course, Gears of War is an Xbox 360 exclusive. The Xbox 360, as many of you probably know, has three hyper-threaded CPUs for a total capacity of six hardware threads. Herb Sutter talked about this in his DDJ article The Free Lunch Is Over. Tim points out – rightly so – that “C++ is ill-equipped for concurrency”. C#, Java and VB aren’t much better. Tim concludes that we’ll need a combination of effects-free non-imperative code (which can safely be executed in parallel) and software transactional memory (to manage parallel modifications to system state).
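To make “effects-free” concrete: if a function never touches shared state, its calls can be farmed out across cores with no locks and no transactions. A minimal Python sketch of the idea (the particle-update function and its constants are my own illustration, not anything from Tim’s talk):

```python
from concurrent.futures import ProcessPoolExecutor

def update_particle(p):
    """Effects-free: takes a (position, velocity) tuple, returns a new
    tuple, and never mutates any shared data."""
    x, v = p
    return (x + v * 0.016, v * 0.99)  # step position, apply drag

def simulate(particles):
    # Because update_particle has no side effects, every call can safely
    # run in parallel: no locks, no transactional memory needed.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(update_particle, particles))

if __name__ == "__main__":
    print(simulate([(0.0, 1.0), (5.0, -2.0)]))
```

The interesting part is what’s absent: there is no synchronization anywhere, because the purity of the per-item function makes it unnecessary.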

Tim also touches on performance, modularity and reliability, and he has an eye on the practical at all times. For example, he points out that even a fourfold performance overhead for software transactional memory is acceptable if it allows the code to scale to many threads.
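The break-even arithmetic behind that claim is easy to sketch. In the toy model below (my own simplification, which optimistically assumes linear scaling and ignores contention and serial sections), a fourfold overhead still comes out ahead once you have more than four threads:

```python
def net_speedup(threads, stm_overhead=4.0):
    """Speedup vs. single-threaded code, assuming ideal linear scaling
    across threads and a constant per-thread STM slowdown."""
    return threads / stm_overhead

# On the Xbox 360's six hardware threads, 4x-slower STM code still beats
# the serial version (6 / 4.0 = 1.5x), and the gap widens with more cores.
for n in (1, 4, 6, 32):
    print(n, net_speedup(n))
```

At one thread STM is a pure loss; at four it breaks even; beyond that the overhead is paid back with interest, which is exactly why Tim calls it acceptable.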

Anyway, it’s a great read so check it out. Also, MS Research has a software transactional memory project you can download if you’re so inclined.

Web 2.0 Evolution

In his now-famous talk, Dick Hardt describes Identity 2.0 as inevitable. As in “coming for sure, but not here yet”. I wonder how much of Web 2.0 is here now, and how much is inevitable? And furthermore, how much can we generalize about the future of Web 2.0 from what is happening now? As in many things, I think the answer isn’t black and white.

For example, I think we can generalize about the bright future of peer-to-peer technologies from looking at systems like Skype and FolderShare. With power shifting to the edge, I believe it’s inevitable that more edge machines will communicate directly with each other rather than being mediated by a service in the center. In fact, in many cases I believe we’re going to want to shift a significant percentage of social computing to the peer-to-peer model. It scales better and doesn’t have centralized privacy concerns. Furthermore, I think there may be specific peer-to-peer capabilities that are difficult or impossible to replicate with a centralized model, though so far I haven’t found them.

However, I’m not sure we can generalize about the future of mashups the same way. This isn’t to say I think mashups are going away – far from it. I just think that mashups a year from now will look very different than they do today.

First off, I don’t think we can generalize from the success of Google Maps. In the Programmable Web how-to guide, they mention that “Plotting markers on maps is probably the easiest place to start”. Apparently, many people are taking that advice, because 297 of the 411 mashups listed use one of the three major (i.e. GYM) mapping services. However, maps are unique because of the massive amount of data, the extremely simple API and the ubiquity of location information. They are also one of the few mashup APIs that run in the browser – the vast majority of mashup APIs are back-end data services like Amazon’s E-Commerce Service. How many more in-browser mashup APIs are out there waiting to be built? I’m not sure, but as I wrote in Browser as VM, the problem with these in-browser mashup APIs is that you can’t protect your IP.

As for back-end service mashup APIs, there needs to be a way for these service providers to make money. Even if the software they use to build the service is free, things like hardware and bandwidth are not. For an Amazon or eBay, making money on their services is relatively easy since they are facilitating sales transactions. In the end, they probably won’t care much whether a sales transaction originated on their own site or on a site leveraging their APIs. However, if the service provider is ad-funded, the service API effectively routes around the site’s revenue mechanism. Take, for example, a site for tracking events like Zvents, Eventful or Upcoming. They need to drive users to the actual site in order to drive revenue. So it remains to be seen exactly how API-based access is going to work out. Today, these APIs are specifically provided for “non-commercial use only”, so one approach would be to charge for access via the API (either a flat-rate subscription, a per-use charge or a combination of the two). Alternatively, they could be bought by a larger company that could afford to run the business at a loss. Yahoo already bought Upcoming and Google Base already has an event item type, but the other big companies in this space (I’d guess Microsoft, Amazon, eBay and maybe Apple) might be interested. Again, I’m not sure how this evolves either, but it’s got to evolve beyond “non-commercial access”.

Dennis Miller Has Jumped the Shark

I was so excited to see Dennis Miller’s latest HBO special, but it was such a letdown. He spent the first half sucking wind and the second half sucking up to the Republicans. I mean, he used to have such a sharp political wit, but now he’s just another partisan hack. It’s pretty sad.

At least Bill Maher is back next week.

Latest Architecture Journal

It’s been a long time coming, but print subscriptions to The Architecture Journal are underway. I got my copy of Journal 6 in the mail today. It’s not online yet, but you can get the back issues and sign up for your own free print subscription on the website.

New Devhawk Design

For those of you reading this via the syndication feed, I rolled out a new site design last night. I figured that after three years it was high time for a new one. Not being much of a designer, I started with the Rounded design template from the ASP.NET Design Template Gallery. It’s much cleaner and more readable than my old design, as I’ve removed all my blogrolls and fixed the width for 1024×768 screens.

As part of the switch, I moved from a table-based layout to a CSS-based layout. I even wrote custom dasBlog macros that render my navigation menu and date archive as unordered lists. The default dasBlog macros for those render using tables. (Note, I didn’t rewrite the category list, so I’m not completely table-free.) If there’s interest from the dasBlog community, I’ll post the code.

I gotta say, I’m not sure I see what the big deal about CSS over tables is. I mean, I’m as impressed as the next guy with CSS Zen Garden, but honestly I don’t get it. Maybe it’s because I’m a developer, not a designer at heart. But CSS seems like hard-coded voodoo to me. This site has a simple fixed-width two-column layout, but it took a great deal of experimentation to get the floats coded correctly to render in both IE and Firefox. In fact, there’s a small issue with the new design in IE that I didn’t bother to fix. But if I had just used tables, it would have taken five minutes.

Please let me know what you think of the new design.