Variadic PowerShell Functions With Optional Named Params

I’ve been doing a little CPython coding lately. Even though I left the IronPython team a while ago (and IronPython is now under new management), I’m still a big fan of the Python language – it’s great for prototyping.

However, one thing I don’t like about Python is how it uses the PYTHONPATH environment variable. I like to keep any non-standard library dependencies in my project folder, but then you have to set the PYTHONPATH environment variable in order for the Python interpreter to resolve those packages. Personally, I wish there was a command line parameter for specifying PYTHONPATH – I hate having to modify the environment in order to execute my prototype. Yes, I realize I don’t have to modify the machine-wide environment – but I would much prefer a stateless approach to an approach that requires modification of local shell state.

I decided to build a PowerShell script that allows the caller to invoke Python while specifying the PYTHONPATH as a parameter. The script saves off the current PYTHONPATH, sets it to the passed-in value, invokes the Python interpreter with the remaining script parameters, then sets PYTHONPATH back to its original value. While I was at it, I added the ability to let the user optionally specify which version of Python to use (defaulting to the most recent) as well as a switch to let the caller choose between invoking python.exe or pythonw.exe.
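
The core of the script is a simple save/set/invoke/restore dance around the environment variable. Here’s a minimal sketch of that part, assuming the $LibPath parameter and the $args collection described below:

$oldPythonPath = $env:PYTHONPATH
try {
    # point the interpreter at the project-local packages
    $env:PYTHONPATH = $LibPath
    # pass the remaining arguments straight through to Python
    & python.exe $args
}
finally {
    # restore the original PYTHONPATH, even if the interpreter fails
    $env:PYTHONPATH = $oldPythonPath
}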

The details of the script are fairly mundane. However, building a PowerShell script that supports optional named parameters while collecting all the unnamed arguments together in a single parameter took a little non-obvious PowerShell voodoo that I thought was worth blogging about.

I started with the following param declaration for my function:

param (
    [string] $LibPath="",
    [switch] $WinApp,
    [string] $PyVersion=""
)

These three named parameters control the various features of my Python PowerShell script. PowerShell has an automatic variable named $args that holds the arguments that don’t get bound to a named parameter. My plan was to pass the contents of $args to the Python interpreter. And that plan works fine…so long as none of the non-switch parameters are omitted.
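
To see $args in action, here’s a toy function (the names are mine, purely for illustration):

function Show-Binding {
    param ([string] $LibPath="")
    "LibPath bound to: $LibPath"
    "left for args:    $args"
}

PS» Show-Binding -LibPath .\Lib one two
LibPath bound to: .\Lib
left for args:    one two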

I mistakenly (and in retrospect, stupidly) thought that since I had provided default values for the named parameters, they would only bind to passed-in arguments by name. However, PowerShell binds non-switch parameters by position if the names aren’t specified. For example, this is the command line I use to execute tests from the root of my prototype project:

cpy -LibPath .\Lib\site-packages .\Scripts\unit2.py discover -s .\src

Obviously, the $LibPath parameter gets bound to the “.\Lib\site-packages” argument. However, since $PyVersion isn’t specified by name, it gets bound by position and picks up the “.\Scripts\unit2.py” argument. Clearly, that’s not what I intended – I want “.\Scripts\unit2.py” along with the remaining arguments to be passed to the Python interpreter while $PyVersion gets bound to its default value.

What I needed was more control over how incoming arguments are bound to parameters. Luckily, PowerShell 2 introduced Advanced Function Parameters, which gives script authors exactly that kind of control over parameter binding. In particular, there are two arguments of the Parameter attribute that allowed me to get the behavior I wanted:

  • Position – lets the script author specify which positional argument should be bound to the parameter. If Position isn’t specified, parameters are bound in the order they appear in the param declaration.
  • ValueFromRemainingArguments – specifies that all remaining arguments that haven’t otherwise been bound should be bound to this parameter. This is kind of like the PowerShell equivalent of params in C# or the ellipsis in C/C++.

A little experimentation with these attributes yielded the following solution:

param (
    [string] $LibPath="",
    [switch] $WinApp,
    [string] $PyVersion="",
    [parameter(Position=0, ValueFromRemainingArguments=$true)] $args
)

Note, the first three parameters are unchanged. However, I added an explicit $args parameter (I could have named it anything, but I had already written the rest of my script against $args) with the Position=0 and ValueFromRemainingArguments=$true attribute arguments. The combination of these two means that the $args parameter is bound to an array of all the positional (aka unnamed) incoming arguments, starting with the first position. In other words – exactly the behavior I wanted.
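
With $args collected that way, passing everything through to the interpreter is trivial. Here’s a sketch of what the invocation might look like (the version-selection logic is elided):

# choose the interpreter based on the -WinApp switch
$exe = if ($WinApp) { "pythonw.exe" } else { "python.exe" }
& $exe $args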

Not sure how many people need a PowerShell script that sets PYTHONPATH and auto-selects the latest version of Python, but maybe someone will find it useful. Also, I would think this approach to variadic functions with optional named parameters could be useful in other scenarios where you are wrapping an existing tool or utility in PowerShell but need the ability to pass arbitrary parameters through to the tool being wrapped.

Testing the Untestable with Delegate Injection

My ASP.NET skills may be a bit rusty, but that’s not stopping me from working on a side project in ASP.NET MVC. While ASP.NET has made significant strides in the 4.0 release, code like this demonstrates that it still has a long way to go to improve testability.

public class AccountController : Controller
{
    ITwitterService _twitter;

    //constructor dependency injection
    public AccountController(ITwitterService twitterService)
    {
        _twitter = twitterService;
    }

    public ActionResult SignInWithTwitter()
    {
        //get the redirect URL and stash it in a cookie
        Response.SetCookie(new HttpCookie("RedirectUrl",
            FormsAuthentication.GetRedirectUrl(string.Empty, false)));

        //build callback URL
        var callback_url_builder = new UriBuilder()
        {
            Host = Request.ServerVariables["SERVER_NAME"],
            Port = int.Parse(Request.ServerVariables["SERVER_PORT"]),
            Path = Url.Action("SignInWithTwitterCallback"),
        };

        //Helper function to invoke Twitter’s oauth/request_token REST endpoint
        var url = _twitter.GetRequestToken(callback_url_builder.ToString());

        //redirect to the URL returned from _twitter.GetRequestToken
        return Redirect(url);
    }
}

This code has several dependencies that are hard or impossible to test: FormsAuthentication, Request, Response and Url. Testing this code is a real pain in the ass. When I originally wrote it, I bit the bullet and wrote said PITA test code. But I couldn’t help thinking there must be a better way.

Clearly, in order to be able to test this code, I need to introduce points of abstraction that can be filled with mock implementations during unit test runs. I already have one such abstraction point – the _twitter field of AccountController is an ITwitterService instance that gets injected on construction. I have a “real” implementation that gets injected in production and a mock implementation that I manually inject in my tests.

In order to test the code above, I’ll need to wrap the calls into the untestable objects in some sort of injectable dependency that can be mocked out for tests.

C# being an OO language, typically we think of Dependency Injection in terms of interfaces and classes. However, wrapping the untestables in interfaces and then implementing those interfaces is a lot of additional code. Instead of one injected dependency, the code above would need five. Furthermore, since controller objects are both the unit of dependency injection and the typical way the URL namespace is segmented, I also have to consider the dependencies of any other action methods on AccountController. That gets ugly fast.
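
To make that concrete, here’s roughly what just one of those five wrappers would look like in the traditional style (the names here are hypothetical):

public interface IFormsAuthService
{
    string GetRedirectUrl();
}

public class FormsAuthService : IFormsAuthService
{
    public string GetRedirectUrl()
    {
        //delegate to the untestable intrinsic in the production implementation
        return FormsAuthentication.GetRedirectUrl(string.Empty, false);
    }
}

Multiply that by five untestable dependencies – plus container registration for each – and the overhead adds up quickly.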

Instead of thinking in terms of objects and interfaces, I wondered what DI might look like if we thought about dependencies in terms of delegates and anonymous lambdas. You know, functional programming. It might look something like this:

Func<string> @GetRedirectUrl;
Action<HttpCookie> @SetCookie;
Func<NameValueCollection> @ServerVariables;
Func<string, string> @ActionUrl;

public ActionResult SignInWithTwitter()
{
    //check for GetRedirectUrl and sets cookie
    @SetCookie(new HttpCookie("RedirectUrl", @GetRedirectUrl()));

    //build callback URL
    var callback_url_builder = new UriBuilder
    {
        Host = @ServerVariables()["SERVER_NAME"],
        Port = int.Parse(@ServerVariables()["SERVER_PORT"]),
        Path = @ActionUrl("SignInWithTwitterCallback"),
    };

    //Call twitter.GetRequestToken
    var url = _twitter.GetRequestToken(callback_url_builder.ToString());

    //redirect to the URL returned from Twitter.GetRequestToken
    return Redirect(url);
}

(Note, I’m using the @ symbol as a prefix for injected delegates, in order to make it easier to pick them out of the code. Looks kinda odd, but it is valid C#.)

This is better in that it’s actually testable without requiring a metric crapload of test code to mock the ASP.NET intrinsics. However, a DI container doesn’t have enough information to inject these dependencies based on type alone. For example, @GetRedirectUrl is a Func<string> (i.e. a function that takes no parameters and returns a string). However, FormsAuthentication’s FormsCookieName and DefaultUrl properties would also be represented as Func<string> delegates.
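
In other words, a container injecting by type alone couldn’t tell dependencies like these apart:

Func<string> @GetRedirectUrl;   //the redirect URL for the auth cookie
Func<string> @FormsCookieName;  //would wrap FormsAuthentication.FormsCookieName – same type, different meaning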

Most DI containers support resolving dependencies by name as well as type, but that makes declaring dependencies much tougher and more fragile in my opinion. If you’re going to limit yourself to statically typed, compiled code, you might as well let the compiler do as much heavy lifting as possible, right?

Also, wrapping each untestable method call in a delegate has made the dependency-explosion problem even worse. SignInWithTwitter declares four new dependencies, the callback action (not shown) adds seven more and the sign-out action adds one, for a total of thirteen dependencies (including the original ITwitterService)! However, none of these twelve delegate dependencies are shared across action methods, so they aren’t really controller dependencies so much as action dependencies. So what if I went ahead and declared them as action dependencies directly?

public Func<ActionResult> SignInWithTwitter(
    Func<string> @GetRedirectUrl,
    Action<HttpCookie> @SetCookie,
    Func<NameValueCollection> @ServerVariables,
    Func<string, string> @ActionUrl)
{
    return () =>
    {
        //check for GetRedirectUrl and sets cookie
        SetCookie(new HttpCookie("RedirectUrl", GetRedirectUrl()));

        //build callback URL
        var callback_url_builder = new UriBuilder
        {
            Host = ServerVariables()["SERVER_NAME"],
            Port = int.Parse(ServerVariables()["SERVER_PORT"]),
            Path = ActionUrl("SignInWithTwitterCallback"),
        };

        //Call twitter.GetRequestToken
        var url = _twitter.GetRequestToken(
            callback_url_builder.ToString());

        //redirect to the URL returned from Twitter.GetRequestToken
        return Redirect(url);
    };
}

SignInWithTwitter is now a function that takes four delegates and returns a delegate – we’re really down the functional programming rabbit hole now!

The benefit of this approach is that I can make tradeoffs as I see fit between controller and action dependencies. ITwitterService is still injected via the AccountController constructor since it is used by two of the three Account actions. Dependencies only used by a single action can be scoped to that specific action so that only tests for a given action method have to mock them out. And testing this is a breeze compared to having to mock out intrinsic ASP.NET objects.

[Fact]
public void returns_redirect_result_with_getrequesttoken_url()
{
    //inject controller dependencies
    var twitter = new Mock<Models.ITwitterService>(MockBehavior.Strict);
    twitter.Setup(t => t.GetRequestToken(It.IsAny<string>()))
        .Returns("http://fake.twittertest.local");
    var controller = new AccountController(twitter.Object);

    //inject action dependencies
    Func<string> @getRedirectUrl = () => "/fake/redirect/url";
    Action<HttpCookie> @setCookie = c => { };
    Func<NameValueCollection> @serverVariables =
        () => new NameValueCollection()
        {
            {"SERVER_NAME", "testapp.local"},
            {"SERVER_PORT", "8888"}
        };
    Func<string, string> @actionUrl = url => "/fake/url/action/result";
    var action = controller.SignInWithTwitter(@getRedirectUrl,
        @setCookie, @serverVariables, @actionUrl);

    //Invoke action
    var result = action();

    //Validate
    var redirectResult = Assert.IsType<RedirectResult>(result);
    Assert.Equal("http://fake.twittertest.local", redirectResult.Url);
}

I could make this code even smaller by moving the action dependencies out to be test fixture class fields. Assuming you write multiple tests for each action method, this allows you to reuse the mock action delegates across multiple methods. If I want to do negative testing, I can easily define test-specific delegates that throw exceptions or return unexpected values.
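
For example, here’s a sketch of what such a negative test might look like, reusing the same Moq/xUnit setup as above and swapping in a throwing delegate:

[Fact]
public void propagates_exception_from_getredirecturl()
{
    var twitter = new Mock<Models.ITwitterService>(MockBehavior.Strict);
    var controller = new AccountController(twitter.Object);

    //a delegate that simulates FormsAuthentication blowing up
    Func<string> @getRedirectUrl =
        () => { throw new InvalidOperationException(); };
    Action<HttpCookie> @setCookie = c => { };
    Func<NameValueCollection> @serverVariables = () => new NameValueCollection();
    Func<string, string> @actionUrl = url => "/fake/url/action/result";

    var action = controller.SignInWithTwitter(@getRedirectUrl,
        @setCookie, @serverVariables, @actionUrl);

    //the exception should bubble up out of the action delegate
    Assert.Throws<InvalidOperationException>(() => action());
}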

Of course, the downside to this approach is that MVC has no idea what to do with an action method that returns Func<ActionResult>. I could envision support for this pattern in MVC someday, though we’d need a robust solution to the type+name dependency issue I described above. For now, I simply wrap the delegate-injection version (aka the testable version) of the action in a non-testable but MVC-compatible version that injects the right delegate dependencies.

public ActionResult SignInWithTwitter()
{
    return SignInWithTwitter(
        () => FormsAuthentication.GetRedirectUrl(string.Empty, false),
        Response.SetCookie,
        () => Request.ServerVariables,
        Url.Action)();
}

Since I’m using the untestable intrinsics, I can’t write any tests for this method. However, it’s nearly declarative because the anonymous delegates I’m injecting are closing over the untestable intrinsics. Personally, I’m willing to make the tradeoff of having a declarative yet untestable wrapper action method in order to get the delegate-injected, easy-to-test version of SignInWithTwitter that has the real implementation.

Washington Stealth Lacrosse

Last Saturday night, my family and I went with some friends from the neighborhood up to Everett to catch the Washington Stealth in the National Lacrosse League Champion’s Cup final. This was my first indoor lacrosse game, and it was a doozy – the Stealth were down four goals with a minute to go in the third quarter, but scored eight goals in a row to take the Champion’s Cup 15-11. After watching my Capitals collapse in the NHL playoffs, it was awesome to see the home team come out on top.

(Side note: at least the Caps aren’t alone when it comes to embarrassing playoff performances this year. Boston blew a 3-0 series lead against Philly, and Pittsburgh blew a 3-2 series lead against Montreal and got beat like a drum in game 7. I’d argue that the Caps’ performance was still the most embarrassing of the three, but not by much.)

As I said, this was my first indoor lacrosse game. The game is basically ice hockey without the ice. In fact, the Stealth’s advertising slogan this year was “It’s like hockey…with balls”. 1 As far as I could tell, the playing area is identical to a hockey rink except for the no ice thing. Benches, boards, penalty boxes, goal position – all the same. There are five players + a goalie per side, with lots of line changes and plenty of hitting. I might not have been to a game before, but I was able to pick up the basics of strategy and rules just based on the similarity to hockey.

Since it’s so similar to hockey, it’s probably easier to talk about the things that are different – like the shot clock. Similar to basketball, in indoor lacrosse you have a limited amount of time to take a shot or else you lose possession. Maintaining possession in lacrosse seems easier than it does in hockey, so the shot clock is an important addition. Otherwise, killing penalties and running out the clock with a lead would be child’s play once you got possession. But with the shot clock, you can only chew up thirty seconds at a time.

The combination of the basketball-esque shot clock and hockey-esque line changes creates an interesting dynamic, though not always a positive one. I was expecting more fast breaks. Instead, unless it’s a clear one-on-none or two-on-one, the breaking player almost always pulls up and waits for the line change to finish – often going off himself. There are line changes in hockey, but it’s rare for a guy in the offensive zone to be able to just hold onto the puck and wait for the rest of the team to change lines.

On the other hand, I really liked how indoor lacrosse doesn’t have constant face-offs like hockey does. Face-offs in indoor lacrosse only happen to start quarters and after goals. Otherwise, when the ball goes out of play or there’s a penalty, simple possession rules determine who gets the ball. Face-offs are exciting, and given the amount of scoring in indoor lacrosse they happen often enough (26 total goals Saturday night, close to the Stealth’s season average of 24.375 total goals per game) without being overwhelming (there were 68 face-offs in yesterday’s Sharks/Hawks game – more than one per minute).

Of course, having a good game with a come-from-behind victory by the home team certainly casts the sport in the best light. Having a packed house also helped – 8,600 fans, a sellout, many of whom appeared to be involved in lacrosse leagues around the Puget Sound area. The friends we went with have a teenage son who plays, which is how they got into it. Patrick says he wants to learn to play too, so I’m guessing this won’t be our last Stealth game.

This being primarily a geek blog, I’ll add that both the Stealth and the NLL in general need to modernize their marketing and fan-base-building efforts. The Stealth website is old school, to put it mildly – I especially like the full-screen ad to buy tickets for Saturday’s game that still pops up, two days after the game. Lacrosse fans claim it’s the fastest growing sport in the nation, but it gets almost zero media attention. So why not encourage citizen media by issuing press credentials to fans who blog about the Stealth, like the Caps did a few years ago? Selling NLL TV rights for any significant dollars is a pipe dream right now, so why not stream the games online? I suspect the main revenue sources for NLL teams are ticket sales and merchandise – streaming the games would be a good way to push both.


  1. Cute slogan, but the implication that lacrosse players are tougher than hockey players is ludicrous. The NLL season lasts 16 games and the playoffs are three rounds of single elimination. The NHL season lasts 82 games and the playoffs are four rounds of best-of-seven series.

Weakly Typed Dynamic Languages and Natural Selection

I’m not reading much in the way of blogs or Twitter these days – way too heads-down in my new job for that right now. But I did see Scott Hanselman’s post on method overloading and dynamic types, and Ted Neward’s follow-on post on static-typing fundamentalism. Even though I’ve moved on from the IronPython team, dynamic typing is a topic that’s still near and dear to my heart, so I can’t resist throwing in my 2¢.

First off, I agree 100% with Ted’s post – though not with its over-the-top mocking tone. These static > dynamic flame-bait comments are so tired that they’ve become cliché. I agree with Ted’s points, but by answering fire with fire he’s just perpetuating the flame war he claims to be so tired of. I really am tired of it, so I’m not going to bother addressing any of the original anti-dynamic-typing faux-arguments (fauxguments?) nor Ted’s artful and devastatingly mocking takedown of them.

But I do have a question for any static-typing fundamentalists in the audience: if static typing is so much better than dynamic typing, then how come dynamically typed languages are so popular? Doesn’t natural selection apply to type systems?

Those aren’t rhetorical questions. Building software takes time and effort. While developers often donate time and effort to projects (see: open source), typically they work for money. That money has to come from somewhere – usually from someone who needs the software built for some business reason. And the people footing the bill for software construction demand the highest return on investment they can get.

If dynamic typing or VARIANT (which is actually weak, not dynamic, typing – but I digress) really did create “horrific devastation”, wouldn’t that have caused a negative feedback loop, where the business people who actually foot the bills for creating software became wary of using VB as the language of choice for their projects, in favor of strongly and statically typed languages that helped developers “make good choices”?

Yet the opposite happened. VB was the most popular programming language in the world for the better part of a decade. And while VB’s reign at the top is over, I’d argue that these days the most popular programming languages are PHP and JavaScript, both of which are weakly typed dynamic languages too.

Now clearly, popular != better. However, static-typing fundamentalism isn’t an argument about which way is “better” so much as an argument about which way is “worthy”. But how can you argue that your approach is the only worthy path when the opposite approach has been so successful? Remember, one developer’s “horrific devastation” might be another businessman’s “successful project that helped me enter a new market faster than my competitors”.

Fixing PowerShell’s Busted Resolve-Path Cmdlet

Usually, my PowerShell posts are effusive in their praise. However, whoever thought up this “feature” gets no praise from me:

PS» Resolve-Path ~\missing.file
Resolve-Path : Cannot find path 'C:\Users\hpierson\missing.file' because it does not exist.

In my opinion, this is a bad design. Resolve-Path assumes that if the filename being resolved doesn’t exist, then it must be an error. But in the script I’m building, I’m resolving the path of a file that I’m going to create. In other words, I know a priori that the file doesn’t exist. Yet Resolve-Path insists on throwing an error. I would have expected there to be some switch you could pass to Resolve-Path telling it to skip path validation, but there’s not.

And the worst thing is, I can see that Resolve-Path came up with the “right” answer – it’s right there in the error message!

Searching around, I found a thread where someone else was having the same problem. Jeffrey Snover – aka Distinguished Engineer, inventor of PowerShell and target of Erik Meijer’s Lang.NET coin-throwing stunt – suggested using –ErrorAction and –ErrorVariable to ignore the error and retrieve the resolved path from the TargetObject property of the error variable. Like Maximilian from the thread, I find this approach fragile and frankly kinda messy, but I needed a solution. So I wrote the following function that wraps up access to the error variable, so at least I don’t have fragile, messy code sprinkled throughout my script.

function force-resolve-path($filename)
{
  # continue past the "path does not exist" error, stashing it in $_frperror
  $filename = Resolve-Path $filename -ErrorAction SilentlyContinue `
                                     -ErrorVariable _frperror
  if (!$filename)
  {
    return $_frperror[0].TargetObject
  }
  return $filename
}

The function is pretty straightforward. –ErrorAction SilentlyContinue is PowerShell’s version of On Error Resume Next in Visual Basic. If the cmdlet encounters an error, the error gets stashed away in the variable specified by –ErrorVariable (it’s also added to $Error, so you can still retrieve the error object if –ErrorVariable isn’t specified) and processing continues. Then I manually check whether Resolve-Path succeeded – i.e. did it return a value – and return the TargetObject of the error object if it didn’t.
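
Usage looks like any other cmdlet call – for example, a sketch of resolving a path for a file the script is about to create (the content written is hypothetical):

# resolves to a full path even though the file doesn't exist yet
$newfile = force-resolve-path ~\missing.file
Set-Content $newfile "some content"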

As I said, fragile and kinda messy. But it works.