Writing an IronPython Debugger: Breakpoint Management

Setting a breakpoint was the second feature I implemented in ipydbg. While setting a breakpoint on the first line of the Python file being run is convenient, it was obviously necessary to provide the user with a mechanism to create their own breakpoints, as well as to enable and disable existing ones.

The first thing I had to do was refactor the create_breakpoint method. Originally, I was searching through the symbol documents looking for the one that matched the filename in OnUpdateModuleSymbols. However, since I wanted to specify new breakpoints via the same filename/line number combination, it made more sense to move the symbol document logic into create_breakpoint:

def create_breakpoint(module, filename, linenum):
    reader = module.SymbolReader
    if reader == None:
      return None

    # currently, I'm only comparing filenames. This algorithm may need
    # to get more sophisticated to support differentiating files with the
    # same name in different paths
    filename = Path.GetFileName(filename)
    for doc in reader.GetDocuments():
      if str.Compare(filename, Path.GetFileName(doc.URL), True) == 0:
        linenum = doc.FindClosestLine(linenum)
        method = reader.GetMethodFromDocumentPosition(doc, linenum, 0)
        function = module.GetFunctionFromToken(method.Token.GetToken())

        for sp in get_sequence_points(method):
          if sp.doc.URL == doc.URL and sp.start_line == linenum:
            return function.ILCode.CreateBreakpoint(sp.offset)

        return function.CreateBreakpoint()

The new version isn’t much different from the old. It loops through the symbol documents looking for one that matches the filename argument, then creates the breakpoint the same way it did before. Eventually, I’m going to need a better algorithm than “only compare filenames”, but it works for now.

Once I made this change, it was trivial to implement a breakpoint add command. What was harder was deciding on the right user experience for it. I decided that breakpoint management was going to be the first multi-key command in ipydbg, so all the breakpoint commands are prefixed with a “b”. I use the same command routing decorator I used for input commands. As you can see, my breakpoint command looks a lot like my top level input method – read a key from the console, then dispatch it via a commands dictionary that gets populated by @inputcmd decorators.

@inputcmd(_inputcmds, ConsoleKey.B)
def _input_breakpoint(self, keyinfo):
    keyinfo2 = Console.ReadKey()
    if keyinfo2.Key in IPyDebugProcess._breakpointcmds:
        return IPyDebugProcess._breakpointcmds[keyinfo2.Key](self, keyinfo2)
    else:
        print "nInvalid breakpoint command", str(keyinfo2.Key)
        return False
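
For context, the @inputcmd decorator itself isn’t shown here. A minimal sketch of this kind of routing decorator might look something like the following – the real ipydbg implementation may differ in its details:

def inputcmd(cmd_dict, key):
    # register the decorated method in the given dictionary, keyed by the
    # console key, and return the method unchanged
    def deco(f):
        cmd_dict[key] = f
        return f
    return deco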

Currently, there are four breakpoint commands: “a” for add, “l” for list, “e” for enable and “d” for disable. List is by far the simplest.

@inputcmd(_breakpointcmds, ConsoleKey.L)
def _bp_list(self, keyinfo):
  print "nList Breakpoints"
  for i, bp in enumerate(self.breakpoints):
    sp = get_location(bp.Function, bp.Offset)
    state = "Active" if bp.IsActive else "Inactive"
    print "  %d. %s:%d %s" % (i+1, sp.doc.URL, sp.start_line, state)
  return False

As you can see, I’m keeping a list of breakpoints in my IPyDebugProcess class. Originally, I used the AppDomain.Breakpoints list, but that only returns enabled breakpoints, so I was forced to store my own list. Note also that I’m using the enumerate function, which yields each item in the collection along with its index. I do this so I can refer to breakpoints by number when enabling or disabling them:

@inputcmd(_breakpointcmds, ConsoleKey.E)
def _bp_enable(self, keyinfo):
  self._set_bp_status(True)

@inputcmd(_breakpointcmds, ConsoleKey.D)
def _bp_disable(self, keyinfo):
  self._set_bp_status(False)

def _set_bp_status(self, activate):
  stat = "Enable" if activate else "Disable"
  try:
    bp_num = int(Console.ReadLine())
    for i, bp in enumerate(self.breakpoints):
      if i+1 == bp_num:
        bp.Activate(activate)
        print "nBreakpoint %d %sd" % (bp_num, stat)
        return False
    raise Exception, "Breakpoint %d not found" % bp_num

  except Exception, msg:
    with CC.Red: print "&s breakpoint Failed %s" % (stat, msg)

Since the code was identical except for the value passed to bp.Activate, I factored it out into a separate _set_bp_status method. After the user presses ‘b’ and then either ‘e’ or ‘d’, they type the number of the breakpoint provided by the breakpoint list command. _set_bp_status then simply iterates through the list until it finds the matching breakpoint and calls Activate. Note that since it’s possible to have 10 or more breakpoints, I’m using ReadLine instead of ReadKey, meaning you have to hit return after you type in the breakpoint number.

Finally, I need a way to create new breakpoints. With the refactoring of create_breakpoint, this is pretty straightforward:

@inputcmd(_breakpointcmds, ConsoleKey.A)
def _bp_add(self, keyinfo):
  try:
    args = Console.ReadLine().Trim().split(':')
    if len(args) != 2: raise Exception, "Only pass two arguments"  
    linenum = int(args[1])

    for assm in self.active_appdomain.Assemblies:
      for mod in assm.Modules:
          bp = create_breakpoint(mod, args[0], linenum)
          if bp != None:
            self.breakpoints.append(bp)
            bp.Activate(True)
            Console.WriteLine("Breakpoint set")
            return False
    raise Exception, "Couldn't find %s:%d" % (args[0], linenum)

  except Exception, msg:
    with CC.Red:
      print "Add breakpoint failed", msg

Most of _bp_add is processing the input arguments, looping through the modules, and storing the breakpoint that gets returned. When I set the initial breakpoint inside OnUpdateModuleSymbols, I have the module with updated symbols as an event argument. In the more general case, however, we have no way of knowing which module of the current app domain contains the filename in question, so we loop through all the modules, calling create_breakpoint on each until one returns a non-null value. Of course, “all the modules” includes the IronPython implementation itself, but assuming you’re running against released bits, the call to create_breakpoint will return right away since debug symbols aren’t available.

As usual, the latest version is up on GitHub. This will be the last update to ipydbg for a little while. I worked on it quite a bit while I was at PyCon and have been busy with other things since I got home. Don’t worry, I’ll come back to it soon enough. As I mentioned Monday, I want to get function evaluation working so I can have a REPL console running in the target process instead of the one I currently have running in the debugger process.

Pygments for WL Writer v1.0.1

I just replaced the original v1.0.0 Pygments for WL Writer installer with a new and improved v1.0.1. The original URL still works – I archived the old version off with a new name. Updated source is available on GitHub.

The only change is that I now override OnSelectedContentChanged in the sidebar control. That way, if I have multiple blocks of pygmented code in a given post, the sidebar UI updates with the correct language and color scheme of the currently selected code block.

Writing an IronPython Debugger: REPL Console

While I was banging my head against a wall trying to understand how CorValue extraction worked, I found myself wanting to dink around with the debugger objects in a REPL console. One of IronPython’s core strengths is its support for “exploratory programming” via the REPL. It turned out that bringing a REPL to ipydbg was quite simple.

Python includes two built-in features that make a DIY REPL quite easy: compile and exec (though technically, exec is a statement, not a function). As you might assume from their names, compile converts a string into what Python calls a code object, while exec executes a code object in a given scope. Technically, exec can accept a string, so I could get by without using compile. However, if you’re compiling a single interactive statement, compile can automatically insert a print statement when you’ve passed in an expression. In other words, if you type “2+2” into the console, it will print “4”, which is the behavior I wanted.
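
For example, here’s a tiny standalone illustration of that auto-print behavior (this snippet isn’t part of ipydbg):

# compiling an expression with the "single" kind makes exec print its value,
# just like the interactive console does
code = compile("2+2", "<input>", "single")
exec code in {}, {}   # prints 4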

Here’s what my REPL console code looks like. I love that it’s only 20 lines of code.

@inputcmd(_inputcmds, ConsoleKey.R)
def _input_repl_cmd(self, keyinfo):
  with CC.Gray:
    print "nREPL ConsolenPress Ctl-Z to Exit"
    cmd = ""
    _locals = {'self': self}

    while True:
      Console.Write(">>>" if not cmd else "...")

      line = Console.ReadLine()
      if line == None:
        break

      if line:
        cmd = cmd + line + "\n"
      else:
        try:
          if len(cmd) > 0:
            exec compile(cmd, "<input>", "single") in globals(), _locals
        except Exception, ex:
          with CC.Red: print type(ex), ex
        cmd = ""

It’s pretty straightforward. I set up a dictionary to act as the local variable scope for the code that gets executed. I’m just reusing the current global scope, but I want the local scope to start with only a reference to the current IPyDebugProcess instance, which is passed into _input_repl_cmd as “self”. All the other local variables like cmd and line won’t be available to the REPL code. Then I drop into a loop where I read lines from the console and execute them.

In order to support multi-line statements, I build up the cmd variable over multiple lines of input and don’t execute it until the user enters an empty line. The standard Python console can recognize single-line statements and execute them immediately; Dino showed me how to use the IronPython parser to do the same thing, but I haven’t implemented that in ipydbg yet. To exit the REPL loop, you type Ctl-Z, which causes ReadLine to return None (aka null) instead of an empty string.
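
For the curious, one way to do that completeness check from IronPython is via the DLR hosting API. This is just a sketch of the approach – none of this code is in ipydbg today:

import clr
clr.AddReference("Microsoft.Scripting")
clr.AddReference("IronPython")
from Microsoft.Scripting import SourceCodeKind, ScriptCodeParseResult
from IronPython.Hosting import Python

engine = Python.CreateEngine()

def is_complete_statement(code):
    # ask the parser whether the text forms a complete interactive statement
    source = engine.CreateScriptSourceFromString(code, SourceCodeKind.InteractiveCode)
    return source.GetCodeProperties() != ScriptCodeParseResult.IncompleteStatement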

Since I never execute the code more than once, I have my exec and compile statements together on a single line. Compile takes the string to be compiled, the name of the file it came from (I’m using <input>), and the kind of code. Passing in “single” for the kind of code adds the auto-expression-print functionality I mentioned above. Then I exec the returned code object in the scope I’m managing for this instance of the REPL loop. If you exit the REPL and re-enter it, you get a fresh copy of the local scope, so any functions or variables you defined in the previous REPL session are gone.

Runtime execution of code in a given scope is a hallmark of dynamic languages, but I’m still fairly green when it comes to Python, so it took me a while to figure this out. Python code executes in a given scope – a combination of global and local variables. When you’re in the ipy.exe REPL, you’re at top-level scope, so global and local scope are the same – if you add something to global scope, it shows up in local scope and vice versa. Inside a function, you have the same global scope, but the local scope is different and changes to one won’t be reflected in the other. The ipydbg REPL isn’t a function per se, but it does provide an explicit local scope that gets disposed when you exit the REPL.
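
Here’s a tiny standalone illustration of that separation – just the scoping behavior, not ipydbg code:

g = globals()
l = {}
exec "x = 42" in g, l
print 'x' in l   # True  - the assignment landed in the explicit local scope
print 'x' in g   # False - the global scope is untouched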

While having a debugger REPL is really convenient for prototyping new ipydbg commands, it’ll really shine once I get function evaluation working. Then I’ll be able to open a REPL console where the commands are executed in the target process instead of the debugger process as they are now. That will be very cool. Until then, the latest code is – as always – up on GitHub.

Writing an IronPython Debugger: Getting Arguments

It’s a small update, but I added support for displaying method arguments alongside the local variables. As I mentioned in that post, breaking the CorValue extraction and display code out into a shared function was a good idea – adding support for getting arguments was trivial since I could reuse that code.

Because there’s no hierarchy of scopes to deal with and the names are in the metadata instead of debug symbols, getting arguments is much easier than getting local variables.

def get_arguments(frame):
    mi = frame.GetMethodInfo()
    for pi in mi.GetParameters():
      if pi.Position == 0: continue
      arg = frame.GetArgument(pi.Position - 1)
      yield pi.Name, arg

You’ll notice that I’m yielding the arguments as tuples of name and value, the same as get_locals yields. I did refactor get_locals a bit – there’s no longer an argument to skip hidden variables (though get_locals still skips dynamic call site caches as it did before). Now it’s up to the caller of get_arguments and get_locals to filter hidden variables as they see fit.

Because get_locals and get_arguments yield the same types, I was able to factor the code to print a value and loop through the collection of values into separate local functions.

@inputcmd(_inputcmds, ConsoleKey.L)  
def _input_locals_cmd(self, keyinfo):  
  def print_value(name, value):  
    display, type_name = display_value(extract_value(value))  
    with CC.Magenta: print "  ", name,
    print display,  
    with CC.Green: print type_name  

  def print_all_values(f, show_hidden):  
      count = 0  
      for name,value in f(self.active_thread.ActiveFrame):  
        if name.startswith("$") and not show_hidden:  
          continue  
        print_value(name, value)  
        count+=1
      return count  

  print "nLocals"  
  show_hidden =  
    (keyinfo.Modifiers & ConsoleModifiers.Alt) == ConsoleModifiers.Alt  
  count = print_all_values(get_locals, show_hidden)  
  count += print_all_values(get_arguments, show_hidden)  

  if count == 0:  
      with CC.Magenta: print "  No Locals Found"

I really like Python’s local functions feature. In C#, you can define an anonymous delegate using lambda syntax, but for a scenario like this I like local functions better. On the other hand, C# supports statement lambdas, while Python only supports expression lambdas. So while I prefer local functions in a scenario like this – where I’m using the method more than once – for something like an event handler I like C#’s statement lambda syntax better.
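
To illustrate the difference with a generic Python snippet (not ipydbg code): a lambda can only wrap a single expression, while a local function can contain full statements:

# expression lambda: limited to a single expression
double = lambda x: x * 2

# local function: free to use statements like print, loops and try/except
def print_doubled(values):
    for v in values:
        print double(v)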

As usual, the latest version of ipydbg is up on GitHub.

Pygments for Windows Live Writer

For the past few years, I’ve used the CodeHTMLer plugin for Windows Live Writer for the code snippets in my blog. However, I recently discovered the Pygments Python syntax highlighter package, which supports scores more languages than CodeHTMLer does. It also supports multiple color schemes and is easily extensible, so I could build an HTML formatter that doesn’t use <pre> tags (which I’ve found DasBlog has issues with in the RSS feed, though honestly I’m running three minor releases behind the latest DasBlog release). IronPython supports Pygments just fine – at least, the one IPy bug that Pygments exposes has a simple workaround – so I set about building a Windows Live Writer plugin that uses it.
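
If you haven’t used Pygments before, the core API is pleasantly small. Something along these lines is all it takes to highlight a snippet – a generic example, not the plugin’s actual formatter code:

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

code = "print 'hello, world'"
# noclasses=True emits inline style attributes instead of CSS classes
html = highlight(code, PythonLexer(), HtmlFormatter(noclasses=True))
print html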

If you’re simply interested in the plugin itself, you can get it from my SkyDrive. The source is up on GitHub. For now, if you find any bugs, please leave a comment on this post. If there’s enough interest, I’ll set up a site somewhere (CodePlex perhaps) where I can track bugs and feature requests.

Pygments for WL Writer is a smart content source. In WL Writer’s terminology, that means the inserted text is treated as an atomic entity in the editor window: when you click it, you can edit it via the Edit Code button in the Pygments for WL Writer sidebar editor. I often found myself editing my code multiple times – usually to shorten lines so they’d fit on my blog without wrapping. CodeHTMLer for WL Writer, on the other hand, is a standard content source, so it just spews the formatted code as HTML onto the page.

From an IronPython perspective, there’s some interesting stuff there. I decided to compile the pygments library into a DLL for easier distribution. If you look in the source, there’s a folder for the Pygments source as well as the parts of the standard Python library that Pygments depends on and my custom HTML formatter. Those all get compiled via a custom script which can be called by the build.bat file in the project root.
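
IronPython can compile Python files into a .NET assembly via clr.CompileModules. The actual build script in the repo is more involved, but the core call looks roughly like this (the file list below is purely illustrative):

import clr
# compile the listed Python files into a single DLL; the real script builds
# the file list by walking the pygments package directory
clr.CompileModules("pygments.dll", "pygments/__init__.py", "pygments/lexer.py")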

Some features I’m thinking about adding:

  • An extensibility model so that you can add new languages by dropping new Pygments lexers into the folder the plugin is installed to. Pygments supports lots of languages, but not all of them – notably, it’s missing PowerShell and F#.
  • Support for new HTML formatters and color schemes using the same extensibility mechanism described above.
  • Support for selecting an HTML formatter.
  • Improving the code editor window. Currently, I’m using a standard WinForms multi-line TextBox, but that leaves a lot to be desired. With the Python work I do, I often need to select a bunch of text and change its indenting via tab and shift-tab. If anyone has a suggestion for a good WinForms text editing control, let me know.
  • Being able to specify the font and size of the Pygmented code.
  • Storing user preferences – remembering the most recent syntax and color scheme the user used.

Feedback, as always, is appreciated. I’ll probably write a few posts about the project when I get a chance, so let me know if there’s anything you’re dying to hear about.