Monday, May 25, 2009

Code layout experiments

I managed to get a Black Triangle working for the code layout project that I rambled on about earlier.

It has been pretty fun; getting to this stage made me realize how long it's been since I got to sink a good chunk of time into a side project. Also, making these ideas more solid convinces me that I wasn't on a total wild goose chase.

Let's start with a pretty typical view of Java code in an editor, in this case, emacs.



There are a few problems with reading code in this format:

  • The modifiers place the name of the class, member methods and members in an unpredictable place.

    • If you scan down the source code it's hard to pick out the method names because you have to scan left each time.
    • Take a look at how JavaDoc tries to make this easier. But there is still a problem: if you are scanning for names, it's hard to scan for the arguments at the same time.
    • Scanning for extends and implements, very important pieces, also takes some time.

  • Indentation is used to separate the class header from its member declarations, but this space is wasted since there aren't too many top level classes in a file.
  • A lot of colors are needed to emphasize the various pieces of syntax, and the colors switch a lot as you are scanning through the text, which can be distracting.


Some of these can be solved by changing the whitespace conventions, but those can only go so far. Looking through a few books on graphic design, you will quickly realize that there are many more effective ways to organize information.

Here's my first attempt at addressing these issues:


  • I've tried to stay away from colors, and use size and space to organize the information.
  • Modifiers aren't handled yet; I'm thinking of using something small like icons to indicate that information.
  • A lot of brackets, parentheses, and other punctuation have been removed. I think this makes everything look a lot less cluttered. The layout will have to make sure that it's still possible to see what's going on.
  • Indentation is reduced, and the class header and method declarations become more like headers in regular documents.
  • I've been experimenting with different ways of displaying the arguments, the current one which is a table with the types greyed out looks pretty good so far.
  • The for loop layout isn't fully worked in yet; it's trying to bring the initial and update pieces of the loop closer together.
  • I rely on Cairo for the text render dimensions, which gives me inconsistent results depending on whether the text happens to have descenders or ascenders.


Ok, so IANAGD (I Am Not A Graphic Designer), but enough is set up now that many different layouts are possible.
I'm hoping that this will be a nice sandbox that can be used to try out all sorts of different ways of laying out code.

It's written in Python and uses Cairo to render. To represent the Java syntax, I made a class for each grammar element that I figured made sense; the layout class then just does a big isinstance dispatch to render all the pieces. This seemed like the best way to allow multiple layout engines for any particular format. All grammar elements provide a text() method that allows a fallthrough case, giving it a bit of graceful degradation.
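The dispatch-with-fallthrough idea can be sketched roughly like this (a minimal sketch; the class and method names are my own illustrations, not the actual project's code):

```python
# Hypothetical sketch of the grammar-class / isinstance-dispatch idea;
# all names here are illustrative, not the real project's API.

class Node:
    def text(self):
        raise NotImplementedError

class MethodDecl(Node):
    def __init__(self, name, args):
        self.name, self.args = name, args
    def text(self):
        return "%s(%s)" % (self.name, ", ".join(self.args))

class ForLoop(Node):
    def __init__(self, init, cond, update):
        self.init, self.cond, self.update = init, cond, update
    def text(self):
        return "for (%s; %s; %s)" % (self.init, self.cond, self.update)

class Layout:
    def render(self, node):
        # Big isinstance dispatch; any grammar element this layout
        # doesn't know about degrades gracefully to its plain text().
        if isinstance(node, MethodDecl):
            return "METHOD " + node.name
        elif isinstance(node, ForLoop):
            return "LOOP " + node.cond
        else:
            return node.text()
```

Since the dispatch lives entirely in the Layout class, swapping in a different layout engine is just a matter of writing another class with its own render method.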

The layout works in a TeX-ish way, composing boxes into horizontal and vertical boxes that carry baseline information, plus another basic box type for Cairo-rendered text. It only does the straightforward computations, not the more complicated line breaking and constraint solving.
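A minimal sketch of that box model (my own simplification, not the project's code): each box has a width, a height above the baseline, and a depth below it; horizontal boxes line children up on a shared baseline, and vertical boxes stack them.

```python
# Simplified TeX-style boxes. Names are illustrative; a real version
# would also record each child's offset for rendering.

class TextBox:
    def __init__(self, width, height, depth):
        self.width, self.height, self.depth = width, height, depth

class HBox:
    def __init__(self, children):
        # Children sit side by side, aligned on a common baseline.
        self.width = sum(c.width for c in children)
        self.height = max(c.height for c in children)
        self.depth = max(c.depth for c in children)

class VBox:
    def __init__(self, children):
        # Children stack vertically; total size is the sum of each
        # child's height + depth, with the baseline taken from the
        # last child, as in TeX's \vbox.
        self.width = max(c.width for c in children)
        total = sum(c.height + c.depth for c in children)
        self.depth = children[-1].depth
        self.height = total - self.depth
```

These three types compose arbitrarily, which is what makes nested layouts cheap to compute in a single bottom-up pass.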

Academic work is probably about to ramp up again, so this project is back on the back-burner, but I still have more that I hope to work in:

  • Integrate ANTLR parsing of Java files using Terence Parr's Java 1.5 grammar. I've mostly figured out how to do this, so what's left is to go through the pain of coding it. It will also take a fair bit of work to shape the actual parse tree into the type of AST that I want. So some grammar hacking to get the right tree out, then additional processing to get it into a form that's friendly for the layout engine.
  • Implement horizontal and vertical "rules" or lines.
    Since I based this version on the functional version of TeX's layout, boxes have no notion of parent. It seems like the best way to make these work is to add in parents, which will also open up room for more interesting features. Hopefully some ideas from Hopscotch port over.
  • Work in some clipping: vertical boxes don't have to render contents outside of the window bounds, etc. Then add scrollbars.
  • More layout features. A basic table container has already been written, but I'd like something more general, maybe linear constraints, that would allow more flexible grid layouts. These would be used in small areas since they are harder to optimize.
  • A few more TeX features, such as fraction and exponent layout.
  • Interactivity! I didn't plan on this originally, but now that I'm at this stage, I have a fairly good idea of how this could work. Implementing vertical rules will give me an easy way to display a cursor.

Wednesday, April 22, 2009

Moving On

So far this blog experiment has worked out well: I've been writing regularly and getting a decent number of words out in a fairly short amount of time.

I don't like the quality of the posts I've done so far, though, so I'm going to start something new, where there is still a regular posting schedule, but I set aside enough time to make high-quality posts.

I'll probably still post on here, but not on a regular schedule.

Wednesday, April 15, 2009

Open source

I've been using open source software on and off for quite a while, and it constantly amazes me how much there is out there and how well supported it is. That the huge package repositories of Debian and Gentoo actually get tested, and sometimes maintain their own patch sets for different software packages, seems like an incredible amount of work, and it's volunteer-supported.

There are recurring complaints about open source, but in some ways they're a measure of what people have come to expect from it. Say 10 years ago, you had to muck around with a lot of stuff and eventually it worked, and that was good enough; some of the good projects had a small dedicated group, and you could get pretty good help from the forum or mailing list. These days it almost seems like every large project has an army of people doing everything from testing to documenting and programming.

A major complaint used to be about the huge number of window managers available. A lot of the complaints were saying that there wasn't enough standardization, that these other projects were diluting the effort. But then GNOME and KDE got more mature, and those complaints died down. It seems like the number of window managers wasn't the real problem, just the lack of some nice defaults.

This is great, because I happen to really like the whole scene of experimentation that goes on: you can change the window manager, rip stuff out, add stuff in, and have different ways of organizing how you work with your computer. The other nice thing is that the X.org people think about how to introduce new features in a window-manager-friendly way. This slows development down, but just looking at the new things coming into X.org, such as indirect rendering, new driver architectures, and MPX, makes me think that there's only more good stuff to come.

There's a lot of back-end work going on that doesn't look too impressive, especially when your other software stops working for random reasons, but from a programmer's viewpoint, it looks clean and thoughtfully designed. I can't wait to see what's going to be built on top of it.

Wednesday, April 8, 2009

Memory Consistency Modelling

I made this presentation a while ago as a quick introduction to memory consistency models. I've lengthened it out a bit and made it my first attempt at online presentations.

You'll probably find I'm going at a very slow pace in this video; I tended to mess it up when I tried a quicker pace, and this ended up being the best way to actually get through the whole thing.

Some things to note: I am talking very generally about the models, so I didn't use specific processors like SPARC or x86. In my research I mostly got to work with simpler theoretical models, so that might be a reason for my uncommon optimism about these methods.

Intro to Memory Consistency Modelling.

I don't think that the theory behind memory consistency models is too bad, and actually should be able to handle quite a bit. I'm confident that almost all the shared memory concurrency that we have now could be eventually handled by memory consistency model theory, if it keeps getting developed through research. I won't take a guess about future applications and how well it could be built into a method for constructing software though.

More references and details are available in my thesis.

Wednesday, April 1, 2009

FRP Design Challenge

To start off with, the purpose of this post is to find out what ideas are out there and maybe start some new ones. I'm not looking to get people to work on things I've thought up; I'm more interested to see what they think about them.

I think programmers need to start using the word design more than art. This really captures the balance between the usefulness of a piece of software and how nice it is to work with and even just look at. Haskell and other functional languages seem to have a huge potential for software design. So that should be the key: abstractions that combine mathematical beauty with practical purpose, and are just nice to work with. Abstractions that are only about mathematical beauty are more art than design.

I've been quite interested in functional reactive programming, since these projects seem to be working towards a nice solution for programming both user interfaces and games. One thing I've noticed is that the emphasis is on local definition of what components do, which seemed to be hinted at in this Lambda the Ultimate post.

This probably works well for user interfaces, but for games it would be really nice to be able to mix: 1. global rules, such as physics, that are best computed by considering the whole world at once, and 2. local rules that specifically define actions for certain types of objects in the world. The challenge is to find a way to mix these two ways of defining how objects act, in a way that is nice to work with in a functional language.

The challenge proceeds in stages:

1. First implement a global rule for objects, let's say circles, in the world, using the Barnes-Hut gravity simulation algorithm. The only purpose in this stage is to get some objects orbiting around each other.

2. Now get individual objects to react to the global algorithm. Have each circle change color based on the acceleration/velocity/change in either.

3. Allow this reaction to be customized, some circles turn blue, some circles turn red ...etc.

4. Get individual objects to react to each other. Add a simple collision response; it could just be a color change and doesn't need to consider physics.

5. Allow this collision response reaction to be customized.

6. Add flocking behavior to objects; now objects are aware of their environment.

7. Allow the flocking behavior to be customized.

This is a design challenge rather than a strict programming challenge: I know that this can be done, but what are some nice abstractions that would suit this problem? Would they generalize to other situations, say if I swapped out the physics engine for a user interface layout algorithm?

I would like to know where the likely problems would be in implementing these challenge stages in various frameworks, what I could read that would help me solve it myself, or suggestions for changing the challenge itself.

Wednesday, March 25, 2009

Stopping, completing, reversing and concurrency

Keeping on schedule is tough, but I want to keep to this Wednesday thing. I want to talk about some concepts that seem to be important to concurrency (more specifically, shared memory concurrency) but don't seem to be distinguished too often.

The first concept is stopping: a process can stop for a number of reasons. One thing to think about is what happens when a process stops. In lock-based systems, it usually means that a lock won't get released, so other processes can't continue running. So one thing that is nice to do is ensure that the system can keep running even if a process stops. Also, in some systems, such as a database, it is good to be able to arbitrarily stop a process, perhaps to break a deadlock.

Once a process is stopped for some reason, in the middle of some action, what do we do now? In some cases we can just ignore the process, but usually this is not desirable. If we could complete the process' action, then we should be ok. One way to do this is to use descriptors for actions, which give a full description of what was to be done. Then it is possible for another process to complete the stopped process' action. In many lock-free algorithms, this completion is attempted immediately, to fulfill the lock-free progress condition. But helping is expensive. I prefer to think that the descriptor gives you the option of helping, and helping is safe, but we should manage this helping to increase efficiency. Descriptors work for many actions, such as data structure operations like add and delete. For a whole transaction, this information may be a lot harder to store; it would have to include most, or all, of the power of a programming language.
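The descriptor idea can be sketched like this. To be clear, this is a toy, not a real lock-free algorithm (a Python lock stands in for CAS); the point it shows is that once the pending operation is fully described by an object, any thread can safely complete it, and completing twice is harmless.

```python
import threading

# Toy sketch of descriptor-based helping. All names are illustrative.

class AddDescriptor:
    """Full description of an in-flight add, visible to every thread."""
    def __init__(self, item):
        self.item = item
        self.done = False

class HelpableSet:
    def __init__(self):
        self.items = set()
        self.pending = None
        self.lock = threading.Lock()

    def announce(self, item):
        # A process records what it intends to do; it may then stop
        # at any point before finishing.
        self.pending = AddDescriptor(item)
        return self.pending

    def help(self):
        # Any process can finish the announced operation. The 'done'
        # flag makes helping idempotent, so eager and lazy helping
        # policies are both safe; only their efficiency differs.
        with self.lock:
            d = self.pending
            if d is not None and not d.done:
                self.items.add(d.item)
                d.done = True
```

A lock-free algorithm would install and retire the descriptor with compare-and-swap instead of a lock, but the helping-is-safe-and-idempotent structure is the same.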

There is another option: if we can't ignore the action and can't complete it, maybe we can reverse it and make it as if it never happened. Reversing can be easier than completing; there are many ways to reverse, such as taking snapshots or recording the actions performed. This is a good option for transactional systems, since full information about the transaction cannot be known in advance, and even if it could be, you know that reversing will be able to complete, but you don't know whether the transaction will halt.

Stopping, completing and reversing are important concepts, and it would be nice if there was a richer theory behind them. Reversing of operations is used in work by Herlihy and others on Transactional Boosting: in order to use lock-free data structures with transactional memory, we need to be able to reverse things that happened to the data structure. These data structures require different augmentations to handle reversing. In some cases the ability to reverse or complete may depend on the current state of the system.

Two other aspects that are also important are idempotence and commutativity. These also can depend on the current state of the system.

I think a solid theory of stopping, completing, and reversing operations on data structures, along with idempotence and commutativity, would go a long way towards making it easier to work with lock-free data structures.

Wednesday, March 18, 2009

A Code Editor with Powerful Layout

Text editors have often been a favourite hobby hacking project. There's a lot of innovation going on in user interface, extensibility, and networking. One thing I haven't seen a lot of is a text/code editor with a powerful layout engine.

By powerful I mean that it has support for nested layout structures and accurate placement of elements. A few editors have editing widgets powerful enough to allow pictures and interactive buttons: Emacs allows a fair bit of this, and the PLT Scheme environment allows embedding richer data into code. Unfortunately, it doesn't seem possible to place elements very accurately.

I want a framework to play around with layout ideas for code. We're still stuck with colors and indentation as the only ways to emphasize various pieces of code. The design world has huge collections of techniques and strategies for indicating hierarchies, important pieces, and the relationships between elements. Also, so much time is spent looking through and reading code that anything that makes it nicer would probably be quite appreciated.

Surprisingly, I actually started on this one already, though it'll be a while before I get a working version out. I've decided that for my project, the TeX way of layout will give a good balance of power and efficiency. A good introduction to TeX layout is "A Functional Description of TeX's Formula Layout" by Heckmann and Wilhelm. TeX uses aligned, nested boxes to set up its layout. This means that the underlying data structure doesn't support rotating elements, which I think is a reasonable tradeoff. You do, however, gain an incredibly flexible system that lets you position elements very precisely, with the nesting making some tasks a lot simpler.

Ok, this is still very vague sounding, but the way I hope that this works is that you have some code, which is roughly parsed to get the parts that are important for layout.

The results of the parse are then sent to the layout engine, which is custom made for the parser. It takes the parse tree, and produces a layout in box format, which is then drawn on the screen, using a graphics library like Cairo. The first version would probably be output only. It would only do layout once, without worrying about local updates in between.

This first version would already be quite useful, though: once it is up, you can start to create different layout engines and try out different ideas. The main thing I want to play around with first is the way hierarchies are displayed. Indentation is nice, but in most editors it is sort of limited. With a box format, we can play around with laying out blocks of code side by side, changing the indent distance based on the context, maybe even some light borders in the right places.

Wednesday, March 11, 2009

Formal Methods and Me

My research isn't strictly in formal methods, or formal verification as it's sometimes called, but at some point I really found that I liked being a little extra careful with proofs. It started when I stumbled on Leslie Lamport's "How to Write a Proof", which describes a way of structuring proofs that encourages you to explain all the small steps. It is similar to the style of proofs introduced by David Hilbert. "How to Write a Proof" has several interesting stories about writing a proof, then trying to rewrite it in the structured format, and finding an error in the original proof as a consequence. As a programmer, it seemed like this was a great way to reduce errors in proofs.
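A tiny example in that style (my own toy theorem, not one from the paper): each step gets a hierarchical number, and every step's justification is spelled out rather than left to the reader.

```
THEOREM. If x > 1 then x^2 > x.
<1>1. x > 0
      PROOF: x > 1 and 1 > 0, so x > 0 by transitivity.
<1>2. x * x > 1 * x
      PROOF: Multiply both sides of x > 1 by x; this preserves
             the inequality because x > 0 by <1>1.
<1>3. Q.E.D.
      PROOF: 1 * x = x, so <1>2 says x^2 > x.
```

The numbering is what makes the structure checkable: step <1>2 has to name exactly which earlier steps it depends on.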

So I tried it out, in my Analysis class. I took a proof from the book and created a structured version. It wasn't too bad since the proof was quite careful, but at one point I wondered "why did they use an open set here?" since the intuitive description of the proof didn't seem to need it. After a bit more thinking, I figured out that it was because they had to use a theorem described earlier in terms of open sets. And I don't think it mattered too much in that specific case, but at that moment I felt like I had a deeper understanding of what was going on, that I might have just skipped over once I got the intuition.

So then I got to thinking that in a program, a difference like this, between a function that does and doesn't use an open set, is like an off-by-one error that often causes things to muck up. For more complicated programs, shouldn't we have tools that allow us to do this kind of reasoning in an automated way? This started my interest in formal methods, which at that point was mainly me trying to make structured proofs in my math courses. You would figure that mathematicians and programmers alike would love to find any way to reduce errors, but the resistance comes quite quickly.

There are horror stories of multi-page proofs that 1+1=2; I think this one is actually true, but I don't know where it comes from. Mathematicians, I found, really like reading proofs as puzzles in themselves; this was also noted in "How to Write a Proof". Structuring the proof spoils the flowing prose, and detailing out the small steps in between is kind of like explaining a joke. Programmers have a similar feeling, though their concerns are usually over how much work is involved, or they will use the argument that "no proof or model can totally capture a system".

These concerns are somewhat justified, but for me, a nicely structured proof is just as nice as a nicely structured program, where the pieces fit together like parts of a puzzle. I think that is one good way to frame my view: some people might look at a jigsaw puzzle and say "ah, well there are leaves here and there's the rest of the trunk, so these pieces must fit together", and indeed it is certainly true, but to me it's even better when you can actually see the pieces fit together.

Wednesday, March 4, 2009

Computer Science - What's that?

I'm going to try to stick to posting every Wednesday. This post needs a lot more work, but my main goal is just to get a regular schedule going.

A lot of writing has been devoted to just explaining what computer science is or isn't. Some people even want to get rid of the term as a whole.

Here's my view on what computer science is, for a regular person who is curious. Computer science is a really broad field these days, but to really understand the core, it's good to start at complexity theory. It's not the most accurate use of the term, but I am talking about an area that deals specifically with problems and how they are solved by computers.

Why is this fun? I think that question isn't asked enough, usually because serious science and fun aren't commonly associated. Puzzle solving is fun, and puzzle solving with computers is a new puzzle. Solving crosswords, jigsaw puzzles, and sudoku puzzles is pretty fun. After you start getting good, you develop a few rules that help you solve the puzzles faster.

Now, try to think of how you would get a computer to solve these puzzles for you. It spoils the fun of the original puzzle, but now you have a whole new set of tools to work with. Just like a sudoku puzzle, once you get good at solving it, you want to solve it faster. With a computer program, you have to count how many operations it is doing and try to squeeze that down. Even trying to count the operations is a puzzle in itself.

Consider this puzzle: you have a box with a row of switches and a lightbulb. The game is to flip the switches so that the lightbulb turns on. Every time you add a switch, the number of combinations to try doubles. The boring way to play the game is to try all the combinations one at a time.

To make it more interesting, suppose we can now see inside the box. The inside of the box is made up of three basic pieces: AND, OR, and NOT. It's easier to describe these than a sudoku puzzle! Now that we can see what the box is made up of, is there a way to use that information to turn the lightbulb on significantly faster than trying all the combinations? This puzzle is part of a whole class of puzzles that are called NP-complete, which most researchers suspect have no efficient solution, but no one has a way to show it.
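The switch-box game is small enough to sketch. Here is a toy version (the particular circuit is my own invention) showing the boring strategy: trying all 2**n settings.

```python
from itertools import product

# A tiny version of the switches-and-lightbulb box. The circuit is
# built from AND, OR, and NOT; this example has 3 switches.

def circuit(s):
    # Light is on when (switch 0 AND NOT switch 1) OR switch 2.
    return (s[0] and not s[1]) or s[2]

def brute_force(circuit, n):
    # The boring strategy: try every combination one at a time.
    # Each extra switch doubles the number of settings to check.
    for setting in product([False, True], repeat=n):
        if circuit(setting):
            return setting
    return None
```

Asking whether looking inside the box lets you beat this doubling is exactly the circuit-satisfiability question, the canonical NP-complete problem.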

Wednesday, February 25, 2009

First Post! Java for the Command Line

I keep coming up with ideas for blog posts, but never end up writing them.
So, I've decided to use Write or Die to try to get ideas out there, with only a little bit of polish.

The first idea here is the Java shell tool. I probably won't get around to implementing it, but it's pretty neat.

Reflection in Java gives you a lot of runtime flexibility. Java is not usually used as a scripting language, but you should be able to quickly make an interesting amount of Java accessible from the command line with a couple of reflection calls. I'll call this command line access tool "javait". We want javait to pass stuff around through pipes, which we can do through serialization. I think a good default is that javait instances pass a single List object around through pipes.

To hook it up to regular shell programs, just make a "javaize" that builds a list of lines from standard input, and an "unjavaize" that takes a list, runs toString on each of its members, and prints them. So what does javait do? It lets you call a method, then returns the result to standard out. If the method doesn't return a list, it just sticks the result in one.

In short, I'd like the following to work:

grep "Name: " file | javaize | javait "MyClass.substitute" "theStandardInList" "namepattern" "replacenamepattern" | unjavaize

Some problems I can think of with javait: finding a nice syntax for calling methods on the command line (and adding support for non-static methods), needing a special argument to represent standard in, and probably some flexible syntax to add in the extra arguments. I think there are some neat things you could do with even a hacked-together version of this.
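To show the shape of the pipeline without writing the Java, here is a Python mock-up of the javaize | javait | unjavaize idea, with pickle standing in for Java serialization. Everything here (MyClass, substitute, the function names) is hypothetical.

```python
import io
import pickle

# Python mock-up of the javaize | javait | unjavaize pipeline: a
# list travels between stages as serialized bytes, the way a List
# would travel between javait processes via Java serialization.

def javaize(text_stream):
    # Lines of text in -> serialized list out.
    return pickle.dumps(text_stream.read().splitlines())

class MyClass:
    @staticmethod
    def substitute(lines, pattern, replacement):
        return [line.replace(pattern, replacement) for line in lines]

def javait(blob, cls, method, *args):
    # Reflection-style call: look the method up by name and pass the
    # deserialized list as the "standard in" argument.
    result = getattr(cls, method)(pickle.loads(blob), *args)
    if not isinstance(result, list):
        result = [result]      # non-list results get wrapped in one
    return pickle.dumps(result)

def unjavaize(blob):
    # Serialized list in -> one str() per element out.
    return "\n".join(str(x) for x in pickle.loads(blob))
```

Chained together, `unjavaize(javait(javaize(stdin), MyClass, "substitute", "Name", "Id"))` plays the role of the shell one-liner above, minus the actual pipes and processes.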

Think it's a bit boring (or already done)? Google up some shell "one-liners" and see how many you can make.

Why not just use an actual scripting language? It's interesting in a "look how much I can abuse this" way, and I think it's neat to pass these strict-ish Java objects around through pipes.