What I cannot create, I do not understand.

Just because you've implemented something, doesn't mean you understand it.

RepRap n00b, three months later

My RepRap is working! Over the Thanksgiving weekend I had a solid chunk of time to address the problems I’d been having. After a full day of tweaking I was able to print a nice looking single walled box and a very ugly looking minimug.

The minimug was ugly because I hadn’t set the filament diameter in Skeinforge: it had defaulted to 1.75 mm, and I’m using 3 mm PLA, so the printer was extruding a lot more plastic than it thought it was. After I fixed that I printed another minimug and it came out much nicer, although it was neither water nor alcohol tight.

I’ve since printed a bunch of stuff and have experimented a bit with some of the 3D design software. I’m still a RepRap n00b but at least I have a working printer! It’s pretty exciting. I’m very impressed with the quality of the prints I’ve done, and this isn’t bragging, because the quality has very little to do with my efforts and a lot more to do with all the work that’s been put into the RepRap project.

The rest of this post is a collection of notes for other people who are really new. These are places where I really stumbled, so this list might be valuable to someone else.

  1. After I assembled my printer I was tempted to just jump in and try printing something but was immediately disappointed. I didn’t realize how much careful hardware and software calibration was required. Eventually, this guide is what got me from a totally non-functioning printer to a working machine:

http://buggerit.blogspot.com/2011/08/hains-prusa-mendel-reprap-calibration.html

  2. I discovered that leveling the print bed is really crucial. This is stressed a few times on the wiki and in other documentation, including the guide above, but it’s easy to underestimate. I think this is especially true if you’re printing on cold blue tape (without a heated bed). I gather that a heated bed is more reliable, but I think a lot of novices will opt for printing PLA on blue tape as I did, simply because it cuts down a little bit on cost and complexity. (That being said, I think I’m going to get a heated bed very soon and would recommend that anyone else just skip cold blue tape, because it poses some ugly problems like prints getting totally stuck.)

    In my case, the level of the bed can be the difference between a perfect print and total garbage. When I started, total garbage was the extruder’s nozzle dragging a growing tangle of plastic around the bed, with nothing sticking to the bed at all. After I carefully leveled the bed so that the nozzle at Z=0 was only one sheet of paper above the bed, I got pretty decent prints right away.

  3. While I was calibrating the machine, my Z-axis started making a horrible metallic grinding noise. I was lucky to have some bicycle grease handy and smeared a dab on each of the Z-axis threaded rods, which silenced them. Perhaps this is obvious, but I would recommend getting some machine oil or grease if you don’t have any, just in case.

  4. As I was assembling my printer, I made a few mistakes that weren’t deadly but damaged some printed parts slightly. When I was putting the extruder together, the small gear needed to be reamed out to fit on the spindle. Since it was such a tight fit, I didn’t think I needed to use the set screw to hold it in place, especially since I would need to file out the nut trap to fit a nut in. Over time, the gear loosened and I ended up having to use a set screw. This was fine until I needed to disassemble the extruder to clean out the hobbed bolt. After I did, I realized that the nut trap was no longer able to hold the nut in place, and I couldn’t screw the set screw into place. This meant the motor would just spin its shaft inside the gear, and the extruder wouldn’t work at all. Eventually, I was able to force the set screw into place to hold the gear on, but if I hadn’t I would have been in trouble.

    The lesson I learned was this: if you realize that a printable part might fail, print a replacement as soon as possible, unless you have access to another machine that works. I’m sure that if you’re hacking on your RepRap all the time this is obvious, but it’s easy to get carried away and put it off if you’re a novice. Right now I’m printing a few other parts that were weakened when I was building the printer (for example, one of those “h”-shaped endstop holders); since the printer is working now, I don’t want to take any chances.

  5. I initially bought a Makergear hot end kit because it seemed to be recommended by a lot of people and was available in the US. I had some issues when I first started printing which could be attributed to the hot end, but it’s certainly possible (and even very likely) that it was an error on my part. I eventually gave up and bought a J-Head hot end from RepRap-USA.com, and that has worked really well for me. The J-Head hot end is also much, much simpler to use, as it involves almost no assembly. Frankly, the Makergear hot end is quite fiddly to assemble.

  6. This follows from the last note. Since I had two hot ends, I also had two thermistors that happened to be identical. When I received the J-Head in the mail, I decided to use the thermistor that came with the Makergear kit and put the new one aside. A few nights and a few assemble/disassemble cycles later, one of the thermistor’s leads broke.

    Since I had a spare, all I had to do was swap it out, but if I hadn’t I would have had to order a new one. Thermistors are super cheap, so it wouldn’t have been a big deal, but it would have been a bit of a pain in the ass. In the future, I’ll be sure to have several spares of parts like these: cheap but essential. If you’re going to order something from McMaster or another online supplier, and the part you’re ordering is around a dollar or two, save yourself the hassle and buy two or three even if you only need one.

RepRap n00b, one month later

I’ve read numerous posts on Boing Boing and elsewhere about cheap 3D printers from both the RepRap project and companies like Makerbot, but I never paid a whole lot of attention to them. I usually shy away from electronics projects, and instead have stuffed myself solidly in the “software” pigeonhole. The only hardware projects I’ve ever done are building gaming PCs in high school (nothing fancy like overclocking or watercooling) and working as a bicycle mechanic at a tool co-op (I can disassemble and reassemble a 70s-era road bike or a modern track bike, but that’s the limit of my knowledge). Both of those are pretty far from physical computing or electrical engineering, though. Last month I dug a little deeper into DIY 3D printing and ended up developing a strong urge to build a RepRap.

In the past I’ve avoided electronics projects mostly due to fear of destroying circuit boards. I kind of regret that attitude, which is not only a bit cowardly but misguided too. After playing with an Arduino and a breadboard I realized that if you’re careful, there’s really plenty of opportunity for the kind of experimentation and tinkering I’m used to in the world of software. And the feedback loop is just as compelling as it is in programming: examine the system, think about the interactions, tinker, watch it light up/spin around/make noise! It’s so much fun, and gives me the same kind of feeling I got when I started programming. Why didn’t I try this stuff earlier!?

Anyway, in the past month I’ve built a RepRap Prusa Mendel. But I’m not printing yet. I’ve got a ways to go. It is basically up and running: after some minor problems with the wiring and electronics this week I was able to test the motors and the heater. My first attempt at a real test print (a small, flat square I made in Google Sketchup) resulted in the extruder just dragging cooled PLA around the bed. I found a few blog posts and a forum thread addressing this issue. Basically the Z axis end stop needs to be lower, so that the extruder is much closer to the surface of the bed. Also, I think the hot end is too hot, as filament extrudes while it is just idling. So, I’ve got work to do.

Setting up Emacs for Lisp hacking on OS X, pt. 3: Common Lisp and Clojure, again

I wrote about getting Emacs and SLIME to play nice with both Clojure and Common Lisp a while ago, but that post had a kind of hacky setup that likely doesn’t work any more.

The good news is that if you want to hack Clojure and Common Lisp with SLIME, on the same Emacs installation, this is now a lot easier to set up. First, clone this frozen copy of SLIME and load it in your Emacs init file with this snippet. Don’t use the ELPA/package.el/Marmalade SLIME as that won’t work with Common Lisp.
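
For illustration, here’s a minimal sketch of that setup. The clone URL is a placeholder for the frozen copy linked above, and the paths are just my assumptions:

$ git clone <frozen-slime-url> ~/.emacs.d/slime
$ cat >> ~/.emacs.d/init.el <<'EOF'
;; Load SLIME from the local checkout
(add-to-list 'load-path (expand-file-name "~/.emacs.d/slime"))
(require 'slime)
(slime-setup '(slime-fancy))
EOF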

For Clojure, install Leiningen (or Cake), and the swank plugin. Now you can run a swank server in Leiningen (or Cake) projects with lein swank (or cake swank) and connect to it with M-x slime-connect. I had issues with threads getting stuck or deadlocked or something, but this wasn’t an issue after I installed swank-clojure version 1.3.1. You can test this by running the ant sim demo and checking that the ants do indeed go scurrying around.
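
In case it helps, here’s roughly what that looks like with Leiningen 1.x (the plugin coordinates are my best guess based on the version mentioned above):

$ lein plugin install swank-clojure 1.3.1
$ cd my-clojure-project
$ lein swank
# then, in Emacs: M-x slime-connect, accepting the default localhost:4005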

For Common Lisp, make sure your inferior-lisp-program is set to your implementation, and just use M-x slime. You can even run them at the same time without any problems, as far as I can tell.
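
That setting is a single line in your init file; for example, assuming SBCL:

$ cat >> ~/.emacs.d/init.el <<'EOF'
;; Tell SLIME which Common Lisp implementation to launch
(setq inferior-lisp-program "sbcl")
EOF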

References:

Phil H’s great work and recent blogpost on clojure-jack-in (Note that if you follow those instructions, I think this should still work, although you might not be able to run them side by side.)

Sam Aaron’s Overtone screencast is where I found out about this frozen copy of SLIME.

RE: A tip for using Git on top of CVS, getting yourself out of trouble

Occasionally, git cvsexportcommit will fail. I don’t know how it works inside, but I have noticed that it will sometimes have a problem committing to CVS in the case of added or deleted files.

In this post, I will outline a method of manually exporting a patch file from a local Git repository that has been based off of a project in CVS. This requires a bit of fiddling, but once you package everything in a small script (a sketch of one appears near the end of this post), it’s quite painless. Of course, the least pain would be to just switch to Git, but this might help someone else stuck hassling with a legacy project.

Since you need a CVS working directory to use cvsexportcommit, I assume you already have one. To start, check out the project twice, under two different names, like this:

$ cvs co -d project-cvs project
$ cvs co -d project-git project

This results in two copies of project. You can then delete the CVS directories and create a new Git repository in one of them:

$ cd project-git
$ cvs release # not necessary
$ rm -rf $(find . -name 'CVS')
$ git init

Now you can work away in the Git working directory as normal, with all the ease of Git, local branching, etc… I would recommend branching from master and working there, then merging back into master before exporting to CVS, just to keep things clean. You can then import into master as well, and rebase your local branches against it to incorporate any changes from your colleagues. Importing from CVS would be the same as exporting as outlined below, just reverse the diff-ing and patch-ing.

When you’re ready to export your work and commit to CVS, you need to create a patch by diffing your Git working dir with the CVS working directory, and commit this patch to the CVS repository.

After a little hunting, I found a pair of incantations. The first creates the patch:

$ diff --exclude='CVS' --exclude='.git*' -urPp cvs git

… where cvs and git refer to your working directories. You invoke this in the directory above them, and it will spit out the patch to stdout, so you probably want to redirect it to a file by appending > patch to the end. It’s also helpful to use the -q option to diff, which will just list the changed files and give you a quick sanity check that you’re about to commit what you think you are.

To apply the patch to your CVS working directory, change to it, and run

$ patch -p1 < ../patch

… where patch is that patch file, presumably created in the directory above your working directory. Then you can commit to CVS normally, with cvs ci -m '...' etc.
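
Here’s a sketch of the small script mentioned at the top of this post. It assumes the two checkouts from earlier (project-cvs and project-git) sit side by side, and that you run it from their parent directory; the name git2cvs.sh is made up:

$ cat git2cvs.sh
#!/bin/sh
# Sketch: export local Git work to CVS by diffing the two checkouts.
# Usage: ./git2cvs.sh "commit message"
diff --exclude='CVS' --exclude='.git*' -urPp project-cvs project-git > patch
# (no `set -e` here: diff exits non-zero whenever the trees differ)
cd project-cvs || exit 1
patch -p1 < ../patch || exit 1
cvs ci -m "$1"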

This all might seem like a lot of work just to use Git, but in the transition between CVS and Git, or in a situation where you want to use Git in “guerilla” mode, this works in a pinch.

A tip for using Git on top of CVS

I was recently asked to start a discussion on Git at work, and give a presentation on it to my colleagues. We use CVS, and while there’s a bit of interest in migrating, that’s a relatively big task since we have a lot of infrastructure set up around CVS.

In the meantime, I’ve started exploring the Git tools for CVS interoperability. For a small, new project, cvsimport and cvsexportcommit (as described in this excellent SO answer) worked pretty well. However, when trying to do this with an older project with a lot of history in CVS, cvsimport failed, although I don’t know why (the result seemed to be complete history for only a subset of the files in the module). There are other tools for migration, specifically cvs2git, but those require direct access to the CVS repository.

Yesterday I found this question about running Git on a directory that’s also versioned with CVS, and generating a patch to be committed back into CVS. This gave me an idea: what if you skipped the cvsimport step? Could you check out the project with cvs co, run git init and hack away in bliss, and then, when you’re ready to commit to CVS, use cvsexportcommit? That was essentially what this person was asking about, although they didn’t mention cvsexportcommit.

It turns out this totally works, and in fact there are a few answers elsewhere on SO that mention this method. It’s not as nice as having all the old, CVS history visible in things like git log and gitk, but if cvsimport failed for you, and cvs2git isn’t an option, it’s probably the next best thing.
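
As a rough sketch (the project name and the ignore setup are placeholders), the whole dance looks something like this:

$ cvs co project && cd project
$ git init
$ echo CVS > .gitignore        # keep the CVS bookkeeping directories out of Git
$ git add . && git commit -m "snapshot of current CVS state"
# ... hack away on branches, merge back ...
$ git cvsexportcommit -c -v HEAD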

Vagrant may prevent me from rage-quitting OS X

If you’re a Mac user who came to OS X from any other Unix (one with a really solid package manager, like apt on Debian GNU/Linux and derivatives), I highly recommend checking out Vagrant.

I’ve really chafed at the lack of decent package management on OS X, and this will likely fill that gap for me. Homebrew is good, but it’s just not as easy as apt. I like using OS X and all the “syntactic sugar” that comes with it, like Netflix Instant and fast Flash, but it’s still hard to do some things, like install LaTeX. If your package manager tells you to go off and download a standalone installer, you know something’s wrong. (I don’t blame Homebrew for this, I believe them when they say building LaTeX from source is hairy, but I still don’t like it.)

Vagrant is a tool for managing virtual machines geared toward web development, but it’s actually more useful than just that. It’s super easy to download a base Ubuntu box and in a few steps have an ssh session into a new virtual machine instance, with shared folders already set up and a command-line interface for suspending, resuming and tearing down the virtual machine.
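
To give a sense of how little it takes, the getting-started steps look roughly like this (the lucid32 box was the standard demo box at the time):

$ gem install vagrant
$ vagrant box add base http://files.vagrantup.com/lucid32.box
$ mkdir devbox && cd devbox
$ vagrant init base    # writes a Vagrantfile
$ vagrant up           # boots the VM and wires up the shared folders
$ vagrant ssh          # drops you into the machine
$ vagrant suspend      # and later: resume, or destroy to tear it all down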

You have access to all the same packages in the Ubuntu repos without the overhead of a Gnome session running inside VirtualBox. I know you can set this stuff up manually with the VirtualBox GUIs or command-line tools, but Vagrant makes it so easy!

You can install packages directly with apt-get through an ssh session or provision the box with Puppet or Chef, neither of which I had ever used before. Using simple manifest files or “cookbook” scripts would mean you wouldn’t need to worry about the state of the VM, and with careful maintenance of the provisioning scripts, you could trash it and start fresh without a lot of difficulty.
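
For the Puppet route, a minimal sketch of the Vagrantfile might look like this (pre-1.0 config syntax, and the manifests/default.pp layout is my assumption):

$ cat Vagrantfile
Vagrant::Config.run do |config|
  config.vm.box = "base"
  # Run Puppet against manifests/default.pp at `vagrant up` time
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "default.pp"
  end
end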

The only tip I have for using Vagrant in this way: if you’re using Ubuntu, make sure to run “apt-get update” before doing any provisioning, and start the provisioning system with every verbose and debug option you can enable. Otherwise you might end up with a problem on the apt end while “vagrant up” just hangs for several minutes with no output, until everything fails and you find out that apt couldn’t find some package lists. At least, that’s what happened to me when I tried to install some LaTeX packages.

book reviews, Clojure 101: Programming Clojure, Practical Clojure, and Clojure for Lisp Programmers

I’ve recently read these two introductory Clojure books and thought that reviewing them might be helpful to anyone with a background similar to mine, that is, someone new to Clojure but not new to Lisp.

I wish I had found Rich Hickey’s talk “Clojure for Lisp Programmers” before buying either of these books. I think that both of them are decent books which present a startlingly fresh and exciting language in a rather plain and unexciting way. Neither of them is the kind of quirky Lisp book I apparently really like.

For programmers with a decent understanding of another Lisp, I would strongly recommend seeking other introductions to Clojure. If you know any Lisp, skip the books and watch Rich Hickey’s talk. I think it’s a much, much better introduction. Not only is it more concise but it reaches deep into what sets Clojure apart from Common Lisp and Scheme. I would imagine his talk targeted at Java programmers is probably also great.

Hickey comes across as very smart and a bit opinionated, but what’s great is that he’s also very convincing. There is a room full of Common Lisp and Scheme people asking interesting questions in that video (unfortunately some of the questions are unintelligible, and the transcript doesn’t help). Watching that talk got me much more excited about Clojure than reading these books did. And I actually understood some of the design decisions in Clojure which had previously seemed a bit odd.

Michael Fogus and Chris Houser’s The Joy of Clojure might be the book I really wanted. Although it’s targeted at programmers new to Lisp, from glancing over the table of contents it looks like it covers much more than either of these books, so I may end up reading that too. In the meantime, I’m going to start actually writing some Clojure:

user=> (load-file "goblinfort.clj")
#'user/make-name
user=> (doseq [i (take 10 (repeatedly (fn [] (make-fight-sentence (make-name 2) (make-name 3)))))] (println i))
Vuxqu lacerated Qafumes with a rough knife
Aqnup crushed Coirvol
Gelna cut off Daqikzub's leg
Fuzov poked Gomluuj
Zizbo poked Kakenok's finger with a sharp mace
Pinub lacerated Jisojve's finger with the dull mace
Eyic slashed Ofqeem
Viwwe smashed Udwuluc's leg with the rough mace
Movis chopped off Erreob's toe in a rough club
Zear tore off Iwzevzaw's arm
nil
user=>

Goblins!

book reviews, Lisp 101: ANSI Common Lisp, Practical Common Lisp, and Land of Lisp

I’m going to start posting reviews of books that I read in 2010, starting with three introductions to Common Lisp. These three books are all quite good, but a little bit different from each other, so I thought reviewing them together might be helpful if someone is on the fence about buying one or all of them.

Paul Graham’s ANSI Common Lisp

I was interested to see how Paul Graham, surely one of Lisp’s most ardent champions, would present the language for beginners. Like his essays, the prose of ANSI Common Lisp is clear and very well written. The programming examples are short, concise, and easy to follow, and they’re engaging too: a sentence/poem generator named “Henley”, a simple ray-tracer, an object-oriented framework, and a logic language stick out in my memory.

But I don’t think it’s necessarily the best introduction to Lisp in 2011, especially given the price of a new copy on Amazon. I found a cheap used copy, and that’s why I bought it. I wasn’t really disappointed, because I’m happy to make room for it on my shelf, but if you are new to Lisp, my recommendation would be to check out Practical Common Lisp or Land of Lisp, and after reading one of those, peruse the source for ANSI Common Lisp from Graham’s site.

His much more advanced macrology book, On Lisp, is better known, I think, and has fewer peers than this introductory text. It’s even more expensive new (or used) due to its limited printing, but Graham has graciously provided the text for free on his site. I haven’t read it yet, but it’s high on my list of Lisp books to read.

Peter Seibel’s Practical Common Lisp

If you’re a serious, professional programmer who wants to learn Lisp, and doesn’t want to muck around with games (see Land of Lisp below) or with relatively academic examples, then this is probably the book you want. Peter Seibel’s book has a nice conversational tone and he doesn’t waste much time evangelizing. I think a blurb on the back of his other book, Coders at Work (review coming soon), says something like “Seibel asks the sort of questions only a fellow programmer would,” and I think that’s kind of true of this book too: it’s definitely written as one programmer to another. He compares certain features to Python, Java, C++ and a few other languages, but doesn’t dwell on disparaging them too much, instead relying on Common Lisp features to stand on their own.

The examples in Practical Common Lisp are definitely practical, and could even be called a bit dry. This isn’t bad at all, and might even be more to some people’s taste. If concrete, “business-like” examples appeal to you more, this book would be a better read than Land of Lisp. If you get excited by more academic examples and prefer a lighter tone in programming books, Land of Lisp might be a better bet. Or just read them both.

An impressive feat is that Seibel presents macros as early as possible (in the third chapter) and demonstrates why Lisp’s macros are unique. I think Paul Graham has said that he tried to race to the macro chapter in ANSI Common Lisp as quickly as possible, but here Seibel has beaten him to it, with a clever example that I thought was pretty easy to follow, even though it was my first exposure to “true” macros. Additionally, this chapter almost stands on its own, so you can forward a link to the online version (the whole book is up on Seibel’s site for free) to your coworkers who want to know what the fuss over Lisp macros is about but aren’t ready to dive in deep themselves.

Dr. Conrad Barski’s Land of Lisp

I’d read Dr. Barski’s online mini-tutorial “Casting SPELs in Lisp” a little while ago, so when I saw that he had finished Land of Lisp, and saw what an absolutely wonderful, whimsical music video he had produced for it, I bought a PDF copy immediately.

Using games to explore Common Lisp (or any programming language) is a pretty good idea, because games engage a wide variety of programming problems. I think reasonably motivated high schoolers could probably get through most of Land of Lisp, and I sort of wish it had been written when I was in high school. That being said, this book does have plenty for “grownups”. One of the cooler examples is using lazy evaluation to improve the efficiency of searching a game tree for a computer opponent’s best move. Barski also presents an SVG-based web interface to this game, and a simple HTTP server written using a socket library, getting into low-level details of web programming, which isn’t something you usually encounter in an introductory text for any programming language.

If you read and enjoyed “Casting SPELs in Lisp” or watched the music video on the Land of Lisp site, and you had a big smile on your face, you’ll probably like this book. If you hate fun, stick with Practical Common Lisp.

I think any of these books is a fine first Lisp book, but there’s nothing stopping you from reading all of them. (I did, after all.) This year I plan to dive into some of the big, epic Common Lisp books, so I expect I’ll have a second round of Lisp reviews in a year, maybe a “Lisp 201” to follow this.

How to build BlackBerry applications with Eclipse on Mac OS X

The other day a pretty scathing critique of the current state of BlackBerry app development was submitted to HN. While the post (from developers at a company called Atomic Object) was absolutely spot on, more interesting to me was a link to one of their older posts describing their development setup on OS X. I had messed around with this a little bit before, trying to get Eclipse and Ant to properly compile BlackBerry apps on OS X, but without success. I didn’t try hard enough apparently.

The post from the Atomic Object team is pretty detailed in explaining how to do this, using IntelliJ and Parallels. The fundamentals are not specific to those tools though, so I followed along and have adapted it to Eclipse and VirtualBox. This gets filed under inane notes about development environments, but as I said when I posted about Emacs and Lisp on OS X, this kind of thing has benefited me in the past, maybe it will help someone else.

  1. Install Eclipse (I am using 3.5, but it may not matter) and VirtualBox.
  2. Create a new VM and install Windows (tested with XP SP3). This would probably work using VMWare Fusion or Parallels too.
  3. On the VM, install Java 6 and the version of the BlackBerry JDE that matches your target OS.
  4. In OS X, download bb-ant-tools.jar and move it to ~/.ant/lib.
  5. Get an OS X version of preverify, which is included in the Sun J2ME SDK 3.0 for OS X. Install it and either copy /Applications/Java_ME_SDK_3.0.app/Contents/Resources/bin/preverify to somewhere in your PATH or just add that directory to your PATH.
  6. If you don’t have one already, create the file ~/.MacOSX/environment.plist.
  7. Edit this file with /Developer/Applications/Utilities/Property List Editor.app.
  8. Create a new variable called PATH and set it to the value of your shell PATH, making sure that the directory containing preverify is included. This allows Ant, via Eclipse, to see the preverify command when Eclipse is launched from Eclipse.app and not from the command line. See this for more details. (A shell shortcut for steps 6-8 appears after this list.)
  9. In OS X, create a directory for the BlackBerry components (something like “bb-components”).
  10. From the BlackBerry JDE installation in the VM, copy both “lib” and “bin” directories to this directory.
  11. In Eclipse, create a new Java project.
  12. Choose “Use an execution environment JRE:” and select Java 1.3.
  13. Right click the project in the “Package Explorer” and select “Build Path” and then “Configure Build Path.”
  14. Add bb-components/lib/net_rim_api.jar as an “External JAR.”
  15. Remove the “JRE System Library.” This is so that only BlackBerry supported classes will be offered via autocompletion etc.
  16. Copy the attached minimal build.xml into the project.
  17. Edit the build.xml to suit your environment (specifically the jde.home property) and anything else you want to customise.
  18. Right click and select “Run as” and then “Ant Build” (the first one). You can also build using Ant on the command line, of course.
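
Incidentally, steps 6 through 8 can be collapsed into one shell command, which I believe writes the same plist (log out and back in for it to take effect):

$ defaults write $HOME/.MacOSX/environment PATH "$PATH"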

Now you should be able to build BlackBerry apps in Eclipse without too much fuss. Testing on the simulator requires you to copy the files to the VM, which can be done by creating a shared folder and copying them there, or by connecting to the VM using ssh and scp. I use the former, because it’s a little easier to set up. The simulator is not perfect though, and while it runs in VirtualBox, you should really be testing on a device anyway. You can use JavaLoader.exe via VirtualBox to deploy to the device.

I had to jump through some hoops to get a BlackBerry connected via USB to work in VirtualBox. What worked for me was to install the BlackBerry Desktop Software for Mac and make sure that it was able to sync the devices I use for testing. Then I was able to enable it in VirtualBox under “Devices” and “USB Devices.”

What Git gets right, pt. 1, “stash” and “add --patch”

Earlier this year I started using Git for all my personal projects, only after giving Mercurial a try first. Joel Spolsky’s hginit tutorial and the beginning of Bryan O’Sullivan’s book Mercurial: The Definitive Guide brought me to the light side (distributed version control). In the end I picked Git over Mercurial because of Github (its very sweet UI won me over), deployment to Heroku, and because (as far as I can tell) Git is a little bit more popular. I’m sure Mercurial is great too, and also puts CVS and SVN to shame, but I really don’t have time in my life for two cutting-edge version control systems, at least not right now.

Git (and distributed version control in general) is great for personal projects for a few reasons. Some people think it’s overpowered, but I think they’re wrong (actually, in defense of Mike Taylor, I think he also saw the light). I think distributed systems are way better for small, one-person projects since you don’t need any of the overhead of setting up a server, even if it’s just a server program running on a local machine. To start using Git, all you need to do is install it and then run git init in your project’s directory. I’ve set up CVS locally and it’s a horrible huge pain in the ass in comparison. You don’t want the tool to get in the way and encourage bad practice (like not using version control at all!).

Setting it up is a one-time cost of course, so if you have CVS or SVN already set up, you might not feel that compelled to switch to Git. However, there are loads of other things about Git which make it awesome. I’m going to start making a note of these things here as I come across them, because writing about them will help me learn Git and also help me remember them and their use cases.

Tonight I was hacking on some of the exercises from the metacircular evaluator in SICP, and had a pernicious bug due to some changes I had recently made to implement internal definitions. I had my implementation of this mostly completed, but weird things were breaking, unrelated to internal defines. I wanted to go back to an older version, but I didn’t want to commit the changes I had made, since they were obviously broken, but I also didn’t want to throw them out.

I remembered reading about git stash so I looked it up, and indeed that’s what I wanted; the description in the manual pretty much sums it up: “Stash the changes in a dirty working directory away.” Very sweet. You can put aside the changes in your working directory and come back to them later.
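
For reference, the basic moves are:

$ git stash            # squirrel away all the uncommitted changes
$ git stash list       # see what you’ve stashed
$ git stash apply      # bring the most recent stash back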

Now that I had the previous version, I could see if it still worked. It did, but after playing around with it a little bit, I found another bug. The metacircular evaluator code from the SICP site is all in one file, which is annoying and should probably be fixed to make hacking on it easier. But I would like to move on to the lazy evaluator, logic programming, and the register machine some day this year, so I’m not spending a whole lot of time making the code really extensible; I’m just playing around and solving as many of the exercises as I can.

My point is, after I fixed the bug in this one, giant file, there were a ton of other changes in it: a bunch of crap I had just messed with and even some debugging printfs. I didn’t want to commit that stuff. I needed to somehow tease the few good lines out of a big file with many more “bad” lines changed.

This time, I wasn’t too sure, but I also had a little deja vu. I had bookmarked Ryan Tomayko’s post from 2008 a little while ago and remembered something about reordering commits after the fact. This wasn’t exactly what I wanted to do, but it was close, and sure enough, his post describes a totally killer feature of Git (apparently present in other modern systems like Mercurial, Darcs and Bazaar): git add --patch. This gives you an interactive dialog which allows you to pick and choose which hunks get added to the index from a given file. It even lets you break a hunk up if need be. It’s totally awesome.
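
Here’s the shape of the interaction, with a hypothetical file name (the exact option list varies by Git version):

$ git add --patch evaluator.scm
# for each hunk, Git prompts something like:
#   Stage this hunk [y,n,q,a,d,/,s,e,?]?
# y stages the hunk, n skips it, s splits it into smaller hunks,
# and e lets you edit the hunk by hand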

So there are the first two little bits of Git-fu that I’ve stumbled across. There is a certain joy in finding a tool which is powerful and excellently designed, and I think Git falls in this category. I’m really looking forward to learning more and I’ll be posting notes on what else Git gets right when I do.