Spring Is Here, It's Time to Plant

Once again life triumphs over the cold death of winter. Spring at last. Every time we traverse the Saṃsāric Wheel of the Seasons I hold my breath as the days begin to grow long again. Will the leaves come back? Every year without fail they do. And I sigh a little bit in relief each time.

This January I dug out a few plots in our weirdly shaped, unused yard, which was full of not-quite-weeds, not-quite-grass: three 4' x 15' holes about 8 inches deep in the Texas Blackland Prairie clay. Over the last couple of weekends we planted some peppers, tomatoes, tomatillos, peas, lettuce, cucumbers, and melons.

Anatomy of a Function Call

Lately I've been having fun with Clang's (and GCC's) -S option, examining the assembly output of small programs written in C. I've found this is a really great way to learn both what your compiler is doing with your code and how to write assembly code on your own. One of the more interesting things I've learned from doing this is just how function calls are made on UNIX-like systems (Linux, BSD, macOS, Solaris, etc.) running on x86-64 hardware. When using a high(er) level language like C it's really not something I ever think about; the compiler takes care of the details. But what exactly happens when you call a function?

The answer is actually fairly complicated and depends on several factors including the computer's architecture, the operating system, the language, the compiler, and the number and nature of the arguments being passed to the function. For simplicity's sake, in this post I'm going to make a few assumptions: we're on an x86-64 machine, running macOS, using C, compiling with Clang or GCC, and only passing 64-bit integer arguments. Given all that, what we'll be looking at is the System V AMD64 ABI calling convention. Under this convention the first six integer arguments are passed via the rdi, rsi, rdx, rcx, r8, and r9 registers, in that order. Any additional integer arguments are passed on the stack, pushed in reverse (right-to-left) order. Floating point arguments are a little different and are beyond the scope of this post.

Sometimes the best way to really get a feel for how something works in general is to look at several specific examples that differ only slightly. In the case of function calling conventions one important aspect that can easily be changed is the number of arguments being passed. The following scenarios present a simple and contrived C program and its corresponding assembly output. Each example calls a function with more arguments than the last.

It's important to note that the output presented here is being created without any optimizations. Compiler optimizations tend to produce much faster code at the expense of understandability. Oftentimes the resulting optimized assembly code bears little resemblance to the structure of the original C code. Of course, it's a good idea to try looking at the output produced with optimizations, but for the purposes of this post it would only confuse things. Because no optimization is being done the compiler is more or less forced to make naive assumptions about the state of the call stack. This can result in verbose assembly output that appears to serve little or no purpose in these or similar contrived programs.

Each example program below has been compiled with both Clang (1000.11.45.5) and GCC (9.2.0) using the following flags: -O0 -S -fno-asynchronous-unwind-tables. I'll go over the Clang output and then the GCC output individually before comparing the two.

One Argument

When only one argument is passed to the function only the rdi register is used. It's about as straightforward as you can hope for. The program below takes an integer and passes it to a function called doubleNumber, which uses a left shift to double the argument before returning the result.

Example 1: Original C Code

#include <stdint.h>

uint64_t doubleNumber(uint64_t a) {
  return a << 1;
}

int main(int argc, char* argv[]) {
  uint64_t a = 1;

  doubleNumber(a);

  return 0;
}

Note that the result is not printed after being calculated as it turns out that printf, and variadic functions in general, use a slightly different calling convention that is beyond the scope of this post. Additionally, if you ran this code through a linter (such as Splint) it would warn you about the return value of doubleNumber being unused. This is intentional, but we'll see how the System V AMD64 ABI handles integer return values in a moment even if nothing in this post actually uses them.

Example 1: Assembly Output (Clang)

	.section	__TEXT,__text,regular,pure_instructions
	.macosx_version_min 10, 13
	.globl	_doubleNumber           ## -- Begin function doubleNumber
	.p2align	4, 0x90
_doubleNumber:                          ## @doubleNumber
## %bb.0:
	pushq	%rbp
	movq	%rsp, %rbp
	movq	%rdi, -8(%rbp)
	movq	-8(%rbp), %rdi
	shlq	$1, %rdi
	movq	%rdi, %rax
	popq	%rbp
	retq
                                        ## -- End function
	.globl	_main                   ## -- Begin function main
	.p2align	4, 0x90
_main:                                  ## @main
## %bb.0:
	pushq	%rbp
	movq	%rsp, %rbp
	subq	$32, %rsp
	movl	$0, -4(%rbp)
	movl	%edi, -8(%rbp)
	movq	%rsi, -16(%rbp)
	movq	$1, -24(%rbp)
	movq	-24(%rbp), %rdi
	callq	_doubleNumber
	xorl	%ecx, %ecx
	movq	%rax, -32(%rbp)         ## 8-byte Spill
	movl	%ecx, %eax
	addq	$32, %rsp
	popq	%rbp
	retq
                                        ## -- End function

.subsections_via_symbols

The key part of the above assembly output is the following three instructions:

	movq    $1, -24(%rbp)
	movq    -24(%rbp), %rdi
	callq   _doubleNumber

The first instruction moves our immediate argument, 1, into the current stack frame (24 bytes below where rbp points), presumably for safekeeping. The second moves the argument into rdi for the call to _doubleNumber. And the third actually calls the function. In this particular case storing the argument on the stack is not necessary, and in fact if you change the above to just be

	movq    $1, %rdi
	callq   _doubleNumber

it will work just fine. I'm fairly certain the compiler does this because without further analysis (the kind it does when making optimizations) it can't know for sure whether or not the original value will be needed later.

Let's take a closer look inside _doubleNumber:

	movq    %rdi, -8(%rbp)
	movq    -8(%rbp), %rdi
	shlq    $1, %rdi
	movq    %rdi, %rax

We can see some similar business with the stack is going on before doing a logical left shift on rdi and moving rdi into rax as the function return value. Like in the main function, all that stack manipulation isn't really necessary.

Example 1: Assembly Output (GCC)

	.text
	.globl _doubleNumber
_doubleNumber:
	pushq	%rbp
	movq	%rsp, %rbp
	movq	%rdi, -8(%rbp)
	movq	-8(%rbp), %rax
	addq	%rax, %rax
	popq	%rbp
	ret
	.globl _main
_main:
	pushq	%rbp
	movq	%rsp, %rbp
	subq	$32, %rsp
	movl	%edi, -20(%rbp)
	movq	%rsi, -32(%rbp)
	movq	$1, -8(%rbp)
	movq	-8(%rbp), %rax
	movq	%rax, %rdi
	call	_doubleNumber
	movl	$0, %eax
	leave
	ret
        .ident	"GCC: (Homebrew GCC 9.2.0) 9.2.0"
	.subsections_via_symbols

Inside the main function there are four instructions worth looking at:

	movq    $1, -8(%rbp)
	movq    -8(%rbp), %rax
	movq    %rax, %rdi
	call    _doubleNumber

This moves the immediate value 1 onto the stack (8 bytes below where rbp points) just in case it is needed later, takes that value on the stack and moves it into rax, moves rax into rdi, and then finally calls _doubleNumber. All of the funny business with placing the argument on the stack and moving it into rax is unnecessary but should come as no surprise considering the lack of optimization.

GCC's output for doubleNumber, however, is somewhat unexpected:

	movq    %rdi, -8(%rbp)
	movq    -8(%rbp), %rax
	addq    %rax, %rax

First, the argument in rdi is moved onto the stack and from there it's moved into the return value register rax. As we've seen before, using the stack isn't necessary here. The reason I say this output is unexpected is that -O0 was used, which should eliminate all optimizations (at least that's how I understand it). Despite the left shift in the C code, we can see that GCC instead simply adds the contents of rax to itself. This is functionally equivalent and sets things up nicely for the function return since the final computed value is already in rax.

Differences between Clang and GCC

In this first example the output of Clang and GCC is largely the same. There are some superficial differences in where they push values on the stack before calling _doubleNumber, but the real surprise is the use of addq over shlq.

How Did That Get There?

I.

Earlier this week a co-worker of mine was working on some old code for running reports that had been written by someone who had long since departed. Every time he ran a report of a certain type he would always get wildly incorrect results. He had isolated the problem to a call to one particular method. RubyMine, his editor of choice, wasn't being very helpful in revealing the definition of the method. Exasperated, he said something to the effect of "How am I supposed to know where this method is defined?" I was more than a bit excited that I was able to tell him I knew of a way: Method#source_location.

Method#source_location returns the file name and line number where a given method is defined, or nil if it's a natively compiled method (e.g. part of the standard library). In order to actually use Method#source_location one must first have a method object. Fortunately this is pretty easy; the Object#method method will return a Method object when called on an instance. If all that is available is a Class or Module then Module#instance_method can be used to get an UnboundMethod object. Either will work.

Here is an example of Method#source_location when called on a method that is defined in native code as part of the standard library.

2.4.1 :001 > x = 100
 => 100
2.4.1 :002 > x.method(:to_s)
 => #<Method: Integer#to_s>
2.4.1 :003 > x.method(:to_s).source_location
 => nil

In an irb session a method's source location won't have a real file name (it reports "(irb)" instead), but the result won't be nil.

2.4.1 :004 > class Hello
2.4.1 :005?>   def hi
2.4.1 :006?>     puts 'hi!'
2.4.1 :007?>   end
2.4.1 :008?> end
 => :hi
2.4.1 :009 > Hello.new.method(:hi)
 => #<Method: Hello#hi>
2.4.1 :010 > Hello.new.method(:hi).source_location
 => ["(irb)", 5]

Here is an example of UnboundMethod#source_location.

2.4.1 :011 > Hello.instance_method(:hi)
 => #<UnboundMethod: Hello#hi>
2.4.1 :012 > Hello.instance_method(:hi).source_location
 => ["(irb)", 5]

If given a file named "goodbye.rb" with the following contents...

class Goodbye
  def bye
    puts 'bye!'
  end
end

...then here is an example of Method#source_location for a method defined in a file.

2.4.1 :013 > require_relative 'goodbye'
 => true
2.4.1 :014 > Goodbye.new.method(:bye)
 => #<Method: Goodbye#bye>
2.4.1 :015 > Goodbye.new.method(:bye).source_location
 => ["/home/sean/goodbye.rb", 2]

And once again the UnboundMethod#source_location version.

2.4.1 :016 > Goodbye.instance_method(:bye)
 => #<UnboundMethod: Goodbye#bye>
2.4.1 :017 > Goodbye.instance_method(:bye).source_location
 => ["/home/sean/goodbye.rb", 2]

II.

Using this technique my co-worker was able to quickly identify where the method in question was defined. Case closed. Well, not quite. It turned out the method's source location raised more questions than it answered. The method he was looking for was in a related but different class from the one he was expecting. This seemed suspicious, so I suggested he try inspecting the inheritance chain of the object the method was called on using the Module#ancestors method.

Module#ancestors returned nothing out of the ordinary at first glance.

[ReportA, ReportBase, Object, ..., Kernel, BasicObject]

What was confusing though was that the source location for the method was inside ReportB, not ReportA. So just how was that happening? After staring at the ReportA class for a minute I realized that it didn't inherit from ReportBase; instead, it included it. I had a hunch, so I suggested we take a look at the ReportBase module.

Below is a minimal reproduction of the behavior.

module ReportBase
  def self.included(base)
    helpers = if base.const_defined?(:Helpers)
                base::Helpers.extend(Helpers)
              else
                Helpers
              end

    base.const_set(:Helpers, helpers)
  end

  module Helpers
    def greet
      puts 'greetings from ReportBase::Helpers#greet'
    end
  end
end

class ReportA
  include ReportBase

  module Helpers
    def greet
      puts 'greetings from ReportA::Helpers#greet'
    end
  end
end

class ReportB
  include ReportBase

  module Helpers
    def greet
      puts 'greetings from ReportB::Helpers#greet'
    end
  end
end

class ReportC
  include ReportBase
end

ReportA, ReportB, and ReportC are all pretty simple. All three include ReportBase. ReportA and ReportB both have a submodule named Helpers which defines a method named greet.

Where things start to get a little strange is inside ReportBase. The first thing to take note of is that Module#included is overridden. Module#included is a callback which is called whenever the module is included in another module or class. This allows for performing some specified action upon inclusion.
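
As a minimal, self-contained illustration of the hook (with made-up names, separate from the reporting code):

```ruby
# Minimal illustration of the Module#included hook: when Greeting is
# included, Ruby calls Greeting.included(base) with the including class,
# giving the module a chance to act at inclusion time.
module Greeting
  def self.included(base)
    puts "Greeting was included in #{base}"
  end

  def hello
    'hello!'
  end
end

class Host
  include Greeting # prints "Greeting was included in Host"
end
```

The callback fires exactly once, at include time; afterwards Host.new.hello works as usual.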

In the case of ReportBase the callback first checks whether a Helpers constant is visible on the including class/module (note that const_defined? also searches ancestors, which at this point include ReportBase itself). If it is, the module that base::Helpers resolves to is extended with ReportBase::Helpers and assigned to helpers; otherwise ReportBase::Helpers itself is assigned to helpers. Finally, the including class/module's Helpers constant is set to helpers.

The end result of this is that if the including class/module has its own Helpers submodule then it is effectively merged with ReportBase::Helpers, which then overwrites the Helpers submodule in the including class. This happens every time a class or module includes ReportBase. Because Module#const_set points the Helpers constant at the existing Module object rather than creating a new copy, ReportBase::Helpers ends up polluted with the Helpers submodule of every class or module that includes ReportBase. Worse, it also pollutes the Helpers submodule of each including class or module!

Below demonstrates the sort of frustration my co-worker was experiencing because of this.

2.4.1 :018 > require_relative 'report_test'
 => true
2.4.1 :019 > ReportA::Helpers.greet
greetings from ReportB::Helpers#greet
 => nil
2.4.1 :020 > ReportB::Helpers.greet
greetings from ReportB::Helpers#greet
 => nil
2.4.1 :021 > ReportC::Helpers.greet
greetings from ReportB::Helpers#greet
 => nil

At the risk of being hyperbolic: this behavior is awful. Truly, maddeningly, awful. Please do not write code like this!

In retrospect, after careful dissection, this code makes perfect sense. However, at a glance, the actual behavior is surprising. If all I could look at was the definitions of ReportA, ReportB, and ReportC it would take me ages to divine what is actually happening. And even with the source for ReportBase it still wasn't obvious what the source of the behavior was until I spent several minutes parsing through it in my head and writing a minimal reproduction similar to the one presented here.

I think what the author of the code was trying to do was make it so helper methods from one report class were available in all report classes. That sounds like it might be useful, but the way it was done clobbers the namespacing that the actual structure of the code appears to have. Rails helpers actually have very similar behavior. I suspect that's where the idea for this reporting code was taken from.

This is definitely a case where the code was a little too magical. The most impactful change that could be made to the code would be to make inclusion of the helper modules explicit. Rather than automatically extend all the helper modules into one module, each report class could instead explicitly include any helpers. The urge to be clever and creative when writing code for an unexciting task like generating reports can be great. You're better off resisting that urge and instead keeping things explicit and unsurprising. Your co-workers and your future self will thank you for it.
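
To sketch what that explicit style might look like (hypothetical names; this is a rework, not the original code), each report's Helpers opts in to shared methods itself and no module is mutated behind the scenes:

```ruby
# Hypothetical rework: shared helpers live in one plain module, and each
# report's Helpers opts in explicitly via extend. No included hook, no
# const_set, and no module object is shared between reports.
module SharedHelpers
  def greet
    'greetings from SharedHelpers#greet'
  end
end

class ReportA
  module Helpers
    extend SharedHelpers # Helpers.greet now comes from SharedHelpers

    def self.greet # a report can still override locally
      'greetings from ReportA::Helpers#greet'
    end
  end
end

class ReportB
  module Helpers
    extend SharedHelpers
  end
end
```

Now ReportA::Helpers.greet and ReportB::Helpers.greet each answer for themselves, and defining a helper in one report can't leak into another.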

Compacting an Association List in Scheme

I must admit, I went way too long not knowing about association lists in Scheme. There's really nothing particularly special about them; they're just a list of pairs. The assq, assv, and assoc functions are what make them useful. These functions find the first pair in the list whose first value matches what is being searched for (each using a different function to test for equality). This means updating an association list is as simple as using cons to prepend a new pair, even if a pair with the same "key" (the first element of the pair) already exists.

While prepending and reading are trivial with association lists, things get slightly trickier when you need to come back, traverse the list, and use the stored value of each pair. Take the following association list: (define my-list '((100 3) (100 2) (100 1) (200 2) (200 1))). If you're building a frequency count of elements in another list you might end up with a structure not unlike this one. Each time you count a new occurrence of a number you just cons a new (value count) pair onto the association list. This means that operations like (assv '100 my-list) will return (100 3) and (assv '200 my-list) will return (200 2) as expected. But what about when we want to reduce the association list and only work with the "final" value for a given key? Naive attempts to use reduce on the association list will give you potentially very incorrect results depending on how many times you have "overwritten" a pair.

An easy way to overcome this problem without resorting to proper hash tables is compacting the association list:

(define (compact-alist alist)
  (fold-left (lambda (result pair)
    (if (assv (car pair) result)
      result
      (cons (assv (car pair) alist) result)))
    '() alist))

This probably isn't the prettiest way to go about it, but at least it's clear what's going on. The association list is reduced with fold-left using a lambda that returns the accumulator result as-is if the key has already been added to the result ((car pair) is necessary because assv looks at the first element of each pair). If the key wasn't found then the first matching pair from the original association list is consed onto the result.

The result of (compact-alist my-list) would be ((100 3) (200 2)), which in some situations is much more useful and, as far as assq, assv, and assoc are concerned, is effectively the same as the non-compacted association list.
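
As an aside, Ruby's Array#assoc behaves like Scheme's assoc on an array of pairs, so both the prepend-to-update trick and the compaction can be sketched in Ruby (using the same frequency-count list as above):

```ruby
# The same association-list idea in Ruby: Array#assoc returns the first
# pair whose first element matches, so freshly prepended pairs shadow
# older ones exactly as in Scheme.
my_list = [[100, 3], [100, 2], [100, 1], [200, 2], [200, 1]]

my_list.assoc(100) # => [100, 3]
my_list.assoc(200) # => [200, 2]

# Compacting keeps only the first (most recent) pair per key, mirroring
# what compact-alist does for the Scheme list.
compacted = my_list.uniq { |key, _count| key }
compacted # => [[100, 3], [200, 2]]
```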

RIP Marvin Minsky

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

"What are you doing?", asked Minsky.

"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.

"Why is the net wired randomly?", asked Minsky.

"I do not want it to have any preconceptions of how to play", Sussman said.

Minsky then shut his eyes.

"Why do you close your eyes?" Sussman asked his teacher.

"So that the room will be empty."

At that moment, Sussman was enlightened.

Creating an OpenGL 4.1 program with GLEW and GLFW in Xcode

This post was written with OS X 10.10 and Xcode 6.4 in mind. With OS X 10.11 and Xcode 7 just around the corner (as of the time this was written) it's safe to say some of the details in this post will become quickly outdated. If I learn that this is the case I will try and update this post. But if you're reading this in 2021 or something don't expect any of this to be accurate.

A few days ago I set out to create a simple OpenGL program on OS X (10.10.4 to be exact) without using a wrapper library like SDL. It turns out there's lots of information out there on how to go about this. But, I never came across anything that had complete start to finish instructions for the latest versions of OS X, Xcode and OpenGL. By "start to finish" I mean from creating a new project all the way up to creating a .app that can run on another person's computer without them needing to install any dependencies. Every blog post or tutorial I came across had at least one frustrating gap somewhere in it where the author had (most likely unintentionally) assumed either the reader knew what they were doing or that the step(s) in question were easy to figure out without lots of detail. I don't know about you, but I have no idea what I'm doing and I'm real dumb, so figuring new things out ain't easy. My goal here is to document the process in as much detail as possible so you can't mess it up. There are definitely other, probably better, ways to do this; but this way works pretty well for me. I'm going to try and assume that you have very little knowledge of any of the tools involved and that your goal is the same as mine: start from scratch and end up with something you can send to a friend. I will however assume that you're no stranger to C++ or git since they are way outside of the scope of this post.

Before getting started you'll want to acquire the latest version of Xcode (6.4 (6E35b) as of 7/28/15) and Homebrew. Xcode can be downloaded through the App Store. Detailed instructions for installing Homebrew can be found here but you can probably just use the instructions on the home page. Homebrew doesn't really have versions as far as I know, just run brew update to make sure you're up to date. You'll also want to make sure you've got the Xcode Command Line Tools Package installed; run xcode-select --install and you'll be good to go.

Now that you've got Xcode and Homebrew you're ready to install the two libraries you're going to use to make working with OpenGL a less shitty experience: GLEW and GLFW. The somewhat ironic thing is that getting GLEW and GLFW set up in an Xcode project is itself a real shitty experience.

A quick aside: the last time I touched OpenGL was probably around 2003, maybe early 2004. At the time I was on Windows XP and wasn't using either GLEW or GLFW (or GLUT or any similar library). Instead I used CreateWindow and wglCreateContext (and a shitload of other boilerplate) to create a window and rendering context, switch statements that approached several hundred lines inside a good old fashioned WindowProc for handling input, and I don't think I even had a graphics card capable of using anything past OpenGL 1.3 so I definitely wasn't bothering with GLEW.

All that is to say, when I started reading up on OpenGL again I was kind of confused by the purpose of these libraries. If you've actually been keeping up with OpenGL for the last eleven or twelve years then they're probably not so mysterious. But if you're way out of the loop or totally new to this I think I can shed some light on them and hopefully explain in a convincing way why you'd want to use them. You can skip ahead if this is all familiar territory for you. It wasn't for me so I'm going to provide a brief synopsis of each.

Let's start with GLEW. OpenGL, like most standards and specifications, is an ongoing and living thing. Since it is fundamentally tied to its implementation and the underlying graphics hardware, care has been taken to introduce new features in a controlled manner. This is done through extensions, which allow hardware vendors to implement and expose particular features independently. One fairly large downside to the way extensions are implemented is that each extension function's location has to be determined at run time. For this, calls to dlopen and dlsym are necessary (or if you're on Windows: wglGetProcAddress). Doing this for every extension function you intend to call is obviously less than ideal. GLEW takes care of this for you and makes calling OpenGL functions a seamless process. You might wonder, "Is this really necessary?" The answer is no, but boilerplate code is no fun to write.

Like GLEW, GLFW serves the purpose of reducing boilerplate code to a minimum. However the type of boilerplate it focuses on is entirely different. Before you can actually start using OpenGL you need a rendering context. To get a rendering context you need a surface or window to render to. It should go without saying that creating a window is extremely platform dependent. Even when you're targeting just one platform the process can be laborious. GLFW's glfwCreateWindow makes the whole process nice and easy. As an added bonus GLFW also normalizes the process of dealing with multiple monitors and handling input. I'm usually a big believer in DIY, but unless you've got a really good reason you should probably let GLFW or a similar library handle the details of this kind of highly platform specific initialization. If you're curious you will definitely learn something useful by creating windows and handling input without help, but either can be a very deep rabbit hole if you're not careful.

The Equivalence of MIN and AND and MAX and OR in K

Maybe you've heard of APL before? It's a now-somewhat-esoteric language from the mid-1960s that makes heavy use of unusual symbols and vector math. It's always been a sort of unapproachable curiosity for me. I have no problem learning weird new syntax, but a language that requires a special keyboard is a bit much! A few weeks ago I discovered that the spirit of APL lives on in languages such as J and K, both of which have shed the need for a non-standard input device.

I decided to give K a try. Fortunately there's a free implementation. After playing around for a while I ended up reading through the help output to see what the & verb does in its monadic form. What caught my attention though was something I hadn't been looking for: the description for the dyadic form.

& dyadic   min/and. MIN(x,y) or AND(x,y)

min or and. Now that's something I've never seen before. In retrospect I'm not entirely sure how; I've been programming for a good while and had never seen min overloaded with and. After thinking about it for a minute I realized that it makes a good deal of sense. If you're working in a language where numeric 0 is a falsey value and you only really have (non-negative) numeric values (i.e. definitely not Ruby) then finding the minimum of two values x and y is equivalent to the boolean and of x and y. If either value is 0 then the result is 0 (falsey) but if both values are non-zero then the result is the non-zero minimum (truthy).

I was not surprised to find that K also overloads the dyadic | verb to mean both max and or:

| dyadic   max/or. MAX(x,y) or OR(x,y)

In a way similar to min, finding the max of two values when both are 0 returns 0 (falsey) but when either value is non-zero the result is always truthy regardless of what the maximum actually is.
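
The equivalence is easy to spot-check outside of K. Here's the idea in Ruby (where, as noted above, 0 is truthy, so the comparison against 0 has to be explicit, and the trick only holds for non-negative values):

```ruby
# For non-negative integers, treating 0 as false and anything else as
# true: min acts like AND and max acts like OR. (In Ruby 0 is truthy,
# so we have to compare against 0 ourselves.)
def truthy?(n)
  n != 0
end

[[0, 0], [0, 5], [3, 0], [3, 5]].each do |x, y|
  raise 'min is not AND' unless truthy?([x, y].min) == (truthy?(x) && truthy?(y))
  raise 'max is not OR'  unless truthy?([x, y].max) == (truthy?(x) || truthy?(y))
end
```

With negative operands the equivalences break down (for example [-1, 0].min is -1, which reads as true), which is why the boolean reading really applies to 0/1 or other non-negative data.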

Like just about everything I post here, this is totally obvious in hindsight. I'm only writing about it because despite having dealt with boolean logic for half my life now I've never seen this property of min and max pointed out explicitly before. It makes sense that a language like K, which has such a compact grammar, would overload those functions in such a way.

Finding All ActiveRecord Callbacks

Most of the time ActiveRecord Callbacks are pretty straightforward. But sometimes in larger projects or when using certain gems you can end up with more callbacks happening than you realize. If you're curious about just what is happening when on your model, there's no straightforward way that I'm aware of to find out. However, it's actually not too difficult to do yourself.

If you look at the methods available on an ActiveRecord model you'll find several related to callbacks. Here's what we find when inspecting a model that has a Paperclip attachment (you'll see why in a minute).

~/my_project% rails c
Loading development environment (Rails 4.2.0)
2.2.1 :001 > MyModel.methods.select { |method| method.to_s.include?('callback') }
 => [:_validate_callbacks,
 :_save_callbacks,
 :_destroy_callbacks,
 :_commit_callbacks,
 :_post_process_callbacks,
 :_post_process_callbacks?,
 :_post_process_callbacks=,
 :_file_post_process_callbacks,
 :_file_post_process_callbacks?,
 :_file_post_process_callbacks=,
 :_validate_callbacks?,
 :_validate_callbacks=,
 :_validation_callbacks,
 :_validation_callbacks?,
 :_validation_callbacks=,
 :_initialize_callbacks,
 :_initialize_callbacks?,
 :_initialize_callbacks=,
 :_find_callbacks,
 :_find_callbacks?,
 :_find_callbacks=,
 :_touch_callbacks,
 :_touch_callbacks?,
 :_touch_callbacks=,
 :_save_callbacks?,
 :_save_callbacks=,
 :_create_callbacks,
 :_create_callbacks?,
 :_create_callbacks=,
 :_update_callbacks,
 :_update_callbacks?,
 :_update_callbacks=,
 :_destroy_callbacks?,
 :_destroy_callbacks=,
 :_commit_callbacks?,
 :_commit_callbacks=,
 :_rollback_callbacks,
 :_rollback_callbacks?,
 :_rollback_callbacks=,
 :raise_in_transactional_callbacks,
 :raise_in_transactional_callbacks=,
 :define_paperclip_callbacks,
 :normalize_callback_params,
 :__update_callbacks,
 :set_callback,
 :skip_callback,
 :reset_callbacks,
 :define_callbacks,
 :get_callbacks,
 :set_callbacks,
 :define_model_callbacks]

That's a pretty lengthy list, and just by glancing at it we can see several methods like _initialize_callbacks= and skip_callback that aren't likely to be relevant to the problem at hand. The protected method get_callbacks looks promising, but if you look at the source:

def get_callbacks(name)
  send "_#{name}_callbacks"
end

it quickly becomes obvious that it wasn't meant to be used to get a comprehensive list of all the callbacks on a model. Instead it just gives us the callbacks related to one particular event. That's great, but what about when we don't know all of the events? I deliberately chose a model with a Paperclip attachment because Paperclip provides some of its own callback events. They could easily be missed if we assumed only the standard ActiveRecord callbacks were available. Without knowing otherwise beforehand that's a fair, but potentially incorrect, assumption.

From get_callbacks we can see that the methods it calls all take the form of "_#{name}_callbacks" where name is the name of the event. Well, a few methods in our list from before seem to match that pattern, so with a little help from a regular expression we can get just those:

2.2.1 :002 > MyModel.methods.select { |method| method.to_s =~ /^_{1}[^_].+_callbacks$/ }
 => [:_validate_callbacks,
 :_save_callbacks,
 :_destroy_callbacks,
 :_commit_callbacks,
 :_post_process_callbacks,
 :_file_post_process_callbacks,
 :_validation_callbacks,
 :_initialize_callbacks,
 :_find_callbacks,
 :_touch_callbacks,
 :_create_callbacks,
 :_update_callbacks,
 :_rollback_callbacks]

This is great, but still not quite what we want. Each of these methods returns an array-like CallbackChain object containing a set of Callback objects:

2.2.1 :003 > MyModel._save_callbacks
 => #<ActiveSupport::Callbacks::CallbackChain:0x007fbf7567e918
 @callbacks=nil,
 @chain=
  [#<ActiveSupport::Callbacks::Callback:0x007fbf7362c098
    @chain_config=
     {:scope=>[:kind, :name],
      :terminator=>
       #<Proc:0x007fbf73237cf8@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/activemodel-4.2.0/lib/active_model/callbacks.rb:106 (lambda)>,
      :skip_after_callbacks_if_terminated=>true},
    @filter=
     #<Proc:0x007fbf7362c390@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:91>,
    @if=
     [#<ActiveSupport::Callbacks::Conditionals::Value:0x007fbf7362c340
       @block=
        #<Proc:0x007fbf7362c2f0@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/activemodel-4.2.0/lib/active_model/callbacks.rb:141>>],
    @key=70230125666760,
    @kind=:after,
    @name=:save,
    @unless=[]>,
   #<ActiveSupport::Callbacks::Callback:0x007fbf75684ae8
    @chain_config=
     {:scope=>[:kind, :name],
      :terminator=>
       #<Proc:0x007fbf73237cf8@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/activemodel-4.2.0/lib/active_model/callbacks.rb:106 (lambda)>,
      :skip_after_callbacks_if_terminated=>true},
    @filter=:autosave_associated_records_for_document,
    @if=[],
    @key=:autosave_associated_records_for_document,
    @kind=:before,
    @name=:save,
    @unless=[]>,
   #<ActiveSupport::Callbacks::Callback:0x007fbf7567ea80
    @chain_config=
     {:scope=>[:kind, :name],
      :terminator=>
       #<Proc:0x007fbf73237cf8@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/activemodel-4.2.0/lib/active_model/callbacks.rb:106 (lambda)>,
      :skip_after_callbacks_if_terminated=>true},
    @filter=:autosave_associated_records_for_uploader,
    @if=[],
    @key=:autosave_associated_records_for_uploader,
    @kind=:before,
    @name=:save,
    @unless=[]>],
 @config=
  {:scope=>[:kind, :name],
   :terminator=>
    #<Proc:0x007fbf73237cf8@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/activemodel-4.2.0/lib/active_model/callbacks.rb:106 (lambda)>,
   :skip_after_callbacks_if_terminated=>true},
 @mutex=#<Mutex:0x007fbf7567e8c8>,
 @name=:save>

Each of these has an interesting method named raw_filter which returns either a method name Symbol or a Proc object. Let's see what we get when we inspect that for each of our model's save callbacks:

2.2.1 :004 > MyModel._save_callbacks.map { |callback| callback.raw_filter }
 => [#<Proc:0x007fbf7362c390@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:91>,
 :autosave_associated_records_for_document,
 :autosave_associated_records_for_uploader]

We get an array with a Proc and a couple of Symbols, which starts to give us a much better sense of what will happen when we save a model. There's one more important detail we've overlooked, though: each Callback object has a kind property that tells us whether the callback gets called before, after, or around the event. Let's group our callbacks by kind:

2.2.1 :005 > MyModel._save_callbacks.group_by(&:kind).each { |_, callbacks| callbacks.map! { |callback| callback.raw_filter } }
 => {:after=>
  [#<Proc:0x007fbf7362c390@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:91>],
 :before=>
  [:autosave_associated_records_for_document,
   :autosave_associated_records_for_uploader]}

Awesome! Finally, something that starts to give us real insight into what happens when. But we can still do better: what about all the callbacks? If we combine the regular expression filter of the class methods from before with the above, we get a complete picture for the whole model:

2.2.1 :006 > MyModel.methods.select { |method| method.to_s =~ /^_{1}[^_].+_callbacks$/ }.each_with_object({}) { |method, memo| memo[method] = MyModel.send(method).group_by(&:kind).each { |_, callbacks| callbacks.map! { |callback| callback.raw_filter } } }
 => {:_validate_callbacks=>
  {:before=>
    [#<ActiveModel::BlockValidator:0x007fbf7362d3f8
      @attributes=[:file],
      @block=
       #<Proc:0x007fbf7362d510@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:27>,
      @options={}>,
     #<Paperclip::Validators::MediaTypeSpoofDetectionValidator:0x007fbf73624320
      @attributes=[:file],
      @options=
       {:if=>
         #<Proc:0x007fbf736245f0@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:85 (lambda)>}>,
     #<ActiveRecord::Validations::PresenceValidator:0x007fbf7567e440
      @attributes=[:document],
      @options={}>,
     #<ActiveRecord::Validations::PresenceValidator:0x007fbf7567dc20
      @attributes=[:uploader],
      @options={}>,
     #<ActiveRecord::Validations::UniquenessValidator:0x007fbf7567d400
      @attributes=[:file_fingerprint],
      @klass=
       MyModel(id: integer, file_file_name: string, file_content_type: string, file_file_size: integer, file_updated_at: datetime, file_fingerprint: string, created_at: datetime, updated_at: datetime),
      @options=
       {:case_sensitive=>true,
        :if=>
         #<Proc:0x007fbf7567d5b8@/Users/sean_eshbaugh/sites/clickherelabs/hub/app/models/attachment.rb:22 (lambda)>}>,
     #<Paperclip::Validators::AttachmentPresenceValidator:0x007fbf7567c4b0
      @attributes=[:file],
      @options={}>,
     #<Paperclip::Validators::AttachmentSizeValidator:0x007fbf756774d8
      @attributes=[:file],
      @options={:less_than=>1073741824}>,
     #<Paperclip::Validators::AttachmentFileTypeIgnoranceValidator:0x007fbf75676510
      @attributes=[:file],
      @options={}>]},
 :_save_callbacks=>
  {:after=>
    [#<Proc:0x007fbf7362c390@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:91>],
   :before=>
    [:autosave_associated_records_for_document,
     :autosave_associated_records_for_uploader]},
 :_destroy_callbacks=>
  {:before=>
    [#<Proc:0x007fbf73627f48@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:92>]},
 :_commit_callbacks=>
  {:after=>
    [#<Proc:0x007fbf736279f8@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/has_attached_file.rb:93>]},
 :_post_process_callbacks=>{},
 :_file_post_process_callbacks=>
  {:before=>
    [#<Proc:0x007fbf75687888@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/validators.rb:67>,
     #<Proc:0x007fbf75677b18@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/validators.rb:67>,
     #<Proc:0x007fbf75676bf0@/Users/sean_eshbaugh/.rvm/gems/ruby-2.2.1@my_project/gems/paperclip-4.2.1/lib/paperclip/validators.rb:67>]},
 :_validation_callbacks=>{},
 :_initialize_callbacks=>{},
 :_find_callbacks=>{},
 :_touch_callbacks=>{},
 :_create_callbacks=>{},
 :_update_callbacks=>{},
 :_rollback_callbacks=>{}}

And for the sake of reusability we can easily wrap this up in a module (pardon the terrible name):

module ShowCallbacks
  def show_callbacks
    _callback_methods = methods.select do |method|
      method.to_s =~ /^_{1}[^_].+_callbacks$/
    end

    _callback_methods.each_with_object({}) do |method, memo|
      memo[method] = send(method).group_by(&:kind).each do |_, callbacks|
        callbacks.map! do |callback|
          callback.raw_filter
        end
      end
    end
  end
end

class MyModel
  extend ShowCallbacks
  ...
end
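To see the module in action without a full Rails app, here's a self-contained sketch. Everything below (the Callback struct, FakeModel, and its callback methods) is made up for illustration; the struct only mimics the two attributes of ActiveSupport::Callbacks::Callback that show_callbacks actually uses:

```ruby
# Hypothetical stand-in for ActiveSupport::Callbacks::Callback, exposing
# only the two attributes show_callbacks relies on: kind and raw_filter.
Callback = Struct.new(:kind, :raw_filter)

module ShowCallbacks
  def show_callbacks
    callback_methods = methods.select do |method|
      method.to_s =~ /^_{1}[^_].+_callbacks$/
    end

    callback_methods.each_with_object({}) do |method, memo|
      memo[method] = send(method).group_by(&:kind).each do |_, callbacks|
        callbacks.map!(&:raw_filter)
      end
    end
  end
end

# A fake model following the "_#{name}_callbacks" naming convention.
class FakeModel
  extend ShowCallbacks

  def self._save_callbacks
    [Callback.new(:before, :check_something),
     Callback.new(:after, :log_something)]
  end

  def self._destroy_callbacks
    []
  end
end

p FakeModel.show_callbacks
```

The regular expression picks out only the two fake callback methods, and the grouping reduces each Callback down to its raw_filter, just as it does against a real model.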

One of the Hard Things

There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.

I love that quote, not only because it's more amusing than it should be, but because it's extremely true. I know because I've been bitten by all three things plenty of times. Tonight it was while using the rails-settings-cached gem to handle some global settings for a Rails application.

At some point I truncated the settings table so I could reset it with new defaults. Afterwards my new settings weren't taking effect in the application or showing up in the database. I tried to mimic the behavior of #save_default but with some extra output by doing the following inside my initializer

if Setting.application_title.nil?
  puts 'Setting application_title.'

  Setting.application_title = 'My Application'
end

just to make sure something weird wasn't going on.

Setting.application_title wasn't returning nil, so the setting was never being set, even after restarting the server. I discovered that it worked just fine when I added Rails.cache.delete('settings:application_title') before the above. So of course the normal call to #save_default worked just fine as well.

It then occurred to me that the problem might be related to Spring, which keeps Rails loaded and ready to start quickly. I couldn't find confirmation in the Spring source, but I'm guessing that by keeping the Rails process around it also keeps the cache nice and full. This means that, despite removing the settings table's contents and restarting the server, the old settings were hanging around in memory. I'm hesitant to say with 100% confidence that this is what was happening, but it certainly makes sense to me.

Spring ships with Rails 4.1 by default so if you're making heavy use of the Rails cache this sort of thing is probably something you'll have to look out for. Also, keep in mind that the Spring readme does mention, "There's no need to 'shut down' spring. This will happen automatically when you close your terminal."
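For what it's worth, the failure mode is easy to reproduce without Rails or Spring at all. Here's a toy sketch with a hypothetical TinyCache class standing in for Rails.cache: once a value is cached, clearing the underlying store changes nothing until the cache entry is deleted.

```ruby
# Hypothetical stand-in for Rails.cache: fetch returns the cached value,
# or computes and caches it when the key is absent.
class TinyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    @store.key?(key) ? @store[key] : (@store[key] = yield)
  end

  def delete(key)
    @store.delete(key)
  end
end

db = { 'settings:application_title' => 'Old Title' } # stands in for the settings table
cache = TinyCache.new

cache.fetch('settings:application_title') { db['settings:application_title'] } # warms the cache

db.clear # "truncate" the table

# The cache still answers with the stale value...
stale = cache.fetch('settings:application_title') { db['settings:application_title'] }

# ...until the entry is explicitly deleted, just like Rails.cache.delete.
cache.delete('settings:application_title')
fresh = cache.fetch('settings:application_title') { db['settings:application_title'] }

# stale is "Old Title"; fresh is nil.
```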

Converting Text into a Sorted List

Ever had a list that's sorta broken up into different lines but also mostly just uses spaces to delimit items? Ever wanted each item in that list on its own line? Ever want that one-line-per-item list sorted? It's shocking how often I need to do this. Actually, it's probably more unfortunate than shocking.

If you find yourself needing to do all that too then you're in luck! It turns out there are plenty of easy ways to turn a bunch of words into a sorted list!

First, a few notes...

Since I'm on OSX I'm accessing my clipboard with pbcopy and pbpaste. If you're on Linux with X11 you can use xclip or xsel instead. Obviously you can replace the clipboard paste with some other output command and you can omit the clipboard copy or replace it with some other command.

All of these examples use grep and sort. grep is used here to remove blank lines; I'll just leave it at that, since a book could be (and apparently has been) written about grep. sort does exactly what you'd expect: it sorts a file or input. The -f option makes the sorting case insensitive. If you really do want words that start with capital letters to go first, omit that option.

sed

% pbpaste | sed 's/[[:space:]]/\'$'\n'/g | grep -v '^[[:space:]]*$' | sort -f | pbcopy

sed is ancient. Despite its age it remains incredibly powerful and versatile. If you're on OSX or some other BSD variant then your sed will function somewhat differently from GNU sed. I won't waste a bunch of space explaining the details here, but this Unix & Linux Stack Exchange question explains it nicely. Basically BSD sed doesn't do escape sequences in output. The best solution I've seen to the problem is in this Stack Overflow comment. If you're on Linux and using GNU sed then this is what you'd do:

% xclip -o -selection clipboard | sed 's/[[:space:]]/\n/g' | grep -v '^[[:space:]]*$' | sort -f | xclip -i -selection clipboard

The s command takes a regular expression, a replacement string, and optionally one or more flags in the form "s/regular expression/replacement string/flags". The g flag, like it does most places, makes the substitution global.

tr

% pbpaste | tr -s '[:space:]' '\n' | grep -v '^[[:space:]]*$' | sort -f | pbcopy

tr is similar to sed but much simpler. So simple there isn't much to say. The first argument is a set of characters to replace and the second argument is a corresponding set of characters to replace the first with, one-to-one. The -s option squeezes consecutive occurrences of the replacement characters into a single character.

awk

% pbpaste | awk '{gsub(/[[:space:]]+/, "\n"); print}' | grep -v '^[[:space:]]*$' | sort -f | pbcopy

awk reads each line and executes the action inside the curly braces for each line. In our case we're using gsub to do a global substitution and then unconditionally printing the line. awk does far more than simple substitution and printing so there's probably a million different ways to accomplish this task. I've met several people who swear by awk, and I can understand why. Personally, I find it to be too awkward (pun sorta intended) for serious use given that alternatives with far fewer rough edges and more extensibility exist.

Ruby

% pbpaste | ruby -ne 'puts $_.gsub(/[[:space:]]+/, "\n")' | grep -v '^[[:space:]]*$' | sort -f | pbcopy

This right here is actually the biggest reason why I'm writing this post. Whenever I'm faced with a task involving transforming text my natural inclination is to write a small throwaway script in Ruby to get the job done. Usually those scripts end up being fairly elaborate and proper, in the sense that they could easily be part of an actual program. I like to make it a habit to not write overly terse code. Even when I know I'm going to throw it all away I like my code to be readable with nice descriptive variable names and no magical short cuts. That being said, this article inspired me to venture forth and try my hand at something arcane and nigh unreadable. I try and avoid writing Ruby that looks like 1990's Perl, but the -n option coupled with -e is just too cool to ignore. I will, however, choose to ignore that the Ruby example looks almost exactly like the awk example. Personally I don't think that's a very flattering comparison.
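For completeness, the whole pipeline (split on whitespace, drop blanks, sort case-insensitively) can also be done entirely inside Ruby, with no grep or sort at all:

```ruby
# Pure-Ruby version of the pipeline: split on runs of whitespace, drop
# empty strings, then sort case-insensitively (the equivalent of sort -f).
text = "banana Apple\tcherry\n\n  date"

sorted = text.split(/[[:space:]]+/)
             .reject(&:empty?)
             .sort_by(&:downcase)

puts sorted
# Apple
# banana
# cherry
# date
```

Not as terse as the one-liner, but it makes a decent starting point when the task grows beyond a throwaway pipeline.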

If all of this seems familiar, it's probably because you've seen Avdi Grimm's excellent post on solving almost the same problem in several different languages.

Multiple Key Hashes in Ruby

Here's an idea I've had rolling around inside my head for a while: hashes with multiple keys for the same value. Or, rather, some data structure that is like a hash, except that only the values are unique, not the key/value pairs. A data structure like that would allow multiple keys to access the same underlying data. What use could this possibly be? Well, I occasionally find myself doing something along these lines:

def flash_message_alert_class(name)
  case name
    when :success, :notice
      'alert-success'
    when :info
      'alert-info'
    when :warning
      'alert-warning'
    when :danger, :alert, :error
      'alert-danger'
    else
      'alert-info'
  end
end

Where name is a key to the Rails flash hash. That particular example isn't too egregious; it's easy enough to understand, only a handful of lines long, and most importantly has only a few possible outcomes. But what if that wasn't the case? What if we had 10, 100, or even 1000 when clauses? What if each of those clauses had as many possible values that would trigger it? That seems far fetched, and it is, but consider a more likely scenario: what if the above mapping between sets of symbols and a single string was somehow constructed at run-time based on various forms of input? It'd be very impractical or downright impossible to write a case statement to handle that. It occurred to me the other day that this scenario could be modeled as a data structure much like the one I described.
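For comparison, here's the obvious baseline: the same mapping as a plain Hash literal. It works, but every shared value has to be written out once per key, which is exactly the duplication I'd like to avoid:

```ruby
# The case statement as a plain Hash. Note 'alert-success' and
# 'alert-danger' each appear once per key; a multiple key hash would
# store each value only once.
FLASH_ALERT_CLASSES = {
  success: 'alert-success',
  notice:  'alert-success',
  info:    'alert-info',
  warning: 'alert-warning',
  danger:  'alert-danger',
  alert:   'alert-danger',
  error:   'alert-danger'
}.freeze

def flash_message_alert_class(name)
  FLASH_ALERT_CLASSES.fetch(name, 'alert-info')
end

flash_message_alert_class(:error) # => "alert-danger"
flash_message_alert_class(:bogus) # => "alert-info"
```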

I'm positive I'm not the first person to think of this, but I have no idea what it would be called so I can't verify whether or not it has a name. If you're reading this and know the proper technical name of the data structure I've described please send me a message, I would love to know. For now I'm calling it a "multiple key hash". Other possible names I've considered are "unique value hash", "deduplicated hash", and "double layered hash". That last one will make sense in a minute.

I did, however, find an interesting Stack Overflow answer which offered up what the poster called an AliasedHash. That data structure is pretty cool and so close to what I've been thinking about, but it's not quite there. I want "aliasing" to be implicit, and consequently I want it to be impossible to have duplicate values. Attempting to create one will instead merely create an "alias".

Yesterday evening I finally got enough inspiration to implement a multiple key hash in Ruby. What I have so far is still very rough, untested (since I'm only one step beyond playing around in an irb REPL), and likely very bad as far as performance goes. Here are the most important parts:

class MultikeyHash
  def initialize(initial_values = nil)
    @outer_hash = {}

    @inner_hash = {}

    @next_inner_key = 1

    if initial_values
      initial_values.each do |keys, value|
        if keys.is_a?(Array) && !keys.empty?
          keys.each do |key|
            self[key] = value
          end
        else
          self[keys] = value
        end
      end
    end
  end

  def [](outer_key)
    inner_key = @outer_hash[outer_key]

    if inner_key.nil?
      nil
    else
      @inner_hash[inner_key]
    end
  end

  def []=(outer_key, new_value)
    inner_key = @inner_hash.key(new_value)

    if inner_key
      @outer_hash[outer_key] = inner_key
    else
      @outer_hash[outer_key] = @next_inner_key

      @inner_hash[@next_inner_key] = new_value

      @next_inner_key += 1
    end
  end
end

A quick note before I explain this code in detail. The MultikeyHash#new method behaves a bit differently from the Hash#new method; rather than taking a default value (a feature I have not yet implemented) it takes a hash that represents the initial values of the MultikeyHash. Here is an example of how it would be used:

m = MultikeyHash.new(['a', 'b', 'c'] => 'letters', [1, 2, 3] => 'numbers') #=> #<MultikeyHash:0x007f9ad31bb370 @outer_hash={"a"=>1, "b"=>1, "c"=>1, 1=>2, 2=>2, 3=>2}, @inner_hash={1=>"letters", 2=>"numbers"}, @next_inner_key=3>

m[1]                                                                       #=> "numbers"

m[2]                                                                       #=> "numbers"

m['a']                                                                     #=> "letters"

m['b']                                                                     #=> "letters"
  

If a key in the initial hash is a non-empty array then each element in that array is made a key of the new MultikeyHash. This means that if you want an actual array to be a key you will have to nest it inside of another array. Unfortunately I haven't been able to come up with a better solution. I'm afraid this might become a nuisance since it's not at all obvious without reading the source for initialize. I'm also considering changing it to accept anything that responds to each to make it a bit more flexible.

The MultikeyHash class consists primarily of two hashes. The outer hash is what is exposed to the user. Its keys behave like normal hash keys but its values are always just a key to the inner hash; I've chosen to use an integer for simplicity's sake. When accessing a MultikeyHash value we first find the inner hash key in the outer hash. If it exists we use that key to get the value from the inner hash; otherwise we return nil.

Setting a value is a bit more complicated. First we check whether the new value already exists in the inner hash. If it does, we point the outer key at the existing inner key. If it doesn't, we point the outer key at a new inner key, store the new value in the inner hash under that key, and increment the inner key counter. The result of all this shuffling is that new values are inserted as normal, while existing values simply gain one more key by which they can be accessed. From the user's perspective hash access works like normal, but in reality there are two layers of access, the first mediating access to the second (hence why "double layered hash" is a name I've considered).

The above code works just fine, but it lacks something very important. One of the key features of Ruby's hashes is their ability to be enumerated. The Enumerable module provides a powerful set of methods to any class that implements its own each method. Let's take a look at just how easy this is:

class MultikeyHash
  include Enumerable

  # Omitting the rest of the class for the sake of brevity.

  def each(&block)
    keys_to_values = @outer_hash
      .group_by { |_, inner_key| inner_key }
      .inject({}) do |acc, (inner_key, pairs)|
        acc[pairs.map(&:first)] = @inner_hash[inner_key]

        acc
      end

    keys_to_values.each do |key, value|
      block.call(key, value)
    end
  end
end

By grouping the outer hash by the inner key and then collecting those groups into a new hash, where the key is all of the outer keys and the value is the value the inner key points to, we end up with a hash that looks like {[:a, :b, :c]=>"letters", [1, 2, 3]=>"numbers"}. This lets us easily implement an inspect method:

class MultikeyHash
  # Omitting the rest of the class for the sake of brevity.

  def inspect
    "{#{self.map { |keys, value| "#{keys.inspect}=>#{value.inspect}" }.join(', ')}}"
  end

  def to_s
    inspect
  end
end

Because MultikeyHash has an each method it now has all the other goodies like map, select, reject, and inject.

I'm still pretty hesitant to say this data structure is a good idea. I haven't actually used it for anything so I have no idea how it works in the real world. Odds are I never will. Either way, building new types of data structures is always lots of fun! You can find the whole class here.

Chef Resource Conditionals

Lately it seems like all of my posts are about things that are super, painfully, embarrassingly obvious in hindsight. The trend continues!

Over the last week I've been learning to use Chef to set up some servers at work (with the help of the iron_chef gem, which was written by a co-worker of mine). At this point I feel like a real dummy for never having bothered to use Chef before, especially since it's been around for some time now. If you're not using Chef for server management you really ought to look into it. It makes automating your setup easy and having everything that your servers need documented in your scripts is awesome.

Despite quickly becoming a "why wasn't I using this before?" sort of tool, there have been a few conceptual hurdles, as there always are with any framework or DSL. The one that really got me is the not_if/only_if conditional guards on resource blocks. The Chef documentation lays it out in what seems like a straightforward manner:

The not_if and only_if conditional executions can be used to put additional guards around certain resources so that they are only run when the condition is met.

Seems simple, right? Well, if you look around enough you'll see examples of not_if and only_if used with either a block or a String passed as the argument.

Here are two quick, real, and I swear not-contrived examples. One with a block:

bash 'unarchive-lame-source' do
  cwd ::File.dirname(src_filepath)

  code <<-EOH
    tar zxf #{::File.basename(src_filepath)} -C #{::File.dirname(src_filepath)}
  EOH

  not_if { ::File.directory?(::File.join(Chef::Config[:file_cache_path] || 'tmp', "lame-#{node['lame']['version']}")) }
end

And one with a string:

bash 'compile-lame-source' do
  cwd ::File.dirname(src_filepath)

  code <<-EOH
    cd lame-#{node['lame']['version']} &&
    ./configure #{lame_options.join(' ')} &&
    make &&
    make install
  EOH

  not_if 'sudo ldconfig && ldconfig -p | grep libmp3lame'
end

Here comes the embarrassing part. To me, at least, it wasn't clear what each form of the method call did, or really that there is a difference between the two at all. When passing a block as the argument, the result of the block, truthy or falsy, determines whether or not the resource is run. When passing a String, it is executed as a shell command and the exit status of the command is used to determine whether or not the resource is run. Remember, for shell commands an exit status of 0 indicates success (or true) and anything non-zero, typically 1, indicates failure (or false).
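That exit status convention is easy to check from Ruby itself, since Kernel#system returns true for a zero exit status and false for any non-zero one (the same mapping Chef applies to a String guard):

```ruby
# Kernel#system maps the shell's exit status onto Ruby booleans:
# 0 becomes true, anything non-zero becomes false.
puts system('true')  # prints true  (exit status 0)
puts system('false') # prints false (exit status 1)
```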

At first I was naively trying to use not_if like this: not_if { 'sudo ldconfig && ldconfig -p | grep libmp3lame' }, expecting the block to run the command. Instead, the block just returns the string. Since Strings are truthy, the block always returns true, which always skips the resource for not_if or runs it for only_if.

If we take a look at the source for Chef::Resource::Conditional#initialize it becomes pretty clear what's going on.

def initialize(positivity, command=nil, command_opts={}, &block)
  @positivity = positivity
  case command
  when String
    @command, @command_opts = command, command_opts
    @block = nil
  when nil
    raise ArgumentError, "only_if/not_if requires either a command or a block" unless block_given?
    @command, @command_opts = nil, nil
    @block = block
  else
    raise ArgumentError, "Invalid only_if/not_if command: #{command.inspect} (#{command.class})"
  end
end

Here we can clearly see that if the optional command is passed as a String the Chef::Resource::Conditional object is initialized with the command and command options and the block instance variable set to nil (and importantly, ignored if it was passed at all). If no command was passed but a block was given then the command and command options instance variables are set to nil and the block instance variable is set to the block that was passed. And finally an exception is raised if no command or block is given or if something weird is passed as the command.

And if you look a little bit further down in the source you'll find where the conditional is actually evaluated:

def evaluate
  @command ? evaluate_command : evaluate_block
end

def evaluate_command
  shell_out(@command, @command_opts).status.success?
rescue Chef::Exceptions::CommandTimeout
  Chef::Log.warn "Command '#{@command}' timed out"
  false
end

def evaluate_block
  @block.call
end

Pretty much exactly as I described above. If the command instance variable is present it'll evaluate the command, otherwise it'll call the block. If you're interested in seeing how the cross-platform shell_out method works you can check out the source; it's definitely worth a read.

In fact, I think the takeaway from all of this is, when in doubt, go straight to the source code. It'll save you lots of time and you'd be hard pressed to not learn something new, especially if you're diving into a well-known and properly designed library.

Nth Element of a List in Scheme

Common Lisp and Clojure both provide a built-in nth function for retrieving the nth element in a list. Surprisingly enough, Scheme (MIT Scheme, at least) doesn't, as far as I'm aware.

Fortunately nth is super simple to implement in a recursive fashion:

(define (nth n l)
  (if (or (>= n (length l)) (< n 0))
    (error "Index out of bounds.")
    (if (= n 0)
      (car l)
      (nth (- n 1) (cdr l)))))

After checking that the index n isn't past the end of the list l (greater than or equal to its length, since indexing is zero-based) or less than 0, the function checks to see if n is 0. If it is then it simply returns the first item in l with car. Otherwise nth is called again with n - 1 and the tail of l, retrieved with cdr.

seshbaugh ~% scheme --load nth.scm
...
;Loading "nth.scm"... done

1 ]=> (nth 3 '(1 2 3 4 5))

;Value: 4

I'm reasonably certain that this function is tail recursive, so it shouldn't blow the stack on very long lists, though the (length l) call on every step does make it slow for them.
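For comparison, the same recursive shape translates almost word for word into Ruby (car becomes first, cdr becomes drop(1); the bounds check here uses >= since indexing is zero-based):

```ruby
# Recursive nth in Ruby, mirroring the Scheme version.
def nth(n, list)
  raise IndexError, 'Index out of bounds.' if n >= list.length || n < 0
  return list.first if n.zero?

  nth(n - 1, list.drop(1))
end

nth(3, [1, 2, 3, 4, 5]) # => 4
```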

Recursive Closures in Elixir

File this under "things that are obvious in retrospect". I got to wondering if it's possible to recursively call a closure in Elixir. The answer is, of course, yes, but with a small caveat: you have to pass the function to itself as an argument so it can be used as a callback (which means it can't be truly anonymous).

Consider the following pathetic first attempt:

lol = fn
  0 -> 0
  x -> x + lol.(x - 1)
end

IO.puts lol.(100)

Looks good, right? Not quite...

seshbaugh ~% elixir lol.exs
** (CompileError) /Users/seshbaugh/lol.exs:3: function lol/0 undefined
    src/elixir_translator.erl:463: :elixir_translator.translate_each/2
    src/elixir_translator.erl:613: :elixir_translator.translate_arg/2
    lists.erl:1339: :lists.mapfoldl/3
    lists.erl:1340: :lists.mapfoldl/3
    src/elixir_translator.erl:620: :elixir_translator.translate_args/2
    src/elixir_translator.erl:80: :elixir_translator.translate_each/2
    lists.erl:1339: :lists.mapfoldl/3

The closure has no idea what we're talking about, which shouldn't really come as a surprise: we're still in the middle of defining the value of lol (that's my understanding as of right now; if I discover that I'm right for the wrong reason I will be sure to update this post). If we want to call lol from within itself we have to pass a reference to it to itself, like so:

lol = fn
  _, 0 -> 0
  lol, x -> x + lol.(lol, x - 1)
end

IO.puts lol.(lol, 100)

This time around we pass lol as an argument to itself. In the first function clause the callback is ignored (since that clause's purpose is to halt the recursion) and in the second one we call it, with itself as the first argument again. Now when we run this we get the expected output:

seshbaugh ~% elixir lol.exs
5050

I'm not sure if this is really all that useful, since you could (and probably should) use defp in your modules to define private functions that aren't available outside the module. But it never hurts to know what's possible!
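Incidentally, the same pass-yourself trick works for Ruby lambdas too (Ruby closures can capture the variable being assigned, so it isn't strictly necessary there, but the shape is identical):

```ruby
# The Elixir trick in Ruby: the lambda receives itself as its first
# argument and uses it to recurse.
lol = lambda do |me, x|
  x.zero? ? 0 : x + me.call(me, x - 1)
end

puts lol.call(lol, 100) # prints 5050
```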

More Elixir Adventures

Last night I went to a surprisingly crowded meetup featuring Dave Thomas (of "pickaxe" fame) giving a talk about Elixir. I had a great time hearing him share his experience, and the chance to pick his brain about Elixir was rather exciting. His most interesting takeaway point: Elixir may or may not be the next "big thing" in programming languages, but whatever it ends up being, it's going to look and feel very similar to Elixir. I think I'm starting to be inclined to agree.

The best part about the meetup was that I left with a better understanding of Elixir's (and Erlang's) concept of pattern matching and why it's so important. About a week ago I was attempting to do Project Euler problem 1 in Elixir. Whenever I encounter a new language I like to re-solve some of the easier Euler problems with it just to get a feel for the language, how to set up an environment, and how to handle the basic syntax and patterns. Since I've already done most of them in Ruby or JavaScript I have a basic understanding of what I'm trying to do. My first attempt at Euler #1 is, in retrospect, embarrassing. When Dave Thomas said that early on with Elixir he was stuck in a Ruby mindset I totally understood what he meant.

defmodule Euler do
  def naive_euler_1(x) do
    if x <= 1 do
      0
    else
      if rem(x, 3) == 0 || rem(x, 5) == 0 do
        x + naive_euler_1(x - 1)
      else
        naive_euler_1(x - 1)
      end
    end
  end
end

IO.puts Euler.naive_euler_1(999)

I suppose it's not terrible considering I had only read and understood a few parts of the "Getting Started" documentation. At least it's recursive!

After getting home last night I immediately set out to re-write the above but using the awe-inspiring power of pattern matching.

defmodule Euler do
  def better_euler_1(x) when x <= 1, do: 0
  def better_euler_1(x) when rem(x, 3) == 0 or rem(x, 5) == 0, do: x + better_euler_1(x - 1)
  def better_euler_1(x), do: better_euler_1(x - 1)
end

Holy crap. No conditionals. Okay, well, technically there are conditionals, but they're in the form of guard clauses on the pattern matching part of the function definition (I really hope I'm getting my terms right here!). Still, this is much cleaner and, from what I understand, much more idiomatic. Dave was rather emphatic in pointing out that by removing the standard if-then-else type of conditional logic you remove the possibility of many of the bugs typically found in software. I can't say I know from experience that that's true, but it's a very intriguing argument and one I hope to explore.

After Dave was done speaking a couple of people from the audience went up front and shared some things they've been doing with Elixir lately. The first person, while giving a quick rundown of his version of Conway's Game of Life pointed out something interesting that had only been briefly touched upon earlier while Dave was speaking (although I'd seen it before while reading up on Elixir); the |> macro. |> takes whatever is returned from the left and passes it as the first argument to a function on the right. It's a simple but powerful expression. Realizing this I set out to shorten my solution to Euler #1 even further.

IO.puts 1..999 |> Enum.filter(fn x -> rem(x, 3) == 0 || rem(x, 5) == 0 end) |> Enum.reduce(0, fn (x, sum) -> x + sum end)

Again, holy crap. This is almost identical to my standard Ruby solution.

puts (1..999).select { |i| i % 3 == 0 || i % 5 == 0 }.inject(:+)

In fact, were I reading it out loud and explaining it to someone I would probably end up using almost the same English words in both cases. To me, that's a good sign.

As soon as I'm done writing this I'm going to pre-order Dave's new book on Elixir. He said he's as excited about Elixir as he was about Ruby many years ago. Color me excited too.

I heard you like Ruby and Erlang so I put Ruby inside Erlang for you

Now I know I'm only just scratching the surface of this whole Elixir thing, but I have a sneaking suspicion I'm going to be feeling lots of déjà vu...

~% irb
2.0.0p247 :001 > name = "world"
 => "world"
2.0.0p247 :002 > "Hello, #{name}!"
 => "Hello, world!"
2.0.0p247 :003 >

~% iex
Erlang R16B01 (erts-5.10.2) [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]

Interactive Elixir (0.10.0) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> name = "world"
"world"
iex(2)> "Hello, #{name}!"
"Hello, world!"
iex(3)>

Logging in Go

The other day I found myself needing to keep a log file for a small Go web application. I'm pretty new to the language so it actually took me longer than I initially thought it would. Mostly for one reason: I couldn't find a clear example of actually using the log package to do what I wanted. I suspect that because this is such a trivial task no one (that I could find) has bothered to document just how to accomplish this. In retrospect this seems so obvious now I can't help but question whether or not this post is even worth writing. I'm going to stick with my original gut instinct.

To demonstrate, here's a small sample program. All it does is open up a file, log a message, and then quit. Let's take a look at the entire program first and then dive a little deeper.

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	logFile, err := os.OpenFile("sample.log", os.O_WRONLY | os.O_APPEND | os.O_CREATE, 0666)

	if err != nil {
		fmt.Println(err)

		return
	}

	defer logFile.Close()

	logger := log.New(logFile, "Sample Log: ", log.Ldate | log.Ltime | log.Lmicroseconds)

	logger.Println("Logger created!")
}

For the sake of clarity: this isn't meant to be a final working example of logging in Go, just a demonstration of the setup.

The first thing we do is open up a file for writing set to append. If the file doesn't exist the os.O_CREATE flag will ensure it is created if possible. After making sure that went well we create a new Logger. Finally we call Println to append our message to the log.

The final output is something like this:

Sample Log: 2013/03/21 20:39:29.348481 Logger created!

For more information on log.New see the documentation. The documentation doesn't really spell out the available options for the flag parameter, but you can find them here.

Ripping Vinyl

I originally wrote this some time ago for a message board. It finally occurred to me the other day to post it here where it can actually be found. When I wrote this I was working on Windows 7. The general steps should be applicable to older versions of Windows as well as Windows 8.

The rest of this is very long...

Why I Program

"It may seem like a strange motivation to you, but sometimes people say things because they want people to hear them, make things because they want people to look at them and use them, and write things because they want people to read them." - Hacker News user pessimizer

I really don't think I could have better distilled the essence of why I program as both a living and as a hobby.

I was once explaining to someone entirely unfamiliar with software development, much less free and open source software, that I put much of my code out on the internet for anyone to freely use as they see fit. This idea seemed appalling and downright dangerous to them. "What if someone uses it without paying you!?" they asked with a stunned look of horror and concern. "That's the point. I would be ecstatic if someone found something I had written to be useful. It's why I do what I do."

Getting paid to program is awesome, but I'd still do it either way.

Configuring Nginx and Unicorn for force_ssl

It turns out that setting up SSL on Nginx + Unicorn + Rails is actually pretty easy. But there are a few pitfalls you have to watch out for. The following guide is based partially on these instructions and assumes you already have an SSL certificate and have it placed on your server.

Let's take a look at our initial Nginx configuration file. You can find yours in /etc/nginx/sites-available, but if you're reading this you probably already knew that.

upstream unicorn_mysite {
	server unix:/tmp/unicorn.mysite.sock fail_timeout=0;
}

server {
	listen 80;
	server_name mysite.com;
	root /srv/mysite/current/public;
	access_log /srv/mysite/shared/log/nginx.access.log main;
	error_log /srv/mysite/shared/log/nginx.error.log info;

	location ^~ /assets/ {
		gzip_static on;
		expires max;
		add_header Cache-Control public;
	}

	try_files $uri/index.html $uri @unicorn;
	location @unicorn {
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header Host $http_host;
		proxy_redirect off;
		proxy_pass http://unicorn_mysite;
	}

	error_page 500 502 503 504 /500.html;
	client_max_body_size 4G;
	keepalive_timeout 10;
}

As you can see, this configuration makes some assumptions about our setup that are unlikely to be true for yours. However, for this exercise the details of the configuration are largely inconsequential.

In your editor of choice take the above config file and copy the server section and paste it below. Now, make the second server section look something like this:

server {
	listen 443;
	ssl on;
	ssl_certificate /etc/httpd/conf/ssl/mysite.com.crt;
	ssl_certificate_key /etc/httpd/conf/ssl/mysite.com.key;

	server_name mysite.com;
	root /srv/mysite/current/public;
	access_log /srv/mysite/shared/log/nginx.ssl.access.log main;
	error_log /srv/mysite/shared/log/nginx.ssl.error.log info;

	location ^~ /assets/ {
		gzip_static on;
		expires max;
		add_header Cache-Control public;
	}

	try_files $uri/index.html $uri @unicorn;
	location @unicorn {
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto https;
		proxy_set_header Host $http_host;
		proxy_redirect off;
		proxy_pass http://unicorn_mysite;
	}

	error_page 500 502 503 504 /500.html;
	client_max_body_size 4G;
	keepalive_timeout 10;
}

The first difference you should notice is the listen port. HTTPS uses port 443 instead of port 80. The following three lines tell Nginx that we want SSL on and where our certificate and certificate key are stored. /etc/httpd/conf/ssl is a pretty standard location, but you can keep them anywhere.

The next change we make is to the log file locations. The normal HTTP config will write to nginx.access.log and nginx.error.log. Here we're telling the HTTPS config to write to nginx.ssl.access.log and nginx.ssl.error.log instead. If you ever encounter any problems with your SSL setup it'll be pretty handy to have your logs separated out by protocol.

The last difference between the two configurations is the extra proxy_set_header setting. Since we plan on using force_ssl in our Rails application to selectively ensure SSL on different controllers this step is really important. force_ssl relies on the HTTP_X_FORWARDED_PROTO HTTP header to determine whether or not the request was an HTTPS request. If this setting isn't set to https then you will end up with an infinite redirect loop as force_ssl will always think the forwarded request isn't HTTPS.
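The role that header plays is easy to sketch in plain Ruby. What follows is only a simplified stand-in for the check force_ssl performs (the real ActionController logic also considers request.ssl? and related details); the forwarded_https? helper and the env hash are made up for illustration, modeled loosely on Rack's request environment.

```ruby
# Simplified stand-in for the header check force_ssl relies on.
# The env hash here is modeled on Rack's request environment;
# forwarded_https? is a made-up name, not a Rails method.
def forwarded_https?(env)
  env["HTTP_X_FORWARDED_PROTO"] == "https"
end

# With the proxy_set_header line in place, Rails sees the header:
forwarded_https?("HTTP_X_FORWARDED_PROTO" => "https")  # => true

# Without it, every proxied request looks like plain HTTP, so
# force_ssl redirects to HTTPS again and again: the infinite loop.
forwarded_https?({})  # => false
```

The second call is exactly what happens when the header is missing, which is why the extra proxy_set_header line matters so much.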

At this point you should restart Nginx: sudo /etc/init.d/nginx restart. In your Rails app's controller add the call to force_ssl at the top like this:

class ContactsController < ApplicationController
  force_ssl
  before_filter :whatever
  ...

Now, when you go to any action on that controller you should immediately be redirected to the SSL version.

If you get an error similar to "Error 102 (net::ERR_CONNECTION_REFUSED)" then this likely means your server is blocking port 443. Odds are you won't have this issue, but I did, so it makes sense to me to include a possible fix. This assumes you're using iptables to manage your ports. Open up /etc/sysconfig/iptables and look for a line similar to this:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

Immediately below it add the following:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT

As usual, if your settings look similar but not quite the same then base your changes off your settings. The important part here is --dport: we want to open up port 443. After you do this you'll need to restart iptables with sudo /etc/init.d/iptables restart.

At this point your controllers with force_ssl in them should be redirecting to the SSL version of your site. Like most ActionController callbacks you can also specify which actions force_ssl will be run on using the only and except options.

Overriding But Preserving Ruby Methods

Recently I found myself needing to override ActiveRecord's default save method but still retain the ability to call the original method. I know, I know, that's crazy talk, right? What could you possibly need to do that for? Well, in my case it was to provide a way to create "drafts" of my models under certain conditions when save is called. Rather than have all sorts of messy logic repeated over and over in my controllers or tucked away in an awkward helper method, it made much more sense to me to attach the functionality to my models as I need it. The ever so sublime paper_trail gem does something quite similar with ActiveRecord callbacks. But that isn't quite what I needed. What I really wanted was the ability to prevent a model from being saved in the first place. After all, what good is saving a draft if we've overwritten the original in the process? I particularly had in mind a use case where some users could only save drafts, which could be approved at a later time by more privileged users.

So now that we know the why of doing something that at first seems crazy (and more than a bit dangerous), what about the how? The core of how to override but preserve a method is pretty simple, but I think it might be helpful to provide some context, so bear with me.

Just like paper_trail, and many other gems, we start off with the following to get our module to load whenever ActiveRecord is loaded. This ensures that we don't have to manually include our module.

# /lib/kentouzu.rb
ActiveSupport.on_load(:active_record) do
  include Kentouzu::Model
end

Next we define self.included in our Model module so that when it's included we extend the base class with the ClassMethods module. This provides a slew of class methods to our model, the most important of which for the purposes of this post is the has_drafts method.

# /lib/kentouzu/has_drafts.rb
module Kentouzu
  module Model
    def self.included(base)
      base.send :extend, ClassMethods
    end

The has_drafts method provides us with a nice way of making it so we only include our InstanceMethods when we actually need them. It'd be really bad if we always overrode a vital method like save! If we just included the code to override the method without going through this it would lead to all sorts of disastrous behavior, as our earlier hook into ActiveSupport.on_load would include it in every model in our application even when it doesn't make sense.

By providing this method we give a nice clean way to add functionality to our models (or really, any class) in the same way paper_trail's has_paper_trail does. Lots of gems take advantage of this pattern.

    module ClassMethods
      def has_drafts(options = {})
        send :include, InstanceMethods
      end
    end

Here's where things start to get interesting (and relevant). In our InstanceMethods module we use the same self.included method as before. But this time we call instance_method(:save) on the base class to get an UnboundMethod for save. This allows us to reuse it later.

    module InstanceMethods
      def self.included(base)
        default_save = base.instance_method(:save)

After getting a reference to the old save method we then override it with define_method, sent to the base class. define_method is important because it allows access to the surrounding scope where default_save is defined. This lets us use it even after it's out of scope. Inside the block the key is the if statement. It checks the conditions for using our new save method. In my particular case I check that everything is enabled on the model (in pretty much the same way paper_trail does) and that the conditions for saving are met, and then create a draft from the model and save the draft without saving the model itself. The details of what happens here are up to you.

        base.send :define_method, :save do
          if switched_on? && save_draft?
            draft = Draft.new(
              :item_type => self.class.base_class.to_s,
              :item_id => self.id,
              :event => self.persisted? ? "update" : "create",
              :source_type => Kentouzu.source.present? ? Kentouzu.source.class.to_s : nil,
              :source_id => Kentouzu.source.present? ? Kentouzu.source.id : nil,
              :object => self.to_yaml
            )

            draft.save

And now for the magic. If the conditions for using our new version of the save method aren't met we take our unbound reference to the old save and bind it to self which, since this is an instance method on our model now, is our model. Finally we call it with the () method. You could also use call.

          else
            default_save.bind(self).()
          end
        end
      end
    end
  end
end

Now whenever we call the save method on our model, so long as switched_on? and save_draft? return true, we'll get a copy of the model as a draft. Of course we could strip this down to something much simpler without all the fancy including, but in my opinion all that is what makes this so useful: we only get it when and where we want it. That's pretty important because overriding methods like this can be very dangerous. I strongly suggest that before you do this you make sure you actually need to.
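For the curious, the core trick does strip down to a few self-contained lines. The sketch below uses made-up names (Widget, Draftable, save_draft?) as stand-ins for the model and module above; there's no ActiveRecord involved.

```ruby
# The override-but-preserve pattern in miniature. Widget and
# Draftable are hypothetical stand-ins for a model and our module.
class Widget
  def save
    "saved for real"
  end
end

module Draftable
  def self.included(base)
    # Grab an UnboundMethod pointing at the original save...
    default_save = base.instance_method(:save)

    # ...then redefine it. The block closes over default_save, so
    # the original survives even though it's no longer reachable
    # by name on the class.
    base.send :define_method, :save do
      if save_draft?
        "saved as draft"
      else
        # Re-bind the original method to this instance and call it.
        default_save.bind(self).()
      end
    end
  end
end

class Widget
  include Draftable

  # Stand-in for the real gating logic (switched_on? etc.).
  def save_draft?
    true
  end
end

Widget.new.save  # => "saved as draft"
```

Flip save_draft? to false and the bound default_save runs instead, returning the class's original behavior untouched.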

The source for the gem this is from is on GitHub.

Finally Learning to Make Games

One of my earliest aspirations I can remember seriously having was to be a video game developer. Sometime in elementary school it dawned on me that people actually made the video games I played every afternoon. There were grownups who got paid to dream up and create games. I wanted in on that because obviously, I naively thought, it must be as cool as playing video games.

When I was in sixth grade a friend and I decided we would attempt to learn to program and make our own games. He got his parents to buy him a book on C++ (It's been over 15 years so I have no idea which book it was) and we both started to read it. The first chapter was a bit of a dumbed down history of computers and programming which I had no problem comprehending. But with the second chapter the actual programming began. It took less than two pages for me to be completely lost and give up. My friend fared a little better and was able to get a compiler installed on his computer and even got a "Hello World" program working.

Turns out, learning to program is hard. Reading about programming is so easy an eleven-year-old can do it without much effort. I consider myself lucky to have grown up with parents who encouraged my love of computers, if not my love of video games over homework. That encouragement is greatly responsible for my desire to make video games never really going away. It did, however, sort of lie dormant for a few years.

When I was 13 I started playing Ultima Online. Eventually I learned one could emulate Origin's servers and write their own custom scripts for the server. This was around the time I was taking Computer Science in high school and learning C++ (for real this time). I eventually rescripted large portions of the game and in the process learned a ton about interpreted languages.

In college I started to read books specifically about video game programming. I would always get stuck though. Rather than going into how to build a game and really design the whole system of parts that makes up a game engine, the books I tried would just get bogged down in the low-level details of how to set up OpenGL or DirectX and draw things on the screen. Don't get me wrong here. That is tremendously important knowledge if you want to make a game. If you don't know how a simple triangle is rendered and all the steps necessary to do even that, it's unlikely the bigger picture will make any sense. But after a few chapters I always felt like I was spinning my wheels, so I would stop reading and abandon my dreams for a few months. This has been an ongoing cycle for about 7 or 8 years now, each time with a different book that promised to teach me to make games like a pro(!).

The furthest I ever got was a "shooter" that was really just a camera that let you move in an empty space surrounded by a skybox and shoot projectiles. Sure, it impressed my mom, but I wanted to do more. And the code I'd hobbled together from books and internet tutorials just couldn't be turned into anything resembling an actual game.

All this has been a lead-in to what I really intended to write about: a book that finally taught me something I felt was useful. XNA 4.0 Game Development by Example: Beginner's Guide by Kurt Jaegers. This book is exceptionally good for a host of reasons, not the least of which is that it delivers on the title's promise to teach by example. Each chapter has well written, easy to understand, and clearly explained code for working games.

The "working" part is really the key here. Instead of examples that touch on one isolated subject, leaving it up to the reader to tie it all together into something that can be played, the author presents complete games. The reason this is such a big deal, for me at least, is that it lets you see all the parts and more importantly where they all fit together. That's what I had always felt I was missing. The bigger picture. The overall flow of the entire system. Yes, knowing how to normalize the camera's view matrix is important, but without knowing how you're going to keep track of the state of the things in your scene it's close to useless. XNA 4.0 Game Development actually addresses those things. And because it uses the XNA framework it skips over the boilerplate stuff.

I really can't emphasize enough how important that boilerplate stuff is. I don't want to be misunderstood as saying it should never be learned. However, abstractions exist for a reason. Every game ever made has a main loop and some sort of timing mechanism and a way to draw stuff to a screen. Really, the implementation details of those things aren't important to the actual logic of the game and they really shouldn't be too important to the actual game play. After spending a few weeks with XNA I feel like it's let me do what I've always wanted: focus on the game, not the graphics library. This is the same reason I love Ruby on Rails. It abstracts away the boilerplate code that nearly every web application needs and lets me focus on my application, not reimplementing the wheel.

I hate reimplementing the wheel. Mostly because I know that no matter how good of a programmer I become, unless I become a dedicated wheel maker, I will never be as good at making wheels as the people who think about making wheels all day every day because that's their job. And by wheels I mean the lower level stuff that any useful program sits on top of, whatever it may be. Most of the time the stuff that sits below my level of concern is only incidental to the problem I'm trying to solve. Without it I can't get anywhere, but that doesn't mean I have to be the one to build it.

Failure to recognize this is the reason all the other game programming books I've tried haven't clicked. Drawing a series of polygons to the screen that makes a cube or moving a 3D character model from A to B isn't what I'm trying to do. But they're all written as if it is the problem I've set out to solve. Very little of the code in XNA 4.0 Game Development is even XNA code. Rather it's video game code that happens to use XNA as its means of getting things done. I'm here for the high level architecture and logic, and the fine people at Microsoft responsible for XNA and Mr. Jaegers seem to understand that very clearly.

I'm admittedly pretty late to the XNA game, but I suppose I wouldn't have appreciated it as much earlier in my programming career. I've been a fan of C# for years now though, it's probably the best thing to come out of Microsoft. It's not my favorite language, but if I'm on Windows it's my first choice by a mile. XNA supporting C# is a big part of why I've been enjoying learning how to use it.

When I finish the book I plan on making a game of my own. For the first time in my life I think I just might be able to do it.