Blog


Jul

The Business Value of Conferences


In our business—like many others—there are two things that you need more than anything to succeed: the right skills and the right people. The best place I know to get both is at conferences.

The development team here at eLabs—Anders, Jonas and I (CJ)—just got back from the absolutely amazing FutureRuby conference in Toronto. It was hands down the best conference I've ever been to. The presentations were great, the people were great, and everything was organized with such amazing attention to detail by the good folks at Unspace. While the total cost (including the conference fee, hotel rooms, flights and lost income) probably exceeded US $20,000 for us, here's why I think it was worth it:

The Right Skills

The most obvious benefit is of course the things you learn from the presentations at the conference. Perhaps somewhat counterintuitively, I find that I get more out of the presentations that aren't really trying to teach something. For learning the ins and outs of a technology I prefer a good book or online tutorial. No, the presentations that I think help me the most when it comes to improving my skills are the higher-level, more philosophical ones that make me think in new ways. That's much more important in the long run.

FutureRuby got this right. While a few presentations were too narrowly focused on a particular technology, the great presentations by Jon Dahl, Brian Marick, Dr. Nic Williams, Foy Savas, Matthew Knox and others more than made up for it. These talks will inspire us to improve and expand our skill sets for a long time.

The Right People

While getting the right skills is important, getting the right people is much more so. The right people can always acquire the right skills. The inverse is not necessarily true. So how does going to a conference help you get the right people?

First of all, you can of course meet the right people at the conference. If you're looking to hire developers, a conference is a great place to look (though I think a user group is even better). And even if you're not hiring at the moment, you're bound to meet a bunch of great business connections, helpful peers, and friends. We certainly did.

Getting the right people at a conference is also about keeping the right people. Going to a conference as a team is a great way to strengthen your company. Having inspired and motivated colleagues is just as important as being inspired and motivated yourself.

CJ, Anders and Jonas of eLabs enjoying Toronto

So far, I've only been to technology conferences—such as FutureRuby, WWDC, Scotland on Rails and RailsConf Europe—but next month I'll be traveling to Florida for something different altogether:

BizConf is an exclusive conference for entrepreneurs, consultants and people who run web development companies. The presenters they've lined up sound fantastic, but I'm just as excited to meet the other attendees. A couple of people I met at FutureRuby—Dr. Nic Williams of Mocra and Gustin Prudner of Entryway—are going to BizConf too. It was great talking with them at FutureRuby and sharing ideas on running a web development business, and I can't wait to meet them again along with all the other attendees.

BizConf is limited to 75 attendees and the early bird pricing ends today (July 15th) so make sure you register now! I hope to meet you there, or at another conference.

/ CJ

PS. FutureRuby also inspired us to dust off our old dream of NordicRuby again. We'll see what happens.

Jul

Continuous Integration Testing for Ruby on Rails with Integrity


Doing test-driven development usually means you have a lot of tests in a project. While this is almost entirely a good thing, running the thousands of Cucumber features and RSpec examples in a large project takes a couple of minutes. If you run your entire test suite every time you commit, this will easily eat up a large chunk of your day. Offloading some of this work to a continuous integration server lets you save time by running your tests asynchronously, in addition to its other benefits.

At eLabs we usually run our unit tests locally—as well as the Cucumber feature for the story we're currently working on—before checking in. Then we let our CI server run the rest of our Cucumber features and notify us if something goes wrong. Here's the setup we use:

Integrity

At eLabs we've looked at a number of different CI servers, such as CruiseControl.rb and Run Code Run, but our favorite by far is Integrity.

Screenshot of our Integrity site

Integrity suits us perfectly. It fetches our code from our private GitHub repositories, can run any testing command and notify us in a variety of ways such as email and Campfire. It also has a very nice and clean interface. Its one major shortcoming is its complete lack of error reporting. If there's something wrong with your setup it will silently fail, which makes troubleshooting a nightmare. Hopefully the instructions below will help you avoid some of the pitfalls.

Installation

We installed Integrity on a server running Mac OS X and Passenger under Apache. Here's a quick guide.

First we installed the gem:

$ sudo gem install integrity

Then set it up in your chosen directory using the --passenger option:

$ integrity install --passenger /Library/WebServer/Sites/integrity

Next, set up a virtual host in Apache, pointing its DocumentRoot to the public folder in your Integrity installation.

DocumentRoot "/Library/WebServer/Sites/integrity/public"

One absolutely crucial step that we missed at first is to make sure that the system user that runs the Integrity passenger processes has git in its PATH. The simplest way to do this is to set the PATH in the virtual host configuration:

SetEnv PATH /opt/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

After configuring Apache you have to configure Integrity itself by editing config.yml in the root directory of your Integrity installation. We used SQLite for the database (we couldn't get it to work with MySQL). If you want to use a hashed password for the admin user, here's a simple way to get the SHA1 of a password:

$ ruby -r 'digest/sha1' -e 'puts Digest::SHA1.hexdigest("password")'

The final step is to create the database:

$ integrity migrate_db

You should now be able to log in to your Integrity site and add your projects.

Setting Up a Project

The most important part of setting up a project for CI is the build command. This is the command that Integrity runs to test your app, and it can be anything that exits with a status of 0 when successful. We use a simple rake task that prepares our project by copying a database.yml file and runs RSpec and Cucumber tests.

namespace :ci do
  # Copy the CI-specific database configuration into place
  task :copy_yml do
    system("cp #{Rails.root}/config/database.yml.ci #{Rails.root}/config/database.yml")
  end

  desc "Prepare for CI and run entire test suite"
  task :build => ['ci:copy_yml', 'db:migrate', 'spec', 'features']
end

With that committed to our repository (along with a database.yml.ci file), we add the project to Integrity. The important parts here are the Git repository and Build script settings; in our case the build script is simply the rake ci:build task defined above.

Add a project to Integrity

You must also make sure that the Integrity user can access your repository on GitHub. There are a couple of different ways you can do this, but we created a separate free GitHub account that we add as a collaborator to our projects.

After you add the project you should be able to request a manual build from the Integrity web interface. Note that the build is done synchronously—so you'll have to wait a while—but if the build succeeds you're ready to set up the Post-Receive hook for GitHub to have Integrity run your tests whenever you push your code to GitHub.

GitHub Post-Receive URL settings

Go to your project's page on GitHub and click the Admin link in the top menu, then Service Hooks in the sub menu. Enter the push URL for your Integrity project as the Post-Receive URL. The URL has the following format:

http://username:password@hostname/project-name/push

After you've updated the settings, click the Test Hook link and Integrity should start a new build. If that works, you're all set for having automated builds on every push to GitHub.

Notifiers

While Integrity's interface is nice, you probably don't want to visit your Integrity site after every commit to check the status of your build. The point of asynchronous tests, after all, is to get notified when something goes wrong. Integrity has a bunch of different notifiers you can use; we use the ones for email and Campfire. You can find more notifiers, along with installation instructions, on the Integrity site.

In addition to Integrity's own notifiers we also use CCMenu, a Mac OS X menu extra built for showing CruiseControl build status. It works with Integrity as well, via the integritray gem.

We also use GitHub's Campfire service hook that posts a message to our Campfire room every time someone pushes new code. This makes it very easy to keep track of what other people in the company are working on.

Campfire screenshot

Not having to wait for our entire test suite to run before each commit saves us a lot of time. But we can still feel confident knowing that Integrity has our backs and will alert us if something goes wrong.

Jun

Relieving the Pain of Controller Tests


Lately we've been embracing Cucumber as the preferred way of testing our Ruby on Rails applications. Cucumber is awesome, both for communicating with the customer and for getting thorough, full-stack tests of the application. We like Cucumber so much, we basically thought that it could replace both view and controller tests. It turns out we were wrong.

While our policy of Cucumber over view tests has been working out great so far, controllers are a different story. There is simply too much logic in the controller that is very hard to test (in a sane way) with Cucumber. It makes sense to have a Cucumber feature that specifies that, for a non-admin user, a certain link should not be there. However, that doesn't test the security of the application: even though the link is not there, the action may still be freely accessible to that user. Cucumber is not well suited (nor is it intended) to test these kinds of things.

But writing controller tests is a serious pain, so we tried to find a stack that felt natural and pleasant to work with. After some experimentation, we've settled on a slightly odd and interesting stack, consisting of the following:

  • Remarkable's descriptions and steps
  • RSpec's normal mocking syntax
  • Macro-style methods for different user contexts

We first tried using Remarkable on its own, but quickly found that we did not like the mocking syntax:

mock_models :data_point

describe(:post => :create, :data => "params") do
  expects :bulk_create, :on => DataPoint, 
          :with => proc { [@current_account, "params"] }, 
          :returns => proc { [mock_data_point] }

  it { should set_the_flash(:notice) }
  it { should render_template('data_points/new')}
  it { should assign_to(:data_points, :with => [mock_data_point]) }
end

The fact that it uses a "class-method" level for the DSL presents a lot of problems: it is impossible to simply use instance variables, methods need to be wrapped in procs, and so on. It also, for some reason, does not seem to support stubs, which is very inconvenient in some cases. In the end we realized that there is absolutely no advantage to Remarkable's DSL over simply doing:

mock_models :data_point

describe(:post => :create, :data => "params") do
  before do
    DataPoint.should_receive(:bulk_create).with(@current_account, "params").and_return([mock_data_point])
  end

  it { should set_the_flash(:notice) }
  it { should render_template('data_points/new')}
  it { should assign_to(:data_points, :with => [mock_data_point]) }
end

One sore point, though, was that a lot of setup was required in each controller spec to get the logged-in user right. We thought that with some block trickery we might be able to take care of this tedious setup:

module LogInContext

  def as_user(params={}, &block)
    describe "(as a logged in user)" do
      before do
        @current_user = mock('current_user')
        controller.stub!(:current_user).and_return(@current_user)
      end

      describe(params, &block)
    end
  end

  ...

  def deny_access_to_visitors(params={})
    as_visitor(params) do
      it { should redirect_to(new_session_path) }
    end
  end

end

Spec::Rails::Example::ControllerExampleGroup.extend(LogInContext)
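
The elided part of the module also defines an as_visitor helper, which deny_access_to_visitors relies on. The original code isn't shown here, but a minimal sketch, assuming a visitor is simply a request with no current_user, might look like this:

module LogInContext
  ...

  def as_visitor(params={}, &block)
    describe "(as a visitor)" do
      before do
        # assumption: a visitor simply has no logged-in user
        controller.stub!(:current_user).and_return(nil)
      end

      describe(params, &block)
    end
  end
end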

Now we can use these contexts in our controller tests:

mock_models :data_point

as_user(:post => :create, :data => "params") do
  before do
    DataPoint.should_receive(:bulk_create).with(@current_account, "params").and_return([mock_data_point])
  end

  it { should set_the_flash(:notice) }
  it { should render_template('data_points/new')}
  it { should assign_to(:data_points, :with => [mock_data_point]) }
end

deny_access_to_visitors(:post => :create, :data => "params")

But we can do one better:

module LogInContext
  ...

  def as_user_only(params={}, &block)
    as_user(params, &block)
    deny_access_to_visitors(params)
  end
end

Now it is as simple as:

mock_models :data_point

as_user_only(:post => :create, :data => "params") do
  before do
    DataPoint.should_receive(:bulk_create).with(@current_account, "params").and_return([mock_data_point])
  end

  it { should set_the_flash(:notice) }
  it { should render_template('data_points/new')}
  it { should assign_to(:data_points, :with => [mock_data_point]) }
end

And this single test checks both that the post action is accessible to users and that it is not accessible to visitors. Of course these contexts can get a lot more advanced once different roles come into the picture. Here's something we're doing in our upcoming app KiNumbers:

module LogInContext
  ...

  def as_admin_or_user(params={}, &block)
    as_logged_in_user(params.dup, &block)
    as_admin(params.dup, &block)
    deny_access_to_visitors(params.dup)
  end

  def as_anyone(params={}, &block)
    as_admin(params.dup, &block)
    as_logged_in_user(params.dup, &block)
    as_visitor(params.dup, &block)
  end
end

This way there is no overhead in testing that a particular action is accessible to several different groups of users. Note that we had to call #dup on params before passing it along, since Remarkable seems to perform destructive operations on the Hash (it turned out to be empty after having been used in a describe block).
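
The as_admin and as_logged_in_user helpers used above aren't shown in the post either; they follow the same pattern as as_user. As a rough sketch of as_admin, assuming the app marks admins with an admin? flag on the user (that flag is our assumption, not something from the KiNumbers code base):

module LogInContext
  ...

  def as_admin(params={}, &block)
    describe "(as an admin)" do
      before do
        # assumption: an admin is a user whose admin? flag is true
        @current_user = mock('current_user', :admin? => true)
        controller.stub!(:current_user).and_return(@current_user)
      end

      describe(params, &block)
    end
  end
end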

We ended up with a controller test that looks like this:

require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')

describe DataPointsController do

  mock_models :data_point

  as_admin_or_user(:get => :new) do
    it { should respond_with(:success) }
  end

  as_admin_or_user(:post => :create, :data => "params") do
    before do
      DataPoint.should_receive(:bulk_create).with(@current_account, "params").and_return([mock_data_point])
    end

    it { should set_the_flash(:notice) }
    it { should render_template('data_points/new')}
    it { should assign_to(:data_points, :with => [mock_data_point]) }
  end

end

Short, easy to read, yet also very thorough. Controller tests are sexy again! Spread the word!

Jun

VTD Billing Support System


Client: Västsvensk Tidningsdistribution (VTD)

Year: 2009

Web site: http://www.vtd.se/

To keep track of what to bill each month, newspaper distribution company VTD used to rely on spreadsheets emailed back and forth between the regional managers and the billing department. The process meant a lot of manual work for everyone involved, and was also very prone to errors. To fix both of these issues, eLabs developed a simple web application where the regional managers report their billing data each month. The billing department then has access to all the data in one location.

Using an agile approach and test-driven development allowed us to discover and verify the complex business rules. If we had tried to map them all out beforehand, some would inevitably have slipped through the cracks. The automated tests made sure that the end result - the billing data - was always correct and consistent.

Are you currently managing one of your important business processes by manually sending around Excel spreadsheets? We can help you automate the process to cut down on errors and save time. Contact us.

VTD Billing Support screenshot

May

Mittpostnummer Hyperlocal Portal


Client: ICE House AB

Year: 2009

Web site: http://41104.se/, et al

On behalf of our client ICE House we developed a platform for hyperlocal portals. The portals are accessed through nearly 7,000 domain names, one for each Swedish zip code. Visiting the portal for your local zip code will show you all kinds of things available near you, such as classified ads, houses on the market, and current offers from local companies. Think of it as the community bulletin board at your local grocery store, on steroids.

The sites use Google Maps to give the visitors an intuitive interface to browse the site. The posts you see change as you pan and zoom around the map. The sites' content is mainly provided by agents - people local to each portal's area. We also created a simple API that the developers at ICE House can use to post things that their spiders find while they crawl around the web looking for geocoded data.

This was also the first project where we used CarrierWave, a Ruby on Rails plugin for handling file uploads developed by our very own Jonas Nicklas. We'll talk more about CarrierWave in an upcoming post on this blog.

Mittpostnummer screenshot

ICE House has created a film describing Mittpostnummer (in Swedish). Check it out below. The sites are currently in a public beta while ICE House tunes its spiders and agents to fill the sites with content.