Let Conversations Write Your Tests

I consider myself an early adopter of Cucumber and have spent a lot of time using, thinking about, writing about and discussing Cucumber and, at a higher level, behavior-driven development. Lately, however, I’ve really been rethinking how I use Cucumber and why. Am I getting enough benefit from the overhead and abstraction of plain-text features? Could I do the same thing with RSpec if I just approached RSpec differently? Am I cuking it wrong?

This shift in thinking is due in part to Liz Keogh’s Step Away from the Tools and Dan North’s Whose domain is it anyway?. Both of these got me thinking about how loosely I’m using the term BDD and how much of an investment I’m making in a specific tool (Cucumber).

More Meaningful Planning

Having meaningful planning meetings with the customer/product owner is something many teams struggle with. Too often, we go too fast, don’t uncover enough detail, use the wrong language, don’t understand “done” and leave too many loose ends.

To combat this we draw screens, discuss workflows, ask leading questions and use a variety of other techniques. Even while doing these things, though, I was still frustrated with the other part of the planning process. I’ve never liked traditional tasking, and I’ve never liked the idea of translating all of the data gathered with the customer into some other form, only to express it later in a test in yet another form.

What if I wrote my tests during the planning meeting?

I decided that changing the way I gathered conditions of satisfaction, defined “done” and discussed workflows with my product owner would give me the biggest boost in value delivered, so one day I did just that.

For the following examples, assume that we’re adding a feature to an e-commerce website.

As a user I should be able to manage addresses so that I don’t have to enter my information more than once

As I’m discussing the feature with my product owner, I will discuss with them the possible scenarios, including workflows, required to use the feature. Some scenarios for this feature might include:

  • When I am on my account page
  • When I am creating a new address
  • When I am deleting an address

These scenarios might have scenarios of their own:

  • When I am editing an address
    • When I successfully edit my address
    • When editing my address fails

So far I’ve been able to ask the product owner something like “So when I’m editing my address and I miss some required fields, what happens? What do I see? Where do I go? What are the required fields?”. I can also draw pictures to explain the workflows and ask more questions: “What’s on this page? Where is the error message displayed? Do I see error messages for the fields that are missing?”.

For each scenario, I can capture assertions that come from answers to the questions I’m asking:

  • When I am on my account page
    • I should see a link to “My Addresses”
  • When I click on the “My Addresses” link from my account page
    • I should be on my addresses page
    • I should see “My Addresses” in the page heading
  • When I am on my addresses page
    • When I have existing addresses
      • I should see each address listed in format xyz
      • I should see an “edit” link for each address listed
    • When I don’t have existing addresses
      • I should see help text explaining how to add an address

If you’re an RSpec user, you might be thinking, “Hey, this looks like RSpec!”, and it does. During the planning meeting I can capture these scenarios and outcomes and then use them nearly verbatim in my RSpec acceptance tests. Even better, when I run my tests, I can use RSpec’s “documentation” format to produce output that’s almost identical to the scenarios above.
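To illustrate (this is a sketch, not code from an actual project), the captured scenarios can translate into an RSpec 1.x-era acceptance spec almost word for word. Leaving the example bodies empty marks them as pending, which is handy when you’re capturing them during the planning meeting itself:

```ruby
# Planning-meeting scenarios captured as pending RSpec examples.
# Run with the documentation formatter, e.g.: spec addresses_spec.rb --format specdoc
describe "Managing addresses" do
  describe "when I am on my account page" do
    it "should show a link to \"My Addresses\""
  end

  describe "when I am on my addresses page" do
    describe "and I have existing addresses" do
      it "should list each address"
      it "should show an \"edit\" link for each address listed"
    end

    describe "and I don't have existing addresses" do
      it "should show help text explaining how to add an address"
    end
  end
end
```

The specdoc formatter prints the nested describe/it strings indented, giving you the near-identical document you can later review with your product owner.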

The conversation required to really define these scenarios and outcomes is challenging, but at the same time, very rewarding. I have also found that it’s pretty powerful to be able to sit with the product owner and view two nearly identical documents side-by-side knowing that one is automated test output.

Cucumber, Culerity and Bundler Errors

I had recently switched a Rails 2.3.10 project, which makes heavy use of Cucumber and Culerity for JavaScript testing, to Bundler for dependency management, and I immediately started receiving an error like this whenever I tried to run a JavaScript test that invoked JRuby.

  Scenario: Ability to un-attend an event               # features/events.feature:22
JRuby limited openssl loaded. http://jruby.org/openssl
gem install jruby-openssl for full support.
    Given I am logged in as the admin                   # features/step_definitions/admin_steps.rb:12
      (eval):1:in `process_result': compile error
      (eval):1: Invalid char `33' in expression
      (eval):1: syntax error, unexpected tIDENTIFIER, expecting ']'
      Could not find rake-0.8.7 in any of the sources
                ^ (SyntaxError)

I searched and searched and couldn’t find a solution. I finally ended up using selenium to avoid this and that has been working okay for a while. Then today I decided to start looking again, figuring that something might have happened in the last few months, and I was lucky enough to stumble across this gist. The author was having a similar problem to mine, except that he was using rspec and steak. His solution was to add some code to his spec_helper.rb file. I added mine to my features/support/custom.rb file so that it would be loaded by cucumber. The fix is:

# Add this to features/support/custom.rb
  ENV['RUBYOPT'] = ENV['RUBYOPT'].gsub(%r{-r\s*bundler/setup}, '')

I don’t know exactly what is happening; the only comment in the gist near the fix is “suppress '-rbundler/setup' from RUBYOPT”. What’s important is that my Cucumber tests now pass using Culerity just like they did before.
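Whatever the underlying cause, the substitution itself is easy to sanity-check in isolation. Here’s an illustrative sketch with a made-up RUBYOPT value; note the \s in the pattern, which also catches a space between -r and bundler/setup:

```ruby
# Strip "-rbundler/setup" (with or without a space after -r) from a sample
# RUBYOPT value -- the same scrub applied in features/support/custom.rb.
sample  = "-I/usr/local/lib -rbundler/setup"
cleaned = sample.gsub(%r{-r\s*bundler/setup}, '')

spaced = "-r bundler/setup -rubygems".gsub(%r{-r\s*bundler/setup}, '')
```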

The Secret to Awesome Agile Development

With a little hard work and my secret development ingredient, you can be a better Agile Developer

Recently my fellow developers at Integrum and I took a survey that helped us assess our team with regard to our Agile practices. When taking the survey, and now reviewing it later on, I was struck by how many of the questions were related to a single concept. Many of the problem areas that can be uncovered by the survey, along with examples of one’s successes, come back to this one theme.

Are programmers nearly always confident that the code they’ve written recently does what it’s intended to do?
Consider the following questions:
  • Is there more than one bug per month in the business logic of completed stories?
  • Can any programmer on the team currently build and test the software, and get unambiguous success / fail result, using a single command?
  • When a programmer gets the latest code, is he nearly always confident that it will build successfully and pass all tests?
  • Are fewer than five bugs per month discovered in the teamʼs finished work?
  • After a line item is marked “complete” do team members later perform unexpected additional work, such as bug fixes or release polish, to finish it?
  • Are all programmers comfortable making changes to the code?
  • Do programmers have more than one debug session per week that exceeds 10 minutes?
  • Do unexpected design changes require difficult or costly changes to existing code?
  • Do any programmers optimize code without conducting performance tests first?
  • Are any team members unsure about the quality of the software the team is producing?

What’s the common theme among these questions, and the secret to better agile development? Testing, testing and more testing.

The negative outcomes implied by some of these questions can be solved by testing. Spending time fixing “completed” stories? Probably something you could have tested. Conversely, the positive benefits implied by other questions can be had via testing. Want to make your code more inviting and easier to deal with for new team members or people unfamiliar with the project? Give them robust and well-written tests.

How To: Setup RSpec, Cucumber, Webrat, RCov and Autotest on Leopard

RSpec, Cucumber, Webrat, RCov and Autotest are a powerful combination of tools for testing your Rails app. Unfortunately, getting them all to work nicely together can be a bit of a challenge. I recently configured a development environment from scratch on OS X 10.5 Leopard and kept track of all of the little details.


I’m assuming you’ve got the following installed:

  • ruby
  • ruby gems 1.3.1
  • Apple development tools
  • git
  • rails >= 2.3.2
  • You’ve added github to your gem sources (gem sources -a http://gems.github.com)

RSpec & RSpec-Rails

First let’s grab the rspec1 and rspec-rails2 gems.

sudo gem install rspec
sudo gem install rspec-rails


Next we’ll install the cucumber3 gem.

sudo gem install cucumber


Webrat4 is used by cucumber to simulate a browser for your integration tests. Webrat will also install nokogiri5.

sudo gem install webrat


I thought RCov6 would get installed with RSpec, but it wasn’t for me. You might not need to do this, but just to make sure…

sudo gem install rcov


Autotest7 comes from ZenTest8 and allows you to have a kick ass workflow where you are constantly running relevant tests and less-constantly automatically running your entire test suite.

sudo gem install ZenTest

Optionally, Thoughtbot’s Factory Girl

Factory Girl9 is a really helpful fixture-replacement (and more) gem to use in conjunction with cucumber; check out their much better explanation.

sudo gem install thoughtbot-factory_girl --source http://gems.github.com

Optionally, Carlos Brando’s Autotest Notification

While autotest normally runs in a terminal window, it can be set up to hook into applications like Growl or Snarl. The Autotest Notification10 gem helps make this setup a lot easier.

You will need Growl installed and configured for this step; the installation instructions on this gem’s GitHub page are very easy to follow.

sudo gem install carlosbrando-autotest-notification --source=http://gems.github.com

Next you need to turn autotest notifications “on” by running the an-install command the gem provides:

an-install


A Sample Rails App

Let’s create a sample rails app for the rest of this guide.

rails sample-app

Configuring Environment Variables

Autotest relies on some environment variables to run all of your features and specs correctly. If autotest “hangs” after you try to run it, or it just never seems to be watching your specs or features, this will most likely solve your problem.

Open the test environment file sample-app/config/environments/test.rb and add the following.

ENV['RSPEC'] = "true"
ENV['AUTOFEATURE'] = "true"

These lines tell autotest to run, and look for changes to, your specs (rather than Test::Unit tests) and your cucumber features.


If you don’t want to add these environment variables to every rails project you’ve got on your machine, you can also choose to set them as environment variables in your .bash_profile or .bashrc (or whatever shell you’re using) files.

export AUTOFEATURE=true
export RSPEC=true

Unpacking Gems

Next let’s freeze (unpack) some gems that we’ll be using in our app. I’ve run into problems trying to use the system gems with cucumber, rspec and webrat, especially when I have multiple versions of any of them installed. Unpacking them into my rails app solves this problem for me.

mkdir sample-app/vendor/gems
cd sample-app/vendor/gems
gem unpack rails
gem unpack rspec
gem unpack rspec-rails
gem unpack cucumber

Because webrat (and nokogiri) are native gems, that is, they are built locally on your machine based on its architecture, we won’t unpack those.

config.gem support
The current accepted practice when using Rails 2.3, as suggested by the RSpec developers, is to use Rails’ config.gem functionality.

Open sample-app/config/environments/test.rb and add the following lines:

config.gem "rspec", :lib => false, :version => ">= 1.2.0" 
config.gem "rspec-rails", :lib => false, :version => ">= 1.2.0" 
config.gem "cucumber", :lib => false, :version => ">= 0.2.3"
config.gem "thoughtbot-factory_girl", :lib => "factory_girl", :source => "http://gems.github.com"
config.gem "webrat", :lib => false, :version => ">= 0.4.3"
config.gem "nokogiri", :lib => false, :version => ">= 1.2.3"

Your version numbers may be different, but these are all current at the time of writing.

Bootstrapping RSpec and Cucumber

Before you can get very far with rspec or cucumber you need to run the bootstrapping scripts to give yourself the default files and directories.

# From inside your rails app sample-app/
script/generate rspec
script/generate cucumber

If you plan to use the Factory Girl gem, create a file to hold your factories. Depending on where you’re going to use your factories the most, you might want to save the file in either spec/ or features/. I chose the latter.

touch sample-app/features/factories.rb
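For what it’s worth, a minimal factory in that file might look like this (the model and attributes are made up for illustration; this is the Factory Girl 1.2-era syntax):

```ruby
# features/factories.rb -- a hypothetical factory for a User model
Factory.define :user do |u|
  u.name  "Test User"
  u.email "test@example.com"
end
```

Your steps and specs can then build records with Factory(:user).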

Getting Accurate RCov Data

By default RCov is setup to only use your specs when calculating code coverage. If you’re using Cucumber and RSpec, you’ll obviously want to include both types of tests to calculate your project’s true code coverage.

I picked up this rcov rake task from my co-worker Jay McGavren. It does all of the heavy lifting for you; we’ll just need to make a couple of changes.

Drop this file into sample-app/lib/tasks/rcov.rake and use it by calling rake rcov:all from your terminal.

require 'cucumber/rake/task' # I had to add this
require 'spec/rake/spectask'

namespace :rcov do
  Cucumber::Rake::Task.new(:cucumber) do |t|
    t.rcov = true
    t.rcov_opts = %w{--rails --exclude osx/objc,gems/,spec/,features/ --aggregate coverage.data}
    t.rcov_opts << %[-o "coverage"]
  end

  Spec::Rake::SpecTask.new(:rspec) do |t|
    t.spec_opts = ['--options', "\"#{RAILS_ROOT}/spec/spec.opts\""]
    t.spec_files = FileList['spec/**/*_spec.rb']
    t.rcov = true
    t.rcov_opts = lambda do
      IO.readlines("#{RAILS_ROOT}/spec/rcov.opts").map {|l| l.chomp.split " "}.flatten
    end
  end

  desc "Run both specs and features to generate aggregated coverage"
  task :all do |t|
    rm "coverage.data" if File.exist?("coverage.data")
    # Run the features first, then the specs, aggregating results into coverage.data
    Rake::Task["rcov:cucumber"].invoke
    Rake::Task["rcov:rspec"].invoke
  end
end

The important part here is the rcov_opts line that excludes our features directory. We obviously don’t need or want rcov telling us that our feature files are not “covered”. To solve this problem we’ve simply excluded the features directory from rcov’s processing.

We also need to slightly modify sample-app/spec/rcov.opts to get the full rspec + cucumber coverage data.

Your rcov.opts should look like this:

--exclude "spec/*,gems/*,features/*" 
--aggregate "coverage.data"

We again want to ignore our cucumber features and we also want to tell rcov to aggregate data in a file called coverage.data. This is used in the above rake task.
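The rcov_opts lambda in the rake task above does nothing more than read this file and split it into individual arguments. A small sketch, with the file contents inlined as an array instead of read from disk:

```ruby
# Mirror of the rake task's lambda: turn rcov.opts lines into an argument list.
opts_lines = ['--exclude "spec/*,gems/*,features/*"', '--aggregate "coverage.data"']
rcov_args  = opts_lines.map { |l| l.chomp.split " " }.flatten
```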

Write Some Specs and Features!

Act like you know what you’re doing and write some models, controllers, whatever. Add some specs and features too.

Autotest Workflow

Open a terminal and make your way to your sample rails app and fire up autotest. You might see something like the following, depending on how many specs and features you’ve got.

$> autotest
loading autotest/cucumber_rails_rspec
Finished in 0.06276 seconds
3 examples, 0 failures
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby /Library/Ruby/Gems/1.8/gems/cucumber-0.2.3/bin/cucumber --format progress --format rerun --out /var/folders/Aq/Aqp06i3dFnqse+tQgQA+1++++TI/-Tmp-/autotest-cucumber.75956.0 features
4 scenarios
17 passed steps
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby /Library/Ruby/Gems/1.8/gems/rspec-1.2.2/bin/spec --autospec spec/models/intern_spec.rb -O spec/spec.opts 
Finished in 0.062995 seconds
3 examples, 0 failures
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby /Library/Ruby/Gems/1.8/gems/cucumber-0.2.3/bin/cucumber --format progress --format rerun --out /var/folders/Aq/Aqp06i3dFnqse+tQgQA+1++++TI/-Tmp-/autotest-cucumber.75956.1 features
4 scenarios
17 passed steps

The REALLY important stuff

  1. make sure you’ve got ENV['AUTOFEATURE'] = "true" in your test.rb, otherwise autotest won’t run your features automatically
  2. make sure you’ve got ENV['RSPEC'] = "true" in your test.rb (or exported in your bash profile) or else autotest won’t run your specs automatically
  3. make sure you’ve got --aggregate "coverage.data" in your spec/rcov.opts file if you’re going to use the above rake task and want combined rcov coverage data between rspec and cucumber
  4. make sure you’re excluding the features directory from rcov where required, or else you’ll end up with misleading rcov data

Gem Versions

Here’s a list of the current gems and their versions that I used in preparing this guide.

*** LOCAL GEMS ***
actionmailer (2.3.2, 1.3.6, 1.3.3)
actionpack (2.3.2, 1.13.6, 1.13.3)
actionwebservice (1.2.6, 1.2.3)
activerecord (2.3.2, 1.15.6, 1.15.3)
activeresource (2.3.2)
activesupport (2.3.2, 1.4.4, 1.4.2)
acts_as_ferret (0.4.1)
addressable (2.0.2)
builder (2.1.2)
capistrano (2.0.0)
carlosbrando-autotest-notification (1.9.1)
cgi_multipart_eof_fix (2.5.0, 2.2)
cucumber (0.2.3)
daemons (1.0.9, 1.0.7)
data_objects (0.9.11)
diff-lcs (1.1.2)
dnssd (0.6.0)
extlib (0.9.11)
fastthread (1.0.1, 1.0)
fcgi (0.8.7)
ferret (0.11.4)
gem_plugin (0.2.3, 0.2.2)
highline (1.2.9)
hpricot (0.6)
libxml-ruby (
mongrel (1.1.4, 1.0.1)
mysql (2.7)
needle (1.3.0)
net-sftp (1.1.0)
net-ssh (1.1.2)
nokogiri (1.2.3)
polyglot (0.2.5)
rack (0.9.1)
rails (2.3.2, 1.2.6, 1.2.3)
rake (0.8.4, 0.7.3)
rcov (
RedCloth (3.0.4)
rspec (1.2.2)
rspec-rails (1.2.2)
ruby-openid (1.1.4)
ruby-yadis (0.3.4)
rubynode (0.1.3)
sources (0.0.1)
sqlite3-ruby (1.2.1)
term-ansicolor (1.0.3)
termios (0.9.4)
textmate (0.9.2)
thor (0.9.9)
thoughtbot-factory_girl (1.2.0)
treetop (1.2.5)
webrat (0.4.3)
ZenTest (4.0.0)

El Fin

Hopefully this guide was useful or had that one little step that you needed to get everything working. I’m sure this will all be out of date in the coming weeks, but I’ll try to keep it as up-to-date as possible. If you see any errors, or can better explain some of the missing pieces, please post a comment. Thanks!

1 http://github.com/dchelimsky/rspec/tree/master

2 http://github.com/dchelimsky/rspec-rails/tree/master

3 http://github.com/aslakhellesoy/cucumber/tree/master

4 http://wiki.github.com/brynary/webrat

5 http://github.com/tenderlove/nokogiri/tree/master

6 http://rubyforge.org/projects/rcov/

7 http://www.zenspider.com/ZSS/Products/ZenTest/#rsn

8 http://www.zenspider.com/ZSS/Products/ZenTest/

9 http://github.com/thoughtbot/factory_girl/tree/master

10 http://github.com/carlosbrando/autotest-notification/tree/master

2009-12-08 – Removed “sudo” when describing how to unpack gems (h/t xdotcommer)

RSpec Shared Example before(:each) Gotcha

Shared example groups are a great feature of RSpec that help you simplify your tests and keep your code DRY. You set up shared example groups almost exactly like you would a regular set of specs, but these similarities can be slightly misleading.

Below we have an example model, spec and shared example group. Our Dog model has its own set of functionality, but as a mammal it should still have some aspects of being a mammal. We’ve got some specs in a shared example group that we use for testing all of our mammal models to make sure things don’t get too out of whack in the universe.

Our Example Model

class Dog
  attr_accessor :name, :mammal

  def initialize
    self.mammal = true
  end

  def greet
    "Hi, I'm #{self.name}, woof woof!"
  end
end

Our Example Spec

describe Dog do
  before(:each) do
    @animal = Dog.new
    @animal.name = "Bruno"
  end

  it_should_behave_like "a mammal"

  describe "Greet" do
    it "should respond with its name and a greeting" do
      @animal.greet.should == "Hi, I'm Bruno, woof woof!"
    end
  end
end

Our Shared Spec

describe "a mammal", :shared => true do
  it "should really be a mammal" do
    @animal.mammal.should be_true
  end
end

A Typical before(:each)

Typically, when you’ve got a describe block, you might use before(:each) to setup some scenario that is used for each spec in that describe block, pretty normal RSpec stuff. We’re using it above in our example spec to create a new Dog object and set that dog’s name.

Using before(:each) in a shared spec

What if you wanted to use a before(:each) in your shared spec? Expanding on our example above, we can do something like this.

describe "a mammal", :shared => true do
  before(:each) do
    @animal.stub!(:has_body_hair?).and_return(true)
  end
  it "should really be a mammal" do
    @animal.mammal.should be_true
  end
end

Based on typical RSpec behavior, one would think that stubbing the has_body_hair? method on the animal instance would only apply to the specs inside the describe block of the shared example group. However, by specifying in the Dog spec that a dog “should behave like” a mammal, and thus using the shared spec, that stub will apply to all subsequent “it should” blocks in your model spec.

What if, for example, we had the following in our Dog spec?

  describe "Mutate into Lizard Dog" do
    # dog.mutate will remove body hair and make the dog cold blooded
    it "should mutate into a new species" do
      @animal.mutate
      @animal.has_body_hair?.should be_false
    end
  end

If we include this in our Dog spec, below the inclusion of the shared example spec, our test will fail. We’ve already stubbed out the has_body_hair? method as part of our shared example group, so when we call it down here in this completely separate describe block, RSpec just uses the stub we set up previously.
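One way around the leak, assuming you control the shared group, is to scope the stub to the examples that actually need it instead of putting it in a before(:each):

```ruby
# Sketch: stubbing inside the example means the stub doesn't ride along into
# the including spec's other describe blocks via the shared before(:each).
describe "a mammal", :shared => true do
  it "should really be a mammal" do
    @animal.mammal.should be_true
  end

  it "should have body hair" do
    @animal.stub!(:has_body_hair?).and_return(true)
    @animal.has_body_hair?.should be_true
  end
end
```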

It might be a design problem if…

Now, while I’m considering this a gotcha, it may be that this is expected behavior; I couldn’t find anything specific when researching this “bug” originally. It is also possible that stubbing behavior in shared example groups is frowned upon, and I’m just “doing it wrong”.

Ultimately, I tried using patterns that made sense to me and seemed to be in line with how RSpec works in general. A stubbed method inside the before(:each) of a describe block is usually only applicable to the specs and nested describes contained within. When I realized that this is not the case with shared example groups, it seemed like a gotcha.