… or When Your Customer Doesn’t Share Your Framework’s Opinion

Lately, I’ve been writing a lot of syndication APIs. It’s all about formats, and I happen to agree with Rails that ATOM is the primary XML-based syndication container I want to support. Recently, I had a customer who did not share Rails’ enthusiasm for ATOM, and asked for RSS.

Now clearly, I really didn’t relish making one show.rss.builder for every show.atom.builder I had. Since I’d already used Rails’ atom_feed helper everywhere I had a feed, and since the mappings between ATOM and RSS are reasonably close, I set about trying to make an RSS-ified version of atom_feed.

There are a few issues with this approach - chief amongst them, ATOM has concepts which RSS does not. For these, I've adopted an approach similar to .NET's SyndicationItem class: drop into the ATOM namespace when you've reached the limits of RSS's capabilities.

All you need to do is make sure this helper is required somewhere. The easiest no-code solution is to drop it into /app/helpers. Wherever you require an RSS representation, use

    atom_feed :rssify => true

and continue to refer to ATOM tags and concepts as you would normally. Whistle loudly to yourself while doing this. See? You can almost pretend RSS doesn't exist.
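In spirit, the helper just maps Atom concepts onto their closest RSS 2.0 elements and namespaces whatever's left over. A hypothetical Ruby sketch of that mapping idea (the table and names here are illustrative, not the actual helper):

```ruby
# Illustrative only, not the real helper: Atom concepts collapse onto their
# closest RSS 2.0 elements, and anything RSS can't express falls through to
# the atom: namespace, SyndicationItem-style.
ATOM_TO_RSS = {
  'title'     => 'title',       # same element name in both formats
  'summary'   => 'description', # Atom's summary is RSS's description
  'published' => 'pubDate',     # publication date
  'id'        => 'guid'         # unique identifier
}

def rssify_element(atom_name)
  ATOM_TO_RSS.fetch(atom_name) { "atom:#{atom_name}" }
end

rssify_element('summary') # => "description"
rssify_element('updated') # => "atom:updated" (no RSS equivalent)
```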

This post is a bit of a waymarker for people who are having a very specific problem (I tried adding this as a comment to this post but was knocked back by an over-zealous spam filter). If you’re using Thinking Sphinx 2.0.1 and acts_as_taggable_on 2.0.6, are looking to index your named tag collections, and are finding your phrasing to the index building gods is resulting in errors or broken SQL, read on.

If you’re using something other than the base :tags collection, you’ll need the following form:

  acts_as_taggable_on :keywords

  define_index do
    indexes keyword_taggings.tag.name, :as => :keywords
  end

Note that the indexes builder requires that you prepend your collection name to _taggings, and also that you dereference the text for the tag through tag.name.
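The naming rule is mechanical: for a collection :foos, the association is named "foo_taggings". A throwaway sketch of that convention (illustrative only, not part of either gem; the crude trailing-"s" singularisation covers the common case):

```ruby
# For a collection like :keywords, acts_as_taggable_on defines an association
# named "keyword_taggings", and that association name is what the indexes
# builder wants. Crude singularisation: strip one trailing "s".
def taggings_association_for(collection)
  "#{collection.to_s.sub(/s\z/, '')}_taggings"
end

taggings_association_for(:keywords) # => "keyword_taggings"
```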

You *used* to be able to use the shorter indexes keywords.name, :as => :keywords form, but as of 1st Dec 2010 it's not working (see https://github.com/freelancing-god/thinking-sphinx/issues/#issue/167).

In Rails 2, I used to use

    ActionController::Routing::Routes.routes_for_controller_and_action_and_keys(controller, action, nil)

to get a list of routes that matched a hash. With the advent of Rack-based routing in Rails 3, this approach went the way of the dodo.

I spent a bit of time nosing around in the source for the Rails 3-compatible way of doing this. Like a lot of other useful stuff you can find what you’re looking for under Rails.application, in this case Rails.application.routes, which gives us an ActionDispatch::Routing::RouteSet. This isn’t actually an enumerable or a Set as you’d expect. It has an Array property routes, however. Now if I want a list of routes that use a controller and an action, I can write

    def routes_for_controller_and_action(controller, action)
      Rails.application.routes.routes.select do |r|
        r.defaults[:controller] == controller && r.defaults[:action] == action
      end
    end

This is really useful in a usage logging scenario, as we can put a report in front of admins to say which areas of the site are being used most and use a familiar-looking URL rather than the MVC implementation-specific “controller” and “action” nomenclature.
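The shape of that query is easy to see with plain Ruby objects standing in for ActionDispatch's route objects (the route data below is made up for illustration):

```ruby
# A Struct standing in for ActionDispatch's route objects: each route has a
# path and a defaults hash containing its :controller and :action.
Route = Struct.new(:path, :defaults)

ROUTES = [
  Route.new('/shows',     { :controller => 'shows', :action => 'index' }),
  Route.new('/shows/:id', { :controller => 'shows', :action => 'show'  }),
  Route.new('/admin',     { :controller => 'admin', :action => 'index' })
]

# Same select-over-defaults idea as the Rails 3 version above.
def routes_for_controller_and_action(routes, controller, action)
  routes.select do |r|
    r.defaults[:controller] == controller && r.defaults[:action] == action
  end
end

matches = routes_for_controller_and_action(ROUTES, 'shows', 'index')
matches.map(&:path) # => ["/shows"]
```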

A typical DBA concern in larger shops is that once stored procedures no longer figure (or at least, are used where they are genuinely needed rather than as a mandate), they have no way of knowing what kind of NH-generated SQL might cause problems in production. A while ago I bought Ayende’s NHibernate Profiler (hereafter NHProf) as part of a way to try to alleviate these concerns.

As I’m sure everyone knows by now, NHProf is a nice tool which helps you spot when you’re doing something boneheaded when using NHibernate and gives you handy suggestions on what to do to fix it. It’ll also show you every SQL statement you’re throwing at your database, nicely formatted for your convenience. It groups these statements into individual ISessions. So if I have three integration tests based on Northwind that look like this (InTransactionalSession is just shorthand for using(ISession) ... using(ITransaction)):

    public void CanLoadCustomer()
    {
        InTransactionalSession(session =>
        {
            var v = session.Load<Customer>("ALFKI");
            Assert.That(v.CompanyName, Is.Not.Null & Is.Not.Empty);
        });
    }

    public void CanGetCustomersByRegion()
    {
        InTransactionalSession(session =>
        {
            IList<Customer> customers = session.CreateQuery("from Customer c where c.Region = 'OR'")
                                               .List<Customer>();
            Assert.That(customers.Count, Is.GreaterThan(0));
        });
    }

    private const string TestCustomerId = "FLURB";

    public void CanDoRoundTripPersistenceTest()
    {
        // Remove any leftover test customer before the round trip
        InTransactionalSession(session =>
            session.Delete(string.Format("from Customer c where c.CustomerId = '{0}'",
                                         TestCustomerId)));

        InTransactionalSession(session =>
            new PersistenceSpecification<Customer>(session)
                .CheckProperty(c => c.CustomerId, TestCustomerId)
                .CheckProperty(c => c.CompanyName, "Flurb's Jelly")
                .CheckProperty(c => c.ContactName, "Mr. Flurby")
                .CheckProperty(c => c.Region, "OR")
                .VerifyTheMappings());
    }
I can see four InTransactionalSession calls in those three tests, and the output from NHProf gives us what we might expect:

NHProf Numbered Sessions

Perfectly accurate, but if I had a hundred tests I’d struggle to notice which test had caused a problem and I’d lose time tying a session back to a test (ok, not too much time – NHProf has a stack trace tab which lets you double-click jump back to your code, after all, but I like “at-a-glance”). Post-NHProf v1.0, Ayende asked for feedback on what should go into future releases. Since he’d already covered showing DB plans, I thought I’d take a punt on named sessions.

A week or two back Ayende mailed me to say he’d added support (great customer service!). Now all you’ve got to do is call

NHibernateProfiler.RenameSessionInProfiler(session, sessionName);

and the next session that NHProf hears about will take that name. Combining this with a stack trace gives me exactly what I wanted in just a couple of overloads:

    private string lastTestMethodName;
    private int sameMethodCount;

    protected void InTransactionalSession(Action<ISession> action)
    {
        // The calling test method is one frame up the stack
        string currentTestMethodName = new StackTrace().GetFrames()[1].GetMethod().Name;
        sameMethodCount = currentTestMethodName == lastTestMethodName ? sameMethodCount + 1 : 1;

        string methodName = string.Format("{0} #{1}", currentTestMethodName, sameMethodCount);
        InTransactionalSession(methodName, action);

        lastTestMethodName = currentTestMethodName;
    }

    protected void InTransactionalSession(string sessionName, Action<ISession> doDataAccess)
    {
        using (ISession session = SessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            NHibernateProfiler.RenameSessionInProfiler(session, sessionName);
            doDataAccess(session);
            tx.Commit();
        }
    }

This results in something that lets me tie sessions to methods at a glance:

NHProf Named Sessions

At the moment, this is great for people new to NH who want a quick way of demonstrating concepts, but it’s also great as a quick way of running a few tests at once and getting an idea of where your problem areas are. Cheers Ayende.

In my last post, I bemoaned the fact that WCF and ReST are still strange bedfellows and that the ReST starter kit, while promising, was still in no way ready for prime time. Although a new preview release was uploaded last October, the license is still for preview software and requires that you upgrade to a commercial version at the time such a thing might be released. In any case, this post is mostly about why I ended up not using it (way back in July last year), and why I ended up going with a relative unknown that turned out to be so much more suitable – OpenRasta.

WCF’s support for ReST

WCF is designed as a transport-agnostic support framework for services based on Operations (which have a natural mapping to methods) and Contracts (which have a natural mapping to behaviour-free data transfer objects). This means that your design is inherently RPC in nature. Initially, we thought this’d be a good fit. We knew we needed ReST (or at least, POX over HTTP). This being the Syndication arm of NHS Choices, we also thought that in order to conform to government standards we might need SOAP as well. WCF seemed to fit the bill – we could write our operations as needed and just host a couple of endpoints: one SOAP, one ReST. To start with, the development model also seemed simple and familiar.

Well, we had articles to syndicate, we had images, we had some videos and audio clips. And it’s at this point that WCF started to let us down. We also knew we needed various representations of each thing we were syndicating (JSON for digital set-tops and handset consumers, XHTML for a discovery channel), and WCF out of the box didn’t support multiple representations.

For a while we used the open source library WcfRestContrib, which goes a long way to providing some kind of content negotiation for WCF. Based on the HTTP Accept header, WcfRestContrib will switch content type according to its own formatters. Even this has a limitation, though. For example, a JSON representation of a video makes almost no sense, and yet we could only supply a fixed set of representations which had to apply to all URIs. At this point it started to become clear that we were asking too much of a service-based technology with strong RPC overtones. While we could have extended WCF, the learning curve was steep and unpalatable. In addition, because we were primarily focussed on the usability of our ReST URI set, our “Operations” had become so ReST-centric we wouldn’t really even have had the mooted “SOAP-in-parallel” benefit anyway.

Stumbling across OpenRasta

The comments in this StackOverflow post put me onto OpenRasta. Two things put me off in the middle of last year, though: there wasn’t much documentation and there appeared to be only a single committer. However, a number of things encouraged and intrigued. The fluent configuration gave me warm fuzzies immediately, both from the point of view of meeting our conneg requirements instantly and also being vastly easier to read than the WCF XML configuration:

        public void Configure()
        {
            using (OpenRastaConfiguration.Manual)
            {
                ResourceSpace.Has.ResourcesOfType<Customer>()
                    .AtUri("/customers/{customerId}")
                    .HandledBy<CustomersHandler>()
                    .AsXmlDataContract()
                    .And.AsJsonDataContract();
            }
        }

Score! Two representations that are attached specifically to that URI, with some class called CustomersHandler providing some kind of output. Wondering what kind of output that could be, I was won over by the absolute simplicity of the handlers:

    public class CustomersHandler
    {
        public Customer Get(int customerId)
        {
            return CustomerRepository.Get(customerId);
        }
    }

This simplicity is entirely brought about by OpenRasta’s built-in, convention-based approach to method matching. No configuration files were harmed during the mapping of this URI to its method; OpenRasta simply looks for a method starting with “Get” and with a parameter named the same thing as you put in the squiggly braces in the URI, and ends up returning a POCO Customer object. Another beautiful thing was that no handlers need to inherit from any special kind of object, leaving your one shot at inheritance still available to you.

Note: this attention to preserving individual developer productivity appears again and again in OpenRasta. You’ll go to do something that you thought you might have to write (creating URIs for other resources, or HTTP digest authentication, for example) and you’ll find it baked into the framework.

So if that handler could return a straightforward POCO, what was responsible for wrapping that up in the format that the HTTP GET had asked for with its Accept header? The answer lies with codecs, and while OpenRasta ships with most that you’d need (for (X)HTML, XML and JSON), again the extensibility model wins with its simplicity – you simply implement a couple of interfaces with one method each to handle encoding and decoding.

I’ve stated many things I like about OpenRasta but I’ve barely scratched the surface. I’ve not mentioned extending the pipeline (which meant we were able to bolt API key-based resource governance right in) or even the core developmental nicety that is OpenRasta’s built in support for IoC containers. Windsor and Ninject support are there out of the box, though there’s a basic implementation that’ll serve 80% of small to medium-sized projects anyway. When you start to put together HTML pages, you can do that with the familiar ASPX markup model – though even that has had a spruce-up and has extended the ASP.NET build provider in some useful ways.

In Summary

Why we found WCF isn’t a great fit for ReST

  • WCF is designed as a transport-agnostic support framework for services based on Operations and Contracts. This means that your design is inherently RPC in nature. For simple cases, this might be enough.
  • ReST is an afterthought in WCF (hey, we *can* do this quite easily and we’ll service the needs of 80% of developers and they can continue to think about methods in services as the One True Way). However, given that it’s so different to, say, SOAP, it requires that you structure your WCF app in such a way that it can barely reap any of the rewards of transport agnosticism anyway
  • Content negotiation is not a given in WCF. It requires use of third party extensions such as WcfRestContrib, and even then you cannot negotiate content on a per-URI basis. For example, I can’t say that at /videos/1 I will have a text/html representation and at /conditions/cancer/introduction I will have text/html and application/json. Even with WcfRestContrib you will only have a fixed set of representations which will have to apply to all URIs.
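The per-URI negotiation described in that last bullet can be sketched in a few lines. This Ruby sketch uses made-up data mirroring the example URIs above; it is not any real framework's API, and it ignores Accept-header q-values for brevity:

```ruby
# Each URI declares its own set of representations, as OpenRasta allows and
# WCF (even with WcfRestContrib) does not. Data here is illustrative.
REPRESENTATIONS = {
  '/videos/1'                       => ['text/html'],
  '/conditions/cancer/introduction' => ['text/html', 'application/json']
}

# Return the first media type in the client's Accept header that this URI
# actually offers, or nil when there is no acceptable representation.
def negotiate(uri, accept_header)
  offered   = REPRESENTATIONS.fetch(uri, [])
  preferred = accept_header.split(',').map { |t| t.split(';').first.strip }
  preferred.find { |t| offered.include?(t) }
end

negotiate('/conditions/cancer/introduction', 'application/json, text/html') # => "application/json"
negotiate('/videos/1', 'application/json') # => nil (no JSON representation of a video)
```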

Why we found that OpenRasta is a great fit for ReST

  • OpenRasta – like HTTP since the mid 90’s – focuses on resources and their representations, not RPC.
  • A resource is just a POCO which is addressable by one or more URIs
  • A representation is just an encoding of the resource (JSON, XML, byte stream) negotiated on what you said you’d Accept in your HTTP headers
  • A URI can have as many or as few representations available as it requires.

WCF is designed around the concepts of Services and Contracts. HTTP, and by extension ReST, are not – they are about resources and representations of those resources. OpenRasta understands this and has been designed from the ground up to support ReST-based architectures simply and elegantly – it’s a natural fit with HTTP, where WCF is an awkward mismatch.

I needed two WCF books from Safari Online and WcfRestContrib to even begin to implement what we needed in WCF and it still fell short of what we wanted to achieve and compromised the design at the same time. OpenRasta not only freed us from WCF’s overbearing and config-heavy complexity – its elegant MVC model (with IoC at the core) made our code easy to write, easy to read, and above all a pleasure to maintain.

And if a project is only to have a single committer, better it be a “self-proclaimed, egotistical doofus” who also happens to be a borderline genius.

© 2014 ZephyrBlog Suffusion theme by Sayontan Sinha