Good Grief, People, stop with the local gems

From Err The Blog: Vendor Everything

For hosted environments? Sure.

But if you are responsible for the application AND the server? (or your shop is?)


Not just no. But HELL No. And I’d really like to write “HELL No” in an <h1> but I’m going to avoid that for the sake of sanity.

I’ve yet to figure out why the Rails community has this inbred desire to cause harm to their reputation in organizations that aren’t pure dev shops. I’m not even talking about the enterprise, I’m talking about small business, non-profits, companies they contract with, academic shops…

I don’t disagree with Chris’s reason here for vendoring WayCoolFunkyGemThatYouThinkIsTheBeesKnees (WCFGTYTITBK) – not being “That Person” who breaks the build (and annoys your peeps) is laudable. But really – I don’t buy it. If you are a small Rails shop and you plan on using test/spec or any other WCFGTYTITBK – for goodness sake, communicate that with the rest of your team (hello? IM? email? even that ringy thing on the hip or desk we all hate to use?)

If you think that someone else’s code is so great that it ought to be in your application – well then it ought to be in everyone’s install too. Go get up and install it for them (take the train if you can’t fly there). That’s what good developers do. They have sane development environments set up and they are completely proficient at “gem install blah” – which makes them completely aware that a brand-new third-party dependency just showed up in the application. I dare say that “gem install blah” is a lot less intrusive than “why in the sam heck did 1000 lines of crap just show up in vendor – I evaluated that third-party code last month and it was crap then and is crap now”

Local copies of every gem are madness – especially gems that are core to your application (and would break builds). It creates situations where the whole team (and often the people that run the servers and are ultimately responsible for the application) is not fully aware of the dependency needs of the application. Let me repeat that again – EVERY DEVELOPER ON A SMALL TEAM SHOULD KNOW EXACTLY WHAT AN APPLICATION DEPENDS ON, WHAT VERSION, AND SHOULD TASK THEMSELVES WITH CHECKING UP ON THOSE VERSIONS.

One app? not a big deal either way. 5 or 6 apps running in the same environment? It’s a Big deal. (of course it’s probably a complete architectural failure to have your 5 person team working on 5 or 6 apps at the same time – but that’s another post)

We had pinned rails in our applications – at least until the “Upgrade Your Rails NOW NOW NOW” event – and going through multiple applications on multiple staging servers and multiple versions was a complete pain in the ass. Okay, so that’s a little hyperbolic – but it was more trouble than it needed to be. You upgrade the server – when you control the server and your application – and you know that the dependencies are handled.

I had these arguments a few months ago with a developer that was contracting with us – and it was as if imposing a little structure on the process (certainly nothing like some waterfall corporate development shop – just “Hey – tell us exactly why you are using edge rails so that we all understand the issues”) meant we were impeding progress (“no, we are trying to make sure we understand what you’ve done when you get bored with us”). I know that new software introductions are disruptive. But that’s what developers (and I’m counting myself here for the sake of that sentence) do. Things break, we tell others, and we fix them. (and some of my other colleagues think WE are the ones lacking planning – you have no idea)

While every application should have a definite lifecycle – you know, and I know, and everyone else knows that in many, many, many environments apps get written, and they live well beyond the developers, the systems people, and everyone else that ever had any responsibility for them – and local copies of everything create a maze of having to upgrade the third-party dependencies all over the place when some script kiddie decides to take advantage of that 2-year-old failure to sanity-check POST data.

Rails developers have to start figuring out that someone beyond them is going to be responsible for inheriting what they’ve done – and they have to start thinking more seriously about dependencies, third-party code, add-ons, and the lifecycle of what they do. It’s like two-digit year values all over again. Seriously people, no amount of “unit tests,” “syntactic sugar,” and vendor kung-fu will ever trump communication and documentation (I don’t mean constantly out-of-date systems-analyst documentation – I mean documentation about decisions and why something was done, or why it was added, etc.)

Rails 1.2 Route Changes Are a Pain in the Arse

So, the rails routing changes that have apparently happened in Rails 1.2? Yeah – not a fan.

Thankfully, last night – the night before our planned production transition to Rails 1.2 – we got spidered by crawlers looking for old, vulnerable copies of phpMyAdmin. They hit a systems testing server.

I don’t think I’ve ever said “thankfully” with regard to a spider ever. This is a first.

That generated well over a hundred emails – where Rails oopses with:

A ActionController::RoutingError occurred in application#index:

  no route found to match "/whatever" with {:method=>:get}

We were able to find some information about using a catchall route at the bottom of routes.rb:

map.connect '*path', :controller => 'application', :action => 'show404', :requirements => { :path => /.*/ }

With a happy little trees show404 method to show a 404.rhtml and return a 404.
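For completeness, here’s roughly what that pair might look like – the template path and the controller wording are my assumptions, not the actual code from this app:

```ruby
# config/routes.rb -- the catch-all from above goes at the very bottom,
# so every real route gets a chance to match first:
#   map.connect '*path', :controller => 'application', :action => 'show404'

# Hypothetical show404 action: render the 404.rhtml template and, crucially,
# send an actual 404 status instead of a 200 (or a 500).
class ApplicationController < ActionController::Base
  def show404
    render :template => 'application/404', :status => 404
  end
end
```

The important bit is the :status option – rendering a pretty page with a 200 would lie to the crawlers that caused all this in the first place.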

This really isn’t the annoying part – I like the routes idea, and I like how easy it is to fix this.

What I don’t like is that between Rails 1.1.6 (or even the “edge rails” for 1.2) and 1.2.x – the routing changed such that requests to URLs that are not handled by a default route (e.g. :controller/:action/:id) result not in a nice, happy, proper “404 Not Found” but in a “500 Internal Server Error”

I’m sure that this can all devolve into semantics, but if the URL isn’t handled – return not found, don’t crash.

Upgrading your Rails 1.1 application to Rails 1.2

So, I’m in the process of getting our servers upgraded to Rails 1.2.

(We don’t freeze rails, because contrary to the cult of Rails, freezing is not the way to go in small, controlled shops with multiple applications. If, say, you have to upgrade your Rails NOW NOW NOW – it’s much better to upgrade the system, and not every copy of your application everywhere. But I digress)

That process, of course, involves making sure our applications work with Rails 1.2 and eating my own dog food – this is what a cursory run-through of the application that I’m responsible for had me running into:


The biggest issue – which really isn’t an issue, which was cool – seems to be the deprecations coming in Rails 2.0. But for anyone playing along with the home game – deprecations are errors. (maybe I should start a “Cult of the Clean Log”).

The deprecations I had:

  • multiple references to @request, @params, @session…
  • references to @flash, including a really odd error that resulted from the fact that I had a partial named “_flash.rhtml”. Thankfully I wasn’t the only person that ran into that
  • a lot of start_form_tag and end_form_tag references. And while I to this day get a little squicked out by phrases like “syntactic sugar/vinegar” – I do very, very much like the block form for <% form_tag(:action => :blah) do %> ... <% end %>
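For anyone chasing the same warnings, the controller-side fixes amount to dropping the @ – a sketch with a made-up action, not code from this application:

```ruby
# Hypothetical Rails 1.2 action showing the deprecation fixes in one place.
class AccountsController < ApplicationController
  def update_profile
    # Deprecated: @params[:id], @session[:user_id], @flash[:notice]
    user = User.find(params[:id])        # params, not @params
    session[:user_id] =           # session, not @session
    flash[:notice] = 'Profile updated'   # flash, not @flash
    redirect_to :action => 'show', :id => user
  end
end
```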

Aggressive Unloading

In the famous words of Keith Jackson:

“Whoa, Nellie!”

So we are probably doing something incredibly stupid and against every known convention of Ruby and Rails – but, hey, it works. We have a class for application configuration that uses a class variable to store persistent configuration information (with a default set of values, merged with values loaded from a configuration file where appropriate). Prior to now, we’ve happily loaded this up on application startup in environment.rb – and even in development mode, the class stayed loaded throughout the lifetime of the mongrel process.

Well, apparently in development mode in Rails 1.2 – the automagic dependency management has significantly changed (ob. ref. Jonathan Weiss and the RoR weblog). That’s cool, it’s probably what it should be, and what I’m doing in this application probably isn’t “the right thing.” But the AppConfig class would unload and be reloaded, and because the load_config method was only called in environment.rb, it was Oops’ing all over the place. My hack was to load the config within the body of the class, so it happens when the class is loaded (loaded might not be the right word here). If I continue with this AppConfig thing – what it probably should do is check, on accessing the config table, whether it’s nil – and reload the config if so. I’ll solve that one later; thankfully, for now, it works again.

If any Rails person that actually reads this has any kind of visceral – “why the heck do you guys do that?!?” – reaction – and know a better way of doing this (the whole persistent configuration-defined-at-run-time-not-in-code-or-the-db problem) please do share.
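For what it’s worth, here’s the nil-check-and-reload idea sketched in plain Ruby – the class body, default values, and YAML path are all hypothetical stand-ins, not the actual AppConfig:

```ruby
require 'yaml'

# Hypothetical stand-in for the AppConfig class described above.
class AppConfig
  DEFAULTS = { 'app_name' => 'example', 'per_page' => 20 }

  @@config = nil   # wiped when Rails' development mode unloads the class

  # Reload lazily on access instead of once in environment.rb, so a
  # development-mode reload of the class can't leave the config empty.
  def self.[](key)
    load_config if @@config.nil?
    @@config[key]
  end

  def self.load_config(path = 'config/app_config.yml')
    overrides = File.exist?(path) ? YAML.load_file(path) : {}
    @@config = DEFAULTS.merge(overrides)
  end
end

AppConfig['per_page']   # defaults apply when no config file exists
```

The lazy accessor means it no longer matters when (or how often) the class gets unloaded – the first access after any reload repopulates the class variable.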

A HUGE “thanks!” to the Rails team for including Dependencies.log_activity = true as part of dependencies.rb. That helped a lot – and helped provide a glimpse into the automagic dependency management of Rails too.

Dear RubyGems

The fact that you create ~/.gem and stick a source_cache file in there pisses me off royally.

This made me curse you bitterly today. And made Daniel and James laugh at me for doing so.

Of course that made me laugh, but that’s beside the point. I still dislike you royally.

Just want you to know, Jay

require_gem deprecation warnings, redux

So I’ve written about this before – and it turns out that a lot of the “require_gem” deprecation warnings come from tasks in your gem builds that build convenience scripts in /usr/bin (or wherever on your system it places these). So if these convenience scripts (like /usr/bin/mongrel_rails) are left around from installs with previous versions of rubygems – just reinstall the gem (or change all the require_gem statements to just “gem” yourself).

A grep require_gem /usr/bin/* is pretty convenient to get an idea for the gems you need to reinstall.

However, feedtools 0.2.26 has a bunch of require_gem statements in lib/feed_tools.rb (which were littering my crons with the deprecation warnings), so I ended up building and distributing my own replacement with require_gem replaced with gem.
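The mechanical part of that replacement is a one-line substitution – a sketch in plain Ruby, with a throwaway demo file standing in for lib/feed_tools.rb (the path is an example, not real):

```ruby
# Sketch of the require_gem -> gem rewrite described above. The demo file
# stands in for lib/feed_tools.rb; path and contents are examples.
path = '/tmp/feed_tools_demo.rb'
File.write(path, %Q{require_gem "feedtools"\n})   # simulate the old source
File.write(path, File.read(path).gsub('require_gem', 'gem'))
puts File.read(path)   # => gem "feedtools"
```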

Anatomy of a screw up

So, because you’ll learn far more from screwing up than from always getting it right – here’s the latest screw up from yours truly. Documented for all the world to see 🙂

So, I have been making slow progress the last few weeks in updating our account registration application to be a bit more normalized, and to collect a few data elements that we weren’t previously collecting. This took far longer than it should have.

(it actually turns out to be a bigger pain in the ass than I thought to implement a selection form with a defined set of options – and include an “other” field, have that other field create a new, user-defined option, and have that live properly across submits. But I digress)

A lot of the work went into doing things like changing object names from “University” to “Institution” because, well, we are associating folks with groups that aren’t Universities, and keeping the University names makes the app semantically incorrect. Of course, that then creates a fairly healthy rename nightmare that search and replace doesn’t really fix. I managed to handle that okay.

And I managed to handle the infamous Rails “nil-error-in-views” problem. For those that aren’t familiar with this, inevitably you will run into a situation where you are going to print out something like user.county – where .county is the automagic accessor created for the county field in your user database. Well, when you start normalizing county to actually be a reference to an entry in the county table, and not a copied string – you have to start doing things like user.county.name – where .county is the automagic accessor created to get at the county object that’s associated through a belongs_to or similar – and .name is the accessor for the name column in the county table for the county associated with the user.

The dreaded nil error is that you have to make sure that “county” is an instantiated object. If the user doesn’t have a county – when you go to access county.name – county might be nil and Rails goes “Ooops!” I’m sure there’s a highly elegant idiomatic way to solve this, but if statements normally work for me. As long as I use them.
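The guard in question, sketched with plain-Ruby stand-ins (County and User here are Structs, not the real ActiveRecord models):

```ruby
# Stand-ins for the real models, just to show the nil guard in isolation.
County = Struct.new(:name)
User   = Struct.new(:county)

# The if-statement approach described above: check county before
# dereferencing it, instead of letting nil.name blow up the view.
def county_name(user)
  user.county ? user.county.name : 'Unknown'
end

county_name(User.new(County.new('Centre')))  # => "Centre"
county_name(User.new(nil))                   # => "Unknown"
```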

I managed to check most of the nil places in my views. Where I missed it, though, is in the XML response to a third-party authentication request. Our other rails applications and our wiki environments proxy-authenticate via a POST to our account application and get back success/failure and profile data in an XML stub block.

I sort of tested this. I tested one application against it, and watched it authenticate just fine. However, I never checked the local log. As a system administrator first, I should know better. Always, always, always check the logs. But I didn’t. I would have known the thing was oopsing then.

Well maybe. Test screw-up #2 was the fact that in order to test the new data elements – I actually put myself in a county. So for my test account, there was no oops – county existed and wasn’t nil. For a good portion of any of the users that would be migrated, they were going to be missing a county. But noooooooo, I didn’t test that.

So armed with what looked to be a highly functioning application – I deployed. And I tested all the authenticating apps right after deployment. With a valid county. “Works for me!” “Whoo!”

Five minutes later – I get an IM, from the director. Who couldn’t login.

I look at the logs that are in-app. Hmmmm… seems to log in just fine. Logins from the rails apps work for folks. I hadn’t yet seen the application error emails that we email ourselves when our Rails apps go Ooops! (yay for exception notifiable) (mistake #3). I go and look at exactly what the auth code is doing from the wikis, and it dawns on me at that moment that I left the IF statements out of the .rxml. I knew it was the dreaded nil.

I can’t remember at that moment if I said “Oh shit!” or not. But I did change my IM status to “Yes I know. Yes I’m working on it”

Then I see the app error emails and see that they confirmed the nil. And 5 minutes after that, James pipes up and says “just ignore the passwords”

(the passwords? yeah, apparently there’s a second bug that was highlighted: our auth code had a debugging feature left over that included the necessary POST params in the querystring. When the wiki auth occurs, it was passing those in addition to the POST. So when the exception mails were sent, while the POST passwords were filtered, the query string really can’t be filtered. Whooops. At that point I definitely said “Oh shit”)

Mistake #4 was ignoring the logs on that one for the last several weeks – so now my web access logs have all kinds of passwords in them. Thankfully access is restricted to just me (and they are going to get a gigantic search and replace soon).

Well, anyway. Being the system administrator has its privileges. So I managed to fix the problem, create a test environment to test the fix, deploy the fix (twice), and touch base with Rafe about fixing the discovered wiki authplugin feature – he got it checked in PDQ, and I got that deployed too.

From report, to fix, to test, to deployment, to fixing the side-effect bug, to the emails to the staff, to recommendations for the group that triggered the exception notifications to change their passwords – 40 minutes. That is definitely the silver lining in this. It’s really hard to be more responsive and faster than that.

The summary?

  • check the logs
  • watch your rails nils
  • check the logs
  • test with missing data
  • test from everywhere
  • check the logs
  • clean up debugging code
  • check the logs
  • know your code so dang well, that you can fix it faster than butter on the morning toast

My, my those deprecation warnings are annoying

Sometimes the interpreted language equivalent of “compiler warnings” get really quite annoying (this might explain why subconsciously I have always been incredibly pedantic about compiler warnings).

Anyway, with the advent of RubyGems 0.9.0, the “require_gem” command is deprecated.

And as of 0.9.1 of RubyGems (to which you might want to update because of a security hole) – you now get lovely little warnings scattered all over the place:

Warning: require_gem is obsolete. Use gem instead.

And of course things like the mongrel_rails command include “require_gem” – as does feed_tools – which of course is kicked off by cron jobs in our environment, and is now happily filling my root mail with the warning messages.


Man, I like good frameworks

In the end it’s all code – but this is the first time I’ve done this, and I just think it’s the coolest thing that I don’t have to futz with it.

(and by the way, being able to test this in a console – great stuff)

$ script/console
Loading development environment.
>> newposition =
=> #<Position:0x25489f8 @attributes={"name"=>"", "entrytype"=>0}, @new_record=true>
>> getuser = User.find_by_login("jayoung")
=> #<User:0x2540668 @attributes={...da-da-da...}>
>> getuser.position = newposition
=> #<Position:0x25489f8 @attributes={"name"=>"", "entrytype"=>0}, @new_record=true>
>>
=> true

And it saves newposition too. This is good stuff.

A better rubygems lister

I’m in the process of teaching myself Ruby – first by dealing with the language core and stdlib, just writing Ruby (no frameworks) to replace my myriad of crappy shell scripts that I’m using for various things. I can do a lot more, more quickly, in Ruby (or Perl or even PHP) than I can in any of the shell languages. And it’s a great way to learn Ruby.

One of the first things I’m doing is fixing a huge annoyance I have with rubygems – namely that the

gem list

command has no terse output. A standard gem list gives you something like:

*** LOCAL GEMS ***

actionmailer (1.2.5)
    Service layer for easy email delivery and testing.

actionpack (1.12.5)
    Web-flow and rendering framework putting the VC in MVC.

actionwebservice (1.1.6)
    Web service support for Action Pack.
...

And I could give a flying rip what each does after I’ve read the descriptions the first time. So I’m taking advantage of a cool thing in rubygems – that it’s a modular library implemented as a rubygem itself – and reverse-engineering things a bit with it to give me something like:

$ ./gemver.rb
actionmailer: 1.2.5
actionpack: 1.12.5
actionwebservice: 1.1.6
...

Here’s what I ended up with:

require 'rubygems'

if ARGV[0] then
  @searchgem = ARGV[0]
else
  @searchgem = ''
end

# get full local list of gems via the RubyGems source index
@gemversions = {}
searchresult = Gem.source_index.search(@searchgem)

# walk through returned gemspecs and build a hash of found gems and version(s) in Gem::Version format
searchresult.each { |gemspec|
  if @gemversions.key?(gemspec.name) then
    @gemversions[gemspec.name].push(gemspec.version)
  else
    @gemversions[gemspec.name] = [gemspec.version]
  end
}

# walk through the hash and print out the results
@gemlist = @gemversions.keys.sort
@gemlist.each { |gemname|
  if @gemversions[gemname].size <= 1 then
    print "#{gemname}: ", @gemversions[gemname][0].to_s, "\n"
  else
    # for gems with multiple versions, print them newest first;
    # Gem::Version is Comparable, so the array sorts correctly
    print "#{gemname} (multiple): "
    versionsarray = @gemversions[gemname].sort.reverse
    print { |eachversion| eachversion.to_s }.join(","), "\n"
  end
}

Not completely bad for only my third day or so poking at ruby for replacing my system/service scripts (I’m actually using this in a comprehensive script to mail me periodic information about the configuration for each of my servers. This is actually an offshoot of a script to compare installed gems with an expected list of gems and versions – which I’ll post later)