Thursday, 11 April 2013

The role of the lead developer: report on a survey

Over the past six months I've given a number of presentations for my fellow developers at OGD. The OGD motto is "Samen slimmer" ("smarter together"), and I want to contribute my share by sharing my knowledge on a number of topics. For example, I gave a presentation on adding real-time communication to a Ruby on Rails application (check here for a test project on Github). Recently I also compared two javascript frameworks: Ember and Angular.

Both presentations were about new technologies, an important motivator for me as a developer, but not the most important one. What motivates me most is my role as lead developer. Working in a team and leading it is the best thing there is. My ambition is to bring other developers who have this role, or who aspire to it, to a higher level. To make this possible, I conducted a survey into the expectations around the role of lead developer. I had the impression that lead developers have a different picture of their role than people who don't hold that role, and that hypothesis is what I wanted to investigate. The goal of my survey was to sketch a picture of the expectations around (the role of) lead developers in a software development process.

It's very convenient that tools like Google Docs are available these days, which make setting up a survey a piece of cake. Thanks to the help of people in my network on Coconut and on LinkedIn, results came in quickly. To my delight, 30 people filled in the survey. Unfortunately, the number of lead developers among the respondents turned out to be too low to draw conclusions about the hypothesis I wanted to investigate.


Survey results

The most important responsibility of the lead developer is the quality of the product:

No other question received such an unambiguous response. That this responsibility reaches far becomes clear from the answers to the question whether the lead developer is also responsible for the user experience.
The respondents disagree on this subject, although a significant share (44%) considers it a responsibility of the lead developer. I agree with that position: quality of the product encompasses everything, and the user experience may well be the most important quality a product can have.

A number of responsibilities are clear to everyone:
  • the technical design of the product
  • daily progress
  • the division of tasks
  • timely escalation of problems
  • the development of team members
  • the planning
I disagree with the last one: making a planning should be a team effort, for which the whole team is responsible. The lead developer's role lies more in meeting the planning: he or she must help developers make the right choices and prevent team members from spending too much time on unimportant things.

The survey also included a number of tasks and responsibilities that I believe absolutely do not belong to the role of lead developer. A lead developer is by definition not a product owner, and therefore cannot be held responsible for the product backlog. The majority of respondents disagree with me:

The budget and the business plan are also not things a lead developer should concern himself with. These are the customer's responsibility; often a project manager carries out these tasks. On the business plan, the respondents are divided as well:
For several more tasks the respondents turn out to be divided. I understand this ambiguity, because here the 'developer' side of the role comes into play. Every developer fills that in differently: some are broadly oriented and do everything, from writing user stories to programming, while others prefer to work on the most complicated technical problems. As lead developer you can choose what you do, as long as you don't try to do everything yourself. Trust your team members, give them the most important jobs and give them room to fail. Make sure your process is set up well, so you can catch mistakes. That way you can still deliver the quality people (apparently) expect from you.

The character of the lead developer

The survey also asked about character traits that suit a lead developer well, and about traits a lead developer had better not have.

I don't consider skilled or perfectionist necessarily negative traits, but for the rest I can relate very well to this result.

The most important trait of a lead developer is, in my view, having and keeping the overview. As lead developer you have to lead the development team. You must be able to look beyond today: how will your team be doing in a month? In six months? In a year? That is where the added value of the lead developer lies.

And the winner is:

Finally: among the survey participants I raffled off a Raspberry Pi. The winner is Jurgo Touw. I'd like to thank everyone for the responses and the enthusiastic reactions. And if you want to develop yourself as a lead developer: get in touch. I'm happy to help you on your way.

Thursday, 1 March 2012

Why Gerrit is awesome

For the past few months we've been using Gerrit as our code reviewing tool. In this article I will share my experiences with Gerrit: why it is awesome, but also what is not so great about it.

What is Gerrit and how does it work?

Gerrit is a web-based code reviewing tool, developed by Google for the Android project. It acts as an intermediary between the developers and the remote Git repository. All commits are submitted to Gerrit, where each commit must be reviewed before it can be sent upstream to the remote repository. Each commit must also be verified. In our case, a build is run automatically on Jenkins, our continuous integration system. If Jenkins doesn't find any problems, the commit is marked as verified. Before we can submit it to the remote repository, it also needs a positive review.
The main Gerrit page looks like this:

On this page you can find all commits that have not yet been submitted to the remote repository. The last two columns indicate whether a commit has been verified and whether it has received a positive code review. Gerrit has a system where you can rate a commit from -2 to +2.
How you implement the code review process is up to you: you can configure, for example, which users are allowed to approve commits and whether users are allowed to review their own commits.

Why Gerrit is awesome

Most of the reasons why Gerrit is awesome have nothing to do with Gerrit as a tool, but have to do with code reviewing.

Code reviewing is awesome. It promotes craftsmanship. Because team members know their code will be reviewed, everyone wants to write good code. We've had lots of discussions about what good code is and how we can improve.
Code reviewing fosters the sharing of knowledge in your team. Each team member is forced to review code that he or she did not write. If a commit is not clear, the reviewer discusses it with the author. Most of the time the discussion leads to improving the commit, which is a learning process for both developers.
Code reviewing is fun. If you have a team like mine, you will sometimes find beautiful code: code with brilliant, simple solutions to complicated problems, solutions I couldn't have thought of myself. Finding stuff like that really brightens a day.
Code reviews prevent bad commits. No developer creates only good commits. Code reviewing prevents bad commits from other developers from becoming your problem. It also has the effect that developers think twice before submitting a commit.
View changes and place inline comments. This is what Gerrit is good at. You can view changes in an accurate diff and specify how much context around the change you want to see. As a reviewer, you can place inline comments exactly at the spot where you question the code and share them with the author. For the author, this information is invaluable.
Fine-grained access and authorisation. The code review process is what it is, but you have fine-grained control over what users and groups are allowed to do. You can also override these settings per project.

What is not great about Gerrit

As with the advantages, most disadvantages of the usage of Gerrit have to do with the process of code reviewing.

It slows down the time-to-market. All this reviewing, verifying and resubmitting of commits slows down your time-to-market by at least 5%. You will win that time back eventually, mainly because the quality of the code will be better, which leads to fewer bugs, better maintainability and easier adaptability. However, if outside pressure dictates that speed is more important than quality, you should not introduce code reviewing.
Getting all commits upstream is complex. Commits are not sent upstream in exactly the same order they were submitted to Gerrit. Combine this with multiple commits that depend on each other, or commits by different authors that change the same lines of code, and you will understand how you can get into trouble. Sometimes you will even have to rewrite commits by other authors. Git is awesome in the way it can handle all these challenges, but you have to have the skills to tell Git how to do it. To introduce Gerrit successfully, your developers must have a deep understanding of how Git (and its history) works, and at least some experience with an interactive Git rebase.
Who is going to review the code, and when? This is a recurring question, probably most annoying when your team members are not at work. In Gerrit you can assign a reviewer, who then receives an email, but that's it. While it's a crucial part of an efficient code reviewing process, it's not a part of Gerrit.

As you can see, code reviewing with Gerrit has its advantages and disadvantages. If you're considering trying it out: be aware. Although Gerrit is awesome, it's not about Gerrit. It's about introducing code reviewing into your process. Once you know how to get that right, you can also master the tool itself. No doubt.

Friday, 4 November 2011

jQuery UI dynamic drag & drop behaviour

In Coconut, we have a feature where you can drag and drop files into folders. When a file is dragged close to a folder, the folder should open up its subfolder tree, so you can also drop the file on a subfolder. But this was not possible, because the subfolders didn't become droppable. The code to show subfolders when dragging over a folder is similar to this:
jQuery(".dropTarget").droppable({
  greedy: true,
  tolerance: 'pointer',
  hoverClass: 'dropHover',
  accept: function () {
    //code left out
  },
  drop: function (event, ui) {
    //code left out
  },
  over: function (event, ui) {
    // capture the drop target here: inside the timeout callback "this" is window
    var dropObject = jQuery(this);
    this.folderTimeout = window.setTimeout(function () {
      dropObject.find("ul.folderSubList").show();
    }, 500);
  },
  out: function () {
    window.clearTimeout(this.folderTimeout);
  }   
});

The over and out events are used to open up the subfolder tree. They use a timeout to make the transition smoother, less snappy, which works fine. When inspecting the HTML with Firebug (after the code above was run), we could not find anything wrong. The hidden subfolders did get the class "ui-droppable", which droptargets receive when the jQuery UI droppable is initialized. So even though the folders were hidden (by the CSS style "display:none;"), they were initialized correctly as droptargets.

I couldn't figure out why, so it was time to pair program. My colleague noticed that it looked like the positions of the droptargets were cached when a drag starts. A very sharp observation, because this actually is the case! So what we wanted to do was refresh the cached positions once a subfolder tree was opened. The jQuery UI draggable and droppable API does not have a method to force a refresh of the positions. I've found a feature request ticket for the refresh method, but it seems the method will probably not be introduced until jQuery UI 2.0 (we're at 1.8.16 at this moment).

Most developers have been at this point: you have to implement certain behaviour in your application, but the plugin you're using does not support it. At this point you have some options:
  1. use another plugin which does support it
  2. find alternative behaviour and ask the customer if that's acceptable
  3. convince the customer it's not possible and abandon it
  4. dig into the plugin's core and find a workaround or hack to enable the support *
Of course, the last option is your last resort. You don't want to do this, because you cannot upgrade the plugin and assume your workaround or hack still works. But if options 1 - 3 are not an option (as in my case), here's a solution that works with jQuery 1.6.3 and jQuery UI 1.8.13:
jQuery(".dropTarget").droppable({
  greedy: true,
  tolerance: 'pointer',
  hoverClass: 'dropHover',
  accept: function () {
    //code left out
  },
  drop: function (event, ui) {
    //code left out
  },
  over: function (event, ui) {
    // capture the drop target here: inside the timeout callback "this" is window
    var dropObject = jQuery(this);
    this.folderTimeout = window.setTimeout(function () {
      dropObject.find("ul.folderSubList").show();
      // force jQuery UI to re-read droppable positions now that the subfolders are visible
      jQuery.ui.ddmanager.prepareOffsets(jQuery.ui.ddmanager.current, null);
    }, 500);
  },
  out: function () {
    window.clearTimeout(this.folderTimeout);
  }   
});

The trick is to call prepareOffsets on the current jQuery UI drag & drop manager. Now you'll be able to drop on the dynamically shown elements.

* If the plugin is open source, you can also (or maybe should) try to fix it yourself and submit a pull request, so the fix becomes a part of the plugin.

Saturday, 22 October 2011

Ahoy matey, let's plunder 'tis camp

On Friday October 7th I visited Arrrrcamp in Gent, Belgium. Arrrrcamp is a conference about Ruby, Rails, Radiant and Rum with a pirate theme, yarrr. The conference offered a broad spectrum of topics and some well-known speakers, like Jonas Nicklas and Corey Haines. A celebration of open source Ruby development with a serious hint of craftsmanship. Here's an outline of my day.


First up was Jim Gay, the lead developer of Radiant CMS. Most of his talk was about the new features of Radiant. Since I've never used Radiant, much of it was wasted on me, although the features he showed were quite impressive. Radiant is a CMS created in Ruby on Rails (of course) and you can use cool RoR features like Sass, Scss or Coffeescript in it. You can create pages and define your CSS and javascript separately. What's really nice is that they created different editors for this, like an IDE which supports different languages. Jim talked a lot about the upcoming 1.0 release and the struggle to migrate to Rails 3. Migrating a product with many open source plugins is quite a challenge, since most plugins will not work in Rails 3. Jim explained they cannot just convert to Rails 3: you have to give users and plugin makers a clear path to migrate along with you. This is why the 1.0 version will not be a Rails 3 version, but will remain on Rails 2.3. However, they are already working hard on the Rails 3 version and in the future, it will be possible to add Radiant as a Rails engine to your project. Wow.

A speaker I really looked forward to seeing was Jonas Nicklas, and he certainly didn't disappoint. Jonas is the creator of Capybara (an acceptance test framework for web applications) and Carrierwave (a solution for file uploads in Rails), and during his talk he introduced Evergreen, which integrates Jasmine javascript tests into Rails applications.
Here are some of Jonas' quotes:
  • Rubyists love testing. Rubyists love programming. Rubyists love Ruby.
  • We want beautiful, well structured and tested code.
  • More and more code moves to front-end. Backend is more and more a CRUD application with JSON responses.
  • jQuery is the PHP of javascript.
Jonas demonstrated testing javascript using Evergreen. This has some advantages: a route is mounted in your application where you can see if your Jasmine tests succeed, and you can write your Jasmine tests in Coffeescript.

He also offered some advice on using Capybara. Capybara is not an integration test framework; it's a tool to simulate user behaviour, which you can use to create automated acceptance tests. His advice was:
  1. add your own selectors
  2. avoid asserting on URL, session state, cookies or application state (!!)
  3. avoid too specific selectors
  4. make generic asserts
Sound advice, if you ask me. To summarize: use Capybara to test what a user can do and can see, not what the application does. You should test that elsewhere in your test suite.
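A minimal sketch of that advice using Capybara's RSpec DSL (the page, field names and texts are hypothetical): assert on what the user sees, not on internal state.
require 'capybara/rspec'

feature "Signing up" do
  scenario "welcomes the new user" do
    visit "/signup"
    fill_in "Email", :with => "pirate@example.com"
    click_button "Sign up"
    # a generic assert on visible content, not on current_path, cookies or session state
    page.should have_content("Welcome aboard")
  end
end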

After a great lunch, I joined Julian Fischer's talk about "three ways to scale towards happiness". Julian had a disadvantage: the room was next to another one, where they cranked up the sound so loud we could almost follow that talk instead of Julian's. Beforehand, I was afraid of having to listen to a commercial promotion of Railshoster.de, since he is involved in that company. But Julian's talk was incredibly informative and educational. He started by talking about the high hosting prices in the U.S. and the legal issues you face when you host your webapp at a U.S. company. I'd never realized that hosting my data in the U.S. can mean that the U.S. government might be allowed to demand access to that data. Do we really want to risk that?

The three ways he compared to scale towards happiness were:
  1. cloud hosting
  2. cheap datacenter hosting
  3. high end datacenter hosting
For each way, he made a detailed comparison of the advantages and disadvantages. Did you know that cloud hosting costs up to 20 times more than other hosting solutions?
At the end of his talk, he explained how to choose between the three options. Each application has its own characteristics, which, when examined properly, will lead to a logical choice of one of the options. You should do a requirement analysis of budget, load (normal and peak load), response time, availability, geographic targeting, estimation of growth rate, legal constraints, technical constraints and competence of in-house support. Check out his presentation here on how to choose the correct Ruby on Rails hosting.

Next up was Andrew Nesbitt, who showed how to do A/B testing with Split, a Rack-based A/B testing framework. He asked us the question: are you testing your users? (eehm, no...) How do you know that you are succeeding, then?
He demonstrated how to do A/B testing using the Split gem. Split uses Redis to save test data, which of course is very fast and scalable. It uses cookies to differentiate users, and you can test almost anything: you can swap css, render different templates or partials, set different controller variables or even hack up your javascript.
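A rough sketch of what that looks like, as I understood it from the talk (the experiment and controller are hypothetical; check the Split README for the exact helper names):
class SignupController < ApplicationController
  def new
    # ab_test returns the alternative assigned to this visitor
    # (the assignment is stored in a cookie, the test data in Redis)
    @button_text = ab_test("signup_button", "Sign up now", "Join the crew")
  end

  def create
    # record a conversion for the alternative this visitor saw
    finished("signup_button")
    # ... create the account ...
  end
end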
Using Split looks easy: you just create an experiment with alternatives and view how the experiment is doing via a CRUD controller. Split also has a dashboard to show results, and you can even combine this with Google analytics to get a more detailed view. A disadvantage is that the A/B tests end up in your product's release code. Andrew also had two memorable quotes:
  • Be a good Ruby citizen and contribute to your community!
  • If you're not prepared to fail, you're never going to be original.

A pretty long break followed, in which the organisation served free mojitos (yarrrr). To wrap up this enjoyable day, Corey Haines did a talk about fast Rails tests, a phenomenon most of us only hear about but never experience. I heard Corey last year at the RubyAndRails conference in Amsterdam, where he talked about software craftsmanship; that was the most inspiring talk I've ever seen. Corey has a great stage presence: he's relaxed, funny, provocative, opinionated and inspiring.
He showed his way to get fast tests for Rails projects and why you should bother. You should bother, because slow tests will eventually stop your developers from running tests, and they distract them from the work they should be focusing on. To get faster tests, his main point is to NOT load Rails in your tests unless it's absolutely necessary. You can create separate modules or classes to extract behaviour from the Rails context, and use stubs or mocks to test that behaviour outside of Rails.
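A minimal sketch of that idea with a hypothetical class: because the spec never requires spec_helper, Rails is never loaded and the test runs in a fraction of a second.
# spec/fast/discount_spec.rb
require_relative '../../app/services/discount'

describe Discount do
  it "takes a percentage off the order total" do
    # a stub stands in for the ActiveRecord order, so no database is needed
    order = double("order", :total => 100.0)
    Discount.new(order, 10).amount.should == 10.0
  end
end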

Corey talked about Test Driven Development, here a summary of some of his statements:
  • The fundamental difference between test-first and test-driven development is that in test-first you change your tests until they work, whereas in test-driven development, if you discover that something is hard to test, you change your design.
  • TESTABILITY = GOOD DESIGN
  • Better design is one that is easy to change
  • The 3 A's of testing: Arrange, Act and Assert. Use 1 assert per test. Try it out for a while and you'll discover when to use it and when not to.
Especially Corey's opinions on how to structure your Rails test suite and on integration testing led to much discussion, which for me actually continued until we were almost back home. Thanks to Corey for another inspiring talk.

When all was said and done: the Arrrrcamp conference is a properly organized event. The venue is small and has some small disadvantages, but the pirate theme and excellent food and drinks more than made up for them. I didn't see an uninteresting talk all day, so I will return next year without a doubt. Yarrr!

Friday, 21 October 2011

Talk about TDD @ Coolblue

Today I did a talk about Test Driven Development @ Coolblue in Capelle. They invited me because they are continuously improving their development process (yes, they are doing Scrum!) and wanted to know more about Test Driven Development.

Coolblue has 4 physical shops and 99 specialized webshops. I was impressed by their order system, especially by how they were able to handle an enormous increase in sales volume and still maintain real-time services. And maintaining 99 webshops with just a few Scrum teams: hats off to these guys. I really enjoyed the peek behind the scenes, and it was nice to be able to talk to other developers about stuff that I am really passionate about. You can view the presentation here (note: the presentation is in Dutch!).

Friday, 14 October 2011

Ruby on Rails surprise: conditional active_record callbacks

Sometimes you discover hidden features in Rails by trial and error. This week I was refactoring a method into an after_create callback. The method would create an empty user profile when a user was created, so using an after_create callback would simplify the code. Using the handy create_<association> method which active_record provides when a belongs_to or has_one relationship is defined, I managed to refactor this (quite horrible) piece of code:
class User < ActiveRecord::Base
  has_one :profile

  def create_from_ldap(login)
    #some code left out
    create_personal_belongings
    save
  end

  def create_personal_belongings
    profile = Personal::Profile.new
    profile.id = self.id
    profile.save!
  end
end
into these lines of code:
class User < ActiveRecord::Base
  has_one :profile

  after_create :create_profile

  def create_from_ldap(login)
    #some code left out
    save
  end  
end
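For context, it's the has_one :profile that makes the :create_profile symbol resolve to a real method. Roughly, in a console (the login attribute is borrowed from the LDAP example above):
user = User.new(:login => "jane")
user.save           # after_create fires and calls create_profile
user.profile        # => #<Profile ...>, built, linked to the user and saved in one go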
A consequence of this change was that a profile is now always created when a user is created. Which is great, because users without a profile should not exist anyway. But it broke some specs, because they relied on specifying a particular profile for a user, so we could test some aspects of the user / profile relationship properly.

So what I actually wanted to do is create a profile after creating a user, but only if the user doesn't have a profile yet.
From using active_record validations I knew you can make a validation conditional like this:
class User < ActiveRecord::Base
  validates :email, :presence => true, :email => true, :if => :has_email_enabled?
end
This validates the user's email attribute only if the has_email_enabled? method returns true for that specific user object (see the active_record_validations documentation for more info). I couldn't find out whether conditionals can also be used on active_record lifecycle callbacks, so I just tried:
class User < ActiveRecord::Base
  has_one :profile

  after_create :create_profile, :unless => :profile

  def create_from_ldap(login)
    #some code left out
    save
  end  
end
Hey, that works! I believe it's not a documented feature, but it's really nice. I love Rails' adage of convention over configuration, because once you know how some stuff works in Rails, you can deduce how the rest works. And even better, the chosen conventions usually make sense. To me, at least.
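Presumably the same condition options as for validations apply, so an inline proc should work too. A sketch (I only tested the symbol form above):
class User < ActiveRecord::Base
  has_one :profile

  # same behaviour as :unless => :profile, with the condition written inline
  after_create :create_profile, :unless => Proc.new { |user| user.profile }
end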

Thursday, 8 September 2011

Internet Explorer 8 crashes with jQuery 1.6.2

Can browser tabs crash? Sure they can. If you're using a buggy plugin or add-on, you can probably crash tabs in any browser. But crashing a tab using only javascript is probably limited to the browsers that have a Microsoft brand on them.
Since August, we've been having problems when our users were accessing Coconut with IE8 in a Citrix environment (and in rare cases also outside a Citrix environment). This was hard to solve: first of all, I did not have a Citrix account. After the IT department fixed that problem, I was able to reproduce the bug, but had no way to analyze it. In Citrix I couldn't access log files, change IE settings or even view the advanced properties tab. My Google results told me I should try disabling all add-ons, but I wasn't allowed to do this either, even though the IT guys made me local admin on Citrix. It seems that Citrix has all kinds of separate rules which are enforced on all users, and you have to make specific exceptions to those rules. Just telling Citrix "hey, this is an admin" is not enough.
Fortunately, my colleague discovered that the error message was the same for each user, and he found a stackoverflow topic with a reference to this error message.

If you encounter this error message (or its Dutch equivalent), you should upgrade your jQuery: version 1.6.4 works fine.

Tuesday, 12 July 2011

jQuery's .html() and Internet Explorer 8

Sometimes it's easy to blame IE for unwanted behaviour. This case is a bit different, in the sense that I think IE's behaviour is understandable, or even correct. I was trying to fix a bug where the contents of an overlay screen would not load in IE8. The contents were fetched using an ajax GET request, after which this piece of code would be executed:

show: function(html) {
  var box = $("#dialog_box");  
  box.html(html);
  box.show();
}

In Firefox or Chrome, this works fine. I verified that the show method was actually called with the html from the GET request. The box.show() call was executed correctly, but somehow the box.html(html) was not. But the method didn't fail either.

After much fiddling around, I've found out that:

Internet Explorer will only insert an HTML string via jQuery's .html() method if every tag in the string is opened and closed properly.

The response from the GET request wasn't valid: it contained one closing div tag too many. IE then refuses to insert the HTML into the DOM. Other browsers seem to have no problem inserting the invalid HTML. So IE8 is correct? Remarkable.

Friday, 8 July 2011

Redis and Phusion Passenger: reconnect on fork

We're nearly ready for a new Coconut production release. Of course, this is the moment when hard-to-reproduce bugs start coming in from the beta stage. One bug report stated: "sometimes, my widgets do not load". I couldn't reproduce this bug, but it stayed in the back of my mind the whole week. Suddenly, this afternoon, when I was clicking through a review build, the widgets didn't load. I immediately pulled the log files and found this Redis error:

Got '1' as initial reply byte. If you're running in a multi-threaded environment, make sure you pass the :thread_safe option when initializing the connection. If you're in a forking environment, such as Unicorn, you need to connect to Redis after forking.

After some research, I found out that the error was caused by the combination of Redis and Phusion Passenger. We use Redis as a chained backend store for i18n (see Railscast 256 for our inspiration), so we can have custom translations for each Coconut instance. This is a very cool feature, because every customer has his own domain language and we can tweak the translation messages accordingly.

As the error states, Phusion Passenger is a "forking environment" like Unicorn. Phusion Passenger spawns new worker processes when it thinks more capacity is needed, using what they call a "smart spawning method": basically, it forks a new worker process to handle more requests. Normally, the newly forked process will try to use the Redis connection it inherited from its parent, which causes problems. What you need to do is create a new connection whenever the current process is forked. This is done by creating a new Rails initializer and adding some code to handle Phusion Passenger fork events.

Adding a new Rails initializer is simple: just add an .rb file to config/initializers. Our initializer looks like this:

if defined?(PhusionPassenger)
  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    if forked
      # We're in smart spawning mode. If we're not, we do not need to do anything
      Rails.cache.reset
      I18nRedisBackend.connect
    end
  end
end

You might recognize the Rails.cache.reset from Dalli, which has the same issue when used with Phusion Passenger. The I18nRedisBackend.connect creates a new connection with Redis, like this (note: this code was simplified to make it more readable):

module I18nRedisBackend
  @@default_i18n_backend = I18n.backend

  def self.connect
    redis = Redis.new
    I18n.backend = I18n::Backend::Chain.new(I18n::Backend::KeyValue.new(redis), @@default_i18n_backend)
  end 
end

To summarize: when Phusion Passenger forks a new worker process, the initializer creates a fresh Redis connection for it. Problem solved!

thanks to the Phusion Passenger users guide, Appendix C, which provided me with the correct code example

Ruby on Rails 3: chaining scopes with lambdas

Today we ran into a really strange bug. The bug report stated that a blog post only showed up in the blog list approximately 15 minutes after creation. Which is really weird, because we've dealt with time zone misery before, but that always involves whole hours, not a quarter of an hour. After analysis, we found out that a chained scope caused the trouble (note: the scopes were simplified for better readability):

class Blog::Post
  scope :published_posts, lambda { where("publication_time < ?", DateTime.now) }
  scope :published_non_rotator_posts, published_posts.where("rotator_position IS NULL")
end
A scope wrapped in a lambda is evaluated each time you call it: the published_posts scope uses a lambda to ensure that DateTime.now is re-evaluated on every call, instead of being fixed to a single value. The published_non_rotator_posts scope, however, is defined without a lambda, so it is evaluated only once, when the class is loaded. That means the chained published_posts scope, including its DateTime.now, is evaluated at that moment and keeps the same value on every subsequent call! The correct code is:
class Blog::Post
  scope :published_posts, lambda { where("publication_time < ?", DateTime.now) }
  scope :published_non_rotator_posts, lambda { published_posts.where("rotator_position IS NULL") }
end
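You can see the difference for yourself by inspecting the generated SQL in a console (timestamps are illustrative):
# with the lambda in place, the timestamp advances between calls
Blog::Post.published_non_rotator_posts.to_sql
# => "... WHERE (publication_time < '2011-07-08 09:00:00') AND rotator_position IS NULL"

# a minute later, the same call produces a fresh timestamp
Blog::Post.published_non_rotator_posts.to_sql
# => "... WHERE (publication_time < '2011-07-08 09:01:00') AND rotator_position IS NULL"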
So: when chaining a lambda scope, you must wrap the chaining scope in a lambda as well! Thanks to this slash dot dash article.

Friday, 1 July 2011

Software has a new quality standard: ISO 25010

You might have missed the news that the ISO 9126 quality standard has recently been replaced by the ISO 25010 quality standard. Since the ISO/IEC wants at least 110 euros for a PDF containing the new standard (WTF????), I thought I'd summarize the new standard and compare it to the old one.

The 'old' ISO 9126 model described six main characteristics, with a set of subcharacteristics:
  1. Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
    • Suitability
    • Accuracy
    • Interoperability
    • Security
    • Functionality Compliance
  2. Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
    • Maturity
    • Fault Tolerance
    • Recoverability
    • Reliability Compliance
  3. Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
    • Understandability
    • Learnability
    • Operability
    • Attractiveness
    • Usability Compliance
  4. Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
    • Time Behaviour
    • Resource Utilisation
    • Efficiency Compliance
  5. Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
    • Analyzability
    • Changeability
    • Stability
    • Testability
    • Maintainability Compliance
  6. Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
    • Adaptability
    • Installability
    • Co-Existence
    • Replaceability
    • Portability Compliance
The new model has eight characteristics instead of six, which are quite similar to the old model:

  1. Functional suitability - The degree to which the product provides functions that meet stated and implied needs when the product is used under specified conditions
    • Suitability
    • Accuracy
    • Interoperability
    • Security
    • Compliance
  2. Reliability - The degree to which a system or component performs specified functions under specified conditions for a specified period of time
    • Maturity
    • Fault Tolerance
    • Recoverability
    • Compliance
  3. Operability - The degree to which the product has attributes that enable it to be understood, learned, used and attractive to the user, when used under specified conditions
    • Appropriateness
    • Recognisability
    • Ease of use
    • Learnability
    • Attractiveness
    • Technical accessibility
    • Compliance
  4. Performance efficiency - The performance relative to the amount of resources used under stated conditions
    • Time Behaviour
    • Resource Utilisation
    • Compliance
  5. Security - The degree of protection of information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access to them
    • Confidentiality
    • Integrity
    • Non-repudiation
    • Accountability
    • Authenticity
    • Compliance
  6. Compatibility - The degree to which two or more systems or components can exchange information and/or perform their required functions while sharing the same hardware or software environment
    • Replaceability
    • Co-existence
    • Interoperability
    • Compliance
  7. Maintainability - The degree of effectiveness and efficiency with which the product can be modified
    • Modularity
    • Reusability
    • Analyzability
    • Changeability
    • Modification stability
    • Testability
    • Compliance
  8. Transferability - The degree to which a system or component can be effectively and efficiently transferred from one hardware, software or other operational or usage environment to another
    • Portability
    • Adaptability
    • Installability
    • Compliance

In the new model, security and compatibility were added as main characteristics. I've always wondered why security wasn't considered that important for software quality measurement, but now it is. Some subcharacteristics were added to the model and a number of them were renamed to more accurate terms. The 25010 quality standard also works a bit differently than the 9126 standard. The software product quality model describes the internal and external measures of software quality. Internal measures describe a set of static internal attributes that can be measured. External measures focus more on the software as a black box and describe external attributes that can be measured.
Besides the software product quality model, the 25010 standard also describes another model, the model of software quality in use:

  1. Effectiveness - The accuracy and completeness with which users achieve specified goals
    • Effectiveness
  2. Efficiency - The resources expended in relation to the accuracy and completeness with which users achieve goals
    • Efficiency
  3. Satisfaction - The degree to which users are satisfied with the experience of using a product in a specified context of use
    • Likability
    • Pleasure
    • Comfort
    • Trust
  4. Safety - The degree to which a product or system does not, under specified conditions, lead to a state in which human life, health, property, or the environment is endangered
    • Economic damage risk
    • Health and safety risk
    • Environmental harm risk
  5. Usability - The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use
    • Learnability
    • Flexibility
    • Accessibility
    • Context conformity
I like the fact that they've created a separate model to emphasize how important quality in use is for a software product. Their motivation for doing so might be different: they probably assume the characteristics in this model are "in the eye of the beholder" and thus harder to measure, and harder to agree on a common standard for.

To sum it up, the new model has a broader range and is more accurate. I believe this is an improvement over the old model, but why someone would pay more than 100 euros to be able to study it is beyond me.

Tuesday, 28 June 2011

Highs and lows of "Test automation day 2011"

Last Thursday I attended the "Test automation day 2011" in Zeist. Most of the day was a big disappointment. Only the foreign speakers had put some real effort into making an enjoyable presentation. This day proved again that testing is still a boring subject, and most speakers made no effort whatsoever to disprove that statement.

Some interesting stuff

Jamie Plower from Bearing Point presented their automated test process, which used tools like Jenkins (an open source automated build server) and Sonar (code metrics for Java, but also PHP, C and C#). An impressive setup, especially how they could create a screencast of a failing functional test. The screencast captures every click in the browser up to the error, which can be an enormous help when hunting down bugs.
Last year an intern in my team created a similar setup, using Cucumber, Capybara, Selenium RC, Selenium Client and Selenium Webdriver. We couldn't get it stable, however, although I probably should have tried harder. I am looking forward to trying out the RSpec acceptance testing DSL, which was just released with the 1.0 Capybara gem. This development looks exciting; Jeff Kreeftmeijer blogged about it earlier this year.

Another interesting presentation was the closing keynote by Scott Barber. Scott is a performance tester, and he shared some of his personal experiences, some of which were quite funny. In a performance test you might, for example, discover that your application is performing excellently. But if you don't use proper error detection, you might discover that what you have actually been testing is how fast the 500, 404 or 401 error page loads. Which in most applications (I've already confirmed this in Coconut) is blazing fast!
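In other words: when timing a request, also check the status code, or you may be measuring how fast the error page renders. A minimal sketch (the URL is hypothetical):
require 'net/http'

start = Time.now
response = Net::HTTP.get_response(URI.parse("http://example.com/dashboard"))
elapsed = Time.now - start

# without this check, a lightning-fast 500 page looks like excellent performance
raise "got a #{response.code}, not a real page!" unless response.code == "200"
puts "dashboard rendered in #{elapsed} seconds"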
He gave us ten very good tips on automating performance tests. I'm sure they can help you, so you can download his presentation here.

Some embarrassing stuff

Nearly all Dutch speakers showed up with a boring or less than convincing story. Most embarrassing was the day's host speaker, Mr Bob van den Burgt. He started his keynote by stating that he didn't really know much about test automation. Well Bob, I believe that's probably true, but you shouldn't have told me. It ruined your credibility.
From his performance on stage, I could also conclude that he doesn't have much talent for presenting either. And to top it off, he spoke English with an awful Dutch accent, what we call "steenkolen Engels" in Holland, which translates to "broken English". This reminded me of Wim Kok, our former prime minister. Wim Kok was famous around the globe for two things: a) his last name and b) his incredible Dutch accent.

Another presentation at test automation day was called "Agile and the cloud - the impact of modern IT megatrends on test automation". Sounds interesting, right? Well, no.
First of all, the presentation was supposed to be given by Mr Wolfgang Platz, CEO of Tricentis Technology, but he didn't bother to show up. Instead, he left us with his Dutch employee, who didn't actually understand what "Agile" or "Cloud" means. Instead, he presented Tricentis' product Tosca, which you could probably use for test automation, but please don't. They're idiots. I know, the presenter probably did not have a choice when his CEO called him, and he wasn't very well prepared for such a gig, but come on. This is insulting.

And why do conferences forget to mention that a presentation is for a product these guys sell? At Microsoft DevDays 2010 I experienced the same thing: having to sit out 45 minutes of product advertising by people who refused to be honest about the shortcomings of their own product. That really pisses me off, and next time I will leave the room.

To end on a positive note

There was one Dutch speaker who showed up with an interesting project. Professor Arie van Deursen from Delft University presented Crawljax, an open source Java tool for automatically crawling and testing modern (Ajax-based) web applications. It sounds very promising, and it was refreshing to see what a scientific approach to testing web applications can deliver.

After a day of conference I can conclude that test automation is still boring and that this conference showed very few new developments. But hey, to me it was comforting to know that my knowledge of automated testing is up-to-date.

Saturday, 18 June 2011

Jeff Sutherland seminar

Last week, I was lucky enough to attend a Jeff Sutherland seminar (read his blog here) at the Dialogues Technology House in Amsterdam. I've been a fan of Scrum since I first encountered the Agile movement at the end of my Information Technology studies. So for me, it's exhilarating to hear someone speak who was at the cradle of the Agile movement. This man has had an enormous influence on how we create software today, and I think it has been a positive influence as well.

Jeff talked about why you should do Scrum, but if you've read Mike Cohn's excellent book 'Succeeding with Agile: Software development using Scrum' you know what he's talking about (and more). I won't bore you with why you should do Scrum (maybe later :-) ), but I have written down some notable statements from Jeff:
  • developers should be having fun!
  • timesheets reduce productivity by 10%, throw them out, they're not true anyway
  • even a bad scrum is better than a good waterfall
  • scrum is like a martial art: you first have to learn the exact basic moves, and once you've got those worked out, you continuously improve your skills and adapt your own style
  • if you want high performance, communication in your team is key
  • if it's not working, STOP DOING IT!
  • your team needs a goal transcending the usual day-to-day struggle
  • scrum is based on truth, transparency, commitment and trust
  • a key performance indicator is how fast your team fixes the build
  • turnover destroys productivity
  • in the future there will not be a company in the top 100 that isn't using Scrum
  • specialisation will not only slow you down, it will eventually kill you
A statement like "if it's not working, stop doing it" seems quite obvious, but think about your own environment: I'm sure you can find some examples of ineffective behaviour that you are continuing anyway.

The most interesting part of the talk was Jeff's suggestion that Scrum is also successful because it follows human nature. We like helping each other out, it makes us feel better. We need to feel useful, to have some significance. And we need to have some fun in order to get the right motivation. It made me think about a book I'm reading at the moment, The Art of Happiness by the Dalai Lama. In this book, the Dalai Lama discusses several themes related to becoming happy. For the Dalai Lama, the meaning of life is the struggle to become happy. At some point in time, everyone will wonder "is this going to make me happy?" Doing Scrum increases the chance that your employees will say "yes, this job will make me happy!"

Friday, 17 June 2011

Capistrano deployment: Cap shell != Bash shell

Somehow, every time I use an open source plugin, I get to the point where stuff just doesn't work and the documentation doesn't contain an answer to my problem. This time I was releasing Coconut to a fresh Redhat 6 server, which had just been created in our private cloud.

During cap deploy, it suddenly stopped while executing the bundle install task. One gem couldn't compile: it threw the much dreaded "incompatible character encodings: UTF-8 and ASCII-8BIT" error (which you probably recognize if you're using Ruby 1.9.2). The error baffled me, since it doesn't occur on any of our other Redhat servers, nor on any local development or test machine. It seems my new Redhat installation somehow got confused about the default locale or encoding settings. After some Google research, I found out that setting the environment variable LANG to "en_US.UTF-8" would fix this problem: it sets the default encoding for Ruby to UTF-8. Some resources state that setting the LC_CTYPE variable to the same value might also be necessary.

So, I added the environment variable to the .bashrc of the capistrano user. But to no avail. After digging further (much further than I really wanted), I found out that the shell Capistrano uses to deploy is NOT the same as a normal SSH Bash shell! That was a bit surprising to me, since Capistrano uses SSH. But they have provided a way to configure the cap shell to use the correct environment variables. Just add this to your deploy.rb file:

set :default_environment, {
  :LANG => 'en_US.UTF-8'
}

You can add as many environment variables here as you need. Sweet.
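For example, also setting the LC_CTYPE variable mentioned earlier would look like this:
set :default_environment, {
  :LANG     => 'en_US.UTF-8',
  :LC_CTYPE => 'en_US.UTF-8'
}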