Note: This page describes a proposal in progress. In it, I am not attempting to set any kind of formal policy or require anyone else to do anything at all. — Andrew Moore 2008/08/08 07:05
There is a website (built with an application called smolder) that shows a graphical representation of the results of test suite runs. I have a modified t/Makefile that will bundle up the results of a “make test” and send them to smolder. Smolder then takes that output and formats it into pretty pictures that help us see how much of the test suite is passing. I hope to eventually set up a process that runs the test suite each night and sends the results to the smolder site. I'll also add code to the t/Makefile to allow you to submit results from your own platform, which may help us diagnose problems that crop up in your environment but not in other people's.
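For the curious, here is a rough sketch of the kind of Makefile target I mean. This is not the actual patch (that's linked below); the directory name and loop are just assumptions to illustrate the idea of capturing each test's TAP output and bundling it for upload. (In a real Makefile, the recipe lines must start with a tab.)

    # Sketch only: run each test, capture its TAP output, and bundle
    # everything into a tarball that smolder can parse.
    report:
    	mkdir -p tap-output
    	for t in *.t; do \
    		perl "$$t" > "tap-output/$$t.tap" 2>&1 || true; \
    	done
    	tar czf kohatests.tar.gz tap-output

The “|| true” keeps make from aborting on the first failing test, so the tarball always contains the full run.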
You can see the results of some recent runs of the test suite at http://arwen.metavore.com:8000/app/public_projects/smoke_reports/1. Or, head to the main smolder page and choose “Koha” from the list of public projects near the upper left.
Each report there covers one run of the test suite; click on any of them to see pretty charts of which tests passed and which failed. Currently, the last test of each report fails because we're running our tests in a bit of an odd way. I'm hoping to cure that soon.
You can also hang out in #kohanews on Freenode, where results are announced as they are added.
I'll soon submit a patch to t/Makefile that lets you generate smoke reports yourself. You can currently fetch the patch at http://arwen.metavore.com/~acm/patches/smolder/20080808-092324. With the patch applied, running “make report” from the t/ directory produces a t/kohatests.tar.gz tarball, which you can then submit through the smolder web interface.
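In other words, the workflow looks something like this (the exact patch-application command depends on how you manage your checkout; “patch -p1” here is just one possibility):

    cd koha
    wget http://arwen.metavore.com/~acm/patches/smolder/20080808-092324
    patch -p1 < 20080808-092324
    cd t
    make report           # builds t/kohatests.tar.gz
    # now upload kohatests.tar.gz through the smolder web interface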
Soon, I intend to distribute a program that uploads these smoke reports for you automatically, so that you can submit them directly from a “make” target without having to use your browser.
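That program might be as simple as a curl call wrapped in a make target. To be clear, the upload URL and form field names below are my guesses, not smolder's documented API; I'll confirm the real details when I submit the patch:

    # Hypothetical sketch only: the endpoint and field names are assumptions.
    curl -F "report_file=@kohatests.tar.gz" \
         -F "username=your-smolder-login" \
         -F "password=your-smolder-password" \
         http://arwen.metavore.com:8000/app/developer_projects/add_report/1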
I believe that writing tests and running them regularly helps measure and demonstrate the quality of our code. By setting up a site that aggregates, displays, and reports the success or failure of our test suite, I hope to keep all of the developers better informed of progress or problems in our application. It helps me to know that a new feature I've added or a bug I've fixed has not adversely affected another part of the application. Getting a report of a fully passing test suite after I've checked in some code builds confidence in the addition, and receiving a failure report helps me solve problems before they adversely affect users.
If you don't feel that way, or don't want to play along, that's OK. You can ignore all of this.