Test Competition Results!

It was a quiet Friday morning in the United States.  Or, I mean, Friday afternoon in India and Singapore.  Later in New Zealand, and in Europe …

The point is, there were a lot of people playing. Seventeen teams registered from four different continents for three hours of aggressive testing, followed by a weekend where teams rotated through performance testing.

Now, the winners in each category.


Most Accurate Bug Reporting: A tie between Team GTC and Team Hylander

Team GTC represented the Barclays Global Test Center in Singapore, with staff including Nitin Sharma, Amit Pandey, Alok Maharaj, and Rajesh Pai, all in Singapore.  Also testing was Jasraj Shintre who, through the wonder of the internet, might have been in New York City or might have been in Singapore; he didn’t tell me.

Team Hylander represented Hyland Software in Westlake, Ohio. The team included Adam Koehler, Bob Loper, Nick Stefanski, David Romel, Mike Schaefer, Aaron Duncan, and Rob Jones.

Best Overall Functional Testing: Team Ocean’s Three came from three different continents and collaborated entirely through web technologies.  The team included Chris Kenst in the United States, Ashwin Kumar in India, and Jaime Boland in the United Kingdom.

and the “big one” …

Best Overall Software Testing, Functional & Performance: Team Four Musketeers – Alina Avadani in Romania, Brindusa Axon in the United Kingdom, Tobias Geyer in Germany, and Katharina Gillmann, also in Germany.

The Four Musketeers did especially well in the competition, tying with Ocean’s Three on functional testing and turning in the best performance report.  They also wrote up a brief after-action blog post that shows what the experience was like from their perspective.

Congratulations, GTC, Hyland, Ocean’s Three, and Four Musketeers!

Let’s talk about what we learned.

The Scope 

With three hours to run the testing and create the after-action report, we put the test teams under a fair amount of time pressure.  We also added ambiguity, allowing the teams to test any (or multiple) of four different websites, any one of which they would be hard pressed to test in three hours.  We provided no requirements for what a “good” test report looked like, nor what to focus testing on.

We did give the teams one break: access to the bug tracker, Telerik Team Pulse, to enter bugs a week before the competition.  Still, we didn’t provide any input into what a “good bug report” looked like.

This means the teams had to make tough decisions about what to do and when to do it.  A few teams focused on writing a small number of important bug reports, and writing them well, perhaps on only one system, while other teams filed twenty or thirty bugs that were all over the map.

We did add one layer of realistic help: a “customer” to collaborate with.

Requirements and The Customer

The judges and volunteers were collaborating by Google Hangout video.  It turns out there is a backdoor way to publish the video from a Hangout as a streaming YouTube show, with a roughly five-second delay.  Immediately after I published the competition blog post, we began the video, and allowed any team member to ask any question, which I would try to answer, both by speech and by typing.

It was awkward.  It was tough.  The questions came from multiple directions; I had to multi-task; it was tiring.  At a few points the audio cut out for a few seconds, but we tried … just like a software project.  Believe it or not, YouTube saved the video, so you can watch it if you’d like, but honestly, I think you might find the comments more educational.

The Bugs And The Reports

Then the bugs came in.  Most of the bug reports tried to cover the basics – what the user did, what the user expected, and what actually happened. However, in the rush to get everything filed as soon as possible, there were a fair number of “unable to edit survey” bugs with no details.  My own timeframes haven’t been that compressed, but I have felt similar pressure myself: two weeks ago I was trying to knock out my work so I could have time for the test competition and a conference to follow, and I have to admit, a review of the bugs I filed shows they were less than stellar.

I have no time machine, and I have to admit, time spent filing a bug is time taken away from searching for the next one.  What I can tell you, though, is that the teams that filed fewer, better bugs tended to score more highly in the competition.

The reports are another interesting story.  As the customer, I explained that my desire was for the report to help me understand the status and the critical risks – that I wanted the story of the quality and risks, not the numbers. This is very different from a test report that talks about the find/fix/reopen/bug-discovery “rate” in order to predict the end date of the project.  Also, in an age moving toward agile testing, where status is a two-minute topic in a daily standup, the concept of a test report may be unfamiliar.

I have the impression that most teams thought of the report as an afterthought. (A few teams didn’t even bother to write a test report, opting to spend all their time on bugs, hoping to sweep that category!)  As a result, there’s plenty to talk about in the test reports, perhaps even material for training.  I have to get permission from the teams to distribute the reports, but I’m working on it!

One team, Team India, did an analysis of the graphics rendering in their performance report, to provide insight into how to make client-side loading faster.  That was interesting!  I may have more to say about the performance reports in a different post.

The Wrap-Up

We had a large volunteer team, and I’m certainly missing some people, but Nimesh Patel and Lalit Bhamare took a great deal of time to score carefully as judges, while Smita Mishra and Jason Coutu did the back-end technology work to really make the competition happen.  These folks worked entirely as volunteers, out of the goodness of their hearts, a desire to get to know each other better, to have a little fun, and to learn something.  If they had half the experience I did, then I’m sure it was worth it.  (Special thanks to Telerik, for offering the bug tracker, and to NRGGlobal, the company whose website you are reading, which provided staff hours, the prizes, the website, and a lot of resources to make this possible.  Really.  NRG makes performance testing products with a human feel; they are much more than a blog!)

Earlier I mentioned the blog retrospective from Tobias Geyer of the Four Musketeers team. Chris Kenst, of Ocean’s Three, wrote up his as well, and a few other people tell me they are interested in doing the same.

Right now, we’re in the rest-and-regroup stage, but somehow, I expect there may be more test events to come.
