Showing posts with label testing. Show all posts

Friday, February 08, 2013

Discovering the value of QuickCheck

I like tests. I like the confidence that you get when you have an extensive test suite, so you know you'll be notified when things break. But so far I've mainly used frameworks like HUnit to explicitly state my assertions. I know what the code should produce given the inputs, and I encode that information in tests. So I didn't really see what QuickCheck brought to that. I'm still not that comfortable with the random aspect of it, but I understand that it can find corner cases for you, whereas with assertions you need to think of the corner cases beforehand, or add them after your users encounter them (ouch). But I'm starting to see the value of QuickCheck.

I was working inside the HTF source code, and the goal was to produce a textual representation of a diff between strings similar to what the diff utility does on Unix. I just wanted to have a fallback for machines that don't have diff installed. And Stefan had written a QuickCheck property to verify that the outputs of the pure Haskell code and the utility matched. I used the property to fall back to my comfortable way of working: fire QuickCheck, let it find a failing case, add it to a HUnit TestCase, and try to understand and fix the issue without breaking the rest. So QuickCheck was just a test case generator.

But then Sterling, the maintainer of Diff, convinced me that a 100% match between the code and the utility was a utopian goal, since diff makes some tradeoffs for speed and compactness, and rewriting diff with all its quirks wasn't really worthwhile. And that's when I started to like QuickCheck. I thought: we want exact matches most of the time, but it's fine to have differences as long as what we generate is not too much bigger than what the diff utility outputs. This is what QuickCheck's classify function can do: I first verify that the size of the output is OK (less than 10% bigger than the utility's output), and then I classify the test case as an exact match if the two outputs match. Then I use the cover function (not, ahem, covered in the manual) to ensure that I get at least 90% exact matches:


cover (haskDiff == utilDiff) 90 "exact match" $
  classify (haskDiff == utilDiff) "exact match" $
    (length haskDiff * 100) `div` length utilDiff < 110


So the tests are random, yes, but I have pretty good confidence that the output function does what it's supposed to do. 90% of the time :-).
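The size-tolerance check in the snippet above can be factored into a pure helper, which makes the 110% threshold easy to test on its own. A minimal sketch (the name withinTolerance and the standalone framing are mine, but the arithmetic mirrors the property):

```haskell
-- Accept the pure-Haskell diff output if it is less than 10%
-- longer than the reference output from the diff utility.
withinTolerance :: String -> String -> Bool
withinTolerance haskDiff utilDiff =
  (length haskDiff * 100) `div` length utilDiff < 110

main :: IO ()
main = do
  -- Exact match: ratio is 100, which is < 110.
  print (withinTolerance "abc" "abc")
  -- 5% bigger than a 100-char reference: ratio 105, accepted.
  print (withinTolerance (replicate 105 'x') (replicate 100 'x'))
  -- 10% bigger: ratio 110, which is not < 110, rejected.
  print (withinTolerance (replicate 110 'x') (replicate 100 'x'))
```

Note that integer division makes the check slightly lenient: any output under the 110% boundary passes, and the boundary itself fails.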

Thursday, December 06, 2012

EclipseFP and HTF: demo video

The next version of EclipseFP will integrate with the upcoming version of HTF, a great Haskell Framework for automated tests. As a preview of things to come, I've uploaded a video on YouTube.
In this video, I create a small library project using the code from the HTF tutorial, then use the HTF Test Suite wizard in the Cabal editor to automatically generate the Cabal test-suite stanza, the main module, and a test module for each library module I want to test.
Then I paste in the actual HTF code and run the test suite. The results show in the new Haskell Test Results view. I correct the mistakes, wait for the rebuild, and the tests pass!

This will be part of the EclipseFP 2.3.3 release (or maybe I'll call it 2.4), which will probably go out as soon as HTF version 0.10 is released.

I'm still experimenting with screen captures and YouTube video, so apologies if the video is not crystal clear. Best viewed on the large viewer, I guess.

Sunday, November 25, 2012

Improved test support in EclipseFP

Last year during his GSoC, Alejandro Serrano did a great job of integrating test-framework results into the JUnit view. This has served me well in my own endeavors (buildwrapper has 50-something HUnit test cases).

But there were a few limitations:
- using the JUnit view added a dependency on JDT plugins
- double-clicking a test result only showed the error text, because there was no link between a test and the source code it came from
- the test results only appeared once the full test suite had run

I have reviewed the options, and the upcoming version of EclipseFP will have the following changes:
- A specific view for Haskell test results. Basically a rip-off of the JUnit view, without the progress bar, because that's a custom control the JUnit UI guys wrote and life's too short. It has history so you can get back to previous runs, and shows errors or failures in the bottom text area.
- Integration with HTF. While the test-framework integration will still work, using HTF gives us:
   - Jump to location (from test result to test definition location or failure location), thanks to the HTF preprocessor
   - Running updates, thanks to HTF generating JSON output after each test rather than one big file at the end.
   - Automated discovery of tests based on naming conventions
   - Easy reduction of test cases to run from a command line argument
   - Elapsed time of tests
   - while still retaining integration with HUnit and QuickCheck

Thanks to Stefan Wehr for his great work! Make sure to read his tutorial on HTF.

I've ported the buildwrapper test suite to HTF, and voila!

Since HTF relies on a few pragmas, I'm now going to work on a couple of wizards to simplify creation of the HTF Cabal test-suite and an HTF module. Watch this space!
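To give an idea of the naming conventions mentioned above, here is a rough sketch of an HTF test module, based on Stefan's tutorial (the test and property names are placeholder examples; it assumes the HTF package and its htfpp preprocessor are installed):

```haskell
{-# OPTIONS_GHC -F -pgmF htfpp #-}
module Main where

import Test.Framework

-- HTF's conventions: test_* functions become HUnit test cases,
-- prop_* functions become QuickCheck properties, and the htfpp
-- preprocessor collects them all into htf_thisModulesTests,
-- recording source locations for the jump-to-location feature.
test_singleton :: IO ()
test_singleton = assertEqual [1] (reverse [1 :: Int])

prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs

main :: IO ()
main = htfMain htf_thisModulesTests
```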

Thursday, October 20, 2011

GUI Testing using OCR: project Sikuli

I had a great idea this morning... and then realized that there was an open source project already doing it.
I thought: instead of using a specific testing framework for each UI technology I have (a tool for the web, a tool for SWT UIs, etc.), what about a tool that could recognize the text on screen and click or type in the proper places? The same technology for every type of UI, and something that doesn't rely on pixel-perfect positioning to work! Turns out (surprise!) I'm not the first one to have had the idea, and looking around I stumbled on Project Sikuli. This looks great: it uses images to find "interesting" parts of your UI, and there's also support for text recognition. So potentially you could say "click on the button that says 'Submit'"! It integrates with Java code, so you can easily have JUnit tests using it.
Unfortunately, for my own purposes I couldn't get very far easily because of a lack of support for transparent images (or, more precisely, it takes transparent areas into account when matching images, so it doesn't recognize an image if the background changes, which happens if your desktop theme changes, if you use the same icon in different contexts in your UI, etc.). But it still looks like a promising tool!