I’ve realised why it is that people on all sides of the development process hate testing. Testing should be done by those who actually USE the system. But users get rather annoyed if you give them things to test that aren’t yet anywhere near right. They find problems within seconds, and you have to go away and fix things, leaving them frustrated at having made time for testing and annoyed that you hadn’t got it right. So users hate testing early on.

The result is that testing is carried out by developers (of one type or another) until quite late in the process, and they are largely testing what they built. They won’t realise that the doohickey shouldn’t allow selection of the flobbit, that the order of screen elements is both stupid and clumsy, or that the design makes the process far too slow. Only real users will realise this. So developers hate testing, because they essentially regard it as useless: they know it works the way they told it to, so why should they run tests?

The theory, then, is that tests should be specified at the outset, by the users. If they write the tests, then the developers can just follow them and catch the same problems that the users would. Right? But how do you convince users to write tests that a) someone with limited domain knowledge can follow and b) can expose problems that only a real user would find? The answer is that you really, really can’t. In my experience, users are often nowhere near technically or logically minded enough to write tests that cover the various scenarios, never mind expose deep dark flaws in the operation of the software!
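The closest workable version of this that I can imagine is letting users write scenarios as plain data (spreadsheet rows, essentially) and having developers wire up a tiny harness to run them. A minimal sketch, where `validate_order` and all the scenario rows are hypothetical stand-ins for the real system:

```python
def validate_order(quantity, in_stock):
    """Toy business rule standing in for the real system under test."""
    if quantity <= 0:
        return "rejected: quantity must be positive"
    if quantity > in_stock:
        return "rejected: not enough stock"
    return "accepted"

# Rows a non-technical user could plausibly fill in:
# (quantity, stock level, what they expect to happen)
scenarios = [
    (3, 10, "accepted"),
    (0, 10, "rejected: quantity must be positive"),
    (5, 2, "rejected: not enough stock"),
]

def run_scenarios(scenarios):
    """Run every user-written scenario; return the ones that failed."""
    failures = []
    for quantity, in_stock, expected in scenarios:
        actual = validate_order(quantity, in_stock)
        if actual != expected:
            failures.append((quantity, in_stock, expected, actual))
    return failures

if __name__ == "__main__":
    print(run_scenarios(scenarios))  # [] means every scenario passed
```

Even this only covers the scenarios the user thought to write down, of course, which is exactly the limitation described above.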

So what ends up happening is that the developers develop & test, then eventually pass the “working” product to the users, who quickly find flaws. Then the developers have to go back into their development & testing cycle. This repeats until developers hate testing, users hate testing and developers & users also both hate each other.

There are some things that can mitigate this hatred. Dynamic languages can make collaboration between users & developers more of a reality. If the time between the user pointing out that something is wrong and the developer fixing it and offering it up for retesting is measured in minutes rather than days, this can ease the burden greatly. Good requirements gathering is invaluable in making sure the right thing is being built in the first place. User- or task-centred design can make a massive difference.

But at the end of the day, everyone still hates testing.

So I was thinking: regression tests are a brilliant idea when tests can be automated. Is there any way of automating user testing? It occurs to me that you can easily record macros of what the user is doing, so presumably it should be possible to record a macro of a user testing and then rerun it at a later date. Does anyone know of a way of doing this already out there?
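The record-and-replay idea itself is simple enough to sketch without any real GUI machinery. Here the `LoginForm` class and its fields are hypothetical stand-ins for an actual interface; the point is that a recorded session becomes a replayable regression test:

```python
class LoginForm:
    """Hypothetical stand-in for a real UI form."""
    def __init__(self):
        self.fields = {"username": "", "password": ""}
        self.submitted = False

    def type(self, field, text):
        self.fields[field] = text

    def click(self, target):
        if target == "submit":
            self.submitted = True

class MacroRecorder:
    """Wraps the form and records every action as a replayable event."""
    def __init__(self, form):
        self.form = form
        self.events = []

    def type(self, field, text):
        self.events.append(("type", field, text))
        self.form.type(field, text)

    def click(self, target):
        self.events.append(("click", target, None))
        self.form.click(target)

def replay(events, form):
    """Re-run a recorded session against a fresh build of the form."""
    for action, target, value in events:
        if action == "type":
            form.type(target, value)
        elif action == "click":
            form.click(target)

# Record a user's session once...
recorder = MacroRecorder(LoginForm())
recorder.type("username", "alice")
recorder.click("submit")

# ...then replay it later and check the outcome hasn't regressed.
fresh = LoginForm()
replay(recorder.events, fresh)
print(fresh.submitted)  # True
```

The hard parts a real tool would have to solve are identifying UI targets robustly (so the macro survives layout changes) and checking outcomes, not just replaying clicks.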