Thursday, October 2, 2014

On Exploring - Where and How I Began

I was recently asked if I thought Exploratory Testing was "oversold."  I thought about it for a bit.  I suspect it has been oversold by unscrupulous charlatans, as has every other approach to testing and every tool that supports those approaches.

I've read pieces from people on Exploratory Testing (E.T.) that made me wonder if I was absolutely clueless, because I could not recognize what they were calling E.T.  Then I paused and reread some of their arguments.  It struck me that the problem was that I had no way of reacting the way I wanted to.  Perhaps it was because of how I came to E.T.

Many years ago I was working at a company where development managers, directors, and the like insisted that well-formed and well-defined tests had detailed steps to follow, with detailed "expected results" and detailed criteria for "Pass" or "Fail."

These of course were tied to the detailed requirements that were defined for the project.  These were the same requirements the design was supposed to be based on. 

Since there were not nearly enough testers to actually test the changes, the decision was made to have developers who did not work on those portions of the system execute the test scripts.  They would be familiar with the systems and be able to execute scripts quickly and efficiently.

Sure enough - they pounded through the scripts like lightning.  One minute none of them had been run; a few hours later, there were two or three left.  That was something of a surprise, since I expected executing the scripts, even with three or four people working on them, to take a full day or more.

Somehow, they blew through them in slightly more than three hours.

The Beginning...

Then it dawned on me - instead of looking for specific items from the "expected results," they were... I'm not sure what they were doing.  So, I changed the rules.  I removed the "expected results" column from the test steps and, with the next build, sent them off to be tested.

This time, it took... longer.  The people doing the testing were a bit flabbergasted - I had "changed the rules!"  This was not what was intended - this "was not the right thing!"  Why, HOW were people supposed to know if it was right or not?

That was an interesting response.  It also highlighted something important - many folks are comfortable with being told what to do and what to think and when to think it.  I am not.

The issue was, as long as someone (like me) was willing to review the "observed results" against what was supposed to happen, most people were pretty comfortable with it.  I noticed some things here, though, that made me wonder if there was something more I could do to introduce some variables into the process we were exercising.

I thought about mentioning it at the next project meeting.  Over coffee with a development manager who had views similar to mine, we agreed that the "control" issues certain people had over what "good testing" looked like could never be reconciled with such a radical concept.

So, for the next round of testing, instead of detailed instructions, the developers doing the testing work were given general instructions: "Create a transaction with these characteristics..." or "Create a new warehouse item in one of these categories..."

They were instructed to write down precisely what they did, the sequence they did it in, and what results they observed.  This worked really well.  There were detailed notes around each person's activity.  There were many areas they noted that "should be looked into."  And, most importantly to my mind, they were looking into how the system reacted.

I caught no end of grief when the bosses realized I was not adhering to the standard practices. (See the blog post On Folly; I mention the company in there...)

Instead, many bugs were found that would not have been found otherwise.  We had precise "steps to reproduce," as well as snapshots from the logs and the database(s) at the time the tests were run.  We had loads of data showing things that worked well and things that were horribly flawed.

And still, I was catching grief over this "sloppy," "unprofessional" work.  So, one day when I was feeling a bit testy, I began searching the web.  I found a reference to a form of testing that a couple of guys were advocating.

This involved not having detailed, planned scripts.  It also involved careful observation by the testers of how they tested, what they did, and how the software as a whole behaved.

This made sense to me.  A great deal of sense.

I began reading as much as I could.  I found that this wild and crazy idea of mine - where people were not given precise steps to follow or specific formulas to determine whether something was "right" each and every time they tested a piece of software - had a name.

These people whose ideas I was reading called it "Exploring."  They called their approach "Exploratory Testing."   I read what I could - first of James Bach and Cem Kaner, then Michael Bolton and others.

I put "Managing the Testing Process" on the shelf, and looked at "Testing Computer Software" and "Lessons Learned in Software Testing."


Where I Stand Now...

I found that one of them was focused on controlling the process model and the artifacts around testing, while the other two looked at testing itself.  Those two books looked at the actual reality of testing.  That rang true to me.

From that point, I found less and less value in the rituals of testing.  The idea that the only thing that can be tested is "code" is one of the silliest ideas I can recall.  Yet that seems to be what some people want testers to "test." 

That is the saddest thought I can recall for some time around testing.

Has E.T. been "oversold"?  By some unscrupulous charlatans, yes.  By people who work with it on a daily basis?  No - not at all.
