Wednesday, August 28, 2013

Testing Ideas CAST 2013 pt 2

Wednesday morning dawned, well, early in Madison.  Warm already, but nothing to deter the hard core from heading out to Lean Coffee.

There is a Keynote address by Dawn Haynes this morning, more track sessions, ice cream in the afternoon, SIG meetings in the evening.  Did I mention there will be ice cream in the afternoon?  :)

And here we go!  LeanCoffee is underway. Fewer people than the last few days - of course it has been some late nights.  First question is related to "motivating" people to do more.  Ohhhh, my. 
Problem is, of course, it is really hard to "motivate" someone.  A better question might be "is there a problem with that person's work?"

Next question - Loyalty to test or loyalty to a product?  Getting people to "engage" in something that goes against the company culture?  In silo'd orgs, can we count on people to look at the craft of testing beyond what we need them to do?  Should we call on people to consider more than is needed for their job - right now?  Forcing change or "cross-team communication" by fiat is not likely to work unless there is an underlying interest - and the directive is essentially a nudge to move forward.  If there is no real interest, then mandates won't work.

Next question - How to "sell testing" to job candidates?  Oh - did you sit in Erik Davis' talk yesterday? Yeah, but I wanted to talk about it anyway. Interesting question.

How often do we do Lean Coffee?  As often as we can?  Really - that was kind of the answer we came up with.  Talked about LeanCoffee.org for ideas.

How do you interact with "customers"?  Hah!  What's a customer?  What is the difference between a customer and a consumer?

===

OK -  In the main room.  Loads of announcements - Education initiatives supporting non profits (Summer QAmp and Per Scholas); micro-conferences - one day affairs to give "a taste of CAST" and build the testing community; webcast training sessions - for members and the broader community; things we are already doing: mentoring; grant program for local groups; bbst...

Election.  I was honored to be announced as having been re-elected to the Board of Directors of AST.  Thank you.

Dawn Haynes gave an extremely personal "retrospective" on success, failure, what is good, what is not good.  Her "I'm not an expert but I'm going to give you advice."  OK - I can live with that.  The curious thing was the relationship she drew between introspection, retrospectives and what we do with software.  In her case, it was learning to skate as an adult - and compete as an adult in skating contests.  She ALSO had some video from 1995 of her skating in the "US Adult Figure Skating" championships - or some such.

What counts as failure? Is it falling down failure? Is having no idea what happens next failure? Is having no plan or no concept of what you can do failure?  It kind of depends on what is going on and what you are expected to be doing, no?

In reviewing these ideas, what does that do for us?  When we look at testing, what can we draw from that?  The thing is, most people find it really hard to examine themselves.  It is really uncomfortable for many people.  (Pete - yeah, it is extremely hard to do.)

OK - this is too good to write and listen at the same time.  Read Michael Larsen's live blog here: http://www.mkltesthead.com/2013/08/larger-than-live-cast2013-day-3.html


---

Break and then in Cindy Carless' presentation on lessons for software testing from the book The Elephant Whisperer (Lawrence Anthony).  Game preserves in South Africa tend to be wide open expanses of ... wide open expanses.  If you're like me you've some idea from shows like "Wild Kingdom" and things of that ilk.  The interesting thing is that there are areas within the reserve that are set aside as "safe" places - fenced/controlled areas that are perfect for rehabilitating injured or traumatized animals, like juvenile elephants.  These closed areas, called "boma," are bits of reserve within the reserves.

The thing is, for them to work, the environment needs to be handled and managed carefully, or the problems you are trying to fix will only get worse.  This includes animals not part of the herd - as in, there are critters present that are not elephants but are part of the broader ecosystem.

To make things work, the matriarch (dominant female) of the herd needs to trust Lawrence for his work to succeed.  There was a huge variety of elephants in the boma that needed help, and none of that would happen if the matriarch did not trust Lawrence.

Relationships are funny things.  We run into problems, or terms of the relationship, that we have not anticipated.  This creates potential for some level of conflict on projects - even though we are dealing with people and not elephants.

One way to build trust, and consider how relationships are being established, is a debrief.  Cindy used a daily debrief - even when she was the only tester on the project.  This was done through journaling and logging.

Giving feedback is important - really important.  PMs are looking for feedback all the time, as are devs and other project participants.  Anthony's book describes how he encountered various forms of feedback from the elephants - and struggled to learn to give feedback to the elephants.  This is really similar to what a tester (or new tester/test lead) needs to do with project participants.

Her model is similar to what others have described, yet is worth reiterating here:

Get Context;
Describe/stabilize environment;
High Level Sanity (definition);
Areas of Interest (tracking progress through the known area);
Risks/Opportunities.

Cindy's background includes a degree in commerce - translated - she did not intend to be a tester.  Hey - that is fairly common at CAST!  She also found ET to make sense.  The question of blending structure with application understanding/knowledge came up as well.

"Bush Lore for Testing"
* The Boma is a starting point - the closed environment is only a safe place to begin, you must move out from there.
* Poachers are and will be a problem.  (She refers to Political Poltergeists.)  Same idea.  Something happens you did not anticipate - sometimes this is done intentionally by others.  Learn to deal.
* Change creates discomfort - Change means setting things aside and doing something else, the familiar is replaced by unfamiliar.
* Whisperer - openness to joint discovery - partly controlling, partly trusting.
* Respite is often found in unusual places - brilliant example of this, the buck chased by cheetahs that jumped into the back seat of an SUV w/ its window open. (Any port in a storm)

Afterward: On Anthony's death, the herd of elephants he had protected and rehabilitated went to his settlement and paid homage (in elephant fashion), similar to what they do when an elephant dies.  They stood quietly around his home.

Open Season: Sometimes we don't know what is happening.  We see behavior that is wrong and fail to understand what caused it.  In Anthony's case, a rehabilitated elephant had a change in personality for unknown causes.  The elephant became violent and extremely aggressive.  The decision was made to put the elephant down.  After that, in the process of dealing with the body, they discovered an abscess under a tusk.  This was the cause of the elephant's change in behavior.  They (Anthony) had not identified it in time to save the elephant.
==
LUNCH TIME!
==

Peter Varhol's session on software failures was the session I went to after lunch.  It's an interesting walk through some significant failures over the last 20+ years, including the Mars Climate Orbiter, the MS Azure failure, and the 2003 Power Outage.

Central lessons from these -
* Testers are an essential part of the project team;
     - thanks to a unique perspective software projects need;
     - as long as they exercise their skills in that pursuit;
*  Testers must try to break the system
*  Testing against requirements is needed (my take is that
*  Test like people may die (in some contexts, they might.)

Nice summary.
===

Next up - Testing when software must work by Barbara Streiffert.  Barbara is with JPL - yeah - the Jet Propulsion Laboratory.  They do Space stuff.  How do you test stuff and not rely on simulators?  When it goes in space.  Yup.  One shot - it better be right.  How do you do that?

She's starting with a way cool computer generated simulation of the deployment of "Mars Science Laboratory" - the Rover Curiosity.  (Which, by the way, is a great name for an exploratory device.)  She's explaining the functions in the rover, which is really pretty cool.

After explaining the basics of terms for their context, she's off into the hard stuff.

Right - this is the classic formal review process - extremely rigorous.  They have a variety of software classifications.  Class A software is human related - stuff people interact with.  (Her group does not work on that.)  Class B is Mission Critical stuff - as in, if this fails, the craft crashes.  Seems pretty straightforward to me.  There are others, but you get the idea.  Test software is Class D - it's related but not critical.

In their environment - if a critical bug is found in the course of a mission, everything stops except work on fixing that bug.  Time is truly of the essence.  Their core rule is to fly as you test and test as you fly.

Some things are constant, though - each project is unique.  Software is developed as a service for each mission.  There are unique rules for each mission based on the nature and characteristics of the craft - the platform it's running on.

Fun fact: commands sent from JPL / Flight Control to an in-flight craft are sent in binary, with appropriate handshakes, etc.

Another fun fact - JPL's "test scripts" are SOFTWARE - yeah - code.  This is stuff that doesn't have a UI, right?  Testers are considered an integral part of the team.  They are developing their code alongside the devs and working together.  (This is something I believe Jerry Weinberg would recognize from his days at IBM & NASA.)

Quote - "Code that is dependent on unstable third party software is extremely difficult to test, if not impossible."  Some things are always true.


OK, this is interesting.  The development methodology is Scrum.  Yeah, who'd have thought?  OK, for all the Scrum-iphiles, guess who the Scrum Master is.... come on, guess.  If you said "Barbara Streiffert" you'd be 100% right.  Yeah, a tester who is scrum master.  What's not to love?

Now, they are not "pure" Scrum, so all the hard-core Scrum folks are screaming "You're doing it wrong!!!!!" - No, sorry - they are serving their context.  Lives can be lost if a critical error is not addressed.  That means a 2 to 4 week sprint can be interrupted to fix a much bigger problem.

The solution to that is really simple - schedule people at 50%.  No really - it works.

There are some really important ideas she is giving way too fast for me to type.  "Testing is a continually changing process and updating that process is crucial to its being successful."  "Test task isn't to do testing but to plan testing."

Understanding software and its purpose - the application - is crucial.  Without that you will not succeed.  (Pete note: That applies to so many places, including every place I have ever worked - EVER - not just as a tester.)

AND - she's wrapping with a video of the physical test of the skycrane.  This is the thing that lowered the rover to the Martian surface.  Pretty cool.

Open Season: HUGE comment: OK, Process weenies - pay attention.  "We talk about bugs every day.  When there is a critical one, there are phone calls or people find you.  There is a formal reporting communication system but it takes too long."  (Emphasis Pete's.)  When something is important enough, you MUST drop the process and fix it NOW.  Really - waiting for the bug report to cycle through will take too long.

===
Rob and Scott Summary
===

A fun bounce through their favorite bits of the last couple of days.

Ilari's presentation - via Rob - You do not have to fight for every bug.  Lack of priority not time. Absence of Evidence

Michael Hunter's presentation (via Scott) Describing tests at a high level (vs scripting them at a detail level) is a defense against inattentional blindness. 

Michael Larsen - via Rob - What we see vs what we think we see ; terms we take for granted -

Justin Hunter - via Scott - I'm not the only one who believes that "Design of Experiments" is a more valuable model for testing than QA.  (I may be wrong but I'm not alone.)

Geordie Keitt - via Scott - Alternate model for "Know your mission 2 levels up":
*  What is context;
*  Does my context encompass the task;
*  Does my boss' context encompass my context;
*  How can I better match my context, my boss's context & tasks?


Heather Tinkham - via Rob - Build bridges ; Dare to disagree ; Being wrong ; Build on limitations ; Seeing as forgetting the names of things ; Testers make mistakes


Dawn - via Scott - Dawn can deliver one hell of an experiential talk with far less prep than she believes - and so can you.
Dawn - via Rob - Life can be understood backwards but must be lived forward ; I wasn't a tall child ; Judge less, explore more.

Rob and Sabina - via Scott - *True* mentoring is a 2-way lifetime contract predicated on trust, respect and a mutual commitment to continuous improvement.  Cool point - Mentor & Mentee side by side without

Cindy - via Rob - Learn in a safe haven ; model testing on ecosystem ; regression as a swear word ; be vigilant like a sponge ; build trust w/ matriarchs ; daily debrief even when alone

Dee Ann and Manuel - via Scott - CEO - Don't throw excel spreadsheets at me.
Tell me what I should be concerned about.
Don't whine.
Articulate your strategy or replace yourself.

Anna - via Rob - Use focus groups w/ users to support testing both UX and functional;
Role of quality leader in agile transition;
"My Theory" - anne elk - Monty Python;
Agile Teams should have quality advocate ;
If I tell a man what to do, he freaks out ;
Whole team should be on the same page across the board ;


Markus - via Scott - Schools concept continues to offend many ;
Discuss culture instead.

