Mind the Ps and Cues

Last week I had the pleasure of attending the inaugural Mind The Product conference. To my knowledge it was the first ever London conference created for Product Managers by Product Managers.

My day started in a long queue of product managers at 9am for a 10am start. That eagerness spoke to the quality of a speaker line-up featuring the likes of Marty Cagan and Tom Hulme.

I ended the day with a refreshed sense of excitement around creating great products, some new ideas and new friends. Thinking over what stuck with me a few days later, there seems to be a “P” theme going on.

People

All the speakers, organisers and attendees seemed to be genuinely interesting, smart, fun people. That doesn’t always happen when you bring 500+ people together in one space.
Charles Adler, co-founder of Kickstarter, spoke passionately about how important trust has been in building a successful team there. It reminded me of the “People over Process” facet of the Agile movement, a mantra I try to live by, but one that remains harder to follow than it should be.

Prototyping

All the attendees I spoke to were blown away by Google’s Tom Chi’s prototyping prowess. A working Google Glass prototype on day one, followed by countless rapid iterations to reach a working form factor, was an inspirational story. Tom’s message to maximise the rate of learning by reducing the time it takes to try things really struck a chord with me.

Purpose

Tom Hulme from IDEO talked about how all great companies have a purpose. The concept of a company’s “Purpose” resonates with me much more than “Vision” or “Strategy” ever has. Tom proposes that your product should be a vehicle for your purpose and that everything you do should come back to that purpose. I think I agree with him.

Performance

Hannah Donovan and Matthew Ogle from This Is My Jam showed, with a bit of live vinyl and beer drinking, how performance can really bring a talk to life.

Pivots

Yep, people talked about pivots. They also talked about how people talk too much about pivots.
I was going to discuss pivots, but I changed my mind.

That’s what stuck with me the most: all the talks were of a high standard, and I recommend checking mindtheproduct.com to watch the videos once they are posted.

Posted in Uncategorized

Warning!! Your Product May Be Less Attractive Than You Think It Is!

I recently discovered an interesting fact. Studies show that we perceive ourselves to be 20% more attractive than we actually are.

This was one of a handful of fascinating studies described by Robert Trivers during an RSA talk about his latest book, Deceit and Self-Deception: Fooling Yourself the Better to Fool Others.

Trivers has spent the last few years studying the peculiar phenomenon of self-deception. The basic premise of his book (which I plan to read as soon as possible) is the hypothesis that deceit and self-deception are linked. From an evolutionary perspective, it is logical that we can better convince others of our superior status, strength, attractiveness etc. if we believe these things to be true. I won’t attempt to outline the arguments; you would be better off reading the book or listening to the audio of the RSA event.

Let’s assume that the hypothesis, “We deceive ourselves to better deceive others” is true. What does that mean for Product Managers?

Most Product Managers are their product’s biggest champion. I don’t know of a Product Manager who would work on a product they didn’t believe in. That belief needs to be infectious for your product to grow. Colleagues, customers, friends and strangers should all be left with a great impression of your product after speaking to you.

But what if we are fooling ourselves? What if we view our products as 20% more appealing than our customers do? All that confidence is great when pitching a product or writing convincing copy, but does it help us build products that people want to use? Are we biologically predisposed to seek out vanity metrics relating to our products? How do we deal with this potential misperception?

My gut reaction is to be even more driven to back up my vision with facts: to always strive for a good balance of quantitative and qualitative data about my product, and to make a habit of getting out of the building.

From now on I will try to always challenge myself whenever I big up my product to myself, especially when it seems automatic.

I could of course be wrong. Maybe truly believing your own hype makes the difference between success and mediocrity?

Posted in Product Management

Does released equal done?

Thankfully, more and more people now define a new piece of functionality as “done” when it gets released to a user. This is huge progress from previous definitions such as dev complete, test complete or business sign-off.

Lessons from the Lean Startup movement show that we can do better. “Done” should mean used. It’s great having software out there but we need to know if it’s being used and providing value.

When defining requirements for a feature, include business acceptance tests, e.g. “this feature should drive this much traffic” or “n users keep returning to this feature”.
If you are doing truly iterative development you need to keep iterating, pivoting and releasing until your business tests pass. That is when the feature is really “done”.
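As a sketch of what such a business acceptance test might look like in code, here is a minimal example; the metric names and thresholds are illustrative assumptions, not taken from any real product:

```python
# Hypothetical business acceptance test: a feature only counts as "done"
# once live usage clears the bar set when the requirement was defined.
# Metric names and thresholds are invented for illustration.

def business_acceptance_passed(metrics, min_weekly_visits=500, min_return_rate=0.2):
    """Return True once live usage data meets the business criteria."""
    enough_traffic = metrics["weekly_visits"] >= min_weekly_visits
    return_rate = metrics["returning_users"] / metrics["total_users"]
    return enough_traffic and return_rate >= min_return_rate

# A feature that has been released but is not yet "done":
live = {"weekly_visits": 350, "returning_users": 40, "total_users": 400}
print(business_acceptance_passed(live))  # False: keep iterating and releasing
```

A test like this runs against analytics data rather than code, so it keeps failing, and the team keeps iterating, until real users behave the way the requirement predicted.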

Posted in Agile, Product Management

Mashup Your Tests.

Mashups are ubiquitous these days. Over the last ten years music mashups have gone from underground clubs to prime-time TV; the rise of Web 2.0 saw software mashups bring together disparate data and functionality in meaningful ways; bookshops are now full of literary mashups with crazy titles like “Sense and Sensibility and Sea Monsters”. But have you ever tried to mash up your tests?

Why not take your testing artifacts and mix them together in new and thought-provoking ways? It doesn’t matter if they are scripts, automated tests, heuristics, checklists or whatever.

What happens when you:

  • Splice together your performance tests with your UAT?
  • Cut your smoke tests in two at random and test your upgrade process in the middle?
  • Pair with a tester in another team, with a different product, and mix your tests together?
  • Do it with someone from a different company, in a different timezone?
  • Go completely crazy and run your automated test suite in the “wrong” order!
Just be creative and have fun; the possibilities are vast. If anyone tells you you’re testing the wrong thing, you’re doing it right.

I don’t know what will happen, but I can guarantee you’ll learn something new about testing and your application.

I often hear about “Rockstar Developers”, but where is the praise for the great testers?

Superstar DJs.
Here we go.

Posted in Agile, Exploratory, Testing

All our testing should be Exploratory

Exploratory testing, that’s a manual activity, right? Wrong.

Exploratory testing is an approach to testing that allows you the freedom to learn about an application, improve your skills and enhance your tests in an intelligent way based on feedback from execution. If you want to benefit from the full value of exploratory testing you should be taking a conscious exploratory approach across your entire test strategy and not just some manual stuff you do around the edges.

I find that in strong teams, most testing, including automated regression testing, is exploratory. Being mindful of this will make your testing better and help avoid the pitfalls of scripted testing (pitfalls? that’s a whole other post that’s already been covered by people much smarter than me).

For years I have been promoting the use of exploratory testing on Agile projects. Unfortunately, I don’t think I really understood what that meant. I forget how many times I have presented slides or written proposals stating that automating acceptance tests and regression tests will free up testers to do the valuable work of “manual exploratory testing”. Hey, you can even put some structure around it, maybe session-based or story-based. Lots of others seem to agree, so it must be the right approach, right?

However, something never sat quite comfortably with me. I have always used a cobbled-together set of tools to help me in my testing: a batch file here, a Perl script there, the odd bit of SQL that I’d tweak each time I ran it. But this is lost when we talk about exploratory testing in a way that implies it is manual. I have heard the term “assisted exploratory testing” but that sounds wrong to me; somehow against the exploratory spirit.

Having moved back into the testing space after a couple of years embroiled in the world of project management, I’ve begun thinking about testing with a fresh perspective. A few things have caused me to re-evaluate my understanding of exploratory testing and what it means within the context of an Agile project.

In particular, Michael Bolton’s excellent post http://www.developsense.com/blog/2010/09/exploratory-testing-and-review and James Bach’s much more blunt and arguably more impactful post http://www.satisfice.com/blog/archives/496 led me to re-evaluate Cem Kaner’s synthesis/definition of exploratory testing:

“Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

At no point has anyone attempting to define exploratory testing implied that it has to be manual, but this seems to be an assumption many people make, as it is the easiest form to understand.

Within the agile community I hear a common debate about who should automate acceptance tests, when they should be automated and what the benefits are.  Usually someone makes the argument that the investment in automation frees up skilled testers to do “manual exploratory testing”.  I now feel this is missing the point. Why can’t all our testing be exploratory? There is no reason to think of the automated test suite, often the cornerstone of an agile project, as being outside the realm of exploratory testing.

Dan North made a great point in a recent talk about his incubating ideas around deliberate discovery http://skillsmatter.com/podcast/agile-scrum/keynote-deliberate-discovery-code-like-you-mean-it. He is right to state that exploratory testers are some of the few people currently following this model of deliberate discovery, but wrong to describe these individuals as “skilled manual exploratory testers”.

I am not proposing automating your exploratory testing. I am proposing that if you value exploratory testing you should ensure that you take an exploratory approach to all your tests including those you have automated.

So, how does exploratory testing compare with a more linear scripted approach to testing on an Agile project?

Exploratory testing on an Agile project might look like this:

  • The assumption in the tester’s mind is always “there is something we don’t know”.
  • Testers and the rest of the team come up with ideas to test the specified functionality and the proposed implementation.
  • Some tests are written down as acceptance tests: this is how we think the system should and will behave.
  • Where valuable, these will be automated to allow developers to quickly validate that what they have built is what the team expects. If it isn’t, either the code changes to match the tests or the tests change to match the code.
  • Once the requirement is a real thing that can be manipulated, the testers will explore the new code and how it interacts with the rest of the code, feeding back what they learn to the team.
  • Automated tests are updated to represent what is currently known about the application. They are kept as lean as the testers need them to be, without any explicit links to the original stories they were developed from.
  • A continuous integration server executes these automated tests, providing a feedback loop that informs exploratory testing.
  • Testers also utilise all or parts of this suite to help them explore the system alongside manual execution of tests.
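To make the automated side of that workflow concrete, here is a minimal, entirely invented sketch of a lean acceptance test treated as a model of the team’s current knowledge rather than a permanent contract; the feature and its behaviour are assumptions for illustration only:

```python
# Hypothetical acceptance test: it encodes what the team currently
# believes about the system, and it is expected to change as exploration
# teaches them more. The feature and its rules are invented.

def apply_discount(price, code):
    """Toy system under test: 10% off with code 'SAVE10'."""
    if code == "SAVE10":
        return round(price * 0.9, 2)
    return price

def test_discount_matches_current_understanding():
    # Today we believe valid codes give 10% off...
    assert apply_discount(100.0, "SAVE10") == 90.0
    # ...and unknown codes leave the price alone. If exploration shows
    # the real rule should differ, the test changes, not just the code.
    assert apply_discount(100.0, "BOGUS") == 100.0

test_discount_matches_current_understanding()
print("acceptance model holds")
```

Run on every commit by a CI server, a test like this is a fast feedback loop that informs exploration, not a fixed contract the application must satisfy forever.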

Scripted testing on an Agile project can look like this:

  • The assumption in the tester’s mind is always “we will identify what we need to test and automate as much of it as we can”.
  • Testers and the rest of the team come up with ideas to test the specified functionality and the proposed implementation.
  • Some tests are written down as acceptance tests: this is how we think the system should and will behave.
  • Requirements are not completed until all acceptance tests pass.
  • Acceptance tests are automated and added to the CI build as per the story and must continue to pass for the life of the project.
  • Every time an automated test fails, huge amounts of energy are exerted to make the application pass a possibly outdated test scenario, or to seek permission to change it.
  • Your automated regression suite can end up driving what the team does instead of informing what a team does.

Why all our testing should be exploratory

One of the most valuable aspects of exploratory testing is that tests change based on feedback from the system under test. Some of these tests may be written down, some may be coded and some may only exist in the head of the person exploring the software. The feedback loop might be the time it takes to manually execute a test or the time between a code change and a CI server running a suite of tests.  Automated tests are one model of how a system works or should work at any point in time and not a concrete set of criteria that have to be met.

It’s the learning from test execution and adapting to new information that can make testing truly powerful.  If you limit that learning to manual testing, you are limiting your testing.

Posted in Agile, Automation, Exploratory, Testing