Thursday, 29 March 2018

An outline of testing

Recently I needed a quickly digestible "outline of the testing we are doing". This was to be shared across the sprint teams and stakeholders, which included technical and non-technical roles.
We have tech docs, wikis and strategy documents, but what we were after was a high level snapshot that could be put up on our walls as a reference point for anyone working in or visiting our teams.

We wanted to show:
The [testing types] we are undertaking,
… using the [tools] and [techniques] we've chosen,
… in test environments which use [data],
… executed by people / tools / pipelines,
… at the points in the [SDLC/Story Development/Pipeline],
… in order to [prove quality criteria], [mitigate risk], and [speed up delivery].
Which leaves us with [risks] and [further investigation activities].
Ultimately we were addressing 'six W's and an H'.
The [testing types] we are undertaking, (What)
… using the [tools] and [techniques] we've chosen, (How)
… in test environments which use [data], (Where)
… executed by people / tools / pipelines, (Who)
… at the points in the [SDLC/Story Development/Pipeline], (When)
… in order to [prove quality criteria], [mitigate risk], and [speed up delivery]. (Why)
Which leaves us with [risks] and [further investigation activities]. (Worries)
Running through these questions on our project resulted in a grid like this:
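The grid image itself isn't reproduced in this text version, but purely as an illustration of the shape, and with made-up entries rather than our real ones, one row of such a grid might be captured like this:

```python
# A hypothetical row of the testing-outline grid. The keys mirror the
# prompts above; the values are invented for illustration only.
outline_row = {
    "what (testing types)": "Exploratory testing of the checkout flow",
    "how (tools & techniques)": ["session-based testing", "charters"],
    "where (environments & data)": "Staging environment with anonymised data",
    "who": "Test analysts pairing with developers",
    "when": "During story development, before release sign-off",
    "why": ["prove quality criteria", "mitigate payment risk"],
    "worries": ["limited coverage of third-party payment failures"],
}

for heading, entry in outline_row.items():
    print(f"{heading}: {entry}")
```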

The audience (both in our sprint teams and around them) liked it. They found it clear and simple, and it helped both answer and inspire questions.

As a bonus the exercise of making the grid helped the team refine and clarify their understanding of the testing happening in the project. We discussed the pros and cons of what we were doing. It became a sort of retrospective.
The headings worked well to focus us, and the question prompts reminded us of what we were trying to achieve.
Overall, the exercise for our team took less than 45 mins to run through.

I'm a big fan of clean, lean and context-driven documentation, and this grid fits that description.
I've gone back to our (short) strategy document and incorporated a version into it.
At the moment the version in the strategy is just showing the What, How and Where. The Who, When, Why, and Worries are covered in other sections of the strategy.

Going forward I’m hoping to use these "six W's and an H" as part of my toolkit when working with teams to develop new test strategies and strategy reviews.

Tuesday, 25 April 2017

Cliff climbers and Flow

Our industry values 'cliff climbers': strong, highly skilled people who can overcome tough problems and take us to new heights.

In the IT world you see praise and idolisation for the actions of cliff climbers: "...overcoming that production issue.", "...meeting the unrealistic deadline.", "...redefining our processes.", "...increasing sales to our toughest client.", "...enabling us to deploy even faster.", "...leading our industry into a new age."

The proverb (and 1985 Billy Ocean song) says 'When the going gets tough, the tough get going'.
Meaning that when a situation becomes difficult, the strong are the ones who become engaged.
Cliff climbers are strong both in resilience and in skill, and step up in tough situations.

You want cliff climbers in your team. They keep our companies ahead of our competitors, at the forefront of our industry, and able to respond rapidly to our fast-changing environment.

Rana Betting wrote an interesting blog post in 2010 about how actual rock climbers who climb actual cliffs are "addicted to finding the flow".
She talked about how rock climbers get their greatest happiness from challenging their skill level, and how that happiness is the reward that drives them to keep going for bigger climbs.
The concept of 'Flow' Rana referenced is the seminal work of psychologist Mihaly Csikszentmihalyi; it's also known as being 'in the zone' or 'in the groove'.
"The flow state is an optimal state of intrinsic motivation, where the person is fully immersed in what they are doing. [...] characterized by a feeling of great absorption, engagement, fulfillment, and skill..." - Mihaly Csikszentmihalyi

You can probably think of cliff climbers you've encountered. They look for challenges to overcome, and are driven by the potential benefits that they, their team and their company will experience after achieving the climb. The climb brings with it an intrinsic motivation from matching their high skill levels against the toughest challenges. While it can come with heightened stress, like the rock climbers in Rana's blog, they have an addiction to climbing these cliffs.

So, what happens when the going gets 'good'?
A state of Flow will give all people satisfaction. Everyone reaches Flow when the challenge to skill ratio is right. But the cliff climbers in your company have a higher skill level, and therefore require harder or bigger challenges to be satisfied and happy.

"To achieve a flow state, a balance must be struck between the challenge of the task and the skill of the performer. If the task is too easy or too difficult, flow cannot occur. Both skill level and challenge level must be matched and high; if skill and challenge are low and matched, then apathy results." - Mihaly Csikszentmihalyi

Mental state in terms of challenge level and skill level, according to Csikszentmihalyi's flow model.


Cliff climbers become disengaged when things become less challenging – even if they seem challenging to those less skilled than them. When leaving companies, cliff climbers often refer to being "too comfortable" or seeking "new challenges".

There are three things I've learnt from working alongside and mentoring cliff climbers:
  1. Keep your cliff climbers challenged. Looking for a cliff where they can test their skills, even if it doesn’t add immediate value to your team, will show them their skills are valued and result in higher job satisfaction.
  2. Cliff climbers will be drawn to bigger cliffs, and they may leave us. We shouldn't feel this reflects badly on us as employers or team leads. The pull of Flow is powerful, and there are benefits to both them and your company in avoiding disengagement, boredom, and apathy.
  3. Strive to be a cliff climber in Flow. Flow is where work satisfaction comes from. Look at the skills you have and how to challenge and utilise them to the utmost. Look for the cliffs to climb, and importantly realise there are benefits of not only achieving the climb, but from being in the flow while climbing.

Thursday, 1 December 2016

Strong test communities; establishing castles vs growing gardens

This article was originally published in the October 2016 edition of Testing Trapeze
At Trade Me we’ve built a strong internal testing community. I have been reflecting recently on what makes it so great, and how it’s different from what I’ve seen and heard about the test communities in organisations elsewhere.

Often, strong test communities within an organisation are established, reliable, robust and fortified groups housing test experts and best practices. The testers who belong to these communities take on the role of Knights of quality within their project teams, carrying banners emblazoned with non-functional requirements, and shields depicting the values of their central castle. These castles work well in many organisations for a number of reasons.
The tester has a prominent structure to anchor their testing to. The strength of that solid community means that people are able to trust and understand that the test practice is refined and established.
Castles usually come with rulers and round tables who use their knowledge and influence to build the laws and values for their community to live by. In these castles, Test Managers and Leads use their knowledge of testing to enhance the community and its practices.
There is a clear escalation structure in these communities. If the knights encounter a problem they know who to talk to. Likewise, if someone has an issue with a knight's conduct the escalation path is clear.

At Trade Me we don’t have a castle.
But, our internal test community is really strong. It’s been openly envied by other disciplines within the company, and over the years I’ve been asked to help and advise other internal communities who want to build themselves to be like us.
One thing I stress when giving advice, is that we didn’t set out to build a castle.
In fact, the community isn't finished being built. It's constantly being developed by those who work within it and anyone who values what it produces. It's like a communal test garden.

The state of the garden is not solely due to my work or influence. Its shape is the result of the people who are, or have been, part of the community over time.
We’ve implemented suggestions for things like training sessions, test environment configurations, new tools, and our hiring process which came from people within the test community.
Castles, on the other hand, tend to be governed and directed by central figures and processes. The ideas and decisions tend to come down from the leadership teams and there is little opportunity to suggest or propose alternatives.
While we have a Test Manager as a central figurehead for the test discipline, we welcome ideas and input from testers at all levels to help shape our testing practice and community.
If you are accepting of people making suggestions you are more likely to discover new gardening techniques or fertilisers that you haven’t applied in the past.
Using our community's experience, observations and product knowledge to shape the practices and guidelines we have means we are more likely to have buy-in to how things are done.
Having the community being influenced and nurtured by the people who benefit from it means it is dynamic, adjusting quickly to suit the needs and wants of its members.

We also don't wall in the garden. People from outside our community are able to drop by and see what we're doing and how. Developers, business analysts and other members of the business are welcome to attend our training sessions and meetings. We openly and actively share how and why our community does what it does, believing that transparency builds understanding. Like learning from the people who tend our garden, we learn from our neighbours. We observe how their gardens grow, and are open to their suggestions on how to keep the weeds at bay or get better returns on our investments. For example, we've incorporated a number of improvements to our tools due to suggestions from developers.

We encourage our testers to better themselves and others, and provide frameworks for this to take place. Gardeners are always looking for ways to maximise their harvest or grow the best flowers, and the best way to do this is by learning from people with experience or knowledge. In our test community this betterment can include things like peer-led training sessions on new technologies or test techniques, or pairing with strong domain experts or SMEs.
I believe you can always learn something from anyone, and a test community is no different.  Anyone at any level of experience can teach you something new. Within our community anyone can run a training session. I’ve yet to sit through a training session where I haven’t learnt something new.
Recognising and learning from expertise and knowledge in your gardeners means your garden is stronger as a whole.
While the primary goal is upskilling and continuous improvement, it also results in strong relationships between testers. These relationships inspire the building of strong internal networks, and the community does a lot to support itself from within. People come to gather, and share. They leave more nourished than when they arrived, and are better equipped to take on their next task.

Like a castle community, our community does have central values and practice guides, but in our garden they're not carved in stone like you might find in a castle. We keep ours lightweight, flexible, and non-prescriptive, which leads to our gardeners employing fit-for-purpose techniques after judging the soil and weather conditions they encounter.
We learnt that prescriptive documentation can be dangerous when we went through a rapid growth period. I was chatting to a newish tester about his test documentation, specifically whether his very thorough documentation was needed. His response was: "But you said we have to do it like this?"
And going by the wiki page he proceeded to show me, I had. Months prior I'd written a guide for our test documentation after a short training session I ran. It got referenced in our 'new tester manual', which this new tester diligently went through on his first day. When I'd written the wiki page I'd left out a caveat giving testers permission to be pragmatic and use their own judgement. Now, if that tester had been at the original training session he would have had the opportunity to question me and get guidance on the effect of the missing caveat.

Likewise we value and strongly encourage face to face communication between testers whenever possible, over written emails or documentation.
Besides the time benefits you get from it, face to face communication gives people the opportunity to question and clarify, rather than the meaning or urgency being lost in the black and white of an email. A discussion in front of the insect-infested roses, at the point the infestation is occurring, gives a faster and more focussed response than waiting for a reply to an email. This also leads to knowledge sharing and support within the community. Getting a situation in front of others increases the chances that you will find out who, or what, might be needed to diagnose the species and the correct treatment, rather than relying on the gardener to have previously memorised how to handle every bug they may encounter.

Of my four and a half years as Test Manager at Trade Me, the internal test community is one of the things I'm most proud of being involved in. It has built a common sense of purpose, investment, ownership and autonomy without having to enforce rigorous structure or formality.
To me, its strength comes from what it produces and how it nourishes its members and neighbours.
A strong internal testing community has the ability to help to produce better quality products. But a strong community grown and nurtured from within has the ability to create engaged and enthusiastic testers who help to keep that community growing and nurtured in the future.

Footnote: Since writing this article I've moved on from Trade Me. I'm happy to report that the garden, along with the community of gardeners who tend to and benefit from it, continues to thrive.
Testing Trapeze is a bi-monthly testing magazine featuring testers from Australia and New Zealand. If you haven't already, I highly recommend subscribing to Testing Trapeze; I've continually found it to provide inspirational and insightful articles from very talented people.

Monday, 12 September 2016

Introducing testers to basic programming conventions

I recently ran an 'introduction to programming conventions' workshop with our test analysts.
It was really well received, so I thought it would be worth sharing it in case anyone would like to reuse or copy it.
You can find it in my recent blog post 'Robozzle'

Here's how it came about...

One of our test practice's central themes this year is 'Grow the technical tester'.
We're aiming to build out our test analysts' technical capability around reading, writing and understanding code, as well as understanding the systems and infrastructure our products run on.

I strongly believe there are some advantages to having a stronger technical base when you're working as a test analyst.
Whether it gives you a partial Rosetta Stone for technical terminology, or the confidence to question implementation when talking to the people writing or building your product - a technical understanding of your product, used wisely, will enhance your test approach.

As part of our internal training workshops and sessions, I wanted a lightweight training exercise to kick start this technical growth.
Our test analysts have varied backgrounds in their exposure to programming; some have very limited or no exposure at all.
I needed something friendly and in a language anyone could pick up.

There are some great courses out there which teach programming. We use PluralSight in house at Trade Me, and I've been through courses on Code Academy. I also came across Code Avengers, which is an awesome resource aimed at schools to teach programming - I learnt a lot from some of their courses, so it's not just for kids!
These are great, but it was hard to find something that could be run in the group learning and workshop format I was after for our internal training session.

While researching courses one of our team leads told me about an iPhone game he was using to 'learn coding' called Robozzle.
Robozzle is a programming game where you give a robot a set of instructions to solve a puzzle. It can be pretty addictive...
There are simple tutorials, and then a large number of community created puzzles in varying degrees of difficulty. To solve the puzzles you have to assemble instructions for your robot to collect stars in a maze, utilising things like loops, subroutines, and conditional logic.
I spent a couple of commutes playing the game, with good satisfaction when I cracked a puzzle, as well as good frustration when I spent upwards of 30 mins trying to solve one.

Robozzle ticked the boxes for what I wanted for a workshop:
  • show that programming is a set of instructions
  • introduce basic programming conventions
  • be friendly and not scary to people who've never written code
  • be suitable for a group workshop 
So, I threw a draft together.
I picked a handful of puzzles which showed the basic concepts within Robozzle: loops, subroutines, and conditional logic.
I added in an exercise on pseudo code to illustrate that the solutions were a set of instructions, and that programming is writing instructions for computers to execute.
After I had this draft fleshed out, I socialised it with one of our team leads who hasn't got a strong programming background. He thought it would be a fun hour for people to go through, even if no learning took place.

So, we ran it with groups of 10 - 14 people in our training suite (room full of PCs), in one hour sessions.

What I saw and learnt in the sessions

  • People got pseudo code way faster than I expected them to; it wasn't that big a leap for people to get their heads around the concept. It proved to be really good for debugging solutions when people got stuck, and it reaffirmed that programming is just giving something a set of instructions.
  • Different people had different solutions to the puzzles. Most of the puzzles have more than one way of solving them, but at least two groups came up with solutions that stumped the facilitator (me).
  • The people with programming experience weren't the first to complete the solutions. I was worried that people with programming experience would be bored, or see it as a waste of time. But, at the end of the hour all groups in all sessions were still working.
  • People were keen to take the exercise back to their desks. I was walking to get a cup of tea this morning and spotted someone working on harder puzzles than were in the workshop. It was cool to see people still giving it a go five days after doing the session.
  • Some people resorted to writing out the pseudo code on paper for each puzzle, and stepping away from the computer.
  • People really liked the puzzle / game aspect of the workshop. They switched into competition mode, trying to complete the games before others did. It was all in good fun, and added a nice energy to the room.
Overall, I'm really happy with how the exercise went.
The engagement was great, and people definitely walked away keen to get into more programming training.

Robozzle

Welcome to a short exercise designed to teach you some basic programming concepts.
The point of the exercise is to show you how program code can be seen as a set of instructions, and to show you some conventions like loops, conditional logic, and subroutines.
To do this we're going to use 'Robozzle'. It's a free web based programming game where you give a robot a set of instructions.
All up the exercise should take about 1-1.5 hrs.

Here's what to do...

Get set up

Grab a PC or phone (Android or iOS apps are available. Search 'Robozzle')
Pair up with someone.
Work through the puzzles in order, and utilise pair programming (one person uses the mouse and keyboard, the other person talks - and then swap).
If you get stuck, feel free to ask for help!


Part 1: Introduction to Robozzle

  1. Tutorials
    1. Tutorial 1
    2. Tutorial 2
    3. Tutorial 3
    4. Tutorial 4
  2. Basic loop
    1. Stairs
      (make sure you keep this open once you solve it, you'll need it on the next page)

Part 2: Introduction to Pseudo Code

Pseudo code is a notation resembling a simplified programming language, used in program design.
We're going to write some basic pseudo code to illustrate the instructions we're giving the robot.
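Before you open the board, here's a rough feel for the kind of translation we're after. This is a made-up example - not one of the statements on the Trello board - and the stand-in robot below is just for illustration, not the real Robozzle engine:

```python
# Made-up example (not taken from the Trello board): a stairs-style
# solution written first as pseudo code, then as runnable Python.
#
# Pseudo code:
#   repeat 3 times:
#       move forward
#       move forward
#       turn right

def execute(instruction):
    # Stand-in for the robot carrying out a single instruction.
    print(instruction)

for _ in range(3):          # loop: repeat the same block of instructions
    execute("move forward")
    execute("move forward")
    execute("turn right")
```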

Log on to Trello
Visit our 'Robozle Psuedo Code (master)' Trello board

This has been prepopulated with some pseudo-code statements.
In the right hand menu in Trello, choose '... More', then 'Copy Board' - this will make a copy of the board on your Trello account.
  1. Using the statements, translate the solution you had for 'Stairs' (above) into a pseudo code stack.
  2. Move on to the Iteration Puzzle
    1. Using the board from above, build your solution with pseudo code FIRST
    2. Then, translate it into Robozzle instructions.
Question: What pseudo code instructions are missing?

Extension if you're feeling up to it
Visit our 'Robozle Psuedo Code (master)' Trello board
This has been prepopulated with some more code-centric pseudo-code statements.
In the right hand menu in Trello, choose '... More', then 'Copy Board' - this will make a copy of the board on your Trello account.


Part 3: More puzzles

Work through these puzzles.
If you get stuck, look at the Robozzle instructions like they're pseudo code. Walk through what you're telling the robot to do, and see where it might be going wrong.
  1. Nested subroutines
    1. Simple spiral
    2. Function calls
  2. Conditionals
    1. "First puzzle"
    2. "Very easy"
    3. "Don't fail"
  3. Conditional subroutine
    1. "Right on red"

Conclusion

You should now have an understanding of program code as a set of instructions which is executed, and understand how things like loops, subroutines, and conditionals can be used to enhance instructions to increase efficiency and expand logic.
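To make that concrete outside the game, here's a minimal Python sketch (not part of the original handout) showing a subroutine, a loop, and a conditional applied to a made-up set of robot instructions:

```python
# Hypothetical illustration (not from the handout): a subroutine, a loop
# and a conditional controlling a made-up list of robot instructions.

def climb_step():
    """Subroutine: a named, reusable group of instructions."""
    print("move forward")
    print("move forward")
    print("turn right")

def run(steps, paint_last=False):
    for i in range(steps):                   # loop: repeat without copy-pasting
        climb_step()
        if paint_last and i == steps - 1:    # conditional: expand the logic
            print("paint square")

run(3, paint_last=True)
```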
Robozzle is a free to use game that you can play with in your spare time.
As well as web app versions, there are native apps for Android and iOS.

Thursday, 12 May 2016

Lost in metaphorical translation

I like to use metaphors and similes as a friendly, relatable way to communicate ideas.
I recently learnt it's worth being careful with how you use these devices, as it's easy to mix your metaphors, lose the information, and, worse, lose your audience.

An example of this occurred after Michael Bolton gave a talk at a We Test meetup in Wellington on Metrics and Measurements and Numbers oh my! Michael gave an engaging talk (as always) with good stories of how metrics can unintentionally obscure rather than reveal information, and explored the importance of reporting relevant information in an appropriate format.

In the discussion that followed Michael's talk, the group discussed ideas for alternatives to metrics and graphs. One suggestion was to utilise second order measurement to quickly convey information to people about the state or health of a project. A thumbs up or a thumbs down - is it good? Or not good?

An idea was put forward (I think by Michael) that we could ask people to give an indication as to whether something was “too hot”, “too cold” or “just right”.
Too hot - it’s going to burn us; there’s something dangerous here. Too cold - we’re not satisfied; we need to pay more attention to this. Just right - things are good; we’re satisfied with how much attention we've given it, and we don’t think we’ll get burned.
A 'Goldilocks reading'.


After the talk I spent hours thinking about this metaphor and how it would be a really simple concept to introduce in our teams.

I first met the idea of second order measurement through Michael Bolton's 2009 article Three Kinds of Measurement and Two Ways to Use Them in StickyMinds, where he talks about Gerald M. (Jerry) Weinberg's classifications of measurement.
The article is on our recommended reading list for test analysts here at Trade Me. It's an article that I've personally referred to and forwarded a number of times when working in and with iterative and agile teams. Usually, this has been in response to higher-ups wanting to see test metrics to determine if a project will ship on time - but also to people within teams who give extensively technical and detailed reports when the audience doesn't have (or doesn't want to have) the level of technical understanding to 'correctly' interpret them.

The idea of ‘Goldilocks readings’ as an informing process sits well with me because I strongly believe in trusting the people who are working on a project, empowering them to use their knowledge, observations and gut to inform stakeholders and start discussions. Obviously, you have to support this with escalation and ‘help’ paths to make sure they’re not out of their depth, but both projects and teams benefit from informed people.
People who are informed make better decisions, so informing people early and often should lead to even better decision making.
Too often you hear about projects missing deadlines and the team saying “we were never going to hit that date”, to the surprise of some other stakeholders. Assuming those people weren't being arrogant or ignoring available information, why were they surprised? Where was the information they needed? Was it too late in the project to change things? Was it buried in metrics?

My theory is that a 'Goldilocks reading', given early and often by the team on anything from quality criteria, to deadlines, to team collaboration, would make sure that people can be as informed as they need to be, and make the discussions we have about mitigation more meaningful and timely.
Fewer surprises when the bears get home.
The reading is coming directly from the people building, testing and validating the project.
Hearing something is ‘too hot’ (might burn us) would start conversations about implementation, expectations, and hopefully mitigation plans. Doing readings throughout a project would allow you to track if a project is getting better or worse.
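As a rough sketch of how those readings could be captured over time - hypothetical structure and made-up readings, not something we actually built:

```python
# Hypothetical sketch: record a 'Goldilocks reading' per stand-up for each
# area we care about, so the trend (hotter or colder) is easy to see.
from collections import defaultdict

VALID_READINGS = {"too hot", "too cold", "just right"}
history = defaultdict(list)

def record(area, reading):
    if reading not in VALID_READINGS:
        raise ValueError(f"unknown reading: {reading!r}")
    history[area].append(reading)

# Made-up readings over three stand-ups.
record("quality criteria", "too cold")
record("quality criteria", "just right")
record("quality criteria", "just right")
record("deadline", "just right")
record("deadline", "too hot")

for area, readings in history.items():
    print(f"{area}: {' -> '.join(readings)}")
```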

I wanted to test the theory out.

I'm the product owner for an agile team that implements, supports and maintains our automation frameworks. They set goals each sprint, but I don't always get a chance to see how they're tracking towards those goals until the sprint concludes.
So, on Monday I went to the team’s stand up and pitched the idea:
“I want us to try something out, so that I can get information on how we’re tracking against our goals. But - I don’t want to give you any reporting overhead.
I want you to try doing Goldilocks readings - each stand-up you give a 'too hot', 'too cold' or 'just right' reading on the goal. 'Too hot' means it's unlikely we'll hit the goal, 'just right' means we'll achieve it, and 'too cold' means we haven't investigated enough to make a judgement."


Unfortunately, while nodding their willingness to try out my idea, their blank looks told me something was wrong. After a decent pause, one of the team members asked "what is a Goldilocks?"

The team is made up of three outstanding test engineers - two Indians and one Chinese.
I thought I was super clever introducing this measurement concept with an allusion to the ‘famous’ judgements in the story of Goldilocks. The metaphor of heat and satisfaction with a product (porridge) was meant to be relatable and friendly - but meant nothing to the team as they had no affinity to the story of Goldilocks and the three bears. In their cultures, the story wasn't prevalent like it was in my white New Zealander upbringing.

Now, unfortunately when I explained the fairy tale it spawned more conversations about ‘breaking and entering’ rather than the protagonist’s need for porridge at an ideal temperature.

But - I learnt a valuable lesson. Wrapping a simple concept into a metaphor damaged the delivery of the concept because the audience didn't see the information I was trying to convey. It got lost in the messaging.

We’re still going to try ‘Goldilocks readings’ in the team soon, and I’ll let you know how it goes.
But I think we might settle on something more universally relatable like ‘Temperature readings’.
Going forward, I'm going to make an effort to make sure my information isn't being obscured, both in my reporting on test activities and when I'm communicating new ideas.

Sunday, 4 October 2015

Measuring success in Testing

I'm a strong believer in continual improvement of our practices.
Recently we've been focussing on re-invigorating this attitude in our testers.
Most of this has been explaining that...
  • "what we do now" is the result of continual evolution over many years.
  • we don't believe in 'best practice' - as it implies you're not prepared to accept that another practice may be better.
  • we are open to trying new ideas and ways to do things.
When talking to test analysts about evolution and trying new things, I started to think "what are we aiming for?" - how do we know the evolution is positive? How do we know the new idea made a positive difference?
So, I asked a couple of Test Analysts, Business Analysts, and Product Owners: how do we measure the success of our testing?
Below is a digest of some scribbles I kept in my notebook of their answers, and the issues that they (or I) felt existed in each type of success measure. Then I've included my personal view of how I think the success of our testing should be measured.
I'd be keen to hear people's comments.
Scribbles digested
  1. Defect rate
    • Number of defects found during testing
      • Rationale - if we find lots of defects during the testing activities, we've stopped these making their way into production, meaning we have a better quality product
    • Number of bugs released to production
      • Rationale - if we don't release any bugs to production, we must be testing all the possible areas
    • No severity 1 defects released
      • Rationale - we make sure the worst bugs don't make it to production
    • Issues
      • How much do those defects you find matter? You can almost always find bugs in any system, but do they matter to the product?
      • "Severity" requires a judgement call by someone. If you release no severity 1 defects, and the product fails and no one uses it, you probably weren't assessing severity properly. So was your testing successful?
      • Just because we don't see bugs, doesn't mean they're not there.
        Alternatively, the code might not have been particularly buggy when it was written. So was the success that of the testing or the coding?
  2. Time
    • Time for testing to complete
      • Rationale - the faster something is deployed the better. So if we complete testing quickly, that's success
    • Time since last bug found
      • Rationale - if we haven't found any bugs recently, there must be no more to find
    • Issues
      • Fast testing is not always the smartest approach to testing.
      • Defect discovery does not obey a decay curve. Yes, you may have found all the obvious defects, but it doesn't mean you've found all the defects which will affect your product's quality.
  3. Coverage
    • Amount of code statements covered by testing activities
      • Rationale - if we execute all the code, we're more likely to find any defects. E.g. Unit tests.
    • Number of acceptance criteria which have been verified
      • Rationale - we know how it should work. So if we test it works, we've built what we wanted.
    • Issues
      • This can lead you to 'pure verification' and not attempting to "push code over" or try unexpected values/scenarios
      • We work on an ever evolving and intertwined code base; only focussing on the new changes ignores regression testing and the fact that new functionality may break existing features.
  4. Amount of testing
    • Amount of testing done
      • Rationale - we've done lots of testing, the product must be good
    • Amount of testing not done
      • Rationale - we made the call not to test this
    • Issues
      • Doing lots of testing can be unnecessary and a poor use of time. Removing testing requires a judgement call on what should and shouldn't be tested.
        There's always a risk involved when you make those judgements of more or less coverage, but perhaps the bigger 'social' risk is that you can introduce a bias or blindness in yourself. If you didn't test it last time, and the product didn't fail - are you going to test it next time? Or it could introduce a business bias - "we did lots of testing last time, and the product failed, we need to do more testing this time."
My view: how should the success of testing be measured?
To me, we should consider the points above, but focus more on: Did the testing activities help the delivery of a successful product?
If we delivered a successful product, then surely the testing we performed was successful?
But to make that conclusion you have to understand what factors make the product successful.
And they may not give you the immediate or granular level of feedback you need.
e.g. if success was that the product was delivered on time and under budget - can you tell how much your testing contributed to that time and budget saving?
So when I answer the question 'Did my testing activities help the delivery of a successful product?', I consider:
  • Was my testing 'just enough'?
    • Did I cover enough scenarios?
    • Did I deliver it fast enough?
  • Did this testing add value?
    • Have I done enough testing for the success of this change?
    • Can I get the same value by removing some testing activities?
  • Did I find defects in the product?
    • What bugs are critical to the project's success?
    • What bugs matter most to its stakeholders?
  • What does success look like for the project's stakeholders?
    • Zero bugs?
    • Fast Delivery/Turnaround?
    • Long term stability?
I haven't explicitly said that the success of testing should be measured by the quality of a product. To me it's covered by the third bullet point, "Did I find defects in the product?" - the measure of the product's quality comes when we consider those defects and the level to which the stakeholders feel they're detrimental to the product's success.
I really like Michael Bolton's 2009 article Three Kinds of Measurement and Two Ways to Use Them. It got me thinking about the different ways people approach measurement and made me think about how high level I need to be when giving people a measure of success.
I guess the main thing I've learnt when talking to people, and digesting my thoughts is that you should be thinking about how you're measuring success. I don't think it's formulaic, maybe it's more heuristic, but it's always worth thinking about.