Recorded: July 28th, 2017
A placebo is designed and used primarily for psychological benefit. Things like sugar pills, elevator door close buttons, and office thermostats aim “to please”, rather than have any other, “real” effects. Now, consider a placebo in the context of software and testing. What if “pleasing” is the only intended and expected result? How can it be tested? What does a bug look like? And, do these ideas also apply to non-placebos that have other, “real” effects? In this workshop, we’ll explore placebos, nocebos, the placebo/nocebo response, illusion and locus of control, relativism, wants, needs, and expectations, and will connect it all to testing.
- An introduction to placebos, and much more.
- A different and deeper perspective of testing.
- A practical approach to applying these ideas.
Q: Paula: My question, though, is that when placebos are discovered (and they always will be, eventually), they no longer work and in fact generate ill-will.
A: Damian: I don’t see a question, but a few claims that I might challenge, such as, “a discovered placebo will generate ill-will”. Perhaps, sometimes. However, it is also possible that a discovered placebo might generate good-will. For example, if the source’s intent was benevolent and the target understands and values this.
Q: Paula: I disagree. You can solve problems without deception. Using deception seems like a cop-out for people unwilling to look outside the box and solve the problem honestly, with integrity. No deception is benevolent.
A: Damian: I can see how, for various reasons, using deception might seem to some as an undesirable method/solution. However, I would question the claim that “no deception is benevolent” as I think it challenges the intent of a placebo source.
Q: Paula: I did enjoy the emotions-in-software side of the discussion very much! Great insights!
A: Damian: Thank you!
Q: Mike: Emotions definitely play a big role in the whole process of building and delivering software. On the topic of emotions: how do you handle a stakeholder who misleadingly shows emotions? For example, Stakeholder X is always happy, so it's difficult to tell whether they are genuinely happy or pleased, while Stakeholder Y is always miserable and emotionally unhappy even though they realize that you ARE delivering what they expect. How do you deal with that when working with them?
A: Damian: Dig deeper. I might suggest trying tools/methods from sociology/psychology (ethnography, UDUM, DEQ, PrEmo, or the Reiss Profile) that are intended to give insight into the underlying reasons, motivations, opinions, perceptions, and emotions of some target. That might help get you past any “incorrect”, superficial emotions and discover “correct”, deeper emotions.
Q: Mike: Many times, when someone is emotional (especially in a negative way), it's not necessarily due to the current situation. For example, someone at work may be very upset during a requirements review session, but their emotions are high because a family member is sick in the hospital and they are unable to think positively. What are your suggestions for dealing with a situation where someone may be unhappy or emotional, but the cause is not linked to the current situation or meeting?
A: Damian: Again, I’d suggest digging deeper. Try using tools/methods from sociology/psychology to help identify if/when some emotion is not linked or related to the current situation. Also, you can use the Usability Matrix of Emotions (suggested by David Greenlees) to help track actual user emotions before a given situation (or system use).
Q: Venkatasubramanian: Sometimes, even when asked the right questions, users do not know exactly what they need. They give requirements merely for the sake of a system upgrade, and so on. What are your suggestions in these cases?
A: Damian: This problem occurs even when trying to elicit non-emotional requirements. And, in addition to the tools/methods I suggested in the webinar, there are many others that can help determine underlying causes and what a customer really requires (wants/needs).
Q: Amit: Just a small comment: speaking of emotional requirements usually leads to considering the warm and fuzzy emotions (happiness, satisfaction, trust, etc.). I think it's useful to note that there are cases where we want to induce negative emotions. The best example is security features that are intended to invoke frustration in attackers, but I can also imagine a goal of “let's annoy the users of the legacy product enough that they will either upgrade or leave”.
A: Damian: Great point, and I agree! Any intended emotional response (Emotional Requirement) is a relationship between someone and some system. If you change the person or the system, it may change the relationship, as well. And so, it is important to consider the perspectives of each “person that matters” so that the system induces the intended – positive or negative – emotional responses in them.
Q: Amit: How important are users' emotions when the users are not the ones making the buy decision? For example, a corporate system that can be shown to save more money than any competitor, so management, who don't use the product, have a strong incentive to buy it.
A: Damian: “How important are users' emotions…” Important to whom? Depending on your point of view, the answer may change.
Damian Synadinos started testing software – on purpose and for money – in 1993. Since then, he has helped “build better software and build software better” using various methods and tools in numerous roles at many companies in diverse industries. Over the past ten years, Damian has focused primarily on teaching and leading testers and improving processes. Currently, he runs his own training and consulting company, Ineffable Solutions, specializing in IT and focusing on testing. Damian also enjoys performing improv comedy, golfing, playing poker and basketball, gaming, acting, cartooning, and spending time with his family.