The contribution of research to developing small, safe experiments

a focused experiment

Some people have been in touch to question my sceptical attitude toward contributions from research into human behaviour.

This attitude of mine is partly deliberate and, as with most scepticism, I can accept that I may over-state my case from time to time. So, let me use this page to say more on this topic.

I do so because I appreciate that research can tease out useful, practical ideas that can be woven into the design of small, safe experiments. This website does point to some of this material.

Research can create useful insights

Research can contribute to the advancement of good practice in the delivery of therapeutic services. It can test what we do in an objective fashion. In this respect, it is an example of the small, safe experiment of Standing Back described right at the bottom of that hyper-linked page.

Even so, that objectivity may be an Achilles’ heel.

How come? You and I, and any researcher, are no less susceptible to not knowing what is there to know. Once we are confident that we ‘know’ something, we can be at our most dangerous, as Adolf Guggenbuhl-Craig can tell us.

Just a few issues worth considering

  1. Passion and commitment are two qualities that appear to distinguish effective practitioners from middle-of-the-road practitioners; more so when they work with a passionate and committed ‘client’. These qualities are not easy to reconcile with ‘objective’! It is difficult – maybe not impossible – to be passionate about the research protocol! Furthermore, it seems to me that research focuses on things ‘done to’ clients – most often after the event itself. The past tense is important here.
  2. Effective therapeutic practice is, in my view, about ‘doing with’ some-one else, and doing so at a specific time. The present tense is important here. The picture changes when we look at something. Indeed, we can change it simply by participating in observation! This is especially so if we look at the picture that emerges after the event. For an interesting sociological view of this dilemma, do look at this page; useful despite the irritating advertisements! It is a tricky area to visit, and Jeff Sauro does identify the fact that there are several ways to observe.
  3. There is a further complication; elsewhere, I have spoken about ‘interpretation’. Research processes often lead to interpretation. This is not ‘wrong’ in itself, but it is a specific action seeking to understand what happens. It is not what happens. It’s a meaning-making process, and humans are keen on it!
  4. There is a different approach to commitment in research and practice. Researchers are committed to understanding the many. Their interpretations of data are based on limited individual client contact, and, most often, conclusions rely on interpretation of trends; what generally appears to be happening most of the time. There is attention to probability; what is the chance that something observed is significant? In therapeutic practice, there is, for the most part, a commitment to the individual, without attention to whether they are in the minority, or in the majority.  Whether the individual is ‘typical’ or ‘atypical’ is not a consideration when devising and implementing small, safe experiments. Odd ideas work very well for some people, and tried and tested techniques might work at one time, and not at another.
  5. In summary, then, the research focus tends to be on the interpretation of data. The commitment is, primarily, to the research protocol. By contrast, therapeutic interventions seem to work best with limited interpretation, plus maximum commitment to the individual ‘client’ (or couple, or family). What the collaborative group come up with is regarded as valid, even if the conclusions change over time!

My main conclusion

… is that research outcomes can vary from the helpful to the irrelevant and, indeed, can even create the category errors I have mentioned elsewhere. On that hyper-linked page I make mention of Newnham and Page (2010) and their reference to the “potential to bridge the scientist-practitioner gap”. There is growing doubt about the notion of ‘evidence-based treatment’ and there is now more willingness to answer the question: what is valid evidence in research into therapy and the strategies it uses?

Also, there is more call for ‘client’ participation in these processes, say, in relation to the management of suicidal responses. This is where you come in. Throughout this website I encourage the use of records, as everything you write could be valid evidence.

A client speaks

It would be helpful to see this material better represented in some of the literature. Some would say it is there, as there is a body of literature built around “The Client Speaks”. That book can be found at:

John Mayer and Noel Timms (1970) The Client Speaks, London: Routledge.

There is a whole body of research built around this theme. A typical example would be: Barry L. Duncan and Scott D. Miller (2000) ‘The Client’s Theory of Change: Consulting the Client in the Integrative Process’, Journal of Psychotherapy Integration, Vol. 10, No. 2.

Despite these significant advances, Norcross (1997) suggests that the integration field invites confusion and irrelevancy unless the immense differences are defined, and the ‘‘me and not me’’ are established (p. 87). This comment overlooks the further complication of ‘us’. It assumes that – for tidy research – there needs to be a separation of me and not me. That is a design problem for researchers, as ‘us’ is not the same as ‘you and me’; it is you-with-me, an entity in its own right.

What other problems arise when trying to translate research into small, safe experiments?

Randomness: research is big on the random allocation of individuals to research trials. The randomised controlled trial (RCT) is the gold standard ‘test’ pursued by researchers. It removes the unintended consequences of the unconscious bias that influences the actions of all human beings. Unconscious biases are ‘bad’ things, and it is not difficult to see how that is so. Randomisation makes it more difficult for the observer to see only what they want to see.

The ‘blind’ trial appears to be the only way to ensure that unrelated factors are spread across study groups. Any foreknowledge of allocation can mean that differences between groups are systematically biased. The ‘results’ are – therefore – unreliable. The ‘truth’ requires no conscious or unconscious interpretation by the researchers.

Even so, I want you to interpret your outcomes, and to do so robustly and unashamedly. I ask you to be ‘random’ for a different reason – so you do not get into a rut with your own small, safe experiments. I encourage random actions for their own sake. I ask you to give up on being impartial about your experiments. I want you to be passionate; passionate enough to discipline yourself to see the larger picture, not be cowed by it, and to know what to do with it.

Also, I ask that you cast a critical eye on that word ‘control’. We all possess a drive to control. Instead, I ask that you see the full range of results – the small victories, the small defeats and all the inconvenient untidiness in between – and learn what you can do with a wide range of outcomes (but not all, and every).

In short, you and I are meaning-making entities. Use that inevitability to your advantage. I see no value in trying to legislate it out of existence; the meaning you make says something about you as a meaning-maker!

Valid Research and Valid Evidence: tricky notions

In effective therapy, there appear to be four features that make a difference to outcome:

  • the process involved in forming a bond between therapist and client,
  • the quality of goals negotiated between therapist and client,
  • the volume and consistency of strategies used to generate desired outcomes,
  • the level of commitment to those strategies shown by client and therapist.

The essence of everyday therapy involves a cooperative process. This rather depends on both parties being aware of the impact of strategies that enhance understanding of why we behave as we do.

This requirement is in addition to any information arising from evidence-based results.

The value of the techniques involved in changing that behaviour depends less on the classification of those techniques, and more on an understanding of what works for whom, and when.

Therapist expertise, and any of its associated aloofness, is more likely to arise from therapy restricted to a certain kind of ‘evidence’ and/or a manualised approach. In practice, psychological rapport is more likely to promote a curiosity about what constitutes ‘useful’ actions.

Strategic action generating personalised change rather assumes that client-and-therapist are able to identify the wide range of choices needed to make a difference.

Further leads to consider

Designing safe experiments

Actions that might help with safe experiments

The limitations of actions

Researching safe experiments

The limitations of safe experimenting