As my web site has been around for some time, the number of pages has increased. As I have gathered feedback from readers and experimenters, I have noticed one question arising regularly: is there evidence for the effectiveness of ‘safe experiments’?
I’m going to say ‘yes’ and ‘no’, aren’t I!?
The ‘yes’ is that all the information recorded by readers and clients over many decades constitutes ‘evidence’ in my book. It’s how you use the information that will be important. Cognitive behavioural therapy (CBT) – a key model for encouraging experiments (or homework, as some call it) – encourages substantial record keeping. Such records provide detailed information about the outcomes of all our efforts. Further, there is a large body of formal research seeking to organise evidence in books and PhD theses.
I am not an expert in this literature; it increases at an alarming rate and I do not see keeping it all at my fingertips as one of my professional strengths. What I’d like to do on this page is to go back to that word – ‘evidence’. What is meant by it, and in what way does it help us to design experiments and promote the changes we want in our lives? There are some misunderstandings to identify, and I’d like to clarify what counts as useful ‘evidence’ when exploring human experience and relationships (as compared to evidence obtained in laboratories).
The dominance of medicine in ‘healing’ has meant there is pressure to define ‘evidence’ in the same way in both medicine and therapy. This is political pressure as much as anything, given the dominance of medical science in the field of research. The British Association for Counselling and Psychotherapy (BACP) is starting to question this use of the term ‘evidence’ in the world of therapy.
Don’t get me wrong; medical bodies and regulators are quite right to place emphasis on obtaining very solid evidence before they let a new medicine loose on the general public. There, the outcomes are matters of life-or-death; often, there is no Plan B or an opportunity to step back and re-design; this latter feature is essential in the design and implementation of small, safe experiments as I am describing them.
The Thalidomide scandal of the 1950s and 60s served to drive up standards in research. There have been moves to improve the independence of staff involved in research studies as well. Despite all this, it is possible to manage, manoeuvre or plain manipulate research findings. The history of research financed by parties with conflicts of interest (e.g. the pharmaceutical industry) is littered with examples of this.
The problem for measuring the effectiveness of therapy is that using the tight controls associated with medical science means that:
- Useful data and results are sometimes excluded from research studies. For example, the experiments I am offering, and you will design, may have no visible result on some occasions. You can neither confirm nor deny your progress toward the objective under scrutiny at that time. Problem is, the same plan may well produce a different result on another day. What is ‘bad’ one time may be ‘good’ or, at least, better another time.
- Methods applied to the testing of a drug are very different from the tests we should apply in therapy. You can objectify a drug and make it a ‘subject’ of study. You can control that subject as tightly as you want. Good therapists do not objectify their clients. Effective researchers into therapy are ill-advised to try to do so. The good therapist will negotiate a preferred outcome – one a client wants, and one a therapist is equipped to help on its way. Then the therapist can help a client find a way towards that outcome. Research has to be able to shine light on how that process is initiated and sustained.
- Evidence-based researchers say they follow ethical guidelines, and that is all well and proper. Those guidelines exist to see that ‘subjects’ are not abused. Even so, the key focus of medical research will be: did what we do to our subjects – in applying a treatment in ethical fashion – make people better? In therapy, it is not enough simply to assist people to get better; the way therapists help people get better matters too, so that clients can continue the work once therapy is terminated. Achieving this outcome is central to research in human relations. Ethics are more than a guideline to minimise the potential for abuse. How we behave towards one another is not an optional extra.
- Ethical research into therapy should assess what works to ensure clients are respected. Furthermore, research could identify what negotiating and communications styles engage clients. The way a tablet is given to a patient does not usually impact on outcomes (but, again, there may well be evidence to contradict this assertion!).
- Research into therapy could study the validity and reliability of experiments, but are the criteria used to define these terms identical in the scientific and therapeutic environments? Now that is a BIG question. My answer is, no, they are not.
- The recording systems used by client and therapist could be assessed. Some may be more efficient than others in illuminating outcomes. But even then, effective therapeutic research identifies how the parties got where they did. It follows the journey from the design of a safe experiment through to observing its outcome. Research in medicine and science can ill afford to study the journey; some people may die en route, and that is not acceptable.
- So the ‘danger’ to clients in therapy is of a different order to the risks involved in medicine. Some people do challenge this, say, in relation to reports of ‘false memory’ syndrome, but problems of that order say more about therapists pursuing their own ideas, rather than enabling ‘clients’ to make the move that is right for them. That is not ethical therapy.
- Once we can recognise that ‘safe experimenting’ is not something someone else does to you, it becomes much easier to look for ‘evidence’ that reflects the incremental and fluid outcomes you obtain.
- Furthermore, taking small steps in the implementation of ‘safe experiments’ assumes that we can step back from the result and set off in a different direction. It is perfectly reasonable to consider that successful journeys depend on mistakes – or at least, on noticing them. Some folk even say that there is no learning without mistakes: the bigger the mistakes made, the bigger the lessons learned. Defining evidence in this situation means it is necessary to legitimise the ‘moving of the goal-posts’. That is a ‘no-no’ in strict research work.
- Even when an experiment is a ‘small defeat’, things can be learned from the outcomes. As seen above, the strict assessment of evidence puts a negative value on ‘failure’ – some people even turn their noses up at placebo effects. That cuts off a very large chunk of helpful research into ‘what works for whom’.
- Strict research looks askance at my assurance: if it works, don’t knock it. Therapeutic research needs systems to define what is meant by ‘what works’ as well as ‘works for whom’.
Research into therapy will find that what works with one person, and at one time, will not necessarily work for someone else or at a different time. Further, we learn much from apparent ‘failure’.
I have a suspicion that some researchers like to follow strict rules of research to affirm the neat and tidy outcomes needed to generate confidence in a new pill or procedure they have designed! The world of therapy is rarely that tidy, and it will miss important things if it tries to copy the ‘medical model’ (not a good term, but it will have to do for now!).