As my major blog has been around for some time on –, I have gathered feedback as well as questions from readers and experimenters.
A regular question that arises is: is there evidence for the effectiveness of ‘safe experiments’? This question emerges from the modern preoccupation with ‘evidence-based practice’.
I’m going to say ‘yes’ and ‘no’, aren’t I!?
The ‘yes’ is that all the information recorded by blog users and clients over many decades constitutes ‘evidence’ in my book. Also, cognitive behavioural therapy (CBT) – a key model for encouraging experiments (or homework, as some call it) – encourages substantial record keeping. Such records provide detailed information about the outcomes of all our efforts. Further, there is a large body of formal research seeking to organise evidence in books and PhD theses.
I am not an expert in this literature; it grows at an alarming rate and I do not count keeping it all at my fingertips among my professional strengths. If you have a specific question, I’d hope to point you in a sensible direction. You could start your own enquiries with a PDF document at:
What I’d really like to do in this short blog is to go back to that word – ‘evidence’ in inverted commas. What is meant by it and in what way does it help us to design experiments and promote the changes we want in our lives? There are some misunderstandings to identify and I’d like to clarify what is useful ‘evidence’ when exploring human experience and relationships (as compared to evidence obtained in, say, medical trials).
The dominance of medicine in ‘healing’ has meant there is pressure to define ‘evidence’ in the same way in both medicine and counselling or psychotherapy. Fortunately, even at this very moment (2017), the British Association for Counselling and Psychotherapy (BACP) is taking steps to question this approach to the term ‘evidence’.
Don’t get me wrong; medical bodies and regulators are quite right to place emphasis on obtaining very solid evidence before they let a new medicine loose on the general public. The Thalidomide scandal of the 1950s and 60s served to drive up standards in research. There have been moves to improve the independence of staff involved in research studies as well. That said, even today, the pharmaceutical industry funds a lot of research, and this does not give the appearance of ‘independence’, however sincere those companies may wish to be.
The problem for measuring the effectiveness of therapy is that such tight control means that:
- useful data and results are sometimes excluded from research studies. For example, the experiments I am offering, and that you will design, may have no visible result on a given occasion: you can neither confirm nor deny your progress toward the objective under scrutiny at that time. By all means throw out ineffective medicines – but effective therapeutic research needs to measure pathways that are good, bad and indifferent. What is bad one time may be good, or at least better, another time.
- methods applied to the testing of a drug are very different from the tests we should apply in therapy. You can objectify a drug and make it a ‘subject’ of study; you can control that subject as tightly as you want. Good therapists do not objectify their clients, and effective researchers would be ill-advised to take a different approach. The good therapist will negotiate a preferred outcome – one a client wants, and one a therapist is equipped to help on its way. Then the therapist can help the client find a way towards that outcome.
- Evidence-based researchers say they follow ethical guidelines, and that is all well and proper. Those guidelines exist to ensure ‘subjects’ are not abused. Even so, the key focus of medical research will be: did what we did to our subjects – in applying a treatment in an ethical fashion – make people better? In therapy, it is not enough simply to assist people to get better; the way therapists help people get better is central to the research. Ethics are more than a guideline to minimise the potential for abuse. How we behave towards one another is not an optional extra.
- Research into therapy should assess what works to ensure clients are respected. Furthermore, research could identify what negotiating and communications styles engage clients. The way a tablet is given to a patient does not usually impact on outcomes (but, again, there may well be evidence to contradict this assertion!).
- Research into therapy could study the validity and reliability of experiments, but are the criteria that define these terms identical in the scientific and the therapeutic environment? Now that is a BIG question.
- The recording systems used by client and therapist could be assessed. Some may be more efficient than others in illuminating outcomes. But even then, effective therapeutic research identifies how the parties got where they did. It follows that the journey, from the design of a safe experiment through to observing its outcome, is key; research that omits the words explaining how the observations arose could miss the point. Research in medicine and science can ill afford to study the journey; some people may die en route, and that is not acceptable.
- so the ‘danger’ to clients in therapy is of a different order to the risks involved in medicine. Some people do challenge this, say in relation to reports of ‘false memory’ syndrome, but problems of that order say more about therapists pursuing their own ideas than about enabling clients to make the move that is right for them.
- Once we recognise that ‘safe experimenting’ is not something someone else does to you, it becomes much easier to look for ‘evidence’ that fosters incremental and fluid outcomes.
- Furthermore, taking small steps in the implementation of ‘safe experiments’ assumes that we can step back from the result and set off in a different direction. It is perfectly reasonable to consider that successful journeys depend on mistakes – or at least on noticing them. Defining evidence in this situation means it is necessary to legitimise the ‘moving of the goal-posts’. That is a ‘no-no’ in strict research work and has been used to discredit some research in the past.
- even when an experiment is a ‘small defeat’, things can be learned from the outcomes. As seen above, the strict assessment of evidence puts a negative value on ‘failure’ – some people even turn their noses up at placebo effects. That cuts off a very large chunk of helpful research into ‘what works for whom’.
- Strict research looks askance at my assurance: if it works, don’t knock it. Therapeutic research needs systems to define what is meant by ‘works’, and it needs to consider identifying ‘works for whom’.
A useful example of the issues I am raising is encapsulated in the following quote from the web site listed above:
It [evidence-based research] can also tell you what doesn’t work, and you can avoid repeating the failures of others.
I understand the web report is concerned about the apparent waste of resources when research appears to find out what does not work. However, in research into therapy, assessors will find that what works with one person, at one time, will not necessarily work for someone else or at a different time. We can still learn from apparent ‘failure’.
I have a suspicion that some researchers like to follow strict rules of research to affirm the neat and tidy outcomes needed to generate confidence in a new pill or procedure they have designed! The world of therapy is rarely that tidy, and it will miss important things if it tries to copy the ‘medical model’ (not a good term, but it will have to do for now!).
If you want to apply your thinking to this subject, how about seeking out your own definition of evidence-based research. The one offered by the web site, listed above, is:
Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.
What IS “best”? Notice how the practitioner is included here, but it is his/her “expertise” that seems central. Do you wonder if the client is really included in the sentiment that “external clinical evidence” should be matched up with clinical expertise? It sounds as if research results are conclusions drawn from a conference of experts. Too rarely is the client understood to be expert in themselves.
For a more thorough review of ‘measuring’ the results of our work, have a look at Scott Miller’s blog on:
This will take you into a whole new area of research and enquiry.