Evidence-based practice

Posted 11 Dec 2015

Dr Ed Warren, FRCGP, GP, Sheffield and GP trainer, Barnsley VTS


One of the key purposes of revalidation is that it demonstrates your continued ability to practise safely and effectively. It reinforces the NMC Code by asking nurses to use it as a foundation for all the requirements, including the need to always practise in line with the best available evidence. But how can we tell what is good evidence, and how can we use it in our practice?

To be useful, evidence should enable you to do your job better: better outcomes for your patients and better professional satisfaction for you. The use of evidence by practice nurses is required by the Nursing and Midwifery Council1 as part of the Code, which is the reference point for all the requirements of revalidation.


From ‘The Code’1

Always practise in line with the best available evidence

Make sure that any information or advice given is evidence-based, including information relating to using any healthcare products or services

Spoken of in this way ‘evidence’ is usually taken to mean the results of trials that satisfy what is at present considered to be good scientific practice. What currently constitutes ‘good scientific practice’ was developed during the middle of the 20th century. For example, Karl Popper published his ideas about Falsifiability in 1959.2 He suggested that it is not possible to conclusively affirm a hypothesis, but it is possible to conclusively refute a hypothesis. This led to the use of the ‘Null Hypothesis’ in research, now considered a fundamental of good scientific practice.

So modern research methodology is a bit of a snotty-nosed upstart. Consider the idea of the four humours (blood, yellow bile, black bile and phlegm), which formed the basis of much medical practice from their introduction by Hippocrates (460-370BC), through their development by Galen (129-201AD), and remained in widespread use until the middle of the 19th century. These ideas have a 2,000-year track record, much longer than our recent notions. Who is to say that in a generation we too will not be considered fools for our folly?

The Randomised Controlled Trial and the Meta-analysis are not the only types of evidence available. Work done with GPs suggests that more experienced practitioners get better outcomes than juniors.3 Does that mean that experienced GPs know more about the results of research than juniors who have recently completed their qualification exams? This seems unlikely. It is somewhat more likely that experience brings with it an understanding of what works for that GP with that set of patients, and the ability to recall and use a selection of information which has proved useful in the past. This sort of evidence is difficult to quantify and also difficult to learn except through experience.

Other evidence depends on the characteristics of the population being treated, and this varies from place to place. Some health beliefs that patients have are specific to the area they live in,4 and this has an effect on health outcomes.5 Call it a placebo effect if you wish, but bear in mind that in clinical trials the effects of placebo are often greater than the effects of the treatment on trial. Deprivation has a significant effect on morbidity and mortality, and the predictive value of a test or investigation depends on the prevalence of the disorder in the population being tested. So what may work in leafy Surrey does not necessarily work in inner-city Barnsley. In addition, individual patients have their foibles. Many years ago I saw an elderly lady for the first time, and she said to me: ‘There is something you should know about me, doctor. I never respond to cheap drugs.’ It is perfectly plausible that ‘evidence’ true for one practice nurse in one situation is not true for another.
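To see why prevalence matters so much, here is a minimal Python sketch of the arithmetic. The sensitivity, specificity and prevalence figures are invented round numbers for illustration, not data for any real test.

```python
# Illustrative only: how the positive predictive value (PPV) of the same
# test changes with disease prevalence. Sensitivity and specificity are
# invented round numbers, not figures for any real test.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' theorem: probability of disease given a positive test."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

for prevalence in (0.10, 0.01):  # e.g. a high-prevalence area vs a low one
    ppv = positive_predictive_value(0.9, 0.9, prevalence)
    print(f"Prevalence {prevalence:.0%}: a positive result means disease {ppv:.0%} of the time")
```

With these made-up figures, exactly the same test is right about half the time in one population and less than one time in ten in the other.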


THE IMPOSSIBILITY OF KNOWING ANYTHING

There is too much evidence, and there is also too little evidence. It was recently estimated that worldwide there are over 25,000 published journals in science, technology and medicine.6 Looking at a subspecialty of a subspecialty (cardiac imaging) it was also estimated that a new recruit who did nothing except read would have caught up with the published work on the subject at roughly the time they would be due to retire. Most of these papers are written in English, but some are not, adding an extra dimension to the accumulation of wisdom. Think how much more stuff there is out there of potential interest to a general practice nurse who has to be aware of all specialities.

But ironically, there is also too little evidence. As current scientific research methodology is such a recent invention, there has not been time to do research on everything. Also, most research is not done by primary care workers – who generally have better things to do – but by academics and secondary care workers who often have a requirement to do research written into their employment contracts. They will take on topics of interest to themselves, and not necessarily of wider use in primary care. Some ‘boring’ topics such as viral coughs and colds are hardly ever considered worthy of attention even though they may affect millions of patients and cause a significant national burden of morbidity.

Human biology is not a branch of mathematics or physics. Two patients with the same problem, treated identically, will not inevitably get the same outcome. Perhaps it is the interaction of body and brain. Perhaps people are just different from each other. Even the best published research will only offer ‘likelihoods’: for a given treatment some patients will get worse; some will stay the same; and a percentage will improve. If more patients improve with the treatment than without it, then that is evidence of benefit, something to inform Evidence Based Practice (EBP). This also has the rather alarming corollary that you can do your job absolutely expertly and perfectly, and your patients may still not get better. By rights nurses should be judged on the quality of the care they offer and not on the outcome for their patients: ‘The operation was a success, but the patient died.’


WHERE TO FIND GOOD EVIDENCE?

Clearly you can’t read everything that is published about your role. So who will you trust to give you the information you need to do your job properly? There has to be a leap of faith involved. In a perfect world you would analyse every relevant piece of information for yourself. This is not going to happen. Having made the decision that you are going to trust others to sift the evidence for you, life suddenly becomes less complicated. Make an early decision about what you are going to read and, perhaps more importantly, what you are not going to read.

A practice nurse’s professional reading will primarily be determined by access: what journals fall through my letter box? What online resources have I heard about? Has anything happened to prompt an online search or a trip to the library? The Royal College of Nursing publishes lots of papers about nursing practice as well as papers on policy.7 Because it is a major professional body, you can usually assume that the things it publishes are true (i.e. they are consistent with the available evidence): its professional reputation depends on it. In addition, such a reputable body can be relied upon to offer you an agenda of the things that are being discussed, so you don’t have to worry that there is stuff out there, the existence of which you have no inkling.

Publications offering clinical guidelines are also useful. They are effectively a summary of current best practice and, at their best, offer sufficient supporting references so that, should you wish, you can delve into a topic more deeply. I would suggest Clinical Knowledge Summaries (CKS)8 as a first port of call. CKS summaries have always been a class act, and they are now produced under the banner of the National Institute for Health and Care Excellence (NICE), and so linked by association with the Quality and Outcomes Framework. The summaries are freely available online (as all the best medical and nursing guidance should be) and the referencing is extensive. In addition, CKS is sponsored by the Department of Health, so not only is the guidance authoritative, but it is also an expression of the standard of care that our patients should expect from the NHS.


GOOD AND BAD EVIDENCE

The perfect research paper has never yet been written, and there are always things that can be criticised. As long as the deficits are not too great, then such imperfections can be tolerated. In the face of the uncertainties described above, even a small step towards the truth is a step in the right direction. The evidence may not be perfect but it’s the best we have.

Many journals describe themselves as being ‘peer reviewed’. Indeed it is the mark of the status of a journal that it should be peer reviewed, and for good reason. In the peer review process any article or piece of research being considered for publication is sent out to experts in the field for their comments. These experts will (should) be aware of other work published in the field under examination, and so should be aware of factors that may bias the results. For example, an article on the effects of treating hypertension would not be believable unless the people under examination had their smoking habits recorded: smoking is such an important risk factor that any benefits of treatment could well be trivial by comparison. If a piece of research is markedly different from previously published work, then the new research is either ground-breaking or questionable.

Most research that is started never gets published at all. This idea, ‘Publication Bias’, means that a lot of the information on a topic is never available for scrutiny. Getting an article published is a major undertaking over and above doing the research itself, and some folk just run out of steam. There is also evidence that research that does not show a good effect from an intervention is less likely to be offered for publication than research that does show a good effect.9 It is as though the researchers get disheartened and give up. Drug manufacturers have a major incentive to do drug trials to support their marketing efforts: unfortunately some trials that do not show what the maker wants become buried from view and are never released.10

Not all published evidence is considered equally reliable. While it was still the National Institute for Clinical Excellence, NICE published with its guidance an indication of the strength of its recommendations, based on the reliability of the evidence used in their support:11

A. Meta-analysis of randomised controlled trials (RCTs), or at least one RCT. In a meta-analysis the results from two or more trials are combined into what is effectively one massive trial (see the sketch after this list). A really good meta-analysis also tracks down the papers that were never published and uses their raw data as well. In an RCT the word ‘Randomised’ refers to patients being allocated to a control group or an active group by a random process. The ‘Controlled’ bit refers to the existence of a group of patients who were not subjected to the intervention – this is important if you want to allow for any placebo effect.

B. At least one controlled study without randomisation, or an extrapolation from A-type evidence.

C. Non-experimental descriptive study, or extrapolated from A or B. A descriptive study is one that simply describes what happens, without testing an intervention. Most practice audits follow this format.

D. Expert committee reports or opinions from respected authorities. Presumably these authorities derive their opinions from somewhere, but this may not always be stated. Sometimes opinions are simply based on personal experiences.
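To make the mechanics of item A concrete, here is a minimal Python sketch of a fixed-effect, inverse-variance meta-analysis. The best meta-analyses work from each trial’s raw data; this sketch uses each trial’s summary effect and standard error, and both trials are invented for illustration.

```python
import math

# A sketch of a fixed-effect, inverse-variance meta-analysis. Each trial
# contributes an effect estimate (here a log odds ratio) and its standard
# error; larger, more precise trials get proportionately more weight.
# Both trials below are invented for illustration.

trials = [
    {"log_or": -0.40, "se": 0.20},  # hypothetical small trial
    {"log_or": -0.10, "se": 0.10},  # hypothetical larger, more precise trial
]

weights = [1 / t["se"] ** 2 for t in trials]  # weight = 1 / SE^2
pooled = sum(w * t["log_or"] for w, t in zip(weights, trials)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled log odds ratio: {pooled:.2f} (SE {pooled_se:.3f})")
print(f"Pooled odds ratio: {math.exp(pooled):.2f}")
```

Note how the pooled estimate sits much closer to the larger trial’s result: precision, not enthusiasm, determines each trial’s influence.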

Using this classification it was embarrassing to see how many of the recommendations were based on category D evidence. Regrettably the categorisation of the strength of evidence has not been transferred to CKS/NICE guidance.


CRITICAL APPRAISAL: A WORKED EXAMPLE

I have found it quite difficult to find a piece of research published in a nursing journal. It is a shame that nurses are not more involved in research: only nurses fully understand the problems of nursing, and only they can ask the sorts of questions that nurses need answered in their job. I suspect the gap exists because nurses are too busy seeing patients, because there are fewer nurses working in academic institutions, and because the nature of the job of nursing does not lend itself so readily to research activity. However, what I did find was a number of commentaries on research that had been published in medical journals, so I will use one of these for demonstration purposes.

‘Critical appraisal’ is only partly to do with an understanding of statistical methods and jargon. It is mainly to do with applying common sense, and seeing if the evidence presented might be useful for you and your professional discipline. So here are some questions you should ask yourself when critically appraising a piece of ‘evidence’.

In the February 2014 issue of Evidence Based Nursing,12 a commentary appeared about a paper from the BMJ, published the previous year and entitled ‘Women’s views on overdiagnosis in breast cancer screening: a qualitative study.’13


Where is it published?

The BMJ publishes articles that are peer-reviewed. It is considered the fifth best medical journal in the world (top is the New England Journal of Medicine).14 As far as the UK is concerned, the BMJ is worth reading. You are likely to find reliable evidence. Just a word of caution: paper editions of the BMJ include research articles but usually in abbreviated form. If you want enough information to be able to critically appraise the article then you will have to look at the online version.


What is the topic?

Most journal articles do not get read because the title looks boring. Is the content likely to tell you something that is worth knowing? The whole topic of breast cancer should be of interest to practice nurses, especially as this article deals with screening for breast cancer. Most practice nurses are themselves women, and so may take a personal as well as a professional interest. So the journal is OK, the topic is OK. So far so good.


Who are the authors?

This paper is written by a team from the School of Public Health at the University of Sydney in Australia. They are mainly professors and other academics and epidemiologists, so they have a professional duty not to get the statistical analysis wrong: you do not have to worry about doing the sums, which is always a relief. On the other hand, there are no nurses or general practitioners among the authors – nobody likely to give the article a primary care emphasis.


How many subjects?

This study was done on only 50 women. This is understandable given the methods used, which involved getting the women to a meeting, listening to them and trying to find common trends in their attitudes. But a sample of only 50 in an RCT would be highly unlikely to give an accurate result: the confidence limits would be huge. [Confidence limits – a statistical device giving a range that, with 95% confidence, contains the true value. Narrower limits are better.]
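To illustrate how sample size drives the width of confidence limits, here is a rough Python sketch using the simple normal (Wald) approximation for a proportion. The 60% figure and the sample sizes are invented for illustration, not data from the study being discussed.

```python
import math

# A rough sketch of why small samples give wide 95% confidence limits,
# using the simple normal (Wald) approximation for a proportion.
# The 60% figure and the sample sizes are invented for illustration.

def wald_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

for n in (50, 5000):
    low, high = wald_ci(int(0.6 * n), n)
    print(f"n = {n}: observed 60%, 95% CI {low:.0%} to {high:.0%}")
```

With 50 subjects the interval spans roughly 46% to 74%; with 5,000 it narrows to about 59% to 61%. The same observed result carries very different weight depending on the sample size behind it.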


Could there be any bias?

Is there any reason why the women examined might be different from the ones you meet in your work? If so, it may be that the results would not be the same.

These women were recruited from the suburbs of Sydney, the suburbs having been chosen for their socioeconomic diversity. This is good – screening behaviour is known to be influenced by socioeconomic status.15 However, are Australian women the same as UK women? There will be more similarities than differences, but this does not mean that any results are inevitably applicable to the women in your practice. The results may not be useful for you.

One hundred and eighteen women were invited to participate in this study, but eventually only 50 were used. The first women excluded were those who did not consent to participate, and the second lot were women who were not able to attend the meetings. A drop-out rate of over 50% from a survey is not unusual, but it does raise the question of whether those who dropped out would have given the same results. No information is given about the characteristics of the women who dropped out. Participants were paid to attend: will this have influenced the results?

The women were asked for their opinions about breast cancer overdiagnosis after being shown a presentation on the risks of overdiagnosis inherent in the breast screening programme (women aged 50 to 74 in Australia are invited for mammography every 2 years and, as in the UK, the invitation materials make no mention of any disadvantages of screening). Their opinions were then checked a second time to see if they had changed.

Quite a lot of detail is given about the presentations used (which is good), and an opinion questionnaire was used before and after the presentation (also good). So in some respects this work looks at the effects of an intervention on the attitudes of women towards breast cancer screening. However, this is an observational study: if it were a study about the effects of the presentation then, to be valid, it would need a control group and a randomisation process to decide who got the intervention and who did not.


Will it change anything?

The study concludes that women are generally unfamiliar with the idea that screening can lead to overdiagnosis, and that this might affect their willingness to participate in breast screening, especially if the rate of overdiagnosis is high. This information is not new. Using a male corollary, when you tell men about the disadvantages of a prostate-specific antigen (PSA) screening test, they are less likely to consent to testing.16 To secure the changes in attitude that this study achieved, women would have to attend a 2-hour meeting and be paid for attending. This is unlikely to happen. If the aim is to decrease screening uptake then there are other ways to do this – such as cancelling the mammography programme. If the aim is to increase awareness generally about the pros and cons of all types of screening, then this study makes more sense, but different screening programmes have different levels of false positives and false negatives. So this study will probably not be useful for a practice nurse in her day job, even if it gives an interesting insight into the ways that our patients think.


CONCLUSION

There is too much published work out there for any practice nurse to be able to critically appraise every bit of information that might be relevant to her job. Luckily this is not necessary as the editors of journals and the compilers of guidelines will do it for you. On the other hand, continuing professional development requires you to at least have a passing familiarity with those journals and guidelines – it is not reasonable to do no reading at all.

If you do come across a piece of research that will make a big difference to how you work (for example, something about the treatment of diabetes that goes against what you are already doing) you may feel the need to go into more depth and to appraise the article for yourself to see if you believe it. But if you do, keep firmly in mind that the process is mainly about the utilisation of common sense.

And your experience is important. If an article does not speak to you and your patients, then ignore it.

See also The Metabolic Syndrome, which includes further insights into critical appraisal of 'evidence'.

REFERENCES

1. The Nursing and Midwifery Council. The code: Professional standards of practice and behaviour for nurses and midwives, 2015. http://www.nmc.org.uk/standards/code/

2. Popper KR. The Logic of Scientific Discovery. London: Hutchinson; 1959.

3. Worrall P, French A, Ashton L (eds). Advanced Consulting in Family Medicine. Oxford: Radcliffe; 2009.

4. Helman C. Culture, Health and Illness. 4th edition. London: Arnold; 2000.

5. Silverman J, et al. Skills for Communicating with Patients. 2nd edition. Oxford: Radcliffe; 2005.

6. Fraser AG, Dunstan FD. On the impossibility of being expert. BMJ 2010;341:1314-16.

7. Royal College of Nursing www.rcn.org.uk/

8. Clinical Knowledge Summaries. www.cks.nice.org.uk/

9. Song F et al. Publication bias: what is it? How do we measure it? How do we avoid it? Dove Press 2013;5:71-81

10. Doshi P. No correction, no retraction, no apology, no comment: paroxetine trial reanalysis raises questions about institutional responsibility. BMJ 2015;351:h4629.

11. NICE. The epilepsies: diagnosis and management of the epilepsies in adults in primary and secondary care. CG20 October 2004.

12. Kalager M. Overdiagnosis in breast cancer screening: women have minimal prior awareness of the issue, and their screening intentions are influenced by the size of the risk. Evid Based Nurs 2014;17:7-8 doi:10.1136/eb-2013-101281

13. Hersch J et al. Women’s views on overdiagnosis in breast cancer screening: a qualitative study. BMJ 2013;346:f158 doi: 10.1136/bmj.f158

14. Medical Journal Impact Factors 2015 impactfactor.weebly.com/ [Accessed 27.10.15]

15. Public Health England. Press release 2013. Levels of socio-economic deprivation affect screening. www.gov.uk/levels-of-socio-economic-deprivation-affect-screening-uptake-for-breast-cancer

16. Woolf S H. Should we screen for prostate cancer? BMJ 1997;314:989-90.
