July 8, 2013
By: Chris Graham and Nanne Bos
Policy makers, professionals, and researchers increasingly consider patient experience an important source of information regarding healthcare quality, joining more established measures of clinical outcomes and safety. (1, 2) The use of patient experience measures plays an important role in what is rapidly becoming a global trend towards patient-centred care. (3) With patients increasingly seen as active participants rather than passive recipients of healthcare, there comes a recognition that the manner in which people are looked after can prove as important as the clinical effectiveness of their treatment.
It is not enough to rely on what has traditionally – in the United Kingdom and Europe – been seen as 'hard' data relating to clinical practice: volumes of procedures, mortality rates, waiting times, and so on. Alongside this, a consensus has emerged over the last two decades in support of measuring services from the patient's perspective. (4) But if this patient-reported information is to be given the same weight and priority as those aforementioned 'hard' measures, then services need data of comparable quality and robustness.
In England, the National Health Service (NHS) has operated a national patient survey programme for over a decade. The NHS patient survey programme (5) includes a suite of surveys focussing on different care settings, and each survey employs best-practice methods in survey design, administration, and analysis in order to produce meaningful, reliable data for individual providers such as NHS hospital trusts.
The NHS Emergency Departments or Accident and Emergency (A&E) Survey has been conducted four times, in 2003, 2004/05, 2008, and 2012. Building on evidence from primary qualitative research, the survey addresses what matters most to patients attending emergency departments (6); it currently comprises fifty-three questions on all aspects of patients' experiences during and around emergency department attendances. As demonstrated by Bos et al (2012) (7), the survey performs well in describing a number of key domains of quality, including:
- Waiting time in the A&E.
- Doctors and nurses (enough time / clarity of explanations / listening / dealing with fears / confidence and trust).
- Care and treatment (information about condition / privacy / attention / involvement).
- Hygiene.
- Information before discharge (medication / daily activities / danger signals).
- Overall treatment.
Surveys of large samples, such as this one, allow for analyses of the perspectives of important subgroups. In the case of emergency departments, this facilitates identification and quantification of inequalities in the health experiences of different groups. For example, we know that older patients – those aged 75 or over – are likely to have less positive experiences around communication and information, despite being generally more content with the standards of care they receive. (8) More starkly, people from black and minority ethnic groups routinely report less positive experiences in English emergency departments. (9) The survey does not necessarily tell us why these differences exist, and indeed the explanations are likely complex: genuine inequalities in the standard of care, differential expectations (10), communication issues (11), and different cultural norms in survey response have all been posited as factors. It is realistic to expect that these factors may have different degrees of impact and interaction in different locations and settings. Nevertheless, measuring systematically allows us to identify issues such as these that need further investigation and action. These questions suggest a need for different kinds of research: qualitative and/or observational research, for instance, would be useful in examining the interactions between people of different groups with health services.
Regularly repeating surveys calls for striking a balance between maintaining consistent questions (to allow for analysis of trends over time) and ensuring contemporary relevance. The issues that matter to people can change as healthcare practices and modes of service delivery shift. The 2012 survey, for example, includes new questions on patients' experiences of care transitions and repeat attendance due, in part, to an increasing focus on preventing avoidable A&E attendances. This, in turn, is driven by growing pressure on A&E departments in England since the survey was first conducted in 2003: the number of attendances has risen whilst the NHS as a whole strives to find substantial efficiency savings. Updating the survey to cover issues such as these helps connect the policy debate to the experiences of patients.
Since the survey was first administered, patients' experiences of attending A&Es have improved in a number of areas but worsened in others. Of 22 questions used consistently in the survey from 2004 to 2012, 9 (40.9%) have shown significant improvement whilst 8 (36.4%) have shown significant decline. Some of the most substantial changes in patients' reported experiences relate to areas where national policies, priorities, or goals incentivise or demand improvement. For example, the proportion of patients describing A&E departments as 'very clean' increased dramatically in 2012 compared to 2008 – up from 44% to 55% for wards and from 38% to 48% for bathrooms (see Figure 1, below). These results coincided with a national focus on hospital cleanliness following public concern regarding high rates of healthcare-associated infections such as methicillin-resistant Staphylococcus aureus (MRSA) and Clostridium difficile in hospitals in 2007.
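The statistical significance of changes of this size can be illustrated with a simple two-proportion z-test. The sketch below is illustrative only – the survey programme's own analyses may use different methods and weighting – and applies the test to the 'very clean' ratings and respondent counts reported in Figure 1 (44% of 46,587 respondents in 2008 versus 55% of 43,113 in 2012):

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference p2 - p1 between two independent proportions."""
    # Pooled proportion under the null hypothesis of no change between years
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 'Very clean' ratings from Figure 1: 2008 vs 2012
z = two_proportion_z(0.44, 46587, 0.55, 43113)
print(round(z, 1))  # |z| far above 1.96 indicates significance at the 5% level
```

With samples of this size, even small percentage-point shifts are statistically significant; the practical question for providers is whether a change is large enough to matter, not merely whether it clears a significance threshold.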
Problems remain in other areas. The 2012 survey showed a decline in positive experiences regarding overall time spent in A&E and in the co-ordination of care between services. For example, 10% of patients transported to A&E by ambulance reported waiting with the ambulance crew for more than 30 minutes before their care was transferred.
Ultimately, we as survey developers intend the A&E survey to provide a catalyst for improvement. To support this aim, comprehensive benchmarking data is disseminated to hospital trusts and published in the public domain, including on the CQC's website (12) in a user-friendly format intended for public use. CQC also uses survey data for regulatory purposes, and the survey is included as an indicator in the NHS Outcomes Framework – the list of outcomes and indicators by which NHS England is held accountable for national performance.
Whilst these national uses afford the survey priority, in most cases the onus to improve services is very much local and dependent on providers. This fits with the wider health policy mantra in England of 'national standards, local action', and it follows that the extent to which improvements can be generated depends on local imperatives. Good examples of targeted action exist: one trust introduced video screens reporting live waiting times in response to patients feeling uninformed about waits (13), and another remodelled its A&E reception in response to poor survey results on privacy. Other organisations, however, may find it more difficult to drive improvement. Specific barriers include the lack of a strong leadership focus on people's experiences (14) and a demand for timelier reporting of survey data. (15) The latter concern has partly been addressed in recent years by shortening the turnaround from survey completion to reporting and by introducing a new 'local surveys' facility, wherein the survey co-ordination centre provides materials and guidance for trusts wishing to undertake comparable surveys.
Given the focus on local improvement and the potential barriers to this, we encourage trusts to look at results from national surveys as part of a suite of evidence on user experience. This provides a balance between the benefits of intelligence that is locally focussed – but typically not comparable with other providers – and the more robust but complex data from the national survey. The national survey gives providers reliable benchmark information on the issues that matter most to patients. This comparative data is crucial for services to understand and contextualise the quality of care that they are providing.
Figure 1: Patients' reported experiences of cleanliness in NHS A&E departments, 2004-2012
Figure description: Figure 1 is a bar graph comparing the distribution of patients' reported experiences of cleanliness in NHS hospitals' A&E departments across three survey years: 2004 (n=51,675), 2008 (n=46,587), and 2012 (n=43,113). The proportion of responses is given for each of four categories of cleanliness. Very clean: 45% (2004), 44% (2008), 55% (2012). Fairly clean: 46% (2004), 47% (2008), 39% (2012). Not very clean: 7% (2004), 7% (2008), 4% (2012). Not at all clean: 2% (2004), 2% (2008), 1% (2012).
Chris Graham, Director of Research & Policy, Picker Institute Europe
Nanne Bos, PhD Student
University Medical Center Utrecht, the Netherlands; Project Coordinator, Stichting Miletus, Zeist, the Netherlands
The views and opinions expressed are those of the author and do not necessarily state or reflect those of the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor ECRI Institute.
Potential Conflicts of Interest
Chris Graham and Nanne Bos declared no personal or family member financial interests with respect to this commentary.
Chris Graham reports that the Picker Institute obtains some of its funding by providing research and expert advice to organisations, including government agencies, professional societies, and other research organisations. Part of its funding is received from the Care Quality Commission (CQC) and NHS England to develop, co-ordinate, and analyse national surveys of NHS patients and staff, respectively. The Accident and Emergency (A&E) Survey's development was funded by CQC. In these instances, the Picker Institute's role is as an independent research organisation; he discloses this relationship but does not consider it a source of bias with regard to this commentary.
Chris Graham also reports that he has been involved in the development of a number of surveys of people's experiences of health care, which are used as the basis for indicators in England's NHS Outcomes Framework and published as Official Statistics. Relevant projects in the last five years are as follows:
- 2013 — National survey of people's experiences of maternity services — chief investigator (CI)
- 2013 — National survey of users of 'Hear and Treat' ambulance service — CI
- 2012 — National accident and emergency department survey — CI
- 2011 — National outpatient survey — CI
- 2011-2013 — National acute hospital inpatient survey — CI
- 2011-2013 — National community mental health survey — CI
- 2008, 2010 — National community mental health survey — project lead for sponsor (Healthcare Commission & CQC)
- 2009 — National inpatient mental health survey — project lead for sponsor (Healthcare Commission & CQC)
- 2008 — National survey of local health services — project lead for sponsor (Healthcare Commission)
References
1. Darzi A. High quality care for all: NHS next stage review final report. London: Department of Health; 2008.
2. Darzi A. Quality and the NHS next stage review. Lancet. 2008;371(9624):1563-4.
3. Coulter A. Understanding the experience of illness and treatment. In: Ziebland S, Coulter A, Calabrese J, Locock L, editors. Understanding and using experiences of health and illness. Oxford: Oxford University Press. In press.
4. Gerteis M, Edgman-Levitan S, Daley J, Delbanco TL. Through the patient's eyes: understanding and promoting patient-centered care. California: Jossey-Bass; 1993.
5. Care Quality Commission, Picker Institute Europe. NHS surveys: focused on patients' experience. Accessed 2013 Apr 4. Available at: http://www.nhssurveys.org
6. Bullen N, Magee H, Reeves R. Development and pilot testing of the NHS Acute Trust Emergency Department Survey 2003. Accessed 2013 Apr 4. Available at: http://www.nhssurveys.org/survey/158
7. Bos N, Sizmur S, Graham C, van Stel HF. The accident and emergency department questionnaire: a measure for patients' experiences in the accident and emergency department. BMJ Qual Saf. 2013 Feb;22(2):139-46. doi: 10.1136/bmjqs-2012-001072. Epub 2012 Sep 1.
8. Graham C. Experiences of older people in emergency departments. Presented at the 23rd International Society for Quality in Healthcare (ISQua) conference. London; 2006.
9. Department of Health. Report on the self-reported experience of patients from black and minority ethnic groups. London; 2009.
10. Bowling A, Rowe G, McKee M. Patients' experiences of their healthcare in relation to their expectations and satisfaction: a population survey. J R Soc Med. 2013;106(4):143-9.
11. Raleigh V, Irons R, Hawe E, Scobie S, Cook A, Reeves R, Petruckevitch A, Harrison J. Ethnic variations in the experiences of mental health service users in England: results of a national patient survey programme. Br J Psychiatry. 2007;191:304-12.
12. Care Quality Commission. Accessed 2013 Jun 25. Available at: http://www.cqc.org.uk/public/reports-surveys-and-reviews/surveys/accident-and-emergency-2012
13. Nutter T. Emergency department survey results. Salisbury NHS Foundation Trust. Accessed 2013 Apr 4. Available at: http://www.salisbury.nhs.uk/AboutUs/TrustBoard/AgendaBoardPapersAndMinutesTrustBoard/Documents/8%20April%202013/SFT%203381.pdf
14. Shaller D. Patient-centered care: what does it take? New York: The Commonwealth Fund; 2007.
15. Reeves R, Seccombe I. Do patient surveys work? The influence of a national survey programme on local quality-improvement initiatives. Qual Saf Health Care. 2008;17(6):437-41.