Measuring patient experience in the emergency department: A scoping review

Copyright © 2020 African Federation for Emergency Medicine. Publishing services provided by Elsevier.

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Abstract

Introduction

Measuring patients' experience in the emergency department can be an avenue through which patients evaluate their own care experience, and this may guide healthcare professionals in addressing quality improvement. This scoping review aimed to identify and examine existing tools that measure patients' experience in the emergency department.

Methods

A scoping review was carried out to synthesize evidence from a range of studies in order to describe the characteristics of each study and their sample, and to describe the tools used to measure patients' experience in the emergency department.

Results

Of the 308 articles retrieved, the first- and second-level screening yielded 10 articles for inclusion, describing 9 different patient experience tools/questionnaires used in the emergency department.

Conclusion

Measuring patients' experience in the emergency department is a global concern; however, research conducted in low-to-middle-income countries is very limited and such research in Africa appears to be absent. Getting consumers of care to evaluate their experience may help healthcare professionals to identify discrepancies in care and plan possible strategies to address them.

Keywords: Tools, Questionnaire, Patient, Experience, Emergency department, Scoping review

African relevance

Measuring patients' experience in the ED is a global concern; however, there appears to be a limited number of tools available, and such research in Africa appears to be absent.

This article provides details of specific tools available in the literature for measuring patients' experience in the emergency department, which could be useful for African EDs.

Getting consumers of care to evaluate their experience may help healthcare professionals to identify discrepancies in care and plan possible strategies to address them.

Introduction

Patients' experience with healthcare services describes their interaction with the hospital personnel, products, services and structures that contribute to care provision (The Beryl Institute, 2017). Measuring patients' experience can be an avenue through which patient voices are heard, and may also enable patients to participate in their own care experience, which has been shown to promote higher compliance with treatment and discharge instructions [[1], [2], [3], [4]]. It can also provide an opportunity for benchmarking and policy formulation, making it necessary for healthcare professionals to identify tools that can accomplish this task [5,6].

Many overlapping concepts, such as patient satisfaction, perceptions, engagement, needs, participation and preference, are used to explain what is meant by patients' experience; however, most of these terms are not well defined and often lack conceptual clarity, which in turn challenges their accurate measurement [7]. Measuring these concepts independently is also problematic, as they are often too ill-defined to capture the associated complexities [[7], [8], [9]]. These concepts therefore need to be clearly defined and used to develop valid, reliable, acceptable and relevant tools to measure patients' experience in the hospital setting [10].

The unique environment of the emergency department (ED), which is usually chaotic and rushed, with little or no privacy, is indeed difficult for patients [11,12]. Increased patient volume, complex patient needs, lack of beds, intensive treatment within the ED, staff and space shortages, and language and cultural barriers all appear to influence patients' satisfaction with care in the ED (Nairn, 2004). This often stressful ED environment can compromise patients' ability to accurately describe their experience of care, as it may leave them with anxiety, fear and uncertainty [5,13,14]. The often noisy and unpredictable nature of the ED environment might also limit healthcare professionals' ability to assess patients' experience [14,15]. This scoping review aimed to identify and examine tools that measure patients' experience in the ED. Its objectives were to 1) describe the characteristics of each study and their sample and 2) describe the tools used to measure patients' experience in the ED.

Methods

A scoping review is done to synthesize evidence from a range of studies with the aim of identifying gaps in the existing literature about a phenomenon of interest [16]. This scoping review utilised Arksey and O'Malley's framework [17], as further developed by Levac, Colquhoun and O'Brien [18], as well as the Joanna Briggs Institute framework for scoping reviews [19]. The review also adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines [20]. The final protocol was registered prospectively with the Open Science Framework on November 26, 2019 at https://osf.io/bcqfg.

Search criteria

The review started by identifying the inclusion criteria, guided by the PCC (Population, Concept and Context) framework [19], that would produce documents relevant to the study objectives ( Table 1 ). The search strategy identified existing literature, both published and unpublished, in the following databases: PubMed, MEDLINE and CINAHL. The search terms used are listed in Box 1.

Table 1

PCC and inclusion and exclusion criteria.

Inclusion criteria:

Literature published from 1994 to 2018.

Studies, reports, and published or unpublished articles that focused on tools to measure adult patients' (18 years and older) experience in the ED.

Articles published in the English language.

Studies that focused on patients/clients in the ED.

Studies that reported on patient satisfaction, patient engagement and quality outcomes.

Review articles, including systematic reviews, meta-analyses, scoping reviews, peer-reviewed journal articles, rapid reviews, quantitative studies, pilot studies, letters and guidelines.

Grey literature sources such as documents from government and non-governmental organisations and academic dissertations.

Exclusion criteria:

Articles not published in the English language.

Articles published before January 1994 or after 2018.

Articles focusing on children.

Box 1

Search terms.

Search terms that were used in each database included:

Patient/client experience; patient/client satisfaction; patient AND participation; patient engagement AND perceptions; emergency department AND emergency room; tools AND questionnaire AND instruments.

(((tools[Title] OR instrument[Title] OR questionnaire[Title]) AND ((“patients”[MeSH Terms] OR “patients”[All Fields] OR “patient”[All Fields]) AND experience[All Fields])) AND ((“emergency service, hospital”[MeSH Terms] OR (“emergency”[All Fields] AND “service”[All Fields] AND “hospital”[All Fields]) OR “hospital emergency service”[All Fields] OR (“emergency”[All Fields] AND “department”[All Fields]) OR “emergency department”[All Fields]) OR (“emergency service, hospital”[MeSH Terms] OR (“emergency”[All Fields] AND “service”[All Fields] AND “hospital”[All Fields]) OR “hospital emergency service”[All Fields] OR (“emergency”[All Fields] AND “room”[All Fields]) OR “emergency room”[All Fields]))) NOT (“child”[MeSH Terms] OR “child”[All Fields] OR “children”[All Fields]).

(“trauma centers”[MeSH Terms] OR (“trauma”[All Fields] AND “centers”[All Fields]) OR “trauma centers”[All Fields] OR (“trauma”[All Fields] AND “unit”[All Fields]) OR “trauma unit”[All Fields]) OR (“emergency service, hospital”[MeSH Terms] OR (“emergency”[All Fields] AND “service”[All Fields] AND “hospital”[All Fields]) OR “hospital emergency service”[All Fields] OR (“accident”[All Fields] AND “emergency”[All Fields] AND “department”[All Fields]) OR “accident and emergency department”[All Fields]) OR (“emergency service, hospital”[MeSH Terms] OR (“emergency”[All Fields] AND “service”[All Fields] AND “hospital”[All Fields]) OR “hospital emergency service”[All Fields] OR (“casualty”[All Fields] AND “department”[All Fields]) OR “casualty department”[All Fields]) OR unit[All Fields] OR room[All Fields].

⁎ MeSH terms used in PubMed and MEDLINE.

Study selection: inclusion and exclusion criteria

All identified studies were exported into EndNote reference management software, and duplicates were identified and removed. The two authors independently applied the eligibility criteria in a first-level screening of the articles using titles and abstracts ( Table 1 ). Full-text articles were then used during a second, independent level of screening to determine which to include in the review. Disagreements regarding the eligibility of three articles were resolved by consensus. The same process was followed for both the database and hand-search articles retrieved. Refer to Fig. 1 for the PRISMA flow diagram of the selection process. A data extraction form was developed to extract study characteristics including: author(s), year of publication and country of study; research design; setting; sampling and sample size; name of the tool and number of question items; response format and scoring methods; validity and reliability; and limitations of the study. A narrative synthesis of the results was carried out to address the objectives.

Fig. 1

PRISMA Flow diagram of selection process.

Results

The first- and second-level screening (of both the electronic and hand searches) yielded 10 articles for inclusion, describing 9 different patient experience tools/questionnaires used in the ED; 7 of those articles came from the hand search.

International scope of studies

The results of the scoping review indicated that measuring patients' experience in the ED is a global concern, with 2 papers originating from the Netherlands [21,22], 1 from the United Kingdom and the Netherlands [23], 1 from Australia [24], 3 from Iran [[25], [26], [27]], 2 from Canada [4,28] and 1 from the United States of America [29].

Characteristics of the study and study participants

All of the articles describing these tools were published between 1999 and 2018, with the majority (n = 6) published between 2011 and 2014. All 10 studies utilised a cross-sectional survey design (see Table 2 ). Convenience sampling was used in 4 of the studies, random sampling in 3, systematic sampling in 1 and accidental sampling in 1, while 1 article did not report its sampling method. Sample sizes ranged from 103 to 128,350, with most studies collecting data in state urban and rural hospitals. One study focused on a vulnerable population consisting of elderly, mentally ill and homeless patients [28]. Regarding the mode of tool administration, in 5 studies participants completed face-to-face questionnaires while still in the hospital or in their homes 7–10 days after discharge; 4 studies collected data via a mail survey and 1 via mail as well as telephonically ( Table 2 ). Two studies had a mixed sample of patients and family members [26,27] who answered the same tool.

Table 2

Characteristics of the studies (N = 10).

For each study: author, year of publication and country; research design, setting, sampling and sample size; name of tool and number of question items; response format and scoring methods; validity and reliability; limitations.

Atari, M & Atari, M (2015), Iran
Design: survey (face-to-face questionnaire); urban hospital; convenience sampling; N = 301 (132 patients and 165 family members; 4 with missing data).
Tool: Brief Emergency Department Patient Satisfaction Scale (BEPSS), 24 question items.
Response format & scoring: 4-point Likert scale, each item scored from 4 (complete satisfaction) to 1 (complete dissatisfaction); total ED patient satisfaction score calculated by adding all item scores.
Validity and reliability: total alpha coefficient = 0.94; reliability coefficients of five of the domains range from 0.75 to 0.88; one-way ANOVA and t-test showed no significant difference (P > 0.05); face validity of each item evaluated by a panel of experts (2 hospital managers, 2 quality-improvement officers, 1 physician and 1 psychometrics expert).
Limitations: cultural diversity of the participants may have influenced the way they responded to the questionnaire.

Bos, N. et al. (2016), England and The Netherlands
Design: cross-sectional mail survey; urban and rural hospital EDs of NHS Trusts; random sampling; N = 43,892 (England) and N = 1,865 (The Netherlands).
Tool: National Health Service (NHS) questionnaire, 50 questions.
Response format & scoring: secondary analysis of the data from England and the Netherlands; overall mean of the experience scores computed for each item; a linear mixed-effects model used to examine the association between country and patients' experience.
Validity and reliability: internal consistency of each question item scored individually; Cronbach's α coefficients range from 0.634 to 0.877.
Limitations: differences in sample sizes, number of respondents and selection of hospitals, including variability in patient characteristics between the two countries, limit the generalisation of the findings.

Bos, N. et al. (2013), The Netherlands
Design: cross-sectional mail survey in urban and rural settings; systematic sampling; N = 128,350 in total (850 patients per trust across 151 trusts).
Tool: National Health Service (NHS) questionnaire, 50 questions.
Response format & scoring: Principal Components Analysis (one of three methods of grouping and summarising items tested) presented the best score reliability on six clear and interpretable composites: waiting time; doctors and nurses; your care and treatment; hygiene; information before discharge; and overall.
Validity and reliability: Cronbach's α range from 0.634 to 0.877.
Limitations: significant differences in age and sex between respondents and non-respondents limit generalisation of findings.

Bos, N. et al. (2015), The Netherlands
Design: cross-sectional mail survey; urban hospital; random sampling; N = 4,883.
Tool: Consumer Quality Index for the Accident and Emergency Department (CQI A&E), 78 questions.
Response format & scoring: 2-, 3- or 4-point Likert scales; response categories recoded from 1 to 4, summed and divided by the number of items in the domain (no/big problem/never = 1, sometimes = 2, bit of a problem = 2.5, usually = 3, yes/not a problem/always = 4).
Validity and reliability: domains internally consistent, with Cronbach's α of 0.67–0.84.
Limitations: recall bias related to language differences and patients' state of consciousness could influence the survey results and therefore their generalisability.

Chiu, H. et al. (2014), Canada
Design: survey (face-to-face and administered by mail) in 110 EDs across British Columbia, Canada; convenience sampling; N = 170 vulnerable patients who were elderly (aged 75 or above), had low income, were homeless or resided in unstable housing, or were disenfranchised with mental health/substance use issues.
Tool: Picker Canada Patient Experience Survey (condensed version), 9 questions + 1 open-ended question.
Response format & scoring: overall patient experience measured as the percentage of positive responses, calculated as the proportion of responses categorised as “positive” out of the total number of responses to the question; responses to the open-ended question coded as positive, negative, both positive and negative, or neutral.
Validity and reliability: not reported.
Limitations: administration of the survey alone does not provide sufficient information to guide quality improvement activities; patients' experience should be measured alongside other types of quality indicators to guide overall quality improvement and provide a balanced view of performance.

Davis, B. (1999), Australia
Design: descriptive (face-to-face survey); one rural and one urban hospital ED; convenience sampling; N = 103 patients.
Tool: Consumer Emergency Care Satisfaction Scale (CECSS), 17 question items + 2 open-ended questions.
Response format & scoring: 5-point Likert scale; individual item means (M) and standard deviations (SD) scored; the higher the score, the higher the level of satisfaction with ED nursing care; qualitative analysis of the two open-ended questions produced four and six themes respectively.
Validity and reliability: Cronbach's α coefficients range from 0.85 to 0.88.
Limitations: the small convenience sample and data collection in only one area of Australia limit the generalisation of findings.

Mohammadi-Sardo, M. R. & Salehi, S. (2018), Iran
Design: cross-sectional study; face-to-face at the hospital and at patients' homes; random sampling; N = 373.
Tool: Emergency Department Patient Satisfaction Assessment, 24 questions + 1 open-ended question.
Response format & scoring: 5-point Likert scale; items scored individually; total scores of items in each component classified as low, moderate or high; the higher the score, the higher the level of patient satisfaction.
Validity and reliability: face and content validity confirmed by experts in the field; tool found to be valid, with Cronbach's alpha of 0.995; translation validity assessed by an expert panel, rechecked by a linguist and piloted with 10 patients.
Limitations: some data were missing because they were collected in patients' homes after discharge; investigating social and cultural factors that could influence patient satisfaction would be helpful.

Sari, O. et al. (2011), Canada
Design: mail survey; sampling method not reported; N = 6,255.
Tool: Patient Experience Survey (PES); number of question items not indicated.
Response format & scoring: each item scored individually.
Validity and reliability: not reported.
Limitations: the survey had a lower response rate because the experiences of homeless and mentally ill patients were very difficult to capture.

Soleimanpour, H. et al. (2011), Iran
Design: cross-sectional study (face-to-face); ED of an Iranian hospital; accidental quota sampling; N = 500 (452 patients and 48 family members).
Tool: Press Ganey Questionnaire (PGQ), 30 questions.
Response format & scoring: total ED patient satisfaction score calculated by summing all item scores.
Validity and reliability: the PGQ is a valid tool; in a previous study, internal consistency reliability ranged from 0.79 to 0.96; content validity assessed by a team of ED experts and academic members.
Limitations: findings are difficult to generalise because of regional differences; time spent in the ED was not measured, and patients with different diagnoses might have different satisfaction rates.

Weinick, R. et al. (2014), USA
Design: cross-sectional (mail and telephone survey); convenience sampling; N = 829.
Tool: Centers for Medicare & Medicaid (Maryland, USA) Emergency Department Patient Experience of Care Survey, EDPEC (On Site/Admitted Stand-Alone), 29 questions.
Response format & scoring: each item rated from 0 to 10, where 0 is the worst and 10 the best care; total ED patient satisfaction score calculated by summing all item scores.
Validity and reliability: the instrument was found reliable with a score of 0.70; face validity evaluated by experts in the ED.
Limitations: further studies should be conducted since the results were based on pilot studies.
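The percentage-of-positive-responses summary used by the Picker Canada Patient Experience Survey in Table 2 can be sketched briefly; the coded responses below are invented for illustration, not data from that study.

```python
# Sketch of the "% positive responses" summary: each answer is first coded
# as positive, negative, both, or neutral, and overall experience is then
# reported as the share of positive codes. Example codes are illustrative.

coded_responses = ["positive", "positive", "negative", "neutral", "positive"]
positive_share = coded_responses.count("positive") / len(coded_responses)
print(f"{positive_share:.0%}")  # → 60%
```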

Description of the tools

The tools ranged in length from 9 to 78 items. The most frequently used response format was a Likert-type rating scale with four or five points (n = 7). When scoring a tool, authors typically summed all the individual items and then classified the total experience score as low, moderate or high. Only 3 tools also obtained responses through open-ended questions [24,25,28].
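As a minimal sketch of this summative scoring, the snippet below sums one patient's item ratings and bands the total into thirds of the possible range; the item count, 4-point scale and band cut-offs are illustrative assumptions, since the reviewed studies did not report uniform thresholds.

```python
# Sketch of the summative Likert scoring used by most of the reviewed tools:
# item ratings are summed, and the total is banded as low/moderate/high.
# The item count, 4-point scale and cut-offs are illustrative assumptions.

def total_experience_score(ratings, scale_max=4):
    """Sum individual item ratings (each 1..scale_max) into a total score."""
    if any(not 1 <= r <= scale_max for r in ratings):
        raise ValueError("rating outside the Likert scale range")
    return sum(ratings)

def classify(total, n_items, scale_max=4):
    """Band a total score into low/moderate/high by thirds of its range."""
    lo, hi = n_items * 1, n_items * scale_max  # minimum and maximum possible
    third = (hi - lo) / 3
    if total <= lo + third:
        return "low"
    if total <= lo + 2 * third:
        return "moderate"
    return "high"

ratings = [3, 4, 2, 4, 3, 4, 4, 3]  # one patient's ratings on an 8-item tool
total = total_experience_score(ratings)
print(total, classify(total, n_items=len(ratings)))  # → 27 high
```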

The tools were examined to identify the different patient experience domains being investigated; these domains are indicated in Table 3 . The most common domains covered by the 9 tools relate to patients' experience with the doctors and nurses and with their care and treatment, each appearing in 8 of the 9 tools. The least common (appearing in only 1 tool each) are the domains of patients' family and hospital environment and facilities. Only 4 domains (pain, tests, hospital environment and facilities, and patients' family) were included in fewer than half of the tools.

Table 3

Domains of the tools.

Name of the tool and total number of domains investigated:
National Health Service (NHS) Trust Questionnaire: 10
Emergency Department Patient Experience of Care (EDPEC) Survey (On Site/Admitted Stand-Alone): 6
Consumer Emergency Care Satisfaction Scale (CECSS): 7
Accident and Emergency Department Questionnaire (A&ED): 6
Picker Canada Patient Experience Survey (condensed version): 6
Press Ganey Questionnaire (PGQ): 5
Patient Experience Survey (PES): 5
Brief Emergency Department Patient Satisfaction Scale (BEPSS): 7
Emergency Department Patient Satisfaction Assessment (EDPSA): 8

Number of tools covering each domain: arrival in the ED, 8; waiting, 8; doctors and nurses, 8; your care and treatment, 8; pain, 4; tests, 3; medications, 5; hospital environment and facilities, 1; patients' family, 1; leaving the ED, 7; overall ED experience, 8.

Validity and reliability

Most of the developed tools were tested in a pilot study [[21], [22], [23], [24]]; however, for 2 of the tools, neither validity nor reliability was reported [4,28]. Four of the questionnaires were mailed to ED patients who had been recruited anonymously, to minimise bias [30]. The face validity of each question item in 3 tools was determined by a panel of experts in the field of emergency care and quality improvement [22]. Construct, translation and content validity were all tested in 1 study [25]. The internal consistency of 5 tools was tested for reliability, and the results were found to be acceptable, ranging from 0.634 to 0.995 [31]. Cronbach's alpha was used in 8 of the studies to gauge reliability. See Table 2 for more details.

Discussion

This scoping review aimed to identify and examine tools that measure patients' experience in the ED.

Characteristics of study and study participants

The tools measuring patients' experience in the ED were all developed and tested in various high-income countries, except for 2 tools that were developed in Iran. This highlights that measuring patients' experience in the ED is a global concern; however, research conducted in low-to-middle-income countries is limited and such research in Africa appears to be absent.

The fact that all the studies utilised a cross-sectional survey design was not surprising, as this design is generally quick, easy and cheap to perform. It also limits loss to follow-up in the context of a busy and chaotic ED environment, as participants are usually accessed only once [32]. It is also important to note that only 4 of the studies made use of probability sampling, the advantage being that such samples are more representative of the target population [33]. The use of convenience sampling calls the quality of those studies into question, since this method is highly vulnerable to sampling error and to response and selection bias [34]. Notably, most articles included in this review were obtained through hand searching. This is due to the fact that some studies may be published in journals not included in the electronic databases used, or may not have included wording in the title and abstract, or been indexed with terms, that allowed them to be easily identified [35].

Regarding the mode and timing of tool administration, some of the studies administered face-to-face questionnaires while the patient was still in the hospital or in their homes shortly after discharge, while a number of studies made use of mail surveys to collect their data. It is important to consider which method is most appropriate for administering such tools in the challenging ED environment. The pen-and-paper method may not be suitable in a highly transitory ED, and leveraging technology and social media platforms to obtain information on patients' experience may be quicker and easier than traditional survey administration. The WhatsApp Messenger application, which has been found very useful in Africa, could be an avenue to obtain real-time responses from patients regarding their ED experience [36,37].

It is also important to determine the best time to administer the tool. Administering it while the patient is still in the ED might delay treatment or have health consequences for the patient, and the ever-busy ED environment might not be conducive to such an exercise; administering it after the ED visit might reduce the amount of information patients can remember. Question items in a patient experience tool should therefore be limited, concise and not time-consuming for patients to complete.

Description of the tools used in measuring patients' experience

Nine different tools were identified in this scoping review. The NHS Trust Questionnaire, adapted from the United Kingdom and used in The Netherlands, revealed significant differences between the results obtained in the two countries with the same tool. The language and wording of question items may have limited the interpretation of the questions and therefore produced different results in the different cultural settings. The Press Ganey Questionnaire, used in Iran, was adapted from the United States; the cultural diversity of the participants may have influenced the way they responded to the questionnaire. Differences in patient characteristics may make it difficult to adapt the tools across different countries and to generalise findings because of regional differences [38,39]. The most frequently used response format for answering the tools was a Likert-type rating scale. This method is quick, efficient and inexpensive for data collection, and such scales can be sent through the mail, over the internet, or given in person [40].

Only 3 studies included some open-ended, qualitative questions; future research should consider incorporating such questions to allow for more in-depth information that could help guide decisions and policies about health care and social welfare [34]. The internal consistency of most of the tools was measured using Cronbach's α, thereby supporting their psychometric evaluation [41]. Reporting the validity and reliability of the tools is essential, as this can guide fellow researchers in evaluating a tool [42]. It is important to consider that administering a patient experience tool alone does not provide sufficient information to guide quality improvement activities; patients' experience should be measured alongside other types of quality indicators to guide overall quality improvement and provide a balanced view of performance [43].

Study limitation

The first screening was done by reading the title and browsing the abstract; the precision of this procedure is thus entirely dependent on the terminology used in the titles and abstracts, and there is a risk that relevant articles may have been overlooked for this reason. There may also be questionnaires used by governments and organisations that our internet searches did not find. Studies not indexed in PubMed, CINAHL and MEDLINE may have been missed, as well as studies using different terminology in the title and abstract. Only literature published in the English language was used for this review, and thus research published in other languages was not included. Finally, many overlapping concepts such as patient satisfaction, perceptions, engagement, needs, participation and preference are used to explain what is meant by patients' experience; most of these terms are not well defined and often lack conceptual clarity, which in turn challenges their accurate measurement.

Implications for practice and conclusion

Measuring patients' experience in the ED is a global concern; however, there appears to be a limited number of tools to measure patients' experience in the ED, especially within low-to-middle-income countries and in Africa. Getting consumers of care to evaluate their experience may help healthcare personnel to identify discrepancies in care and plan possible strategies to address them.

Authors' contribution

The authors contributed as follows to the conception or design of the work; the acquisition, analysis, or interpretation of data for the work; and the drafting or critical revision of the work for important intellectual content: YO contributed 50% and PB contributed 50%. All authors approved the version to be published and agreed to be accountable for all aspects of the work.

Declaration of competing interest

The authors declare no conflict of interest.

References

1. Browne K., Roseman D., Shaller D., Edgman-Levitan S. Measuring patient experience as a strategy for improving primary care. Health Aff. 2010; 29 (5):921–925. doi: 10.1377/hlthaff.2010.0238. [PubMed] [CrossRef] [Google Scholar]

2. Junewicz A., Youngner S.J. Patient-satisfaction surveys on a scale of 0 to 10: improving health care, or leading it astray? Hastings Center Report. 2015; 45 (3):43–51. http://10.1002/hast.453 [PubMed] [Google Scholar]

3. Ahmed F., Burt J., Roland M. Measuring patient experience: concepts and methods. The Patient. 2014; 7 (3):235–241. doi: 10.1007/s40271-014-0060-5. [PubMed] [CrossRef] [Google Scholar]

4. Sari O., Sidhu N., Wohlgemuth N. In: Patients’ experiences with emergency care in Saskatchewan hospitals, Saskatoon. Council H.Q., editor. Saskatchewan Health Council; Saskatoon: 2011. pp. 1–13. [Google Scholar]

5. Kash B., McKahan M. The evolution of measuring patient satisfaction. Journal of Primary Health Care and General Practice. 2017; 1 (1):1–4. [Google Scholar]

6. Al-Abri R., Al-Balushi A. Patient satisfaction survey as a tool towards improvement. Oman Med J. 2014; 29 (1):3–7. http://10.5001/omj.2014.02 [PMC free article] [PubMed] [Google Scholar]

7. Berkowitz B. The patient experience and patient satisfaction: measurement of a complex dynamic. Online J Issues Nurs. 2016; 21 (1):1–12. doi: 10.3912/OJIN.Vol21No01Man01. [PubMed] [CrossRef] [Google Scholar]

8. LaVela S.L., Gallan A.S. Evaluation and measurement of patient experience. Patient Experience Journal. 2014; 1 (1):28–36. http://pxjournal.org/journal/vol1/iss1/5 [Google Scholar]

9. Graham B., Green A., James M., Katz J., Swiontkowski M. Measuring patient satisfaction in orthopaedic surgery. Journal of Bone and Joint Surgery. 2015; 97 (1):80–84. doi: 10.2106/JBJS.N.00811. [PubMed] [CrossRef] [Google Scholar]

10. Beattie M., Murphy D.J., Atherton I., Lauder W. Instruments to measure patient experience of healthcare quality in hospitals: a systematic review. BioMed Central. 2015; 4 (97):1–21. doi: 10.1186/s13643-015-0089-0. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

11. Brysiewicz P., Emmamally W. Focusing on families in the emergency department. Int Emerg Nurs. 2017; 30 :1–2. doi: 10.1016/j.ienj.2016.08.002. [PubMed] [CrossRef] [Google Scholar]

12. Cypress B.S. The emergency department: experiences of patients, families, and their nurses. Adv Emerg Nurs J. 2014; 36 (2):164–176. http://10.1097/TME.0000000000000017 [PubMed] [Google Scholar]

13. Iwanowski P., Budaj A., Członkowska A., Wąsek W., Kozłowska-Boszko B., Olędzka U. Informed consent for clinical trials in acute coronary syndromes and stroke following the European Clinical Trials Directive: investigators’ experiences and attitudes. Trials. 2008; 9 (1) doi: 10.1186/1745-6215-9-45. 1 of 6. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

14. Wilets I., O’Rourke M., Nassisi D. How patients and visitors to an urban emergency view clinical research. Acad Emerg Med. 2003; 10 (10):1080–1085. doi: 10.1197/S1069-6563(03)00352-X. [PubMed] [CrossRef] [Google Scholar]

15. Flynn D., Knoedler A., Hess E.P., Murad H., Erwin P.J., Montori V.M. Engaging patients in healthcare decisions in the emergency department through shared decision-making: a systematic review. Acad Emerg Med. 2012; 19 (8):959–967. http://10.1111/j.1553-2712.2012.01414.x [PubMed] [Google Scholar]

16. O"Brien K., Colquhoun H.L., Levac D., Baxter L., Tricco A.C., Straus S.E. Advancing scoping study methodology: a web-based survey and consultation of perceptions on terminology, definition and methodological steps. BMC Health Serv Res. 2016; 16 :1–12. doi: 10.1186/s12913-016-1579-z. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

17. Arksey H., O'Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005;8(1):19–32. doi: 10.1080/1364557032000119616.

18. Levac D., Colquhoun H.L., O'Brien K.K. Scoping studies: advancing the methodology. Implementation Science. 2010;5(69):1–9. doi: 10.1186/1748-5908-5-69.

19. The Joanna Briggs Institute. The Joanna Briggs Institute reviewers' manual: methodology for JBI scoping reviews. Joanna Briggs Institute; 2015.

20. Moher D., Liberati A., Tetzlaff J., Altman D.G., The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. doi: 10.1371/journal.pmed.1000097.

21. Bos N., Sizmur S., Graham C., van Stel H.F. The accident and emergency department questionnaire: a measure for patients' experiences in the accident and emergency department. BMJ Quality & Safety. 2013;22(2):139–146. doi: 10.1136/bmjqs-2012-001072.

22. Bos N., Sturms L.M., Stellato R.K., Schrijvers A.J.P., Stel H.F. The consumer quality index in an accident and emergency department: internal consistency, validity and discriminative capacity. Health Expect. 2015;18(5):1426–1438. doi: 10.1111/hex.12123.

23. Bos N., Seccombe I.J., Sturms L.M., Stellato R., Schrijvers A.J.P., Stel H.F. A comparison of the quality of care in accident and emergency departments in England and the Netherlands as experienced by patients. Health Expect. 2016;19(3):773–784. doi: 10.1111/hex.12282.

24. Davis B.A., Duffy E. Patient satisfaction with nursing care in a rural and an urban emergency department. Australian Journal of Rural Health. 1999;7(2):97–103.

25. Mohammadi-Sardo M.R., Salehi S. Emergency department patient satisfaction assessment using modified SERVQUAL model; a cross-sectional study. Advanced Journal of Emergency Medicine. 2018;3(1):1–6. doi: 10.22114/ajem.v0i0.107.

26. Atari M., Atari M. Brief Emergency Department Patient Satisfaction Scale (BEPSS); development of a new practical instrument. Emergency. 2015;3(3):103–108.

27. Soleimanpour H., Gholipouri C., Salarilak S., Raoufi P., Vahidi R.G., Rouhi A.J. Emergency department patient satisfaction survey in Imam Reza hospital, Tabriz, Iran. Int J Emerg Med. 2011;4(1):2.

28. Chiu H., Batara N., Stenstrom R., Carley L., Jones C. Feasibility of using emergency department patient experience surveys as a proxy for equity of care. Patient Experience Journal. 2014;1(2):78–86. https://pxjournal.org/journal/vol1/iss2/13

29. Weinick R.M., Becker K., Parast L., Stucky B.D., Elliott M.N., Mathews M. Emergency department patient experience of care survey: development and field test. Rand Health Q. 2014;4(3):5.

30. Boudreaux E.D., Cruz B.L., Baumann B.M. The use of performance improvement methods to enhance emergency department patient satisfaction in the United States: a critical review of the literature and suggestions for future research. Acad Emerg Med. 2006;13(7):795–802. doi: 10.1197/j.aem.2006.01.031.

31. Taber K.S. The use of Cronbach's alpha when developing and reporting research instruments in science education. Research in Science Education. 2018;48(6):1273–1296. doi: 10.1007/s11165-016-9602-2.

32. Sedgwick P. Cross-sectional studies: advantages and disadvantages. BMJ. 2014;348:g2276. doi: 10.1136/bmj.g2276.

33. Elfil M., Negida A. Sampling methods in clinical research; an educational review. Emergency. 2017;5(1):e52.

34. Etikan I., Musa S.A., Alkassim R.S. Comparison of convenience sampling and purposive sampling. Am J Theor Appl Stat. 2016;5(1):1–4. doi: 10.11648/j.ajtas.20160501.11.

35. Dickersin K., Scherer R., Lefebvre C. Systematic reviews: identifying relevant studies for systematic reviews. BMJ. 1994;309(6964):1286–1291. doi: 10.1136/bmj.309.6964.1286.

36. Mars M., Scott R.E. WhatsApp in clinical practice: a literature review. In: Maeder A.J., Ho K., Marcelo A., Warren J., editors. The promise of new technologies in an age of new health challenges. IOS Press BV; New Zealand: 2016. pp. 82–90.

37. WhatsApp in Africa: statistics & business potential [Internet]. MessengerPeople; 2019 [cited 2019 Nov 14]. Available from: https://www.messengerpeople.com/whatsapp-in-africa/

38. Lee J., Tran T.-T., Lee K.-P. Cultural difference and its effects on user research methodologies. Berlin, Heidelberg: Springer Berlin Heidelberg; 2007.

39. Psychologist World [Internet]. United Kingdom: Psychologist World; 2019 [cited 2019]. Available from: https://www.psychologistworld.com/issues/cultural-differences-psychology

41. Price R.A., Elliott M.N., Zaslavsky A.M., Hays R.D., Lehrman W.G., Rybowski L. Examining the role of patient experience surveys in measuring health care quality. Medical Care Research and Review. 2014;71(5):522–524. doi: 10.1177/1077558714541480.

42. Bolarinwa O. Principles and methods of validity and reliability testing of questionnaires used in social and health science researches. Nigerian Postgraduate Medical Journal. 2015;22(4):195–201. doi: 10.4103/1117-1936.173959.

Articles from African Journal of Emergency Medicine are provided here courtesy of African Federation for Emergency Medicine