
Hospital Performance: Time patients spent in emergency departments in 2011–12


Fair comparisons of hospitals and their EDs

Peer groups

Decision

NEAT performance reporting will use the AIHW Revision 1 method for peer classification. The analysis resulted in the following groups being defined:

  • Major metropolitan hospitals
  • Major regional hospitals
  • Large metropolitan hospitals
  • Large regional hospitals
  • Medium hospitals.

Further work on the impact of adjustment on hospital results is being conducted by the Authority. This analysis will determine the difference between adjusted and non-adjusted relative performance of individual hospitals within their peer group and inform our approach to standardisation.

Evidence

To facilitate fair comparisons between hospitals, the Authority investigated existing and revised hospital peer classifications for NEAT reporting. The following classification options were investigated for analysis and reporting:

  • ED role delineation using the Australasian College for Emergency Medicine (ACEM)5 role delineation framework
  • ED role delineation by the Independent Hospital Pricing Authority (IHPA)6
  • Public hospitals peer groups by AIHW7
  • Revision 1 of AIHW public hospitals peer groups with the following amendments:
    • Separation of Principal Referral (A1) hospitals into metropolitan and regional locations using Australian Standard Geographical Classification (ASGC) Remoteness Area, 2006
    • Exclusion of specialist women’s and children’s hospitals (A2)
    • Exclusion of specialist eye or eye and ear hospitals
    • Exclusion of hospitals with EDs with <20,000 presentations in 2011–12.
  • Revision 2 of AIHW public hospitals peer groups with the following amendments:
    • Separation of Principal Referral (A1) hospitals into metropolitan and regional locations using Australian Standard Geographical Classification (ASGC) Remoteness Area, 2006
    • Combination of Large Regional (B2) and Medium (C1) hospitals
    • Exclusion of specialist women’s and children’s hospitals (A2)
    • Exclusion of specialist eye or eye and ear hospitals
    • Exclusion of hospitals with EDs with <20,000 presentations in 2011–12.

To determine which peer group classification best supports fair comparisons between hospitals on time spent in EDs, an analysis was undertaken to establish which classification was most related to, or predictive of, relative performance. This analysis is best done using ‘goodness of fit’ statistics.

A goodness of fit statistic (R² value) was calculated for each peer group classification option to identify the optimal peer classification.

R², which indicates how well the data fit a statistical model, is 1 minus the residual sum of squares for hospitals within peer groups divided by the total sum of squares for all hospitals. It was calculated using the following formula:

$$
R^2 = 1 - \frac{\sum_{i=1}^{k}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}_i\right)^2}{\sum_{i=1}^{k}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}\right)^2}
$$

where

$k$ is the number of groups in the classification;
$n_i$ is the number of hospitals in the $i$th group;
$y_{ij}$ is the percentage of NEAT for the $j$th hospital in the $i$th group;
$\bar{y}_i$ is the average percentage of NEAT for the $i$th group;
$\bar{y}$ is the average percentage of NEAT for all hospitals.
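
As an illustration of this calculation only, the sketch below (Python with pandas) computes the grouped R² from hospital-level data; the column names, peer group labels and values are hypothetical and are not drawn from the source data.

```python
import pandas as pd

def grouped_r_squared(df, group_col="peer_group", neat_col="neat_pct"):
    """R^2 for a peer classification: 1 minus the within-group sum of
    squares divided by the total sum of squares across all hospitals."""
    overall_mean = df[neat_col].mean()
    group_means = df.groupby(group_col)[neat_col].transform("mean")
    ss_within = ((df[neat_col] - group_means) ** 2).sum()
    ss_total = ((df[neat_col] - overall_mean) ** 2).sum()
    return 1 - ss_within / ss_total

# Hypothetical hospital-level data: one row per hospital, with its peer
# group and its percentage of ED patients seen within four hours (NEAT).
hospitals = pd.DataFrame({
    "peer_group": ["Major metropolitan", "Major metropolitan",
                   "Major regional", "Major regional", "Medium", "Medium"],
    "neat_pct":   [61.0, 58.0, 70.0, 66.0, 79.0, 73.0],
})
print(f"R^2 = {grouped_r_squared(hospitals):.0%}")
```

A higher R² means that more of the variation in NEAT performance lies between peer groups rather than between hospitals within the same group.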

Table 1 shows R² for each peer classification option. It can be seen from Table 1 that AIHW Revision 1 and AIHW Revision 2 have the largest R² value (48%), indicating the greatest variation between the hospital groups within the classification and the least variation between hospitals within those groups. At this stage of development of NEAT reporting, R² is used to understand and quantify the variation between the hospital groups for each approach to peer group classification, as well as the variation within those groups. During this analysis, hospitals with fewer than 20,000 presentations in 2011–12 were excluded. The rationale was to allow fair comparison of methods and because ACEM has not assigned an ED role delineation to any ED with fewer than 20,000 presentations.

AIHW Revision 1 was the preferred option as it has an equally high R² value but greater flexibility for reporting, because the large regional hospitals and medium hospitals peer groups are not combined. AIHW Revision 1 includes 122 peer-grouped hospitals; these hospitals and their peer groups are listed in Appendix 1.

Table 1: R² values for peer classification options

Peer classification option | ACEM | IHPA | AIHW | AIHW Revision 1 | AIHW Revision 2
R² value | 22% | 22% | 41% | 48% | 48%

Source: National Non-admitted Patient Emergency Department Care Database 2011–12, data extracted 5 November 2012.

An important finding of this work is that the type of peer group hospital in which an emergency department is located is highly related to, or predictive of, the length of time in ED as measured against the four-hour NEAT. That is, from a statistical perspective, half the variation in the performance of hospital EDs on this indicator relates to the type of hospital. One-fifth of the variation relates to the type of ED. This finding suggests that the four-hour NEAT indicator is highly associated with whole-of-hospital flow of patients.

The Authority also investigated the effect of including risk-adjustment for each peer classification option. Based on expert clinical advice and recent literature8, triage casemix, the percentage of admitted patients and the percentage of patients transferred to other EDs were identified as factors that may affect NEAT performance.

Table 2 shows R² for each peer classification option as additional casemix variables are added. R² from the linear regression models is 1 minus the residual sum of squares from the fitted model divided by the total sum of squares for all hospitals, and was calculated using the following formula:

$$
R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}
$$

where

$n$ is the number of hospitals;
$y_i$ is the percentage of NEAT for the $i$th hospital;
$\hat{y}_i$ is the estimated percentage of NEAT for the $i$th hospital from the linear model;
$\bar{y}$ is the average percentage of NEAT for all hospitals.
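
As a rough sketch of how such incremental models might be fitted, the example below (Python, using pandas and statsmodels, with entirely synthetic data) computes R² for a peer-group-only model and for a model with casemix covariates added; the column names and values are assumptions for illustration only, not the Authority's modelling code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: 30 hospitals across three peer groups, with casemix
# variables drawn at random purely for illustration.
rng = np.random.default_rng(0)
n = 30
df = pd.DataFrame({
    "peer_group": rng.choice(
        ["Major metropolitan", "Major regional", "Medium"], size=n),
    "pct_admitted": rng.uniform(15, 40, size=n).round(1),
    "pct_triage_345": rng.uniform(70, 95, size=n).round(1),
})
# Synthetic NEAT outcome loosely related to the predictors, plus noise.
group_effect = df["peer_group"].map(
    {"Major metropolitan": 60, "Major regional": 68, "Medium": 78})
df["neat_pct"] = (group_effect
                  - 0.3 * df["pct_admitted"]
                  + rng.normal(0, 5, size=n)).round(1)

# Incremental linear models mirroring the structure of Table 2.
base = smf.ols("neat_pct ~ C(peer_group)", data=df).fit()
adjusted = smf.ols(
    "neat_pct ~ C(peer_group) + pct_triage_345 + pct_admitted",
    data=df,
).fit()

print(f"Peer classification only:  R^2 = {base.rsquared:.0%}")
print(f"Peer + casemix variables:  R^2 = {adjusted.rsquared:.0%}")
```

In this setup, the difference between the two R² values shows how much additional between-hospital variation the casemix variables account for, which is the comparison reported down the rows of Table 2.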

Table 2: Goodness of fit (R²) for peer classification options with incremental casemix investigation

Casemix variables used for analysis | ACEM | IHPA | AIHW | AIHW Revision 1 | AIHW Revision 2
Peer classification method only | 22% | 22% | 41% | 48% | 48%
Peer + percentage of ED patients admitted | 43% | 40% | 49% | 52% | 52%
Peer + percentage of triage 3+4+5 patients combined | 31% | 30% | 46% | 50% | 50%
Peer + percentage of triage 3+4+5 patients combined + percentage of ED patients admitted | 46% | 43% | 51% | 53% | 53%
Peer + percentage of triage 3, 4, 5 (as separate variables) | 33% | 32% | 47% | 51% | 51%
Peer + percentage of triage 3, 4, 5 (as separate variables) + percentage of ED patients admitted | 48% | 45% | 52% | 54% | 54%
Peer + percentage of triage 3, 4, 5 (as separate variables) + percentage of ED patients admitted + percentage of ED patients transferred | 49% | 46% | 53% | 55% | 55%

Note: Addition of the percentage of triage 1 & 2 presentations as variables did not significantly affect goodness of fit so these variables were not included in the analyses.

Source: National Non-admitted Patient Emergency Department Care Database 2011–12, data extracted 5 November 2012.

While addition of casemix adjustment does result in a large improvement for the two ED role delineation classifications and, to a lesser extent, the traditional AIHW peer group classification, there was only a marginal improvement in R² observed for the two revised AIHW classifications. In addition, the variables used for risk-adjustment were found to have a complex relationship with NEAT performance depending on the peer classification of the hospital. Based on these findings, a decision has been made to use the AIHW Revision 1 method for reporting NEAT performance in December 2012 but not to use casemix adjustment until a more detailed assessment has been completed.

Specialist hospitals

Decision

For reporting NEAT, the Authority has used a revised version of the AIHW public hospitals peer classification (Revision 1) that excludes all specialist hospitals, including women’s and children’s hospitals. Given the differences in role and casemix of these hospitals, the Authority has not directly compared the specialist hospitals on NEAT performance. Because the Authority will not directly compare these facilities, those with fewer than 20,000 presentations were not excluded.

Evidence

The Authority investigated the effect that the proportion of paediatric presentations to ED might have on NEAT performance. Expert clinicians advised the Authority that the way children are managed in EDs can be different from adults, and that hospitals and jurisdictions may vary with regard to formal and informal policies for managing children in EDs.

Figure 1 presents the percentage of paediatric cases among all ED presentations and performance against the NEAT indicator. The data does not demonstrate a direct relationship between the percentage of paediatric patients and NEAT performance, but two distinct clusters can be seen. The densest cluster of hospitals has a relatively low proportion of presentations defined as paediatric (around one-third or fewer) and the widest range of performance against NEAT. A second cluster of facilities has in excess of 90% of presentations defined as paediatric, again demonstrating a wide NEAT distribution; this second cluster comprises the children’s hospitals. However, some of the specialist hospitals (women’s and others) have very low percentages of paediatric patients and, even after risk adjustment, should not be compared with the children’s hospitals.

This reinforces the decision not to casemix adjust NEAT performance until additional research has been conducted. The hospitals making up this specialist group will be reported separately from non-specialist hospitals and will not be presented in a way that facilitates comparisons within the specialist group.
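
As a rough illustration only, the sketch below shows how the two quantities plotted in Figure 1 could be derived from presentation-level records; the field names and the under-18 cut-off for a paediatric presentation are assumptions for this example, not the definitions used in the National Non-admitted Patient Emergency Department Care Database.

```python
import pandas as pd

# Hypothetical presentation-level extract; the column names (hospital_id,
# age_years, minutes_in_ed) and the under-18 paediatric definition are
# assumptions made for illustration.
presentations = pd.DataFrame({
    "hospital_id":   ["H1", "H1", "H1", "H2", "H2", "H2"],
    "age_years":     [4, 35, 62, 7, 9, 2],
    "minutes_in_ed": [180, 300, 230, 150, 260, 200],
})

summary = (
    presentations
    .assign(paediatric=lambda d: d["age_years"] < 18,
            within_4h=lambda d: d["minutes_in_ed"] <= 240)
    .groupby("hospital_id")
    .agg(pct_paediatric=("paediatric", "mean"),
         pct_within_4h=("within_4h", "mean"))
    .mul(100).round(0)
)
print(summary)  # one row per hospital, as summarised in Figure 1
```

Plotting pct_paediatric against pct_within_4h for all hospitals reproduces the kind of scatter summarised in Figure 1.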

Figure 1: Percentage of paediatric ED presentations by hospital, 2011–12

Source: National Non-admitted Patient Emergency Department Care Database 2011–12, data extracted 5 November 2012.

Category Percentage of paediatric presentations Percentage of presentations within 4 hours
Children's Hospital 98 80
Children's Hospital 97 76
Children's Hospital 97 94
Children's Hospital 96 59
Children's Hospital 96 71
Children's Hospital 94 70
Women's and Children's Hospital 89 83
Women's Hospital 4 86
Women's Hospital 4 92
Other Hospital 31 41
Other Hospital 30 61
Other Hospital 28 53
Other Hospital 28 65
Other Hospital 28 69
Other Hospital 26 83
Other Hospital 25 46
Other Hospital 25 85
Other Hospital 25 69
Other Hospital 24 46
Other Hospital 24 80
Other Hospital 24 52
Other Hospital 24 71
Other Hospital 24 65
Other Hospital 24 65
Other Hospital 24 68
Other Hospital 24 48
Other Hospital 24 88
Other Hospital 24 66
Other Hospital 23 90
Other Hospital 23 80
Other Hospital 23 87
Other Hospital 23 93
Other Hospital 23 73
Other Hospital 23 79
Other Hospital 23 87
Other Hospital 23 72
Other Hospital 23 61
Other Hospital 23 54
Other Hospital 23 58
Other Hospital 23 76
Other Hospital 22 64
Other Hospital 22 69
Other Hospital 22 92
Other Hospital 22 53
Other Hospital 22 49
Other Hospital 22 79
Other Hospital 22 69
Other Hospital 22 77
Other Hospital 22 83
Other Hospital 22 59
Other Hospital 22 71
Other Hospital 21 57
Other Hospital 21 70
Other Hospital 21 77
Other Hospital 21 68
Other Hospital 21 58
Other Hospital 21 53
Other Hospital 21 73
Other Hospital 21 72
Other Hospital 21 75
Other Hospital 21 58
Other Hospital 21 61
Other Hospital 21 54
Other Hospital 20 79
Other Hospital 20 42
Other Hospital 20 55
Other Hospital 20 52
Other Hospital 20 61
Other Hospital 20 53
Other Hospital 20 62
Other Hospital 20 48
Other Hospital 20 53
Other Hospital 20 47
Other Hospital 20 60
Other Hospital 20 61
Other Hospital 20 61
Other Hospital 20 50
Other Hospital 19 71
Other Hospital 19 65
Other Hospital 19 81
Other Hospital 19 62
Other Hospital 19 56
Other Hospital 19 56
Other Hospital 19 45
Other Hospital 19 54
Other Hospital 19 45
Other Hospital 19 66
Other Hospital 19 81
Other Hospital 19 74
Other Hospital 19 83
Other Hospital 18 58
Other Hospital 18 59
Other Hospital 18 51
Other Hospital 18 57
Other Hospital 18 44
Other Hospital 18 52
Other Hospital 18 60
Other Hospital 17 63
Other Hospital 17 62
Other Hospital 17 54
Other Hospital 17 36
Other Hospital 17 82
Other Hospital 17 47
Other Hospital 16 55
Other Hospital 16 70
Other Hospital 16 63
Other Hospital 16 53
Other Hospital 14 67
Other Hospital 13 47
Other Hospital 13 49
Other Hospital 12 64
Other Hospital 12 74
Other Hospital 9 55
Other Hospital 9 37
Other Hospital 8 66
Other Hospital 7 65
Other Hospital 6 55
Other Hospital 6 76
Other Hospital 5 84
Other Hospital 3 55
Other Hospital 2 87
Other Hospital 1 41
Other Hospital 1 64
Other Hospital 1 58
Other Hospital 1 33
Other Hospital 0 52
Other Hospital 0 54
Other Hospital 0 72
Other Hospital 0 68
Other Hospital 0 39
Other Hospital 0 47
Other Hospital 0 52
Other Hospital 0 50
Other Hospital 0 63


Source: National Non-admitted Patient Emergency Department Care Database 2011–12, data extracted 5 November 2012.

5. Australasian College for Emergency Medicine, https://www.acem.org.au/ - viewed online 1 October 2012. Role delineations for EDs that had not been assigned by ACEM were assigned by the Authority using a data-driven method.

6. Independent Hospital Pricing Authority, https://www.ihpa.gov.au/publications/three-year-data-plan-2015-16-2017-18 - viewed online 1 October 2012.

7. Australian Institute of Health and Welfare, https://www.aihw.gov.au/reports/hospitals/australian-hospital-statistics-2010-11 - viewed online 1 October 2012.

8. Green J and Hall J (2012). The comparability of emergency department waiting time performance data. Medical Journal of Australia, 197(6): 345-348. Eagar K, Dawber J, Masso M, Bird S and Green J (2011). Emergency Department Performance by States and Territories. Centre for Health Service Development, University of Wollongong.