• HYPOTHESIS AND BACKGROUND
    • The classification and treatment of acromioclavicular (AC) joint dislocations remain controversial. The purpose of this study was to determine the interobserver and intraobserver reliability of the Rockwood classification system. We hypothesized that the system would show poor interobserver and intraobserver reliability, limiting its role in determining the severity of AC joint dislocations and in accurately guiding treatment decisions.
  • METHODS
    • We identified 200 patients with AC joint injuries using the International Classification of Diseases, Ninth Revision code 831.04. Fifty patients met the inclusion criteria. Deidentified radiographs were compiled and presented to 6 fellowship-trained upper extremity orthopedic surgeons. The surgeons classified each patient into 1 of the 6 classification types described by Rockwood. A second review was performed several months later by 2 of the surgeons. κ values were calculated to determine the interobserver and intraobserver reliability.
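    • The abstract does not state which κ variant was used; as an illustrative sketch only, Cohen's κ (appropriate for the two-timepoint intraobserver comparison) can be computed from two lists of Rockwood type assignments. The function name and sample ratings below are hypothetical, not from the study data:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    n = len(ratings_a)
    # Observed agreement: fraction of cases given the same type both times.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rating's marginal frequencies,
    # summed over the categories (here, Rockwood types I-VI).
    count_a = Counter(ratings_a)
    count_b = Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: six radiographs typed twice by the same surgeon.
first_review = [1, 1, 2, 2, 3, 3]
second_review = [1, 1, 2, 2, 3, 1]
kappa = cohens_kappa(first_review, second_review)  # -> 0.75
```

    • For the six-surgeon interobserver comparison, a multi-rater statistic such as Fleiss' κ would be the usual choice; the two-rater form above is shown only because it is the simplest to follow.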
  • RESULTS
    • The interobserver and intraobserver κ values were fair (κ = 0.278) and moderate (κ = 0.468), respectively. In the interobserver analysis, only 4 of the 50 radiographic images received a unanimous classification. In the intraobserver analysis, the first surgeon rated 18 of the 50 images identically on second review, and the second surgeon rated 38 of the 50 images identically.
  • CONCLUSION
    • We found that the Rockwood classification system has limited interobserver and intraobserver reliability. We believe that unreliable classification may account for some of the inconsistent treatment outcomes among patients with similarly classified injuries. We suggest that a more reliable radiographic classification system is needed to guide the diagnosis and treatment of AC joint dislocations.