Research Article
Vol. 5, Issue 1, 2024 • April 19, 2024 EDT

Is the Interpretation of Radiographic Knee Arthritis Consistent Between Orthopaedic Surgeons and Radiologists?

Justin A. Magnuson, MD, Nihir Parikh, Francis Sirch, Justin R. Montgomery, MD, Raja N. Kyriakos, MD, Arjun Saxena, MD, and Andrew M. Star, MD
Osteoarthritis • Total Knee Arthroplasty • Musculoskeletal Radiology
CC BY-NC-ND 4.0 • https://doi.org/10.60118/001c.91022
J Orthopaedic Experience & Innovation
Magnuson, Justin A., Nihir Parikh, Francis Sirch, Justin R. Montgomery, Raja N. Kyriakos, Arjun Saxena, and Andrew M. Star. 2024. “Is the Interpretation of Radiographic Knee Arthritis Consistent Between Orthopaedic Surgeons and Radiologists?” Journal of Orthopaedic Experience & Innovation 5 (1). https://doi.org/10.60118/001c.91022.

Abstract

Background

Knee radiographs are often examined independently by both radiologists and orthopaedic surgeons when evaluating osteoarthritis (OA). While multiple systems have been described, formal classification systems are infrequently used in clinical practice and documentation. Instead, providers commonly describe knee OA on radiographs as “mild,” “moderate,” or “severe,” with loose and unclear interpretations. From a patient’s perspective, inconsistent reading and charting of knee OA severity can have financial and psychological implications, such as prior authorization denial and anxiety-provoking uncertainty about the diagnosis. The purpose of this study was to investigate the agreement between orthopaedic surgeons, musculoskeletal radiologists, and general radiologists on the severity and location of knee OA.

Methods

One hundred five deidentified radiographs of patients presenting with knee pain were obtained. Anteroposterior (AP) and lateral radiographs were reviewed independently by two high-volume arthroplasty surgeons, two musculoskeletal radiologists, and two general radiologists. Each radiograph was classified as mild, moderate, or severe OA, mirroring the language used in the providers’ documentation. Providers were also asked to comment on the location of OA, described as medial, lateral, patellofemoral, or any combination. Agreement was calculated using Fleiss’ kappa, in which values less than 0.3 were considered no true agreement, between 0.3 and 0.5 weak agreement, between 0.5 and 0.8 moderate agreement, and greater than 0.8 strong agreement.

Results

There was inconsistent agreement for severity and location among physicians of the same specialty and between specialties. There was moderate agreement (κ = 0.513) in the assessment of patellofemoral arthritis among radiologists. Orthopaedic surgeons (κ = 0.503) and musculoskeletal radiologists (κ = 0.568) demonstrated moderate agreement in the perceived need for total knee arthroplasty (TKA), and there was moderate agreement between the two specialties (κ = 0.556). All other comparisons indicated weak or no agreement.

Conclusion

A high degree of inconsistency was found in the subjective interpretation of radiographic knee OA. Although grading systems exist, providers often document knee OA using the terms “mild,” “moderate,” and “severe,” an approach shown here to have poor reliability. Utilization of, and adherence to, an existing standardized system for interpreting knee radiographs, one that can be efficiently integrated into clinical practice, is necessary to improve communication among providers, patients, and insurers.

Introduction

Knee osteoarthritis (OA), or degenerative joint disease, is one of the most common reasons for presentation to orthopaedic and primary care offices (Weinstein et al. 2013; Turkiewicz et al. 2015; Van Manen, Nace, and Mont 2012). The prevalence of knee arthritis has grown considerably since the mid-twentieth century, affecting more than 50% of individuals over the age of 65 and approximately 80% of those over the age of 75 (Wallace et al. 2017; Arden and Nevitt 2006). Standing knee radiographs with multiple views are typically the first imaging study obtained to evaluate the presence and severity of OA (Boegård and Jonsson 1999; Duncan et al. 2015). At most institutions and radiology centers, a radiologist interprets the radiographs, and a written report is made available to the referring physician (primary care physician or orthopaedic surgeon). In some, but not all, cases, radiographs are reviewed by a musculoskeletal-specialized radiologist. Additionally, radiographs are commonly repeated at the initial orthopaedic evaluation, even if ordered previously by a different physician (Yayac et al. 2021). Unfortunately, there is at present no gold standard radiographic scale for osteoarthritis, and this difficulty is compounded by the involvement of physicians across different specialties.

While there is no one gold standard classification method, multiple systems have been described for the classification of radiographic knee osteoarthritis based on etiology, symptom duration and severity, and radiographic findings (Lespasio et al. 2017). The Kellgren-Lawrence (KL) system, described in 1957, is one of the most widely used systems for research purposes (Kellgren and Lawrence 1957). Despite common use in research, the KL system is not widely used in clinical practice (Riddle, Jiranek, and Hull 2013). Limitations include the overemphasis of osteophytes and underemphasis of joint space narrowing, which has been demonstrated as a more reliable indicator of OA (Heng, Bin Abd Razak, and Mitra 2015; Kallman et al. 1989; Wright and The MARS Group 2014). In our experience, OA is commonly graded in clinical practice as “mild,” “moderate,” or “severe” disease without the use of any specific classification system. Thus, differences in each physician’s interpretation of the same image may lead to discrepancies in clinical documentation, patient management, and prior authorization.

The purpose of this study was to investigate agreement in the interpretation of knee radiographs between orthopaedic surgeons and radiologists using the simple, subjective terms commonly used in practice. We examined agreement among orthopaedic surgeons specializing in arthroplasty, musculoskeletal radiologists, and general radiologists. Specifically, we investigated agreement in (1) severity of OA and (2) location of OA. We hypothesized that there would be moderate to strong agreement between physicians of the same specialty but lower agreement between those of different specialties.

Methods

Study Setting and Participants

One hundred five patients presenting to a single orthopaedic practice for unilateral knee pain were identified. Mean age was 62 ± 16 years, 65 patients (62%) were female, and mean body mass index (BMI) was 28 ± 7 kg/m². Standing anteroposterior (AP) and lateral radiographs were obtained for each patient. Patient history and demographic information were blinded before evaluation by each reviewer.

Six physicians independently reviewed the radiographs to characterize the severity and location of OA. Reviewers included two high-volume adult reconstruction orthopaedic surgeons, two fellowship-trained musculoskeletal (MSK) radiologists, and two general radiologists. For each set of radiographs, osteoarthritis was described as “mild,” “moderate,” or “severe,” mirroring the language used in the providers’ clinical documentation. The location of degenerative changes was described as medial compartment, lateral compartment, patellofemoral (PF), or any combination. Although blinded to the entirety of the patient’s clinical presentation, reviewers were also asked to indicate the perceived need for total knee arthroplasty (TKA) based solely on the severity seen on the radiographs.
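
For illustration, each reviewer’s grades can be organized as a subjects-by-raters table, with one row per radiograph and one column per reviewer. The sketch below is a minimal, hypothetical example in R; the column names and grade values are our own illustration, not the study’s actual data.

```r
# Hypothetical ratings table (not the study data): one row per radiograph,
# one column per reviewer, each cell holding that reviewer's severity grade.
severity_ratings <- data.frame(
  surgeon_1 = c("mild", "severe", "moderate", "severe"),
  surgeon_2 = c("mild", "moderate", "moderate", "severe"),
  msk_rad_1 = c("moderate", "severe", "mild", "severe"),
  msk_rad_2 = c("mild", "severe", "moderate", "moderate"),
  gen_rad_1 = c("mild", "moderate", "mild", "severe"),
  gen_rad_2 = c("moderate", "severe", "moderate", "severe"),
  stringsAsFactors = FALSE
)
```

A parallel table of yes/no ratings per reviewer could represent each OA location (medial, lateral, PF) and the perceived need for TKA.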

Statistical Analysis

Agreement was calculated using Fleiss’ kappa for categorical variables. Values less than 0.3 were considered no true agreement, between 0.3 and 0.5 weak agreement, between 0.5 and 0.8 moderate agreement, and greater than 0.8 strong agreement. Moderate or strong agreement was considered reliable. We compared agreement among readers of the same specialty and between specialties using the following groups: (1) surgeons and all radiologists, (2) surgeons and MSK radiologists, (3) surgeons and general radiologists, and (4) MSK radiologists and general radiologists. All statistical analyses were performed in R (version 3.6.3; R Foundation for Statistical Computing, Vienna, Austria) via RStudio.
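
As a sketch of this calculation, assuming a ratings table structured as above and the irr package (an assumption; the article does not name the package used), Fleiss’ kappa and the interpretation bands stated in this section could be computed as follows. The interpretation helper is our own illustration of the thresholds.

```r
# Fleiss' kappa for multiple raters on categorical grades.
library(irr)  # provides kappam.fleiss()

# kappam.fleiss() expects a subjects x raters matrix or data frame of ratings.
severity_kappa <- kappam.fleiss(severity_ratings)
severity_kappa$value  # the kappa estimate

# Map a kappa value onto the interpretation bands used in this study:
# <0.3 none, 0.3-0.5 weak, 0.5-0.8 moderate, >0.8 strong.
interpret_kappa <- function(k) {
  if (k < 0.3) {
    "No true agreement"
  } else if (k < 0.5) {
    "Weak agreement"
  } else if (k <= 0.8) {
    "Moderate agreement"
  } else {
    "Strong agreement"
  }
}

interpret_kappa(severity_kappa$value)
```

The same call can be applied to column subsets (e.g., only the two surgeons, or surgeons plus MSK radiologists) to reproduce the within- and between-specialty comparisons.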

Results

Overall Agreement

When comparing reads across all reviewers, we found weak agreement in the assessment of severity and of PF OA (Table 1). There was no true agreement in the assessment of medial, lateral, or tricompartmental OA. No variable reached moderate or strong agreement across the entire group of reviewers.

Table 1. Agreement among all reviewers
Variable Kappa Interpretation
Severity 0.383 Weak agreement
Medial OA 0.237 No true agreement
Lateral OA 0.287 No true agreement
Patellofemoral OA 0.395 Weak agreement
Tricompartmental OA 0.138 No true agreement
Surgical Recommendation 0.443 Weak agreement

Agreement Within Orthopaedic Surgeons

Orthopaedic surgeons demonstrated weak agreement in assessment of severity and lateral compartment OA, but no true agreement for medial, PF, or tricompartmental OA (Table 2). They did, however, show moderate agreement (κ = 0.503) in their perceived need for TKA based on radiographic findings.

Table 2. Agreement among orthopaedic surgeons
Variable Kappa Interpretation
Severity 0.455 Weak agreement
Medial OA 0.135 No true agreement
Lateral OA 0.302 Weak agreement
Patellofemoral OA 0.229 No true agreement
Tricompartmental OA 0.153 No true agreement
Surgical Recommendation 0.503 Moderate agreement

Agreement Among All Radiologists (MSK and General)

The radiologists combined as a single group showed moderate agreement in the assessment of PF OA, weak agreement for severity, and no true agreement for all other locations and for the recommendation of TKA (Table 3). MSK radiologists had weak agreement in the assessment of severity and PF OA and no true agreement on the presence of medial, lateral, or tricompartmental OA (Table 4). Similar to the orthopaedic surgeons, they demonstrated moderate agreement in the perceived need for TKA (κ = 0.568), which was the strongest agreement within any single specialty.

Table 3. Agreement among all radiologists
Variable Kappa Interpretation
Severity 0.354 Weak agreement
Medial OA 0.286 No true agreement
Lateral OA 0.190 No true agreement
Patellofemoral OA 0.513 Moderate agreement
Tricompartmental OA 0.054 No true agreement
Surgical Recommendation 0.262 No true agreement
Table 4. Agreement among MSK radiologists
Variable Kappa Interpretation
Severity 0.397 Weak agreement
Medial OA 0.250 No true agreement
Lateral OA 0.195 No true agreement
Patellofemoral OA 0.368 Weak agreement
Tricompartmental OA 0.074 No true agreement
Surgical Recommendation 0.568 Moderate agreement

Agreement Across Specialties

There was weak agreement in the assessment of severity in all four comparison groups (Table 5), with the strongest agreement between general and MSK radiologists (κ = 0.438), followed by surgeons and MSK radiologists (κ = 0.420). Location of OA showed the lowest agreement of any between-group comparison, with κ values ranging from 0.126 to 0.183 (Table 6). The perceived need for TKA showed moderate agreement between surgeons and MSK radiologists (κ = 0.556) and weak agreement in the remaining comparisons (Table 7).

Table 5. Assessment of severity between specialties
Comparison Kappa Interpretation
Surgeons vs All Radiologists 0.383 Weak agreement
Surgeons vs MSK Radiologists 0.420 Weak agreement
Surgeons vs General Radiologists 0.383 Weak agreement
General vs MSK Radiologists 0.438 Weak agreement
Table 6. Assessment of location between specialties
Comparison Kappa Interpretation
Surgeons vs All Radiologists 0.160 No true agreement
Surgeons vs MSK Radiologists 0.126 No true agreement
Surgeons vs General Radiologists 0.174 No true agreement
General vs MSK Radiologists 0.183 No true agreement
Table 7. Perceived need for TKA between specialties
Comparison Kappa Interpretation
Surgeons vs All Radiologists 0.443 Weak agreement
Surgeons vs MSK Radiologists 0.556 Moderate agreement
Surgeons vs General Radiologists 0.354 Weak agreement
General vs MSK Radiologists 0.393 Weak agreement

Discussion

The most important result of this study was that the commonly used designations of mild, moderate, and severe arthritis reported for knee radiographs are not consistent or reproducible. We found generally weak agreement both among and between orthopaedic surgeons and radiologists in the interpretation of radiographic knee arthritis based on assessments of the severity and location of disease. Only three comparisons reached moderate agreement: (1) assessment of PF arthritis by radiologists, (2) perceived need for TKA by orthopaedic surgeons, and (3) perceived need for TKA by MSK radiologists. The perceived need for TKA is merely a subjective measure of whether the patient would be a candidate for TKA based solely on radiographic OA severity; it is not clinically determinative, as the patient’s entire clinical picture and physical exam are needed. No comparison resulted in strong agreement within or between specialties. These findings suggest that utilization of, and adherence to, one of the many standard classification systems is needed to interpret radiographic knee arthritis in a way that is both reliable and applicable in clinical practice.

The Kellgren and Lawrence (KL) grading system is widely regarded and was found to have the highest inter- and intra-observer correlation coefficients (0.83 for both) when assessing the severity of knee arthritis compared to other joints (Kellgren and Lawrence 1957). Despite its popularity, the KL grading system is not without drawbacks. Wright et al. evaluated the interrater reliability of six classification systems (KL, International Knee Documentation Committee (IKDC), Fairbank, Brandt et al., Ahlback, and Jager-Wirth) for degenerative changes in patients undergoing revision anterior cruciate ligament reconstruction (Wright and The MARS Group 2014). For the KL classification, they reported an intraclass correlation coefficient of 0.38 for AP radiographs and 0.54 for Rosenberg flexion radiographs (Wright and The MARS Group 2014). They found the IKDC classification, which is based on the degree of joint space narrowing, to have the best combination of interrater reliability and correlation with arthroscopic findings (Wright and The MARS Group 2014; Mehta et al. 2007). A study by Riddle et al. examined the interrater reliability of the KL system among arthroplasty surgeons, finding moderate to high agreement in knees indicated for TKA but lower agreement in contralateral knees (Riddle, Jiranek, and Hull 2013).

While different radiographic classification systems have demonstrated various advantages and disadvantages for research purposes, the lack of a gold standard system for grading radiographic arthritis leads to a variety of clinical approaches to interpreting knee radiographs. Although widely regarded and used in practice, the KL system has been shown to underpredict the degree of OA observed intraoperatively at the time of arthroplasty (Abdelaziz et al. 2019; Blackburn et al. 1994). Complicating matters further, even within the KL system, different versions of the same criteria have led to lower agreement between readers (Schiphof et al. 2011). While studies have investigated the reliability of these systems, little has been reported on how frequently they are used in the routine evaluation of radiographs. It is critical that reviewers, both within and between specialties, speak the same language when communicating and documenting in patient notes. As variable agreement has been demonstrated in the literature using these well-established systems, our findings suggest even lower agreement when reviewers grade OA based on a subjective evaluation, as is commonly the case in practice.

From a patient’s perspective, inconsistent reporting and charting of radiographic findings can have real consequences. Especially now that patients can access their charts and read clinical documentation, these inconsistencies can be anxiety-provoking (Meyer et al. 2021). When one provider reports “mild” and another reports something different, the diagnostic uncertainty can cause unnecessary stress and the potential for mistrust. Additionally, insurance companies often deny surgeries based on a radiologist’s reading and documentation of mild, moderate, or severe OA. Communication with loosely defined terms such as “mild,” “moderate,” and “severe” can therefore lead to prior authorization issues for the patient and payer.

Previous studies have investigated agreement between surgeons and radiologists in other orthopaedic subspecialties, with variable results. One study demonstrated higher agreement among radiologists than among surgeons when evaluating chondral knee lesions on magnetic resonance imaging (MRI) (Cavalli et al. 2011). Another study showed that experienced surgeons were more accurate at identifying shoulder lesions on MRI when compared with intraoperative findings (van Grinsven et al. 2015). A study on the radiographic diagnosis of femoroacetabular impingement demonstrated higher interobserver reliability within the same specialty but poor agreement between radiologists and surgeons (Ayeni et al. 2014). Assessment of hip fracture healing, another challenge that has demonstrated poor agreement with no reliable standard, was shown to be improved by using a standardized union score (Chiavaras et al. 2013). These studies demonstrate inconsistent agreement across multiple subspecialties within orthopaedics, which we found to be true for knee OA as well.

There are several limitations to note in our study. Radiographs reviewed included AP and lateral views; the inclusion of Rosenberg flexion and posteroanterior (PA) views would have given the reviewers a more complete assessment, which could alter agreement findings. A sunrise view would also have been helpful for assessing PF osteoarthritis. Additionally, this study did not include an intra-rater reliability measure, in order to limit the workload on our reviewers. There is also potential for experience bias, as the orthopaedic surgeons and MSK radiologists may have been more familiar with knee films. Despite this, all six reviewers were from a high-volume institution and had prior experience interpreting knee osteoarthritis. Another limitation is the selection of radiographs from a single orthopaedic practice in a metropolitan area. Radiographs were not reviewed prior to the study to assess alignment quality; therefore, some inconsistencies may have existed between images. Additionally, the two orthopaedic surgeons were fellowship-trained, high-volume arthroplasty surgeons, so our findings may not be generalizable to other subspecialty or generalist orthopaedic surgeons. Instructions to grade osteoarthritis as mild, moderate, or severe were likely interpreted subjectively by each reviewer; however, this subjective interpretation contributed to our main finding of inconsistent agreement and is consistent with ordinary practice.

Consistent use of, and strict adherence to, a standard grading system for evaluating radiographic knee arthritis remains a challenge, both within and between specialties. While the ultimate decision to undergo TKA must incorporate the patient history and physical examination, radiographs play an integral role in the evaluation and grading of arthritis. Without objectively stated findings, the utility of such a radiograph report must be questioned. Radiographic assessment also frequently factors into determinations of surgical necessity by third-party payers, which invites issues of prior authorization. Our study demonstrated that, even between physicians of the same specialty, there remains a high degree of inconsistency, and this inconsistency was even more pronounced across specialties. This may in turn create obstacles to obtaining third-party payer approval, leading to challenges in providing timely and appropriate care. Future research should seek to identify how often language such as “mild,” “moderate,” and “severe” is used to grade OA as opposed to standardized grading systems. Establishing and adhering to a gold standard for the clinical arena that is reliable and efficient is critical to improve decision making and communication among physicians, patients, and third-party payers.

Submitted: September 20, 2023 EDT

Accepted: December 09, 2023 EDT

References

Abdelaziz, Hussein, Oury M. Balde, Mustafa Citak, Thorsten Gehrke, Ahmed Magan, and Carl Haasper. 2019. “Kellgren–Lawrence Scoring System Underestimates Cartilage Damage When Indicating TKA: Preoperative Radiograph versus Intraoperative Photograph.” Archives of Orthopaedic and Trauma Surgery 139 (9): 1287–92. https://doi.org/10.1007/s00402-019-03223-6.
Arden, N., and M. Nevitt. 2006. “Osteoarthritis: Epidemiology.” Best Practice & Research Clinical Rheumatology 20 (1): 3–25. https://doi.org/10.1016/j.berh.2005.09.007.
Ayeni, Olufemi R., Kevin Chan, Daniel B. Whelan, Rajiv Gandhi, Dale Williams, Srinivasan Harish, Hema Choudur, Mary M. Chiavaras, Jon Karlsson, and Mohit Bhandari. 2014. “Diagnosing Femoroacetabular Impingement From Plain Radiographs.” Orthopaedic Journal of Sports Medicine 2 (7): 2325967114541414. https://doi.org/10.1177/2325967114541414.
Blackburn, W.D., W.K. Bernreuter, M. Rominger, and L.L. Loose. 1994. “Arthroscopic Evaluation of Knee Articular Cartilage: A Comparison with Plain Radiographs and Magnetic Resonance Imaging.” J Rheumatol 21:675–79.
Boegård, T., and Kjell Jonsson. 1999. “Radiography in Osteoarthritis of the Knee.” Skeletal Radiology 28 (11): 605–15. https://doi.org/10.1007/s002560050561.
Cavalli, Fábio, Anela Izadi, Ana Paula R. B. Ferreira, Larissa Braga, Andresa Braga-Baiak, Marco Antonio Schueda, Mihir Gandhi, and Ricardo Pietrobon. 2011. “Interobserver Reliability among Radiologists and Orthopaedists in Evaluation of Chondral Lesions of the Knee by MRI.” Advances in Orthopedics 2011 (743742): 1–4. https://doi.org/10.4061/2011/743742.
Chiavaras, Mary M., Simrit Bains, Hema Choudur, Naveen Parasu, Jon Jacobson, Olufemi Ayeni, Brad Petrisor, Rajesh Chakravertty, Sheila Sprague, and Mohit Bhandari. 2013. “The Radiographic Union Score for Hip (RUSH): The Use of a Checklist to Evaluate Hip Fracture Healing Improves Agreement between Radiologists and Orthopedic Surgeons.” Skeletal Radiology 42 (8): 1079–88. https://doi.org/10.1007/s00256-013-1605-8.
Duncan, Stephen T., Michael S. Khazzam, Jeremy M. Burnham, Kurt P. Spindler, Warren R. Dunn, and Rick W. Wright. 2015. “Sensitivity of Standing Radiographs to Detect Knee Arthritis: A Systematic Review of Level I Studies.” Arthroscopy 31 (2): 321–28. https://doi.org/10.1016/j.arthro.2014.08.023.
Grinsven, Susan van, Thijs A. Nijenhuis, Peer C. Konings, Albert van Kampen, and Corné J.M. van Loon. 2015. “Are Radiologists Superior to Orthopaedic Surgeons in Diagnosing Instability-Related Shoulder Lesions on Magnetic Resonance Arthrography? A Multicenter Reproducibility and Accuracy Study.” Journal of Shoulder and Elbow Surgery 24 (9): 1405–12. https://doi.org/10.1016/j.jse.2015.05.050.
Heng, H.-Y.C., H. R. Bin Abd Razak, and A. K. Mitra. 2015. “Radiographic Grading of the Patellofemoral Joint Is More Accurate in Skyline Compared to Lateral Views.” Ann Transl Med 3:263. https://doi.org/10.3978/j.issn.2305-5839.2015.10.33.
Kallman, Douglas A., Fredrick M. Wigley, William W. Scott Jr., Marc C. Hochberg, and Jordan D. Tobin. 1989. “New Radiographic Grading Scales for Osteoarthritis of the Hand. Reliability for Determining Prevalence and Progression.” Arthritis & Rheumatism 32 (12): 1584–91. https://doi.org/10.1002/anr.1780321213.
Kellgren, J. H., and J. S. Lawrence. 1957. “Radiological Assessment of Osteo-Arthrosis.” Annals of the Rheumatic Diseases 16 (4): 494–502. https://doi.org/10.1136/ard.16.4.494.
Lespasio, Michelle J., Nicolas S. Piuzzi, M. Elaine Husni, George F. Muschler, A.J. Guarino, and Michael A. Mont. 2017. “Knee Osteoarthritis: A Primer.” The Permanente Journal 21 (4). https://doi.org/10.7812/tpp/16-183.
Mehta, Vishal M., Liz W. Paxton, Stefan X. Fornalski, Rick P. Csintalan, and Donald C. Fithian. 2007. “Reliability of the International Knee Documentation Committee Radiographic Grading System.” The American Journal of Sports Medicine 35 (6): 933–35. https://doi.org/10.1177/0363546507299742.
Meyer, Ashley N.D., Traber D. Giardina, Lubna Khawaja, and Hardeep Singh. 2021. “Patient and Clinician Experiences of Uncertainty in the Diagnostic Process: Current Understanding and Future Directions.” Patient Education and Counseling 104 (11): 2606–15. https://doi.org/10.1016/j.pec.2021.07.028.
Riddle, Daniel L., William A. Jiranek, and Jason R. Hull. 2013. “Validity and Reliability of Radiographic Knee Osteoarthritis Measures by Arthroplasty Surgeons.” Orthopedics 36 (1): e25-32. https://doi.org/10.3928/01477447-20121217-14.
Schiphof, D., B. de Klerk, H. Kerkhof, A. Hofman, B. Koes, M. Boers, and S. Bierma-Zeinstra. 2011. “Impact of Different Descriptions of the Kellgren and Lawrence Classification Criteria on the Diagnosis of Knee Osteoarthritis.” Annals of the Rheumatic Diseases 70 (8): 1422–27. https://doi.org/10.1136/ard.2010.147520.
Turkiewicz, A., M. Gerhardsson de Verdier, G. Engstrom, P. M. Nilsson, C. Mellstrom, L. S. Lohmander, and M. Englund. 2015. “Prevalence of Knee Pain and Knee OA in Southern Sweden and the Proportion That Seeks Medical Care.” Rheumatology 54 (5): 827–35. https://doi.org/10.1093/rheumatology/keu409.
Van Manen, M. D., J. Nace, and M. A. Mont. 2012. “Management of Primary Knee Osteoarthritis and Indications for Total Knee Arthroplasty for General Practitioners.” J Am Osteopath Assoc 112:709–15.
Wallace, Ian J., Steven Worthington, David T. Felson, Robert D. Jurmain, Kimberly T. Wren, Heli Maijanen, Robert J. Woods, and Daniel E. Lieberman. 2017. “Knee Osteoarthritis Has Doubled in Prevalence since the Mid-20th Century.” Proceedings of the National Academy of Sciences 114 (35): 9332–36. https://doi.org/10.1073/pnas.1703856114.
Weinstein, Alexander M., Benjamin N. Rome, William M. Reichmann, Jamie E. Collins, Sara A. Burbine, Thomas S. Thornhill, John Wright, Jeffrey N. Katz, and Elena Losina. 2013. “Estimating the Burden of Total Knee Replacement in the United States.” Journal of Bone and Joint Surgery 95 (5): 385–92. https://doi.org/10.2106/jbjs.l.00206.
Wright, Rick W., and The MARS Group. 2014. “Osteoarthritis Classification Scales: Interobserver Reliability and Arthroscopic Correlation.” Journal of Bone and Joint Surgery 96 (14): 1145–51. https://doi.org/10.2106/jbjs.m.00929.
Yayac, Michael, Gregory R. Toci, Eric B. Smith, Andrew M. Star, Javad Parvizi, and Arjun Saxena. 2021. “The Frequency, Reasoning, and Impact of Repeated Radiographs at the Initial Orthopedic Arthroplasty Visit.” The Journal of Arthroplasty 36 (11): 3641–45. https://doi.org/10.1016/j.arth.2021.07.007.
