Tag | Field content | | |
---|---|---|---|
000 | 03285nam a22003737a 4500 | ||
008 | 240117s20242024 xxu||||| |||| 00| 0 eng d | ||
022 | _a2168-8184 | ||
024 | _aPMC11283313 [pmc] | ||
040 | _aOvid MEDLINE(R) | ||
099 | _a39070516 | ||
245 | _aAssessing the Efficacy of an AI-Powered Chatbot (ChatGPT) in Providing Information on Orthopedic Surgeries: A Comparative Study With Expert Opinion. | ||
251 | _aCureus. 16(6):e63287, 2024 Jun. | ||
252 | _aCureus. 16(6):e63287, 2024 Jun. | ||
253 | _aCureus | ||
260 | _c2024 | ||
260 | _fFY2024 | ||
260 | _p2024 Jun | ||
265 | _sepublish | ||
265 | _tPubMed-not-MEDLINE | ||
266 | _z2024/07/29 05:31 | ||
520 | _aBackground The use of artificial intelligence (AI) as a tool for patient care has continued to expand rapidly. The technology has proven its utility across several specialties in a variety of applications. However, its practicality in orthopedics remains largely unknown. This study seeks to determine whether the open-access software Chat Generative Pre-Trained Transformer (ChatGPT) can be a reliable source of information for patients. Questions/purposes This study aims to determine: (1) Is the open-access AI software ChatGPT capable of accurately answering commonly posed patient questions? (2) Is there a significant difference among the study experts in their agreement with the answers generated by ChatGPT? Methods A standard list of questions for six different procedures across six subspecialties was posed to ChatGPT. The procedures chosen were anterior cruciate ligament (ACL) reconstruction, microdiscectomy, total hip arthroplasty (THA), rotator cuff repair, carpal tunnel release, and ankle fracture open reduction and internal fixation. The generated answers were then compared to expert opinion using a Likert scale measuring the agreement of the aforementioned experts. Results On a three-point Likert scale with 1 being disagree and 3 being agree, the mean score across all subspecialties was 2.43, indicating at least partial agreement with expert opinion. There was no significant difference in the Likert scale mean across the six subspecialties surveyed (p = 0.177). Conclusions This study shows promise for using ChatGPT as an aid in answering patient questions about their surgical procedures, opening the door for patients to use the software to improve their understanding and increase shared decision-making with their surgeons. However, studies with larger participant groups are necessary to ensure accuracy on a broader scale, as well as studies examining specific applications of AI within surgeons' practices. Copyright © 2024, Smith et al. | ||
546 | _aEnglish | ||
650 | _zAutomated | ||
651 | _aMedStar Washington Hospital Center | ||
656 | _aMedStar Georgetown University Hospital/MedStar Washington Hospital Center | ||
656 | _aOrthopaedic Surgery Residency A | ||
656 | _aOrthopedics and Sports Medicine | ||
657 | _aJournal Article | ||
700 | _aArgintar, Evan H _bMWHC | ||
700 | _aJacquez, Evan _bMGUH _cOrthopaedic Surgery Residency _dMD | ||
790 | _aSmith AM, Jacquez EA, Argintar EH | ||
856 | _uhttps://dx.doi.org/10.7759/cureus.63287 _zhttps://dx.doi.org/10.7759/cureus.63287 | ||
942 | _cART _dArticle | ||
999 | _c14644 _d14644 | ||
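
A note on the statistics in the 520 abstract above: the record does not state which test produced p = 0.177, so the sketch below is an assumption. It uses a Kruskal-Wallis test, a common choice for comparing ordinal Likert ratings across independent groups, over hypothetical placeholder ratings. The group names follow the six procedures listed in the abstract; none of the numbers are the study's actual data.

```python
# Minimal sketch of the comparison described in the abstract: six groups of
# three-point Likert ratings (1 = disagree, 3 = agree), one per subspecialty,
# compared with a Kruskal-Wallis test. Both the test choice and the ratings
# below are assumptions for illustration, not the study's data.
from scipy.stats import kruskal

# Hypothetical expert ratings per procedure (placeholder values).
ratings = {
    "ACL reconstruction":  [3, 2, 3, 2],
    "Microdiscectomy":     [2, 3, 2, 2],
    "THA":                 [3, 3, 2, 3],
    "Rotator cuff repair": [2, 2, 3, 2],
    "Carpal tunnel":       [3, 2, 2, 3],
    "Ankle ORIF":          [2, 3, 3, 2],
}

# Overall mean agreement across all subspecialties (the abstract reports 2.43).
overall = [r for group in ratings.values() for r in group]
print(f"Mean agreement: {sum(overall) / len(overall):.2f}")

# Test whether the rating distributions differ across the six subspecialties
# (the abstract reports no significant difference, p = 0.177).
h_stat, p_value = kruskal(*ratings.values())
print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_value:.3f}")
```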