Artificial intelligence (AI) is increasingly being leveraged for ophthalmic classification tasks [1,2,3]. One of the primary goals of such technology is to aid ophthalmologists and optometrists in more accurate and timely disease management. Research has therefore mainly targeted the development of AI systems for image-intensive clinical data, ranging from optical coherence tomography (OCT) classification to fundoscopy image recognition and segmentation. While these studies have demonstrated high accuracy and potential clinical benefit, we believe more work should focus on making such technologies accessible to the clinician. The vast majority of work concentrates solely on developing novel AI solutions while disregarding integration at the point of care and how such systems might augment or benefit clinical workflows.
A lack of appreciation of how an AI system will perform within a clinical setting can lead to many implementation problems. Pristine datasets are critical for accurate AI classification systems, yet poor image resolution, non-standardized data types, and misaligned captures are only a few of the many errors that can occur during data acquisition. When deploying AI systems in accessible formats, fast computation is also essential. Mobile applications are among the most widely used platforms, with many technologies available to streamline the deployment of AI systems; with mobile AI systems, clinicians can conveniently use these tools from the palm of their hand.
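One practical safeguard against the acquisition errors described above is a simple quality gate that rejects obviously unusable captures before they reach a classifier. The sketch below is illustrative only; the thresholds (`MIN_SIDE`, `MAX_ASPECT`) and function name are our assumptions, not part of any published pipeline.

```python
import numpy as np

MIN_SIDE = 256     # illustrative minimum resolution, in pixels
MAX_ASPECT = 1.5   # illustrative limit on height/width skew

def acquisition_ok(img: np.ndarray) -> bool:
    """Reject obviously unusable captures before inference."""
    h, w = img.shape[:2]
    if min(h, w) < MIN_SIDE:               # too low-resolution to classify reliably
        return False
    if max(h, w) / min(h, w) > MAX_ASPECT:  # badly cropped or non-standard frame
        return False
    return True

print(acquisition_ok(np.zeros((512, 496))))  # plausible OCT B-scan size -> True
print(acquisition_ok(np.zeros((64, 64))))    # far too small -> False
```

A gate like this is cheap enough to run on-device, so malformed inputs can be flagged at the point of capture rather than silently degrading predictions.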
Our prior work focused not only on the development of an AI system for ophthalmic classification but also on accessibility [4]. We developed an end-to-end OCT image analysis system that accurately categorizes scans into disease categories, and we studied the optimization and processes required for smooth deployment onto an iPhone. We compressed our model with various tools and, during training, built resilience to variations in image resolution and orientation through augmentation. In doing so, we created a robust mobile tool that ophthalmologists can use at the point of care, and we also came to understand the limitations and processes required for accessible deployment.
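The augmentation idea above, building resilience to resolution and orientation changes by applying them randomly during training, can be sketched minimally as follows. This is not the actual pipeline from [4]; the scale factors, the use of 90-degree rotations as a coarse stand-in for small tilts, and all function names are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_resolution(img, scales=(0.5, 0.75, 1.0)):
    """Simulate varying scan resolution: nearest-neighbour downsample by a
    random factor, then upsample back to the original size."""
    h, w = img.shape
    s = rng.choice(scales)
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    rows = np.linspace(0, h - 1, nh).astype(int)
    cols = np.linspace(0, w - 1, nw).astype(int)
    small = img[rows][:, cols]                       # downsample
    rows_up = np.linspace(0, nh - 1, h).astype(int)
    cols_up = np.linspace(0, nw - 1, w).astype(int)
    return small[rows_up][:, cols_up]                # upsample back

def random_tilt(img):
    """Simulate a crooked capture with a random 0 or 90-degree rotation."""
    return np.rot90(img, k=int(rng.integers(0, 2)))

def augment(img):
    return random_tilt(random_resolution(img))

scan = rng.random((128, 128))                 # stand-in for a grayscale OCT B-scan
batch = [augment(scan) for _ in range(4)]     # augmented training examples
```

Feeding such perturbed copies to the network during training encourages it to tolerate the same acquisition variability it will encounter at the point of care.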
References
1. Yoo TK, Ryu IH, Kim JK, Lee IS. Deep learning for predicting uncorrected refractive error using posterior segment optical coherence tomography images. Eye. 2021. https://doi.org/10.1038/s41433-021-01795-5
2. Li F, Wang Y, Xu T, Dong L, Yan L, Jiang M, et al. Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs. Eye. 2021. https://doi.org/10.1038/s41433-021-01552-8
3. Li Z, Guo C, Nie D, Lin D, Cui T, Zhu Y, et al. Automated detection of retinal exudates and drusen in ultra-widefield fundus images based on deep learning. Eye. 2021. https://doi.org/10.1038/s41433-021-01715-7
4. Rao A, Fishman HA. OCTAI: smartphone-based optical coherence tomography image analysis system. In: 2021 IEEE World AI IoT Congress (AIIoT). IEEE; 2021. pp. 72-76. https://doi.org/10.1109/AIIoT52608.2021.9454200
Contributions
AR wrote the main body of the text regarding technology and HF added clinical information.
Competing interests
The authors declare no competing interests.
Rao, A., Fishman, H. Accessible artificial intelligence for ophthalmologists. Eye 36, 683 (2022). https://doi.org/10.1038/s41433-021-01891-6