Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: a diagnostic accuracy study
  1. Kazutaka Kamiya1,
  2. Yuji Ayatsuka2,
  3. Yudai Kato2,
  4. Fusako Fujimura1,
  5. Masahide Takahashi3,
  6. Nobuyuki Shoji3,
  7. Yosai Mori4,
  8. Kazunori Miyata4
  1. Visual Physiology, School of Allied Health Sciences, Kitasato University, Sagamihara, Japan
  2. Cresco Ltd, Technology Laboratory, Tokyo, Japan
  3. Department of Ophthalmology, School of Medicine, Kitasato University, Sagamihara, Japan
  4. Miyata Eye Hospital, Department of Ophthalmology, Miyakonojo, Japan

  Correspondence to Dr Kazutaka Kamiya; kamiyak-tky{at}umin.ac.jp

Abstract

Objective To evaluate the diagnostic accuracy of keratoconus using deep learning of the colour-coded maps measured with the swept-source anterior segment optical coherence tomography (AS-OCT).

Design A diagnostic accuracy study.

Setting A single-centre study.

Participants A total of 304 keratoconic eyes (grade 1 (108 eyes), 2 (75 eyes), 3 (42 eyes) and 4 (79 eyes)) according to the Amsler-Krumeich classification, and 239 age-matched healthy eyes.

Main outcome measures The diagnostic accuracy of keratoconus using deep learning of six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry map).

Results Deep learning of the arithmetic mean output data of these six maps showed an accuracy of 0.991 in discriminating between normal and keratoconic eyes. For single-map analysis, the posterior elevation map (0.993) showed the highest accuracy, followed by the posterior curvature map (0.991), anterior elevation map (0.983), corneal pachymetry map (0.982), total refractive power map (0.978) and anterior curvature map (0.976), in discriminating between normal and keratoconic eyes. This deep learning also showed an accuracy of 0.874 in classifying the stage of the disease. The posterior curvature map (0.869) showed the highest accuracy, followed by the corneal pachymetry map (0.845), anterior curvature map (0.836), total refractive power map (0.836), posterior elevation map (0.829) and anterior elevation map (0.820), in classifying the stage.

Conclusions Deep learning using the colour-coded maps obtained by the AS-OCT effectively discriminates keratoconus from normal corneas, and furthermore classifies the grade of the disease. It is suggested that this will become an aid for improving the diagnostic accuracy of keratoconus in daily practice.

Clinical trial registration number 000034587.

  • deep learning
  • keratoconus
  • diagnosis
  • accuracy
  • optical coherence tomography

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

Strengths and limitations of this study

  • Deep learning using the image of corneal colour-coded maps with anterior segment optical coherence tomography (AS-OCT) based on clinical diagnosis has not so far been investigated for keratoconus detection and grade classification.

  • Deep learning using colour-coded maps obtained from the AS-OCT will be an aid not only for the screening of keratoconus, but also for the grade classification of the disease.

  • This deep learning was not applied to other corneal disorders such as forme fruste keratoconus, subclinical keratoconus, or postsurgical eyes.

  • The arithmetic mean outputs from the six classifiers without any weighting were utilised for classifying the grade of the disease.

Introduction

Keratoconus is a progressive corneal disorder characterised by anterior protrusion and thinning. The progressive thinning and subsequent bulging of the cornea are often accompanied by high myopic astigmatism, as well as irregular astigmatism, resulting in severe visual impairment. Ruling out keratoconus is essential in refractive surgery candidates, since iatrogenic keratectasia can occur when keratorefractive surgery is performed in such eyes.

Deep learning is a machine learning technique based on the training of multilayer artificial neural networks. Machine learning is a general technique for finding, from large amounts of training data, appropriate parameters or functions to classify input data. Many methodologies for implementing machine learning, such as support vector machines, decision trees and neural networks, have been proposed. In recent years, multilayered neural networks, especially convolutional neural networks, have achieved impressive results in many types of image classification across many scientific fields.1–3 The number of layers is often referred to as the depth of the network, which is why machine learning with multilayered neural networks is called deep learning. In ophthalmology, it has mainly been applied to the diagnosis of retinal diseases4–6 and glaucoma.7–9 To date, there have been several studies on the sensitivity and specificity of keratoconus detection using machine learning.10–22 However, most previous studies have merely used either topographic numeric indices measured with a Placido disk-based corneal topographer, or tomographic numeric indices measured with a scanning-slit tomographer or a rotating Scheimpflug camera, for machine learning to discriminate keratoconus from normal corneas. Deep learning using the whole image of corneal colour-coded maps obtained with anterior segment optical coherence tomography (AS-OCT), which enables us to determine precisely the curvature and elevation of the anterior and posterior corneal surfaces even in eyes with an opaque cornea, has not so far been performed on the basis of clinical diagnosis to determine the diagnostic accuracy or the grade of keratoconus. Such an approach may give us valuable insights into keratoconus detection, especially in the preoperative screening of candidates for corneal refractive surgery, because keratorefractive surgery applied to keratoconic eyes can result in unpredictable outcomes and subsequent corneal ectasia. The aim of the current study was to assess the accuracy of deep learning using the anterior and posterior corneal elevation, curvature, total refractive power and pachymetry maps obtained by the AS-OCT, in order to discriminate between normal and keratoconic eyes and to classify the stage of the disease according to the Amsler-Krumeich classification.

Materials and methods

Study population

We retrospectively reviewed the data of keratoconic patients who underwent corneal tomography with a swept-source AS-OCT (CASIA SS-1000, Tomey, Aichi, Japan) between March 2013 and April 2018 at Miyata Eye Hospital. We enrolled 304 eyes with good-quality corneal tomography scans. Keratoconus was diagnosed by corneal specialists on the basis of evident findings characteristic of keratoconus (eg, corneal tomography with an asymmetric bow-tie pattern with or without skewed axes) and at least one keratoconus sign (eg, stromal thinning, conical protrusion of the cornea at the apex, Fleischer ring, Vogt striae or anterior stromal scar) on slit-lamp examination.23 The grade of keratoconus was determined by the Amsler-Krumeich classification, based on astigmatism, corneal power, corneal transparency and corneal thickness obtained from slit-lamp biomicroscopy and the AS-OCT.24 The study group was divided into four keratoconus subgroups according to this classification: grade 1 (108 eyes), 2 (75 eyes), 3 (42 eyes) and 4 (79 eyes). Eyes with other corneal diseases such as pellucid marginal degeneration, and eyes with a history of trauma or of corneal surgery such as corneal cross-linking for progressive keratoconus, were excluded from the study. The patients were recruited as a continuous cohort. The control group comprised 239 eyes of subjects with normal corneal and ocular findings who presented for contact lens fitting or refractive surgery consultation. The control subjects had a refractive error (spherical equivalent) of less than 6 dioptres (D) and/or astigmatism of less than 3 D. Patients who wore rigid or soft contact lenses were asked to stop wearing them for 3 weeks and 2 weeks, respectively, before this assessment. Our Institutional Review Board waived the requirement for informed consent for this retrospective study. The data that support the findings of this study are available from the corresponding author on reasonable request.

Anterior segment optical coherence tomography imaging

We obtained six standardised colour-coded maps (anterior elevation (−130 to 130 µm, 5 µm steps), anterior curvature (9.0 to 101.5 D, 5 D steps (35.5 to 50.5 D, 1.5 D steps)), posterior elevation (−260 to 260 µm, 10 µm steps), posterior curvature (−3.0 to −10.5 D, 0.3 D steps), total refractive power (9.0 to 101.5 D, 5 D steps (35.5 to 50.5 D, 1.5 D steps)) and pachymetry map (340 to 840 µm, 20 µm steps)), according to the manufacturer’s instructions, using the swept-source AS-OCT (figure 1). The maps were acquired by experienced examiners who were masked to the clinical condition of the subjects, and the colour-scale bar was excluded from each map for deep learning. The device uses a wavelength of 1310 nm, with an axial resolution of 10 µm, a transverse resolution of 30 µm and a scan rate of 30 000 A-scans/s. The patient’s chin was placed on the chin rest and the forehead against the forehead strap. The patient was asked to open both eyes and stare at the fixation target. Once perfect alignment was attained, the instrument automatically began the measurements. The scan was initiated when a cross-sectional image of the cornea was visualised on the computer screen. The collected data were processed by the system to produce cross-sectional images. Image quality was checked, and only one examination with a high image quality factor was recorded.

Figure 1

A representative example of the six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry map) measured with anterior segment optical coherence tomography. The colour-scale bar was excluded from each map for deep learning.

Deep learning

A neural network is a powerful tool for classifying data into groups or categories. A convolutional neural network is a variant of the neural network designed for classifying images and other two-dimensional data.1–3 A typical convolutional neural network consists mainly of two types of layers: convolution layers and fully connected layers. A convolution layer automatically extracts the two-dimensional patterns and geometrical relations that distinguish the training data, and then detects these characteristics in images. Colour-coded maps convey rich information about the corneal shape as two-dimensional patterns, so convolutional neural networks are expected to classify them well.
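
For illustration, the following minimal PyTorch sketch shows the two layer types described above (convolution and fully connected). The layer sizes are arbitrary examples, not the network used in this study, which is a ResNet-18 as described in the next paragraphs.

```python
# Minimal, illustrative sketch of a convolutional neural network:
# convolution layers extract two-dimensional patterns, and a fully
# connected layer maps the extracted features to class scores.
import torch
import torch.nn as nn


class TinyMapClassifier(nn.Module):
    def __init__(self, n_classes: int = 5):  # normal + grades 1-4
        super().__init__()
        # Convolution layers extract local two-dimensional patterns from the map.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # A fully connected layer maps the extracted features to class scores.
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


# Example: class scores for one 224x224 RGB colour-coded map (batch of 1).
scores = TinyMapClassifier()(torch.randn(1, 3, 224, 224))  # shape (1, 5)
```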

We exported the image data by taking screenshots of the CASIA2 application displaying the six types of corneal maps and stored them in a lossless compression format (PNG). We then cropped each map type from the screenshots and saved it as a separate PNG file for deep learning. We used an open-source deep learning platform (PyTorch) with a ResNet-18 network model, a convolutional neural network pretrained on millions of images from the ImageNet database. Each input image was resized to 224×224 pixels without deformation. The output is a single value (0–4) that maps to the grades, including normal eyes: ‘normal’ is represented as ‘0’, and grades 1, 2, 3 and 4 are denoted as ‘1’, ‘2’, ‘3’ and ‘4’ in the teaching data. Because the network output for an image is not an integer, it was rounded to the nearest integer for interpretation; for example, an output of 1.37 is interpreted as 1, that is, grade 1.
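
As an illustration of the per-map pipeline described above, the sketch below loads an ImageNet-pretrained ResNet-18 in PyTorch, replaces its final layer with a single grade output and rounds the prediction to the nearest integer grade. The preprocessing steps, function name and file path are illustrative assumptions; the training configuration (loss, optimiser, augmentation) is not specified in the text.

```python
# Hedged sketch: ImageNet-pretrained ResNet-18 with a single-value "grade" head.
# Only the input/output mapping described above is shown (224x224 input,
# continuous output rounded to a grade of 0-4).
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)  # one continuous grade value
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # the study resizes inputs to 224x224 pixels
    transforms.ToTensor(),
])


def predict_grade(png_path: str) -> int:
    """Return the predicted grade (0 = normal, 1-4 = keratoconus) for one map image."""
    image = preprocess(Image.open(png_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        value = model(image).item()               # eg, 1.37
    return int(round(min(max(value, 0.0), 4.0)))  # eg, 1.37 -> grade 1
```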

We trained six neural networks separately, one for each of the six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry map), each without the colour-scale bar. Each network classifies an image into a grade from 0 to 4. We integrated the six outputs by averaging them; for example, if the six classifiers output 2, 2, 2, 3, 4 and 3, the average is 2.67, giving an integrated result of 3. For tie values such as 2.5, we applied a floor function, so that 2.5 was interpreted as 2. The total of 543 eyes was split into five groups (108 or 109 eyes per group), and we used fivefold cross-validation to increase the reliability of the accuracy outcomes of the six classifiers.
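
The integration rule and tie handling described above can be written compactly. The following minimal sketch (the function name is illustrative) reproduces the worked examples in the text, assuming each of the six per-map classifiers has already produced an integer grade.

```python
# Ensemble rule described above: average the six per-map grades, floor exact
# .5 ties (eg, 2.5 -> 2) and round other values to the nearest grade (eg, 2.67 -> 3).
import math


def integrate_grades(grades: list[int]) -> int:
    mean = sum(grades) / len(grades)
    if mean - math.floor(mean) == 0.5:  # exact tie: apply the floor function
        return math.floor(mean)
    return round(mean)


print(integrate_grades([2, 2, 2, 3, 4, 3]))  # average 2.67 -> 3
print(integrate_grades([2, 2, 2, 3, 3, 3]))  # average 2.5  -> 2
```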

Patient and public involvement

Patients were not involved in the development of the research question, study design, or conduct of this study.

Results

The output data of deep learning in classifying the grade of the disease are listed in table 1. Deep learning using the arithmetic mean of the output data of the six colour-coded maps showed an accuracy of 0.991 (sensitivity 1.000, specificity 0.984) in discriminating between normal and keratoconic eyes (table 2). For single-map analysis, the posterior elevation map (0.993) showed the highest accuracy, followed by the posterior curvature map (0.991), anterior elevation map (0.983), corneal pachymetry map (0.982), total refractive power map (0.978) and anterior curvature map (0.976), in discriminating between normal and keratoconic eyes.

Table 1

The output data of deep learning in classifying the grade of the disease according to the Amsler-Krumeich classification

Table 2

The sensitivity, specificity and accuracy outcomes in classifying the grade of the disease according to the Amsler-Krumeich classification

Deep learning using the arithmetic mean of the output data of the six colour-coded maps showed an accuracy of 0.874 (sensitivity 0.889, specificity 0.977 for grade 1; sensitivity 0.680, specificity 0.951 for grade 2; sensitivity 0.714, specificity 0.952 for grade 3; and sensitivity 0.747, specificity 0.987 for grade 4) in classifying the stage of the disease according to the Amsler-Krumeich classification (table 2). For single-map analysis, the posterior curvature map (0.869) showed the highest accuracy, followed by the corneal pachymetry map (0.845), anterior curvature map (0.836), total refractive power map (0.836), posterior elevation map (0.829) and anterior elevation map (0.820), in classifying the stage of the disease.
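
For reference, the per-grade sensitivity and specificity values reported above are one-vs-rest quantities: each grade is compared against all other classes. The short sketch below, with hypothetical labels rather than the study data in tables 1 and 2, illustrates that computation.

```python
# Illustrative one-vs-rest sensitivity/specificity for a given grade.
# The labels below are hypothetical and are not the study's data.
def per_grade_metrics(y_true: list[int], y_pred: list[int], grade: int) -> tuple[float, float]:
    tp = sum(t == grade and p == grade for t, p in zip(y_true, y_pred))
    fn = sum(t == grade and p != grade for t, p in zip(y_true, y_pred))
    tn = sum(t != grade and p != grade for t, p in zip(y_true, y_pred))
    fp = sum(t != grade and p == grade for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)


# Hypothetical example: true and predicted grades for five eyes.
sens, spec = per_grade_metrics([0, 1, 1, 2, 3], [0, 1, 2, 2, 3], grade=1)
print(sens, spec)  # 0.5 1.0
```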

Discussion

In the present study, deep learning using the colour-coded maps obtained by the AS-OCT provided an accuracy of 0.991 in discriminating between keratoconic and normal eyes, suggesting that it will be an aid to improving diagnostic accuracy as a keratoconus screening test. Our results also showed that it provided an accuracy of 0.874 in determining the keratoconus stage, indicating that it will also be helpful for classifying the grade of the disease. We applied the Amsler-Krumeich classification because there is no other standardised classification of the disease and it is still often used in daily practice. As far as we can ascertain, this is the first study on deep learning using the whole image of each colour-coded map for keratoconus detection and grade classification based on clinical diagnosis. We believe that deep learning will be an aid for the screening and staging of keratoconus in a clinical setting, because the precise preoperative detection of early keratoconus in refractive surgery candidates remains challenging in daily practice.

To date, there have been several studies on machine learning for the screening of keratoconus, as listed in table 3. Using 8, 11 and 10 indices measured with a Placido disk-based corneal topographer, Maeda et al 10 11 and Smolek and Klyce12 demonstrated that the accuracy of distinguishing keratoconus from other conditions was 96%, 80% and 100%, respectively. Using 9 indices measured with another corneal topographer, Accardo and Pensiero13 showed that the sensitivity and specificity were 94.1% and 97.6%, respectively. Using 11 indices measured with a scanning-slit corneal tomographer, Souza et al 14 described that the sensitivity at 75% and 90% specificity was 43%–100% and 22%–100%, respectively, and that the area under the curve was 71%–99%. Arbelaez et al 15 reported that a support vector machine algorithm, using data from the anterior and posterior corneal surfaces and pachymetry measured with a Scheimpflug camera combined with Placido corneal topography, increased sensitivity from 89.3% to 96.0% in abnormal eyes, from 92.8% to 95.0% in eyes with keratoconus, from 75.2% to 92.0% in eyes with subclinical keratoconus and from 93.1% to 97.2% in normal eyes. Using 55 indices measured with a dual Scheimpflug camera, Smadja et al 16 stated that the sensitivity and specificity were 100% and 99.5%, respectively. Using 15 and 22 indices measured with a Scheimpflug camera, Kovács et al 17 and Ruiz Hidalgo et al 18 19 reported sensitivities and specificities of 100% and 95%, and 99.1% and 98.4%, respectively. Yousefi et al 20 recently demonstrated that the specificity in identifying normal from keratoconus eyes was 94.1% and the sensitivity in identifying keratoconus from normal eyes was 97.7%, based on Ectasia Status Index diagnosis labels. Dos Santos et al 21 reported that a custom neural network architecture could segment both healthy and keratoconus images with high accuracy, and that deep learning algorithms could be applied for OCT image segmentation in various clinical settings. Issarti et al 22 stated that computer-aided diagnosis detected suspect keratoconus with an accuracy of 96.56% (sensitivity 97.78%, specificity 95.56%), suggesting that the algorithm is highly accurate and provides a stable screening platform to assist ophthalmologists with the early detection of keratoconus. Since the inclusion criteria, the category of the disease and the sample size differed among these studies, we cannot directly compare the sensitivity and specificity outcomes between these previous studies and the current study. In particular, the category of the disease might affect the outcomes of this kind of diagnostic accuracy test in a clinical setting. However, most previous studies have merely used topographic and tomographic numeric values for machine learning, with one exception.21 These numeric indices are simple and make it easy to grasp the overall corneal shape, but they hide the spatial gradients and distributions of corneal curvature, elevation, refractive power and thickness. In the current study, we used the whole images of six colour-coded maps for deep learning, instead of topographic and tomographic numeric indices. We assume that colour-coded maps have advantages over numeric values for machine learning, since they convey a larger amount of corneal information.
Contrary to our expectations, the sensitivity for detecting more advanced keratoconus (grade 2, 3 or 4) was lower than that for detecting mild keratoconus (grade 1). We speculate that the colour-coded maps might be less typical for grades 2, 3 and 4, so that the discrimination between grades 2 and 3, or between grades 3 and 4, remained difficult even with deep learning of these colour-coded maps. Further validation in another study population is necessary to clarify this point.

Table 3

Previous studies on the diagnostic accuracy of keratoconus using machine learning

In previous studies, simple multilayer neural networks, support vector machines or decision trees were used for machine learning, whereas a convolutional neural network was applied in our study. We also assume that a convolutional neural network has advantages over other machine learning methods, since it can directly extract morphological characteristics from the obtained images without predefined feature extraction, and can subsequently provide higher classification precision, especially in the field of image recognition.

Placido disk-based corneal topography is a highly sensitive and specific diagnostic tool, but it examines only the anterior corneal surface. It has been reported that both the curvature and the elevation of the posterior corneal surface play a vital role in early-stage keratoconus detection.25–29 Ishii et al 28 showed that, in cases of lower staging, the area under the receiver operating characteristic curve was larger for posterior elevation differences than for anterior elevation differences, suggesting a greater diagnostic value of the posterior elevation measurement. We29 previously demonstrated that anterior and posterior corneal surface height data effectively discriminate keratoconus from normal corneas and may provide useful information for improving the diagnostic accuracy of keratoconus, especially in the early stage of the disease. Interestingly, in the single-map analysis, the posterior elevation map (0.993) and the posterior curvature map (0.869) showed the highest accuracy in discriminating between normal and keratoconic eyes and in classifying the stage of the disease, respectively, supporting the significance of posterior corneal information for keratoconus detection. Moreover, the AS-OCT may have advantages over the Scheimpflug imaging system in the grading of the disease, especially for grade 4 keratoconic eyes.30

There are at least two limitations to this study. One is that we used the arithmetic mean of the outputs from the six classifiers without any weighting. We investigated several ways of integrating the outputs, including weighted averaging and machine learning with a neural network, but the unweighted arithmetic mean gave the best accuracy in this study population. Another limitation is that we did not include other corneal disorders such as forme fruste keratoconus or subclinical keratoconus, and did not apply this deep learning to other populations. We are currently conducting a new study applying this deep learning to other corneal disorders and other populations to confirm the validity of our results.

In summary, our results may support the view that deep learning using the six colour-coded maps obtained with the swept-source AS-OCT is effective not only for the screening of keratoconus but also for grading the disease. It may be an aid for improving the accuracy of keratoconus detection in a clinical setting. A further study with a larger sample size will be helpful to confirm our findings.

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.
  8.
  9.
  10.
  11.
  12.
  13.
  14.
  15.
  16.
  17.
  18.
  19.
  20.
  21.
  22.
  23.
  24.
  25.
  26.
  27.
  28.
  29.
  30.

Footnotes

  • Contributors KK and KM were involved in the design and conduct of the study. KK, YA, YK and YM were involved in collection, management, analysis and interpretation of data. KK, YA, YK, FF, MT, NS, YM and KM were involved in preparation, review and final approval of the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Obtained.

  • Ethics approval The study was approved by the Institutional Review Board of Miyata Eye Hospital (18-023), and followed the tenets of the Declaration of Helsinki.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available upon reasonable request.
