Results of a study comparing classification of high-resolution microscopy images of histopathologic slides of biopsy specimens from the esophagus and gastroesophageal junction showed that a convolutional neural network (CNN)-based attention method "marginally outperformed" a sliding window-based approach. Findings from this study were published in JAMA Network Open.

Other studies have also demonstrated improvements in the accuracy and ease of histopathologic analysis with machine-learning approaches. Image analysis using CNNs relies on a computing system that uses an artificial neural network to "learn" to recognize complex patterns and perform tasks through consideration of specific examples.2 More specifically, the process involves "deconvolution of the image content into thousands of salient features followed by selection and aggregation of the most meaningful features."

The CNN-based approach differs from other computer-based learning systems, such as the sliding window approach, in that it does not require a pathologist to extensively annotate and crop specific regions of interest on whole slides. Another limitation of the sliding window approach, which is considered to be based on the use of "hard attention," is that learning based on specific regions of interest is applied to windows within images of the whole slide without correlation to neighboring regions of interest. In contrast, the CNN approach uses "soft attention" to "generate a nondiscrete attention map that pays fractional attention to each region and produces better gradient flow and thus is easier to optimize."1
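The distinction can be illustrated with a minimal sketch (not the study authors' code; the feature vectors and scoring weights below are toy stand-ins for learned parameters): under soft attention, every tile of a whole-slide image receives a fractional weight from a softmax, and the slide-level representation is the attention-weighted average of all tiles, so no region is hard-cropped out.

```python
import math
import random

# Illustrative sketch of "soft attention" over whole-slide image tiles.
# Each tile is summarized by a feature vector; a scoring function rates
# every tile, and a softmax converts the scores into a nondiscrete
# attention map whose fractional weights sum to 1.

random.seed(0)
NUM_TILES, NUM_FEATURES = 6, 4

# Toy tile features and a toy scoring vector (stand-ins for learned parameters).
tiles = [[random.gauss(0, 1) for _ in range(NUM_FEATURES)] for _ in range(NUM_TILES)]
scoring = [random.gauss(0, 1) for _ in range(NUM_FEATURES)]

# Score each tile, then softmax the scores into fractional attention weights.
scores = [sum(f * w for f, w in zip(tile, scoring)) for tile in tiles]
total = sum(math.exp(s) for s in scores)
attention_map = [math.exp(s) / total for s in scores]

# Slide-level feature: every tile contributes in proportion to its weight,
# unlike hard attention, which would select a discrete subset of windows.
slide_feature = [
    sum(a * tile[j] for a, tile in zip(attention_map, tiles))
    for j in range(NUM_FEATURES)
]

print(round(sum(attention_map), 6))  # the fractional weights sum to 1.0
```

Because the softmax is smooth, gradients flow to every tile during training, which is the "easier to optimize" property the authors describe.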

The study used a training set of 379 de-identified histologic images, of which 51.5%, 21.1%, 12.1%, and 15.3% were classified as normal, Barrett's esophagus without dysplasia, Barrett's esophagus with dysplasia, and adenocarcinoma, respectively, as well as an independent testing set of 123 histologic images, of which 47.2%, 24.4%, 11.4%, and 17.1% represented normal tissue, Barrett's esophagus without dysplasia, Barrett's esophagus with dysplasia, and adenocarcinoma, respectively.

A comparison of the results of the 2 approaches using the independent testing set showed higher accuracy for the CNN-based approach in identifying adenocarcinoma (0.88; 95% CI, 0.84-0.92) than for the sliding window-based approach (0.87; 95% CI, 0.83-0.91). Similarly, a higher mean accuracy across all classifications was observed for the CNN-based approach (0.83; 95% CI, 0.80-0.86) compared with the sliding window-based approach (0.76; 95% CI, 0.73-0.80). Nevertheless, these differences were not statistically significant.

In their concluding remarks, the study authors commented that "the model [CNN-based approach] presented here was trained end-to-end with labels only at the tissue level, thus removing the need for high-cost data annotation and creating new opportunities for applying deep learning in digital pathology."1

References

  1. Tomita N, Abdollahi B, Wei J, et al. Attention-based deep neural networks for detection of cancerous and precancerous esophagus tissue on histopathological slides. JAMA Netw Open. Published online November 1, 2019. doi:10.1001/jamanetworkopen.2019.14645
  2. Gertych A, Swiderska-Chadaj Z, Ma Z, et al. Convolutional neural networks can accurately distinguish four histologic growth patterns of lung adenocarcinoma in digital slides. Sci Rep. 2019;9:1483.

This article originally appeared on Cancer Therapy Advisor