
Yelena Yesha, PhD


Chief Innovation Officer

UM Institute for Data Science and Computing

Dr. Yelena Yesha is a Visiting Distinguished Professor in the Department of Computer Science at the University of Miami, Chief Innovation Officer of the University of Miami Institute for Data Science and Computing (IDSC), and a tenured Distinguished University Professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County. She is also the Director of the National Science Foundation Center for Accelerated Real Time Analytics (CARTA). She received her B.Sc. degrees in Computer Science and in Applied Mathematics from York University, Toronto, Canada, in 1984, and her M.Sc. and Ph.D. degrees in Computer Science from The Ohio State University in 1986 and 1989, respectively. She has published 11 books as author or editor and more than 200 papers in refereed journals and conference proceedings, and has been awarded external funding totaling more than 40 million dollars. Dr. Yesha is currently working with leading industrial companies and government agencies on innovative technologies in the areas of cybersecurity and big data analytics, with applications to electronic commerce, climate change, and health. Dr. Yesha is a fellow of the IBM Centre for Advanced Studies.

Data Science for Medical Imaging

Artificial intelligence (AI) has great potential to augment the clinician as a virtual radiology assistant (vRA) by enriching information and providing clinical decision support. Deep learning is a type of AI that has shown promising performance on Computer Aided Diagnosis (CAD) tasks. A current barrier to implementing deep learning for clinical CAD tasks in radiology is that it requires a training set that is representative and as large as possible in order to generalize appropriately and achieve high-accuracy predictions. We present an Active Semi-supervised Expectation Maximization (ASEM) learning model for training a Convolutional Neural Network (CNN) for lung cancer screening using Computed Tomography (CT) imaging examinations. Our learning model is novel in that it combines semi-supervised learning via the Expectation-Maximization (EM) algorithm with active learning via Bayesian experimental design for use with 3D CNNs for lung cancer screening. ASEM simultaneously infers image labels as a latent variable while predicting which images, if additionally labeled, are likely to improve classification accuracy.

The performance of this model has been evaluated using three publicly available chest CT datasets: Kaggle2017, NLST, and LIDC-IDRI. Our experiments showed that ASEM-CAD can identify suspicious lung nodules and detect lung cancer cases with accuracies of 92% (Kaggle2017), 93% (NLST), and 73% (LIDC-IDRI), and Areas Under the Curve (AUC) of 0.94 (Kaggle2017), 0.88 (NLST), and 0.81 (LIDC-IDRI). These performance numbers are comparable to fully supervised training, but use only slightly more than 50% of the training data labels.

Computed Tomography has become an essential tool in diagnosing numerous diseases, especially cancer. CT scans are vital for identifying the presence of a cancerous tumor and determining whether the cancer has metastasized. The scans are generated by exposing the patient to ionizing radiation, which increases the patient's risk of cancer. In patients who already have cancer, repeated scans may aggravate the condition and may also contribute to spreading cancerous cells to nearby tissues. Hence, it is desirable to conduct CT examinations at a low radiation dose, which reduces the patient's exposure to ionizing radiation. However, a lower dose results in poorer image quality, which lowers diagnostic accuracy. Few optimal noise measurement and prediction methods currently exist for assessing the diagnostic quality of such images. We propose an automated approach to estimate the global noise of a CT scan associated with a given tissue.
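To make the ASEM training procedure concrete, the Python sketch below illustrates how an EM-style pseudo-labeling step can alternate with an active label query step. It is a minimal sketch under simplifying assumptions, not the authors' implementation: the names train_classifier and oracle are placeholders for user-supplied components, and the acquisition score shown is plain predictive entropy, whereas the actual system uses a 3D CNN and Bayesian experimental design to select images for labeling.

# Minimal sketch of an active semi-supervised EM loop (illustrative only).
# train_classifier(X, y, weights) -> model exposing predict_proba(X) is a
# placeholder for the 3D CNN training routine; oracle(images) -> labels stands
# in for the radiologist who labels the queried scans.
import numpy as np

def predictive_entropy(p, eps=1e-12):
    # Stand-in acquisition score; the paper uses Bayesian experimental design.
    return -np.sum(p * np.log(p + eps), axis=1)

def asem_loop(x_labeled, y_labeled, x_unlabeled, train_classifier,
              oracle=None, n_rounds=5, query_size=10):
    x_u = np.asarray(x_unlabeled).copy()
    model = None
    for _ in range(n_rounds):
        # Fit on the currently labeled data.
        model = train_classifier(x_labeled, y_labeled, weights=None)
        if len(x_u) == 0:
            break
        # E-step: infer latent labels for unlabeled images as soft pseudo-labels.
        probs = model.predict_proba(x_u)
        pseudo = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        # M-step: refit on labeled data plus confidence-weighted pseudo-labels.
        model = train_classifier(
            np.concatenate([x_labeled, x_u]),
            np.concatenate([y_labeled, pseudo]),
            weights=np.concatenate([np.ones(len(y_labeled)), conf]),
        )
        # Active step: query labels for the unlabeled images the model is least sure of.
        if oracle is not None:
            query = np.argsort(-predictive_entropy(model.predict_proba(x_u)))[:query_size]
            x_labeled = np.concatenate([x_labeled, x_u[query]])
            y_labeled = np.concatenate([y_labeled, oracle(x_u[query])])
            x_u = np.delete(x_u, query, axis=0)
    return model

In a full implementation, the active step would rank candidates by the expected improvement in classification accuracy under the Bayesian design criterion rather than by raw predictive entropy, which is what lets ASEM reach near fully supervised performance with roughly half the training labels.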
