AN ADVANCED APPROACH TO PREDICT CARDIOVASCULAR DISEASES USING DEEP LEARNING MODEL
Abstract
Cardiovascular disease is one of the most common chronic disorders that people
face today, largely due to factors such as lifestyle and dietary choices. Spending on
medical and public health research is a major economic priority worldwide, especially
for disease prediction. Echocardiography is the standard imaging method for evaluating
the heart's chambers during routine screenings. However, this imaging technique is not
always conclusive, so it is important to employ machine learning models that can predict
cardiovascular diseases accurately with minimal cost and time. The goal of this research
work is to identify and create deep learning models that can reliably detect
cardiovascular disease in its earliest stages. This research work aims to develop a health
informatics system for the classification and segmentation of heart disorders using
machine learning and deep learning methods, with a focus on ultrasonic images. Two
models, the AWMYolov4+ method and the KSDSC method, are proposed in this work.
Both models train a deep learning architecture on an ultrasonic image dataset to improve
the classification and segmentation of heart disease.
The KSDSC deep learning model is composed of an input layer, hidden layers,
and an output layer. Initially, the input layer collects ultrasound images that serve as the
data source for the model. Once the ultrasound image has been cropped, the
Kushner-Stratonovich filter is applied as a preliminary step to further reduce noise.
Once this initial stage is completed, the preprocessed image is delivered to the next
stage. In the second stage, the preprocessed images are segmented into a set of
sub-images using the Sorensen-Dice image segmentation method. The Haar wavelet
transformation is then applied to the segmented images to extract numerous features.
The output layer receives the extracted features and applies the softmax activation
function to map them to disease classes, thereby predicting heart disease. Experimental
evaluations of prediction accuracy, false positive rate, and prediction time are carried
out on a variety of ultrasound images. Both qualitative and quantitative evidence shows
that the proposed KSDSC model outperforms conventional approaches.
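A minimal sketch of the KSDSC stages described above is given below, assuming Python with NumPy, SciPy, and PyWavelets. The Kushner-Stratonovich filtering step is approximated here by a simple Gaussian-smoothing stand-in, the Sorensen-Dice step is shown only as the Dice similarity used to compare candidate sub-image masks, and the crop margin and helper names are illustrative assumptions rather than the thesis' exact formulation.

```python
import numpy as np
import pywt                              # PyWavelets, for the Haar transform
from scipy.ndimage import gaussian_filter

def preprocess(image: np.ndarray, margin: int = 10) -> np.ndarray:
    """Crop the scan border, then denoise.
    Gaussian smoothing is a stand-in for the Kushner-Stratonovich filter."""
    cropped = image[margin:-margin, margin:-margin].astype(float)
    return gaussian_filter(cropped, sigma=1.0)

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Sorensen-Dice similarity between two binary masks (segmentation criterion)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum() + 1e-8)

def haar_features(sub_image: np.ndarray) -> np.ndarray:
    """Single-level 2-D Haar wavelet transform, flattened into a feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(sub_image, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

def softmax(logits: np.ndarray) -> np.ndarray:
    """Output-layer activation mapping class scores to disease probabilities."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()
```

In the full model, the feature vectors produced by the Haar step feed the hidden layers, and the softmax activation is applied to their outputs to select the predicted disease class.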
The second model, AWMYolov4+, combines the Adaptive Weighted Mean
(AWM) filter with the You Only Look Once version 4 plus (Yolov4+) model. It is
developed in three phases: the first phase involves pre-processing the input data
given to the model; the second phase splits noise handling in the images into two
processing paths, namely noise detection and noise removal; and the third phase
combines the results of the two processes. Noisy images are identified and cleaned by
this module. Yolov4+, a well-tested model built on the Darknet-53 backbone with Mish
activation, is used in the next phase of the CVD classification procedure. It employs
two new classification and segmentation models to detect cardiomyopathy and heart
valve disease, resulting in a cutting-edge deep neural network methodology (Yolov4+
with Mish activation). The automatic
identification of heart chambers in echocardiogram images is based on discriminative
deep-learning algorithms. Precision, recall, and F1-score are compared with alternative
methods; both the model loss value and the prediction time of the proposed method are
significantly lower. For region-segmentation purposes, the proposed AWMYolov4+
model outperforms previous classifiers, with a decrease in cost and an increase in
accuracy. Area Under the Curve (AUC) values show 95.06% accuracy for the ultrasound
image dataset using the proposed AWMYolov4+ model, with a false positive rate of less
than 7%, a short prediction time, and high sensitivity. CAMUS images are used because
they are of higher quality and accuracy than those in the original dataset.
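As a rough illustration of the two building blocks named above, the Python/NumPy sketch below shows the Mish activation used in Yolov4+ and a simplified adaptive mean denoising step. The impulse-noise detection rule, the window-growth schedule, and the use of a plain (unweighted) mean over noise-free neighbours are illustrative assumptions, not the thesis' exact AWM formulation.

```python
import numpy as np

def mish(x: np.ndarray) -> np.ndarray:
    """Mish activation: x * tanh(softplus(x))."""
    return x * np.tanh(np.logaddexp(0.0, x))    # logaddexp(0, x) == softplus(x)

def awm_filter(image: np.ndarray, max_window: int = 7) -> np.ndarray:
    """Detect pixels that look like impulse noise (extreme grey levels) and
    replace each with the mean of noise-free neighbours in a growing window."""
    pad = max_window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            if image[i, j] not in (0, 255):         # noise-detection path
                continue                            # keep noise-free pixels as-is
            for k in range(3, max_window + 1, 2):   # grow the window until usable
                r = k // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                clean = win[(win != 0) & (win != 255)]
                if clean.size:                      # noise-removal path
                    out[i, j] = clean.mean()
                    break
    return out
```

In the full AWMYolov4+ pipeline, the filtered image would then be passed to the Yolov4+ detector (Darknet-53 backbone with Mish activation) for chamber localisation and CVD classification.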