ANALYSIS AND DESIGN OF SOFTWARE RELIABILITY MODEL FOR BIG FAULT DATA USING SOFT COMPUTING
Date: 2023-05
Author: SHARMA, SHALINI
Supervisor: Dr. Naresh Kumar
Co-Supervisor: Dr. Kuldeep Singh Kaswan
Abstract
With technological advancement, software has become an integral part of our daily
activities, ranging from everyday mobile applications to highly complicated medical
devices used in surgeries. Software reliability plays a crucial part in the proper
functioning of software in the field and in rendering services to the customer. It is
therefore of utmost importance to eliminate as many faults in the software as possible
before its release. The need to deliver a high-quality product is a major concern in
the industry: product quality largely determines success in the market, and it can be
identified with reliability.
Technological advancement has made system development complex and costly, so the
criteria of security, development cost, and reliability must be addressed during the
development phase to ensure a defect-free, cost-effective, and reliable final product.
Moreover, reliability assessment is required for upgraded versions of existing
systems: deployed systems are continuously monitored for possible faults, and the new
components added to address the resulting issues in turn require further upgrades. As
the complexity of a system increases, so do its functionality and capabilities; but
since reliability is inversely related to the degree of software complexity, achieving
a balance between complexity and reliability becomes difficult.
Software companies carry out rigorous testing to remove probable causes of problems
that hinder the smooth functioning or reliability of software, yet even rigorous
testing cannot remove all faults. Developers therefore use software reliability models
for reliability estimation, either selecting or developing a model suited to their
environmental conditions. Software reliability growth models (SRGMs) assess software
quality through the phenomenon of reliability. Although reliability models are well
suited to measuring and predicting reliability, it is challenging to find an optimal
model that works well under all environmental conditions and on different types of
datasets. Reliability models are abundant in the literature; still, no model can
depict reality exactly, since there is always uncertainty in determining the model
parameters. Model selection also depends on the evaluated parameter values, the
comparison criteria chosen, and the fault data set; because parameter evaluation, and
hence a model's capability, is tied to a particular data set, predictions become less
accurate. With the extensive usage of Big Data, a distributed, high-capacity storage
system with a fast-access mechanism is required to handle its high-velocity,
high-volume, and high-variety characteristics, and malfunctioning of this hardware
can introduce errors into software. Similarly, unfamiliarity with specific software
and the sheer amount of data to handle give rise to software errors through human
negligence. Several reliability models have
been developed to determine the reliability of a software product under the
assumption that faults result only from incorrect specifications or errors in code;
they do not consider faults induced in software by external factors. A great deal of
research has been carried out on combining existing models into new hybrid models.
Parameter evaluation of such hybrid models from their mathematical equations is very
difficult: their non-linearity and complexity make statistical parameter evaluation a
challenging task. Software reliability modelling rests on such mathematical models,
which in turn depend on accurate prediction and parameter optimization based on
experimental data. Thus, the motivation of this work is to develop a hybrid model,
built from a combination of NHPP models, that handles not only pure software errors
but also errors arising in software from hardware malfunction and manual
intervention, without such restrictive assumptions.
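To make the combination idea concrete, a minimal sketch follows. It is only an illustration of weighting NHPP mean value functions for software, hardware-induced, and user-induced faults; the component models (Goel-Okumoto and Yamada delayed S-shaped), the weights, and all parameter values are assumptions for exposition, not the formulations actually developed in this thesis.

```python
import math

def m_go(t, a, b):
    # Goel-Okumoto (exponential) NHPP mean value function:
    # expected cumulative faults detected by time t.
    return a * (1.0 - math.exp(-b * t))

def m_ds(t, a, b):
    # Yamada delayed S-shaped NHPP mean value function.
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def m_hybrid(t, w_sw, w_hw, w_user, p_sw, p_hw, p_user):
    # Illustrative hybrid: a weighted sum of component mean value
    # functions, one component each for pure software faults,
    # hardware-induced faults, and user-induced faults.
    return (w_sw * m_go(t, *p_sw)
            + w_hw * m_ds(t, *p_hw)
            + w_user * m_go(t, *p_user))

# Example evaluation with assumed weights and parameters.
example = m_hybrid(10.0, 0.6, 0.3, 0.1,
                   (100.0, 0.1), (30.0, 0.2), (10.0, 0.05))
```

Because each component is non-decreasing in t, the hybrid mean value function is also non-decreasing and saturates at the weighted sum of the components' total-fault parameters.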
This research aims to develop a hybrid model that, apart from pure software
errors, also takes into account errors induced in software by environmental factors.
A direct modification of an NHPP model that successfully handles software errors is
to combine it with other NHPP models that tackle induced errors resulting from
hardware and users. We developed 33 hybrid models by combining NHPP models in various
combinations according to their characteristics. To assess these models, we
formulated an estimation function and a ranking methodology to select the best model
based on estimation accuracy. The developed hybrid models were compared with existing
traditional models using thirteen comparison criteria. Lastly, soft computing
techniques, namely the Genetic Algorithm, Simulated Annealing, and Particle Swarm
Optimization, were utilized for parameter evaluation and optimization.
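As an illustration of how one such technique can estimate NHPP parameters, the sketch below fits the two parameters of a Goel-Okumoto model to synthetic fault data with a minimal particle swarm optimizer. The swarm size, inertia, acceleration coefficients, search bounds, and the synthetic data are all assumed values for demonstration, not the settings or datasets used in this thesis.

```python
import math
import random

def m_go(t, a, b):
    # Goel-Okumoto mean value function: expected cumulative faults by time t.
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    # Sum of squared errors between the model and observed cumulative faults.
    a, b = params
    return sum((m_go(t, a, b) - y) ** 2 for t, y in data)

def pso_fit(data, bounds, n_particles=40, iters=300, seed=0):
    # Minimal PSO: each particle remembers its personal best position,
    # the swarm tracks a global best, and velocities blend inertia with
    # cognitive and social attraction terms.
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [sse(p, data) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # assumed inertia / acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = sse(pos[i], data)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Synthetic cumulative-fault data generated from known parameters a=100, b=0.1.
data = [(t, m_go(t, 100.0, 0.1)) for t in range(1, 21)]
(a_hat, b_hat), err = pso_fit(data, bounds=[(1.0, 500.0), (0.001, 1.0)])
```

On this noiseless synthetic data the swarm should recover parameters close to the generating values; on real fault data, the same loop would minimize the fit error of whichever hybrid mean value function is being calibrated.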