Dr Taha Mansouri T.Mansouri@salford.ac.uk
Lecturer in AI
Prof Sunil Vadera S.Vadera@salford.ac.uk
Supervisor
This thesis addresses a pressing issue in IoT-based fault prediction using sensor data, focusing on the crucial yet challenging aspect of explainability within deep learning models. While deep learning has achieved remarkable advances in fault prediction, its inherent black-box nature makes it difficult to understand the rationale behind its predictions. This lack of transparency impedes the practical implementation and adoption of these models in critical decision-making scenarios. The thesis comprises a comprehensive investigation encapsulated within five published papers spanning over a decade, from 2011 to 2023. These papers collectively contribute to the domain of Explainable Artificial Intelligence (XAI), exploring approaches aimed at shedding light on the inner workings of complex deep learning models. The earlier papers serve as building blocks, laying the groundwork for fundamental concepts explored and expanded upon in subsequent submissions. Each paper makes a distinct contribution to the field of AI: the introduction of a novel evolutionary algorithm, the application of Fuzzy Cognitive Maps to failure and fault modelling, an evolutionary algorithm for training Fuzzy Cognitive Maps, an explainable deep learning model for fault prediction, and the use of insights derived from the preceding research to explain the inner processes of deep learning models. Through a meticulous analysis of these publications, the thesis addresses the fundamental research questions posed, offering insights into overcoming the opacity of deep learning models and paving the way for more transparent and interpretable AI models, particularly in fault prediction using IoT sensor data.
Thesis Type | Thesis
---|---
Deposit Date | Jan 5, 2024
Keywords | Deep Learning; Explainable AI; Fault Prediction; IoT; Predictive Preventive Maintenance
Award Date | Jan 26, 2024
This file is under embargo due to copyright reasons.
Contact T.Mansouri@salford.ac.uk to request a copy for personal use.
Explainable fault prediction using learning fuzzy cognitive maps
(2023)
Journal Article
A deep explainable model for fault prediction using IoT sensors
(2022)
Journal Article
Learning Fuzzy Cognitive Maps with modified asexual reproduction optimisation algorithm
(2018)
Journal Article
ARO: a new model-free optimization algorithm inspired from asexual reproduction
(2010)
Journal Article