Vol. 4 No. 1 (2024): African Journal of Artificial Intelligence and Sustainable Development

A Comprehensive Survey of Explainable Artificial Intelligence in Machine Learning Models

Dr. Steve Lockey
Professor, University of Queensland, Gatton Campus, Gatton, QLD, Australia
Prof. Chien-Ming
Professor, University of Queensland, Gatton Campus, Gatton, QLD, Australia
Dr. Emily Chen
Professor, University of Queensland, Gatton Campus, Gatton, QLD, Australia
Dr. Hassan Khosravi
Professor, University of Queensland, Gatton Campus, Gatton, QLD, Australia
Dr. Nell Baghaei
Professor, University of Queensland, Gatton Campus, Gatton, QLD, Australia

Published 20-04-2024

Keywords

  • Explainable Artificial Intelligence
  • Explainable AI
  • Machine Learning

How to Cite

[1] S. Lockey, Chien-Ming, E. Chen, H. Khosravi, and N. Baghaei, “A Comprehensive Survey of Explainable Artificial Intelligence in Machine Learning Models,” African J. of Artificial Int. and Sust. Dev., vol. 4, no. 1, pp. 79–91, Apr. 2024. Accessed: Nov. 24, 2024. [Online]. Available: https://africansciencegroup.com/index.php/AJAISD/article/view/19

Abstract

Stakeholders with diverse interests, including scientists, regulatory bodies such as the European Union, and researchers in artificial intelligence, are re-emphasizing the importance of model interpretability in the face of the increasing proliferation and social impact of AI systems. The concern is that AI systems can behave in an overly deterministic and overconfident manner, producing misleading outputs in domains that demand critical expertise. Interpretability is the capability of humans to understand the results and decisions of a model. It matters most when computational models make high-stakes decisions (for instance, in health care), where acknowledging uncertainty is essential, or when the model's modus operandi must be discussed with different stakeholders. For example, overconfident models may make erroneous predictions that lead to inaccurate diagnoses in health care applications; in fraud detection, they may overlook the profile of genuinely fraudulent behavior, missing the indicators that would identify the sectors at highest risk.
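As a minimal illustration of the kind of post-hoc, model-agnostic explanation technique such surveys cover, the following Python sketch uses permutation importance from scikit-learn to rank the features an opaque classifier relies on. The dataset, model, and parameter choices here are illustrative assumptions, not taken from the survey itself.

```python
# Illustrative sketch (not from the survey): explaining a black-box classifier
# with permutation importance, a model-agnostic XAI technique.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy clinical-style dataset chosen for illustration only.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit an opaque ensemble model whose individual predictions are hard to trace.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the score drop;
# large drops mark features the model actually depends on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.4f} "
          f"+/- {result.importances_std[i]:.4f}")
```

A large mean drop in accuracy when a feature is permuted indicates the model leans heavily on it, while near-zero drops flag features whose apparent influence may be spurious; this is the sort of human-understandable summary of model behavior the abstract refers to as interpretability.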

