Vol. 3 No. 2 (2023): African Journal of Artificial Intelligence and Sustainable Development
Articles

Explainable AI for Transparent Decision-Making in Autonomous Vehicle Systems

Dr. Yuliya Shylenok
Associate Professor of Applied Mathematics and Informatics, Belarusian State University of Informatics and Radioelectronics (BSUIR)

Published 14-09-2023

How to Cite

[1] Dr. Yuliya Shylenok, “Explainable AI for Transparent Decision-Making in Autonomous Vehicle Systems”, African J. of Artificial Int. and Sust. Dev., vol. 3, no. 2, pp. 320–341, Sep. 2023, Accessed: Sep. 19, 2024. [Online]. Available: https://africansciencegroup.com/index.php/AJAISD/article/view/93

Abstract

One of the ultimate goals of autonomous vehicles (AVs) is to achieve a high level of traffic safety. According to the WHO report Halving Global Road Traffic Deaths and Injuries (2015), about 1.35 million people die and 20 to 50 million are injured in road traffic worldwide every year. AI systems in AVs are expected not only to prevent most of the accidents caused by human error but also to increase the reliability of traffic substantially. Before AVs are allowed to participate in traffic, they are expected to accumulate an enormous number of test kilometers in order to demonstrate their reliability statistically. If, at first, most test vehicles on public roads are greenhorns and at best a small fraction behave like seasoned professionals, accidents will happen all too frequently, because the test kilometers that can be driven in real traffic are necessarily finite. At the same time, examining every single faulty decision arising from a test kilometer, or from a short drive in a test hall, is prohibitively expensive, particularly because AI-driven autonomous systems (ASs) are built on statistical learning models. On top of this, because hardware, software, and safety equipment are tightly integrated, faulty AV drives merit detailed examination; sensor and actuator reliability remains one of the lasting issues. Safety is the overriding principle in AV design: AVs are classified as ASCAS (SAE Level 5), meaning that the AS is the sole driver and caretaker. Hence, the decisions of the AS must be reasonable at every point in time, and their consequences must be predictable. There is an increasing demand from the general public, legislators, and trade associations for transparency, reliability, and safety in AI systems in general, and especially for ASs with their varied applications, such as autonomous driving.
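
The statistical argument about test kilometers can be made concrete with the standard zero-failure confidence bound. The sketch below is not taken from the article; it assumes a Poisson failure model and an illustrative human-driver benchmark of roughly one fatality per 160 million km (the exact figure varies by country and year) to show why purely statistical validation requires hundreds of millions of failure-free kilometers.

```python
import math

def failure_free_km_required(target_rate_per_km: float, confidence: float = 0.95) -> float:
    """Kilometres that must be driven without a single failure to claim,
    at the given confidence level, that the true failure rate is below
    target_rate_per_km (zero-failure Poisson / exponential model)."""
    return -math.log(1.0 - confidence) / target_rate_per_km

# Illustrative benchmark only (assumption, not a figure from the article):
# roughly one road fatality per 160 million km of human driving.
human_fatality_rate_per_km = 1.0 / 160_000_000

km_needed = failure_free_km_required(human_fatality_rate_per_km)
print(f"~{km_needed / 1e6:.0f} million failure-free km needed at 95% confidence")
# -> roughly 479 million failure-free km
```

Varying the target rate or the confidence level in this sketch shows how quickly the required mileage grows, which is the practical motivation for complementing purely statistical testing with transparent, explainable per-decision reasoning.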

