Published 14-05-2023
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract
We describe a deep neural network that optimizes route selection for the AUTMOTH. The proposed DRLNDT network optimizes the route by means of a decision transformer, which takes a route graph as input and outputs tiers of future action sequences. Because of these tiered action-sequence outputs, the mechanism outperforms a plain transformer for route selection and leaves the DRL task learner with a simpler task. Furthermore, we use deep reinforcement learning to train the DRL task learner so that the actions of the next time step can be predicted efficiently. We train the network in a graph-based environment, dividing the complete environment into grids to reduce training computation time.
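The abstract does not give implementation details, so the following is only a minimal sketch of the grid-discretized training environment it describes. All names here (GridRouteEnv, the four-move action set, the reward values) are illustrative assumptions, not the authors' actual setup; the intent is just to show how dividing the map into grid cells keeps the state space small and how each rollout yields the (state, action, reward) sequences a decision-transformer-style learner could be trained on.

```python
# Illustrative sketch only: a toy grid-discretized route environment,
# not the DRLNDT authors' implementation.
from dataclasses import dataclass
import random


@dataclass
class GridRouteEnv:
    """Toy route-selection environment on a discretized grid.

    The continuous map is divided into width x height cells, so the DRL
    learner reasons over cell indices only, which keeps the state space
    (and hence training computation) small.
    """
    width: int = 10
    height: int = 10
    goal: tuple = (9, 9)

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right (clamped at the borders).
        dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][action]
        x = min(max(self.pos[0] + dx, 0), self.width - 1)
        y = min(max(self.pos[1] + dy, 0), self.height - 1)
        self.pos = (x, y)
        done = self.pos == self.goal
        # Step penalty encourages short routes; bonus on reaching the goal.
        reward = 10.0 if done else -1.0
        return self.pos, reward, done


# Roll out one random episode; the resulting (state, action, reward)
# trajectory is the kind of sequence a decision-transformer-style policy
# would consume when predicting the next action tier.
env = GridRouteEnv()
state, trajectory, done = env.reset(), [], False
while not done:
    action = random.randrange(4)
    next_state, reward, done = env.step(action)
    trajectory.append((state, action, reward))
    state = next_state
print(f"episode length: {len(trajectory)} steps")
```

The design point the sketch illustrates is that discretizing the map into cells bounds the number of distinct states, so trajectories stay short and the learner's training cost drops, at the price of coarser routes.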