Project: #IITM-251101-192
AI-Driven Smart Sensing Techniques for Safe and Efficient Autonomous Driving in Complex Urban Environments
The rapid advancement of autonomous vehicle (AV) technologies has transformed the landscape of intelligent transportation systems, offering the promise of safer, more efficient, and sustainable urban mobility. Despite significant progress in perception and navigation algorithms, achieving robust autonomy in complex urban environments remains a critical challenge. These environments are characterized by high traffic density, unpredictable pedestrian behavior, varying weather and lighting conditions, and frequent signal interference. Traditional sensing and perception systems, often relying on a single modality such as vision or LiDAR, struggle to maintain reliable performance under such dynamic and uncertain conditions. As a result, the need for adaptive, intelligent, and context-aware sensing frameworks has become increasingly evident for ensuring the safety and reliability of AV operations.
Existing research has primarily focused on improving perception accuracy or developing individual sensor technologies in isolation. However, there is a notable research gap in integrating multi-modal sensing with artificial intelligence (AI) for adaptive decision-making under real-world variability. Current sensor fusion techniques often operate on static models that fail to adjust dynamically to contextual changes, leading to performance degradation in adverse or unexpected conditions. Furthermore, most perception systems lack the ability to optimize sensing strategies based on environmental feedback, limiting their efficiency in resource-constrained or high-noise scenarios.
This research aims to design and develop AI-driven smart sensing techniques that enhance the safety and efficiency of autonomous vehicles operating in complex urban environments. The proposed framework will integrate multi-modal sensor fusion of LiDAR, radar, and vision data with deep learning algorithms for robust perception, adaptive sensing, and intelligent decision-making. Emphasis will be placed on developing context-aware models capable of dynamically adjusting sensing parameters in response to traffic density, weather conditions, and environmental complexity.
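To make the intended fusion architecture concrete, the following is a minimal, illustrative PyTorch sketch of a late-fusion perception head that combines per-modality feature encoders for LiDAR, radar, and camera inputs. All module names, feature dimensions, and the choice of late fusion are assumptions made for illustration only; they do not represent the final design, and the upstream per-modality backbones are out of scope here.

```python
# Illustrative late-fusion perception sketch (all dimensions are assumed placeholders).
import torch
import torch.nn as nn

class LateFusionPerception(nn.Module):
    """Fuses per-modality feature vectors into a shared representation.

    Assumes each modality has already been reduced to a fixed-length feature
    vector by an upstream backbone (e.g. a point-cloud or image encoder).
    """
    def __init__(self, lidar_dim=256, radar_dim=64, camera_dim=512,
                 fused_dim=256, num_classes=10):
        super().__init__()
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, fused_dim), nn.ReLU())
        self.radar_enc = nn.Sequential(nn.Linear(radar_dim, fused_dim), nn.ReLU())
        self.camera_enc = nn.Sequential(nn.Linear(camera_dim, fused_dim), nn.ReLU())
        self.fusion = nn.Sequential(
            nn.Linear(3 * fused_dim, fused_dim),
            nn.ReLU(),
            nn.Linear(fused_dim, num_classes),  # per-frame object-class logits
        )

    def forward(self, lidar_feat, radar_feat, camera_feat):
        # Concatenate the projected modality features and classify.
        fused = torch.cat([
            self.lidar_enc(lidar_feat),
            self.radar_enc(radar_feat),
            self.camera_enc(camera_feat),
        ], dim=-1)
        return self.fusion(fused)

# Example forward pass with random stand-in features (batch of 4 frames).
model = LateFusionPerception()
logits = model(torch.randn(4, 256), torch.randn(4, 64), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])
```

Late fusion is used here purely because it keeps the sketch self-contained; intermediate or attention-based fusion would be evaluated as part of objective (1).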
The specific objectives of this study are: (1) to develop a multi-modal sensor fusion architecture leveraging deep neural networks for improved object detection and situational awareness; (2) to design adaptive sensing mechanisms that optimize sensor utilization based on environmental feedback (a minimal sketch of such a mechanism follows below); (3) to implement AI-based decision-making algorithms for real-time path planning and collision avoidance; and (4) to evaluate the proposed framework in simulated and real-world urban driving scenarios. The research will contribute to the development of next-generation autonomous systems capable of reliable, efficient, and context-aware operation in densely populated urban settings.
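As a preliminary illustration of objective (2), the sketch below shows one possible rule-based baseline in which environmental feedback is mapped to per-sensor operating parameters. The field names, thresholds, and parameter values are illustrative assumptions only; in the proposed framework they would be replaced by learned, context-aware policies.

```python
# Hypothetical adaptive-sensing policy: maps simple environmental feedback to
# per-sensor operating parameters. Thresholds and values are illustrative
# placeholders, not calibrated quantities.
from dataclasses import dataclass

@dataclass
class EnvironmentContext:
    traffic_density: float   # vehicles per 100 m of roadway, estimated upstream
    visibility: float        # 0.0 (none) .. 1.0 (clear), e.g. from camera statistics
    precipitation: bool      # True if rain or snow is detected

@dataclass
class SensingConfig:
    lidar_hz: float          # LiDAR sweep rate
    camera_weight: float     # relative trust in camera features during fusion
    radar_weight: float      # relative trust in radar features during fusion

def adapt_sensing(ctx: EnvironmentContext) -> SensingConfig:
    """Rule-based baseline; a learned policy could replace these heuristics."""
    # Dense traffic -> sample the scene more often.
    lidar_hz = 20.0 if ctx.traffic_density > 30 else 10.0
    # Poor visibility or precipitation -> down-weight vision, lean on radar.
    if ctx.precipitation or ctx.visibility < 0.4:
        camera_weight, radar_weight = 0.3, 0.7
    else:
        camera_weight, radar_weight = 0.7, 0.3
    return SensingConfig(lidar_hz, camera_weight, radar_weight)

# Example: heavy rain at a busy intersection.
print(adapt_sensing(EnvironmentContext(traffic_density=45, visibility=0.2, precipitation=True)))
```

The same context signals could feed the decision-making layer of objective (3), so that path planning and collision avoidance operate on fusion outputs whose reliability is already weighted by the sensing policy.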