
AUTHOR(S):

Ashwani Kumar Aggarwal, Ajay Pal Singh Chauhan

 

TITLE

Robust Feature Extraction from Omnidirectional Outdoor Images for Computer Vision Applications


ABSTRACT

Robust feature extraction from digital images is a challenging task in many computer vision applications. Several methods are available for extracting features based on texture, shape, color, and geometry from images captured with a conventional camera. In outdoor environments, feature extraction becomes increasingly difficult because of dynamic scenes, outliers, occlusions, and changing illumination conditions. Omnidirectional cameras are becoming popular for capturing outdoor images because they gather scene information over a wide field of view. This paper addresses these challenges with a methodology that integrates SIFT, deep learning-based feature maps, and accurate feature detection and descriptor matching under outdoor conditions. Robust feature extraction methods that account for how pixels are formed in omnidirectional images are used to extract features from omnidirectional outdoor images. Such methods are useful in applications such as intelligent transportation systems, mobile robots, and location-based services, and the findings have particular implications for intelligent transport systems. The paper thus presents an approach towards enhancing the robustness of image feature detection in dynamic environments.
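For illustration only, and not the authors' actual pipeline, the sketch below shows a conventional SIFT detection and descriptor matching step with OpenCV, using Lowe's ratio test to discard likely outlier matches; the image paths and the ratio threshold are placeholder assumptions.

```python
import cv2

def match_sift_features(img_path1, img_path2, ratio=0.75):
    """Detect SIFT keypoints in two images and keep descriptor matches
    that pass Lowe's ratio test, rejecting likely outliers."""
    img1 = cv2.imread(img_path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path2, cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute 128-dimensional SIFT descriptors
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matcher with k-nearest neighbours for the ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(des1, des2, k=2)

    # Keep a match only if it is clearly better than the second-best candidate
    good = [m for m, n in knn_matches if m.distance < ratio * n.distance]
    return kp1, kp2, good

# Example usage with hypothetical omnidirectional image files:
# kp1, kp2, matches = match_sift_features("omni_frame_1.png", "omni_frame_2.png")
```

A practical pipeline for omnidirectional imagery would additionally account for the camera's projection model when comparing descriptors, as the paper's methods do by considering pixel formation in the omnidirectional images.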

KEYWORDS

Omnidirectional image, feature descriptor, image matching, outliers, illumination variation, feature selection, noise removal

 

Cite this paper

Ashwani Kumar Aggarwal, Ajay Pal Singh Chauhan. (2025) Robust Feature Extraction from Omnidirectional Outdoor Images for Computer Vision Applications. International Journal of Instrumentation and Measurement, 10, 8-13

 

Copyright © 2024 Author(s) retain the copyright of this article.
This article is published under the terms of the Creative Commons Attribution License 4.0