A Review: Obstacle Perception Based on Panoramic Vision in Low-altitude Rotorcraft UAVs
Keywords:
Fisheye camera, Obstacle detection, Panoramic vision, Review, Low-altitude UAV

Abstract
In recent years, low-altitude rotorcraft Unmanned Aerial Vehicles (UAVs) have shown great potential in tasks such as aerial mapping, search and rescue, and plant protection. In unknown environments, UAVs risk collision with unforeseen obstacles, so the ability to perceive and recognize obstacles in all directions is crucial for safe flight, and a variety of sensors have been employed for this purpose. Among these, panoramic vision-based obstacle perception systems offer significant advantages. This paper focuses on key technologies for panoramic vision-based obstacle detection in UAVs. It reviews the current state of research and applications worldwide, covering hardware platforms, key algorithms (model correction, distortion correction, feature matching, and image stitching), panoramic imaging, and omnidirectional obstacle detection. The paper also identifies major bottlenecks and outlines future development directions. Building on this review, we will continue our research on autonomous omnidirectional obstacle detection and avoidance.

References
[1] H. Y. Lee, H. W. Ho and Y. Zhou, Deep learning-based monocular obstacle avoidance for unmanned aerial vehicle navigation in tree plantations: faster region-based convolutional neural network approach, Journal of Intelligent & Robotic Systems, 101(5), 2021, 1-27.
[2] Y. Yu, W. Tingting, C. Long and Z. Weiwei, Stereo vision-based obstacle avoidance strategy for quadcopter UAV, 30th Chinese Control and Decision Conference, Shenyang, China, 2018, 490-494.
[3] T. Hinzmann, C. Cadena, J. Nieto and R. Siegwart, Flexible trinocular: non-rigid multi-camera-IMU dense reconstruction for UAV navigation and mapping, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, Macau, China, 2019, 1137-1142.
[4] F. Liang, S. Kevin, K. Kunze and Y. S. Pai, PanoFlex: Adaptive panoramic vision to accommodate 360 field-of-view for humans, Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, Parramatta, Australia, 2019, 1-2.
[5] M. Nieuwenhuisen, D. Droeschel, J. Schneider, D. Holz, T. Läbe and S. Behnke, Multimodal obstacle detection and collision avoidance for micro aerial vehicles, 2013 European Conference on Mobile Robots, Barcelona, Spain, 2013, 7-12.
[6] R. Aggarwal, A. Vohra and A. M. Namboodiri, Panoramic stereo videos with a single camera, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, 3755-3763.
[7] H. Z. Yuan, B. P. Wang, J. Zhang and H. Li, A novel method for geometric correction of multi-cameras in panoramic video systems, 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, 2010, 248-251.
[8] O. Zia, J. H. Kim, K. Han and J. W. Lee, 360 panorama generation using drone mounted fisheye cameras, 2019 IEEE International Conference on Consumer Electronics, Las Vegas, USA, 2019, 1-3.
[9] B. Akdemir, A. N. Belbachir and L. M. Svendsen, Real-time vehicle localization and tracking using monocular panomorph panoramic vision, 2018 24th International Conference on Pattern Recognition, Beijing, China, 2018, 2350-2355.
[10] Y. Zhang, X. Xu, N. Zhang and Y. Lv, A semantic SLAM system for catadioptric panoramic cameras in dynamic environments, Sensors, 21(17), 2021, 5889.
[11] I. Stamenov, A. Arianpour, S. J. Olivas, I. P. Agurok, A. R. Johnson, R. A. Stack, R. L. Morrison and J. E. Ford, Panoramic monocentric imaging using fiber-coupled focal planes, Optics Express, 22(26), 2014, 31708-31721.
[12] C. Geyer and K. Daniilidis, Catadioptric camera calibration, Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 1999, 398-404.
[13] S. Yi and N. Ahuja, An omnidirectional stereo vision system using a single camera, 18th International Conference on Pattern Recognition, Hong Kong, China, 2006, 861-865.
[14] K. Tanaka and S. Tachi, Tornado: Omnistereo video imaging with rotating optics, IEEE Transactions on Visualization and Computer Graphics, 11(6), 2005, 614-625.
[15] C. Richardt, Y. Pritch, H. Zimmer and A. Sorkine-Hornung, Megastereo: Constructing high-resolution stereo panoramas, 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013, 1256-1263.
[16] N. K. El Abbadi, S. A. Al Hassani and A. H. Abdulkhaleq, A review over panoramic image stitching techniques, Journal of Physics: Conference Series, 1999(1), 2021, 012115.
[17] C. Arth, M. Klopschitz, G. Reitmayr and D. Schmalstieg, Real-time self-localization from panoramic images on mobile devices, 2011 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 2011, 37-46.
[18] S. Peleg, M. Ben-Ezra and Y. Pritch, Omnistereo: Panoramic stereo imaging, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 2001, 279-290.
[19] W. Ye, K. Yu, Y. Yu and J. Li, Logical stitching: A panoramic image stitching method based on color calibration box, 2018 14th IEEE International Conference on Signal Processing, Beijing, China, 2018, 1139-1143.
[20] A. S. Amini, M. Varshosaz and M. Saadatseresht, Evaluating a new stereo panorama system based on stereo camera, International Journal of Scientific Research in Inventions and New Ideas, 2(1), 2014, 1-10.
[21] Y. Pritch, M. Ben-Ezra and S. Peleg, Optics for omnistereo imaging, Foundations of Image Understanding, 2011, 447-467.
[22] PanoramaStudio, https://www.tshsoft.com/en/index, 2025 (accessed 19.01.2025).
[23] PTGui, https://ptgui.com/, 2025 (accessed 19.01.2025).
[24] Hugin-Panorama photo stitcher, https://hugin.sourceforge.io/, 2025 (accessed 19.01.2025).
[25] Y. Hou, L. P. Niu, Y. M. Zhao and S. Z. Lan, Fisheye images correction based on different angle of views, 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference, Chongqing, China, 2020, 854-856.
[26] G. Krishnan and S. K. Nayar, Cata-fisheye camera for panoramic imaging, 2008 IEEE Workshop on Applications of Computer Vision, Copper Mountain, CO, USA, 2008.
[27] Aerial Pro 360, https://diydrones.com/members/AerialPro360, 2025 (accessed 19.01.2025).
[28] RICOH360, https://www.ricoh360.com/theta/, 2025 (accessed 19.01.2025).
[29] I. C. Lo, K. T. Shih and H. H. Chen, Efficient and accurate stitching for 360° dual-fisheye images and videos, IEEE Transactions on Image Processing, 31, 2022, 251-262.
[30] V. Chapdelaine-Couture and S. Roy, The omnipolar camera: A new approach to stereo immersive capture, IEEE International Conference on Computational Photography, Cambridge, MA, USA, 2013, 1-9.
[31] H. Cheng, C. Xu, J. Wang and L. Zhao, Quad-fisheye image stitching for monoscopic panorama reconstruction, Computer Graphics Forum, 41(6), 2022, 94-109.
[32] O. Zia, J. H. Kim, K. Han and J. W. Lee, 360 panorama generation using drone mounted fisheye cameras, 2019 IEEE International Conference on Consumer Electronics, Las Vegas, NV, USA, 2019, 1-3.
[33] J. Kannala and S. S. Brandt, A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8), 2006, 1335-1340.
[34] Q. Fu, K. Y. Cai and Q. Quan, Calibration of multiple fish-eye cameras using a wand, IET Computer Vision, 9(3), 2015, 378-389.
[35] Fisheye Calibration Basics, https://www.mathworks.com/help/vision/ug/fisheye-calibration-basics.html, 2025 (accessed 19.01.2025).
[36] S. Chan, X. Zhou, C. Huang, S. Chen and Y. F. Li, An improved method for fisheye camera calibration and distortion correction, 2016 International Conference on Advanced Robotics and Mechatronics, Macau, China, 2016, 579-584.
[37] N. Wakai and T. Yamashita, Deep single fisheye image camera calibration for over 180-degree projection of field of view, Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, 1174-1183.
[38] N. Wakai, T. Azuma and K. Nobori, Multiple fisheye camera calibration and stereo measurement methods for uniform distance errors throughout imaging ranges, 17th International Conference on Machine Vision and Applications, Aichi, Japan, 2021, 1-5.
[39] G. H. Babu and N. Venkatram, A survey on analysis and implementation of state-of-the-art haze removal technique, Journal of Visual Communication and Image Representation, 72, 2020, 102912.
[40] H. Wang, Q. Xie, Q. Zhao and D. Meng, A model-driven deep neural network for single image rain removal, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, 3103-3112.
[41] M. Li, X. Cao, Q. Zhao, L. Zhang and D. Meng, Online rain/snow removal from surveillance videos, IEEE Transactions on Image Processing, 30, 2021, 2029-2044.
[42] R. Wang, D. Zou, C. Xu, L. Pei, P. Liu and W. Yu, An aerodynamic model-aided state estimator for multi-rotor UAVs, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 2017, 2164-2170.
[43] H. Zhu, X. Yin and J. Zhou, A cubic polynomial model for fisheye camera, 14th International Conference on Human-Computer Interaction: Design and Development Approaches, Orlando, FL, USA, 2011, 684-693.
[44] Y. Chang, D. Bailey and S. L. Moan, Lens distortion correction by analysing peak shape in Hough transform space, 2017 International Conference on Image and Vision Computing New Zealand, Christchurch, New Zealand, 2017, 1-6.
[45] G. Zhou, H. Li, R. Song, Q. Wang, J. Xu and B. Song, Orthorectification of fisheye image under equidistant projection model, Remote Sensing, 14(17), 2022, 4175.
[46] V. Usenko, N. Demmel and D. Cremers, The double sphere camera model, 2018 International Conference on 3D Vision, Verona, Italy, 2018, 552-560.
[47] J. Xu, D. W. Han, K. Li, J. J. Li and Z. Y. Ma, A comprehensive overview of fish-eye camera distortion correction method, arXiv preprint, arXiv:2401.00442, 2023.
[48] J. Kopf, M. Uyttendaele, O. Deussen and M. F. Cohen, Capturing and viewing gigapixel images, ACM Transactions on Graphics, 26(3), 2007, 93-es.
[49] 8 Guidelines to taking panoramic photos with any camera, https://digital-photography-school.com/8-guidelines-to-taking-panoramic-photos-with-any-camera/, 2025 (accessed 19.01.2025).
[50] 360 DJI Drone Panorama DroneBlocks, http://www.hdrpano.ch/index_htm_files/DroneBlocks.pdf, 2025 (accessed 19.01.2025).
[51] A. Akin, O. Cogal, K. Seyid, H. Afshari, A. Schmid and Y. Leblebici, Hemispherical multiple camera system for high resolution omni-directional light field imaging, IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 3(2), 2013, 137-144.
[52] D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60(2), 2004, 91-110.
[53] H. Bay, T. Tuytelaars and L. Van Gool, SURF: Speeded up robust features, 9th European Conference on Computer Vision, Graz, Austria, 2006, 404-417.
[54] E. Rublee, V. Rabaud, K. Konolige and G. Bradski, ORB: An efficient alternative to SIFT or SURF, 2011 International Conference on Computer Vision, Barcelona, Spain, 2011, 2564-2571.
[55] J. Zaragoza, T. J. Chin, M. S. Brown and D. Suter, As-projective-as-possible image stitching with moving DLT, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, 2339-2346.
[56] Z. Wang, B. Fan, G. Wang and F. Wu, Exploring local and overall ordinal information for robust feature description, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(11), 2016, 2198-2211.
[57] B. He and S. Yu, Parallax-robust surveillance video stitching, Sensors, 16 (1), 2015, 7.
[58] C. C. Lin, S. U. Pankanti, K. Natesan Ramamurthy and A. Y. Aravkin, Adaptive as-natural-as-possible image stitching, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015, 1155-1163.
[59] Image seamless stitching and straightening based on the image block, https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-ipr.2017.1064, 2025 (accessed 19.01.2025).
[60] F. Perazzi, A. Sorkine‐Hornung, H. Zimmer, P. Kaufmann, O. Wang, S. Watson and M. Gross, Panoramic video from unstructured camera arrays, Computer Graphics Forum, 34(2), 2015, 57-68.
[61] W. Y. Lin, S. Liu, Y. Matsushita, T. T. Ng and L. F. Cheong, Smoothly varying affine stitching, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011, 345-352.
[62] W. Jiang and J. Gu, Video stitching with spatial-temporal content preserving warping, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, Massachusetts, USA, 2015, 42-48.
[63] Y. Nie, T. Su, Z. Zhang, H. Sun and G. Li, Dynamic video stitching via shakiness removing, IEEE Transactions on Image Processing, 27(1), 2018, 164-178.
[64] A. Hamza, R. Hafiz, M. M. Khan, Y. Cho and J. Cha, Stabilization of panoramic videos from mobile multi-camera platform, Image and Vision Computing, 37, 2015, 20-30.
[65] T. Ho, I. D. Schizas, K. R. Rao and M. Budagavi, 360-degree video stitching for dual-fisheye lens cameras based on rigid moving least squares, 2017 IEEE International Conference on Image Processing, Beijing, China, 2017, 51-55.
[66] J. Li, Y. Zhao, W. Ye, K. Yu and S. Ge, Attentive deep stitching and quality assessment for 360° omnidirectional images, IEEE Journal of Selected Topics in Signal Processing, 14(1), 2019, 209-221.
[67] A. Utter, Dual-Fisheye Image Stitching Tool, https://github.com/ooterness/DualFisheye, 2025 (accessed 19.01.2025).
[68] J. Hao, J. Xie, J. Zhang and M. Liu, A stronger stitching algorithm for fisheye images based on deblurring and registration, IEEE Sensors Letters, 7(10), 2023, 1-4.
[69] C. Anita, Image fusion methods and applications: A review, Journal of Innovation and Technology, 14, 2023, 1-8.
[70] H. Kaur, D. Koundal and V. Kadyan, Image fusion techniques: A survey, Archives of Computational Methods in Engineering, 28(7), 2021, 4425-4447.
[71] F. Zhang and F. Liu, Parallax-tolerant image stitching, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014, 3262-3269.
[72] K. S. Krishnendu, Multi-focus image fusion based on spatial frequency (SF) and consistency verification (CV) in DCT domain, arXiv preprint, arXiv:2305.11265, 2023.
[73] L. Xue, J. Zhu, H. Zhang and R. Liu, A high-quality stitching algorithm based on fisheye images, Optik, 238, 2021, 166520.
[74] L. Zhao, 3D Obstacle Avoidance for Unmanned Autonomous System (UAS), M.S. thesis, University of Nevada, Las Vegas, USA, 2015.
[75] J. P. Angelo, S. J. Chen, M. Ochoa, U. Sunar, S. Gioux and X. Intes, Review of structured light in diffuse optical imaging, Journal of Biomedical Optics, 24(7), 2019, 071602-071602.
[76] S. Ryoka and O. Hirotsugu, Binocular disparity estimation algorithm using multiple spatial frequency information and a neural network, ALife Robotics, 28, 2023, 536-540.
[77] C. Nian, W. Kaihua and W. Wenjie, Obstacle detection system of plant protection UAVs based on structural light, Journal of Applied Optics, 39(3), 2018, 343-348.
[78] K. H. Wu and W. J. Wang, Detection method of obstacle for plant protection UAV based on structured light vision, Opto-Electronic Engineering, 45(4), 2018, 170613.
[79] T. Jia, B. N. Wang, Z. X. Zhou and H. Meng, Scene depth perception based on omnidirectional structured light, IEEE Transactions on Image Processing, 25(9), 2016, 4369-4378.
[80] C. Paniagua, L. Puig and J. J. Guerrero, Omnidirectional structured light in a flexible configuration, Sensors, 13, 2013, 13903-13916.
[81] M. Mansour, P. Davidson, O. Stepanov and R. Piché, Relative importance of binocular disparity and motion parallax for depth estimation: A computer vision approach, Remote Sensing, 11(17), 2019, 1990.
[82] Y. Pang, J. Nie, J. Xie, J. Han and X. Li, BidNet: Binocular image dehazing without explicit disparity estimation, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, 5931-5940.
[83] S. Xie, D. Wang, Y. H. Liu, OmniVidar: Omnidirectional depth estimation from multi-fisheye images, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023, 21529-21538.
[84] P. Wang, M. Li, J. Cao, S. Du and Y. Li, CasOmniMVS: Cascade omnidirectional depth estimation with dynamic spherical sweeping, Applied Sciences, 14(2), 2024, 517.
[85] M. Li, X. Jin, X. Hu, J. Dai, S. Du and Y. Li, MODE: Multi-view omnidirectional depth estimation with 360° cameras, European Conference on Computer Vision, 2022, 197-213.
[86] J. M. Galbraith, G. T. Kenyon and R. W. Ziolkowski, Time-to-collision estimation from motion based on primate visual processing, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8), 2005, 1279-1291.
[87] Mobileye, https://www.mobileye.com/, 2025 (accessed 19.01.2025).
[88] S. Baker and I. Matthews, Lucas-Kanade 20 years on: A unifying framework, International Journal of Computer Vision, 56(3), 2004, 221-255.
[89] B. K. P. Horn and B. G. Schunck, Determining optical flow, Artificial Intelligence, 17(1-3), 1981, 185-203.
[90] M. Menze, C. Heipke and A. Geiger, Discrete optimization for optical flow, 37th German Conference on Pattern Recognition (GCPR 2015), Aachen, Germany, 2015, 16-28.
[91] B. Alibouch, A. Radgui, M. Rziza and D. Aboutajdine, Optical flow estimation on omnidirectional images: An adapted phase based method, 5th International Conference on Image and Signal Processing (ICISP 2012), Agadir, Morocco, 2012, 468-475.
[92] C. Demonceaux and D. Kachi-Akkouche, Optical flow estimation in omnidirectional images using wavelet approach, 2003 Conference on Computer Vision and Pattern Recognition Workshop, Madison, WI, USA, 7, 2003, 76-76.
[93] Q. Quan, Introduction to Multicopter Design and Control, Singapore: Springer, 2017.
[94] G. Yang and D. Ramanan, Upgrading optical flow to 3D scene flow through optical expansion, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, 1334-1343.
[95] J. Chen, J. L. Niu and D. H. Chen, Research on intelligent wheelchair obstacle avoidance based on AdaBoost, Applied Mechanics and Materials, 312, 2013, 685-689.
[96] R. Girshick, J. Donahue, T. Darrell and J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Ohio, 2014, 580-587.
[97] A. Salvador, X. Giró-i-Nieto, F. Marqués and S. Satoh, Faster R-CNN features for instance search, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 2016, 9-16.
[98] W. Wu, H. Liu, L. Li, Y. Long, X. Wang, Z. Wang, J. Li and Y. Chang, Application of local fully convolutional neural network combined with YOLOv5 algorithm in small target detection of remote sensing image, PLOS ONE, 16(10), 2021, e0259283.
[99] D. Zou and P. Tan, CoSLAM: Collaborative visual SLAM in dynamic environments, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2), 2012, 354-366.
[100] X. Yan, H. Deng and Q. Quan, Active infrared coded target design and pose estimation for multiple objects, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, Macau, China, 2019, 6885-6890.
[101] M. Pavliv, F. Schiano, C. Reardon, D. Floreano and G. Loianno, Tracking and relative localization of drone swarms with a vision-based headset, IEEE Robotics and Automation Letters, 6(2), 2021, 1455-1462.
License
Copyright (c) 2025 Xiaoyan Jiang, Khairul Hamimah Abas, Abdul Rashid Husain

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:

Authors hold and retain copyright and grant the journal right of first publication, with the work simultaneously licensed after publication under a Creative Commons Attribution 4.0 License (CC BY), which permits any use, reproduction, and distribution of the work and article without further permission, provided that the original work is properly cited.
Authors are permitted and encouraged to post their work online in institutional repositories, on websites, and on other social media before and after publication, as this can lead to productive exchanges, as well as earlier and greater citation of the published work.