Perception Algorithms

Perception algorithms for unmanned systems, including SLAM, mapping, object detection, etc.

Feedback Loop Based Visual Inertial SLAM (FLVIS)

Visual perception and Simultaneous Localization and Mapping (SLAM) technologies extend UAV applications from outdoor environments (with GNSS) to indoor, GNSS-denied environments. Our group started perception research in 2018 and has since developed a complete and versatile SLAM framework for UAVs.

In this paper, we present a novel stereo visual-inertial pose estimation method. In contrast to the widely used filter-based and optimization-based approaches, the pose estimation process is modeled as a control system. Designed feedback and feedforward loops are introduced to achieve stable control of the system: a gradient-descent feedback loop, a roll-pitch feedforward loop, and a bias estimation feedback loop. The system, named FLVIS (Feedforward-feedback Loop-based Visual Inertial System), is evaluated on the popular EuRoC MAV dataset and achieves high accuracy and robustness compared with other state-of-the-art visual SLAM approaches. The system has also been implemented and tested on a UAV platform. The source code is publicly available to the research community.
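To illustrate the feedback-loop idea, the sketch below implements a gradient-descent attitude feedback step in the style of a complementary (Madgwick-type) filter: the gyroscope propagates the orientation (feedforward) while the accelerometer's gravity direction drives a gradient-descent correction (feedback). This is a minimal, hypothetical sketch, not the FLVIS implementation; the gain `beta`, the function names, and the omission of the roll-pitch feedforward and bias estimation loops are all simplifying assumptions.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def attitude_update(q, gyro, accel, dt, beta=0.05):
    """One gradient-descent feedback step (illustrative, Madgwick-style).

    q     : orientation quaternion (w, x, y, z), body -> world
    gyro  : body rates in rad/s
    accel : body-frame accelerometer reading in m/s^2
    beta  : feedback gain trading gyro drift against accel noise (assumed)
    """
    w, x, y, z = q
    a = accel / np.linalg.norm(accel)        # measured gravity direction

    # Objective: predicted gravity in the body frame minus the measurement
    f = np.array([
        2*(x*z - w*y)       - a[0],
        2*(w*x + y*z)       - a[1],
        2*(0.5 - x*x - y*y) - a[2],
    ])
    # Jacobian of f with respect to q
    J = np.array([
        [-2*y,  2*z, -2*w, 2*x],
        [ 2*x,  2*w,  2*z, 2*y],
        [ 0.0, -4*x, -4*y, 0.0],
    ])
    grad = J.T @ f
    n = np.linalg.norm(grad)
    if n > 0:
        grad = grad / n                      # feedback correction direction

    # Feedforward: gyro propagation; feedback: gradient-descent step
    q_dot = 0.5 * quat_mul(q, np.r_[0.0, gyro]) - beta * grad
    q_new = q + q_dot * dt
    return q_new / np.linalg.norm(q_new)
```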

Learning-based Autonomous Inspection UAV system (LAIS)

Deep learning is a subfield of machine learning based on a set of algorithms that attempt to understand data in a way analogous to human learning. In our research, we apply deep learning to different UAV problems, such as end-to-end perception, navigation, and control.

The inspection of electrical and mechanical (E&M) devices using unmanned aerial vehicles (UAVs) has become an increasingly popular choice in the last decade due to their flexibility and mobility. UAVs have the potential to reduce human involvement in visual inspection tasks, which could increase efficiency and reduce risks. This paper presents a UAV system for autonomously performing E&M device inspection.

The proposed system relies on learning-based detection for perception, multi-sensor fusion for localization, and path planning for fully autonomous inspection. The perception method utilizes semantic and spatial information generated by a 2-D object detector. The information is then fused with depth measurements for object state estimation. No prior knowledge about the location and category of the target device is needed. The system design is validated by flight experiments using a quadrotor platform. The results show that the proposed UAV system completes the inspection mission autonomously while ensuring a stable, collision-free flight.
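One common way to realize this kind of detection-depth fusion is to back-project the detector's bounding box into 3-D using an aligned depth map and the camera intrinsics. The sketch below is illustrative only, assuming a pinhole camera model and a metric depth image; the function name and the median-depth heuristic are our assumptions, not details from the paper.

```python
import numpy as np

def estimate_object_position(bbox, depth_image, K):
    """Back-project a 2-D detection into a 3-D camera-frame position.

    bbox        : (u_min, v_min, u_max, v_max) pixel box from the detector
    depth_image : H x W depth map in meters, aligned with the RGB frame
    K           : 3x3 camera intrinsic matrix
    Returns the 3-D point (x, y, z) in the camera frame, or None.
    """
    u_min, v_min, u_max, v_max = bbox
    u_c = (u_min + u_max) / 2.0
    v_c = (v_min + v_max) / 2.0

    # Median depth over the box is more robust to background pixels
    patch = depth_image[int(v_min):int(v_max), int(u_min):int(u_max)]
    valid = patch[np.isfinite(patch) & (patch > 0)]
    if valid.size == 0:
        return None                      # no usable depth in this detection
    z = np.median(valid)

    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u_c - cx) * z / fx
    y = (v_c - cy) * z / fy
    return np.array([x, y, z])
```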

Autonomous Object Tracking UAV system (AUTO)

The rapid growth of autonomous unmanned aerial vehicles (UAVs) has made them a promising platform for real-world applications. In particular, a UAV equipped with a vision system can be leveraged for surveillance. This paper proposes a learning-based UAV system for autonomous surveillance, in which the UAV detects, tracks, and follows a target object without human intervention. Specifically, we adopt the YOLOv4-Tiny algorithm for semantic object detection and consolidate it with a 3-D object pose estimation method and a Kalman filter to enhance perception performance. In addition, back-end UAV path planning for surveillance maneuvers is integrated to complete the fully autonomous system. The perception module is assessed on a quadrotor UAV, and the whole system is validated through flight experiments. The results verify the robustness, effectiveness, and reliability of the autonomous object tracking UAV system in performing surveillance tasks. The source code is released to the research community for future reference.
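The Kalman filter component can be sketched as a standard linear tracker over the 3-D target position produced by detection plus pose estimation. The sketch below is a generic textbook formulation, assuming a constant-velocity motion model and hand-tuned noise parameters `q` and `r`; it is not necessarily the paper's exact filter.

```python
import numpy as np

class ConstantVelocityKF:
    """Linear Kalman filter tracking 3-D target position and velocity."""

    def __init__(self, q=0.1, r=0.05):
        self.x = np.zeros(6)                 # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                   # state covariance
        self.Q = q * np.eye(6)               # process noise (assumed gain)
        self.R = r * np.eye(3)               # measurement noise (assumed gain)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measure position

    def predict(self, dt):
        """Propagate the state with a constant-velocity model."""
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)           # p' = p + v * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct with a 3-D position measurement from detection + depth."""
        y = z - self.H @ self.x              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

In a system of the kind described, `predict` would run at the control rate and `update` whenever the detector yields a new measurement, with the filtered target state feeding the path planner for the following maneuver.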