
EZ_Land Precision landing

ID WP4-2
Contributor SCALIAN
Levels TBD
Require Precision landing sensors, Autopilot interface
Provide Sensor fusion and precision landing on static and moving platforms in GNSS-denied environments
Input
  • Mavlink messages with position from external positioning system
  • Attitude of the UAV
Output Relative position of the helipad and velocity commands for precision landing
C4D building block Payload Management, Data Management, Flight Guidance
TRL 3
Contact timothee.gavin at scalian.com

Detailed Description

Precision landing is a recurring topic for autonomous multicopters because it is an essential component of safety and autonomy in a drone system. Indeed, in many use cases a large and static landing site is not guaranteed, and using only the GPS signal and IMU data, a drone can have a landing offset of up to 5 meters when the signal is poor. Such an offset can be very dangerous, as the drone may land outside its designated area, endangering the integrity of the drone or the people and infrastructure surrounding it [1].

The precision landing of a multi-rotor on a static or mobile area has been a popular research topic in recent years, since it involves both control and perception problems. There are several approaches [2]:

  • GNSS techniques use the GPS and the INS as the main data sources for navigation and control during landing. They are usually easy to integrate because all UAVs embed the needed components, but they lack accuracy, as the bias error of a GPS accumulates over the time of operation, and attempting a landing with such uncertainties has a very high chance of failure.
  • Ground-based techniques guide the UAV from the ground during landing. Sometimes the drone detection and tracking are done on the ground while the control is done aboard the UAV. These methods are accurate but often require a complex setup and a direct line of sight with the drone, because good communication and low latency are essential for a safe landing, which makes them a less robust solution.
  • Vision-based approaches use onboard sensors to get feedback on the position of the landing area during the autonomous landing. Many low-cost sensors, coupled with computer vision and processing algorithms, allow tracking the position of the landing area, so these techniques are quite easy to implement and integrate into the UAV. However, they are often subject to noise in the estimations, depending on the reliability of the image processing under the prevailing lighting conditions and on the quality of the visual cues.

The autonomous landing of a multirotor on a moving area is even more challenging. A concept for landing a multirotor on a moving target using low-cost sensors was proposed in [2]: a downward-facing camera is used to track a target on the landing platform and generate estimates of its position relative to the UAV, which are then used to control the UAV with PIDs. In [3] and [4], the authors developed efficient and reliable autonomous landing systems based on MBPC (Model-Based Predictive Control) that can land accurately in the presence of external disturbances. In [5], the authors focus on the landing of a drone on the deck of a USV. A marker is placed on the deck to easily recover its relative position in six degrees of freedom. To compensate for interruptions in the marker observations, an Extended Kalman filter (EKF) estimates the current position of the USV relative to the last known position. The results confirmed that the EKF provides estimates accurate enough to direct the UAV to the vicinity of the autonomous vessel so that the marker becomes visible again. The problem of landing on a USV was also addressed in [6], where the authors investigated a predictive control law that takes into account a model of the sea state and wave motion to attempt a landing at the moment the landing zone reaches its vertical peak at the top of a wave. These solutions are very efficient but are often highly dependent on the sensors used. Several technologies already exist that try to satisfy the precision requirements of the landing phase:

  • Lolas from Internest is a module based on ultrasound detection. However, the final precision of this type of detection is not satisfactory (around 20 cm), the detection can easily be disturbed by strong wind, and the module is not compatible with a fleet of drones such as the one in the METIS use case (UC3-Demo1).
  • IR Lock is a module based on the detection of an infrared beacon on the helipad. The detection is robust and hard to disturb, but it provides no yaw information and the working range of the solution is small.
  • Computer vision algorithms with markers such as ArUco markers are very popular because of their good results and simple integration on a drone, but they can easily be disturbed by the lighting conditions (shadows, reflections, or very high luminosity) or by minor defects on the marker (creases); a minimal detection sketch is given after this list.
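
As an illustration of this last family of techniques, the sketch below shows how a helipad marker could be detected and its relative position recovered with OpenCV. It is only a minimal example, assuming the legacy cv2.aruco API of opencv-contrib-python (before version 4.7), a known camera calibration, and a hypothetical 40 cm marker; it is not the detector used in the component.

import cv2
import numpy as np

MARKER_SIZE_M = 0.40                      # assumed printed marker side length
DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_helipad(frame, camera_matrix, dist_coeffs):
    """Return the marker position in the camera frame, or None if not seen."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICT)
    if ids is None:
        return None
    # One pose (rvec, tvec) per detected marker; keep the first one here.
    _, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
    return tvecs[0].ravel()               # (x, y, z) of the helipad marker [m]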

Taken individually, these technologies do not always guarantee an accurate landing. The component proposed by SCALIAN aims at exploiting the strengths of several sensors to ensure a robust, safe, and precise autonomous landing. It contributes to the C4D framework by providing a modular and robust multi-sensor fusion and control algorithm that merges the data coming from a set of sensors and allows a multicopter to land on a static or moving platform more accurately than with GPS alone. The component complies with the modular architecture defined in C4D, as the data fusion algorithm is independent of the set of sensors used by the UAV and can work with any type of sensor as long as it provides an estimate of the position of the landing area. This also allows the UAV to land when communication and GPS are not available.

Technical description

In the context of UC3, the precision landing component will be deployed during the METIS use case (UC3-Demo1) to improve the safety of the system and to facilitate refill and reload operations when the drone is landed. It will also be deployed in the ATECHSYS use case (UC3-Demo2), where the multicopter needs to land precisely on a rover from TwinswHeel to pick up and drop off a parcel. Paired with the human detection component of WP4-5, the landing will be canceled if a human is detected near the landing zone, making this component safer.

The component of SCALIAN aims at exploiting the strengths of several sensors to ensure a robust, safe, and precise landing. The component is based on a modular architecture that allows the users to configure which sensors are needed; with this design, it is also easy to integrate a new type of sensor. The proposed component takes advantage of all the sensors, reducing sensor noise and improving accuracy while remaining robust to outliers. Each sensor has its own frequency, latency, and bias, but the fusion algorithm, initially based on a linear regression method and now on an Extended Kalman filter, converts the raw sensor data and merges the unsynchronized, independent measurements into its estimate of the helipad position while rejecting outliers. The algorithm is also able to extrapolate future positions of the helipad from a set of past measurements. The computed estimates feed a control algorithm that computes velocity commands to servo-control the drone on the position of the landing zone during the descent using a PID.
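
To make the control step concrete, the following sketch shows a PID servo-control on the horizontal error between the drone and the estimated helipad position, producing velocity commands. The gains, limits, and descent rule are illustrative assumptions, not the tuned values of the actual component.

import numpy as np

class AxisPID:
    def __init__(self, kp=0.8, ki=0.05, kd=0.2, v_max=2.0):
        self.kp, self.ki, self.kd, self.v_max = kp, ki, kd, v_max
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        v = self.kp * error + self.ki * self.integral + self.kd * derivative
        return float(np.clip(v, -self.v_max, self.v_max))

pid_x, pid_y = AxisPID(), AxisPID()

def velocity_command(helipad_pos, drone_pos, dt):
    """Velocity setpoint (vx, vy, vz) steering the drone above the helipad."""
    ex, ey = helipad_pos[0] - drone_pos[0], helipad_pos[1] - drone_pos[1]
    vx, vy = pid_x.step(ex, dt), pid_y.step(ey, dt)
    # Descend only when roughly centered over the landing area (assumed rule).
    vz = -0.5 if np.hypot(ex, ey) < 0.3 else 0.0
    return vx, vy, vz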

The proposed component can perform a vertical landing on top of a static helipad, but also on a mobile target, where it follows the landing area and lands while always remaining vertical above the helipad. It can also follow an approach trajectory along a user-defined angle relative to the helipad. This allows its use in a wide variety of use cases, from precise landing on top of a rover to rendezvous maneuvers with a vessel at sea.

The data flow of the proposed precision landing component can be summarized as follows. The component receives data from the dedicated sensors that track the helipad (as MAVLink messages) and the estimated position of the UAV (a combination of GPS and IMU). In a first module, the data of each sensor is parsed and converted into a common coordinate system, and each detection is assigned a confidence based on a model of the variation of the standard deviation of the noise as a function of parameters such as the distance to the landing area, the angle of view, etc. This data feeds a second module, which merges the detections using a Kalman filter to compute the most accurate position of the helipad based on the confidence of each sensor and the operating conditions. Finally, a third module transmits velocity commands to the drone, as MAVROS messages, to reach the detected position.

The proposed component is the result of a thorough development process. The precision requirements for UC3-demo1 were lower than those for UC3-demo2, and the initial implementation of the component used linear regression to estimate the helipad position from buffered sensor data. In this first implementation, each sensor was considered independently, without fusion, and the estimates were computed only from the data of the sensor considered the most confident.
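
As an illustration of the first module described above, the sketch below shows one way a raw detection could be converted into a common frame and assigned a standard deviation that grows with the distance to the landing area and with the viewing angle. The noise model and its coefficients are assumptions made for illustration, not the calibrated models of the component.

import numpy as np

def measurement_with_confidence(detection_cam, R_body_cam, p_drone_world, R_world_body,
                                sigma0=0.05, k_dist=0.02, k_angle=0.1):
    """Return (helipad position in the world frame, per-axis standard deviation)."""
    # Express the detection in the world frame via the body frame.
    p_body = R_body_cam @ detection_cam
    p_world = p_drone_world + R_world_body @ p_body

    # Simple noise model: grows linearly with range and with off-nadir angle.
    distance = np.linalg.norm(detection_cam)
    angle = np.arccos(abs(detection_cam[2]) / max(distance, 1e-6))  # 0 = straight below
    sigma = sigma0 + k_dist * distance + k_angle * angle
    return p_world, sigma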

To be able to test and evaluate the software during development, a simulation tool able to reproduce the dynamics of the drone and the helipad and the emission of sensor data has been developed. This tool uses the open-source simulator ArduPilot SITL [7] for the physics and the simulation of the drone, and the open-source Robot Operating System (ROS) [8] to simulate the dynamics of the helipad and to generate the sensor data with its frequency, latency, and noise.
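
The following sketch illustrates the kind of sensor model such a tool relies on: observations of the true helipad position produced at a fixed rate, corrupted by bias and Gaussian noise, and delivered with a constant latency. All parameter values are illustrative assumptions.

import numpy as np

class SimulatedSensor:
    def __init__(self, rate_hz=10.0, latency_s=0.15, bias=(0.1, -0.05, 0.0), sigma=0.2):
        self.period = 1.0 / rate_hz
        self.latency = latency_s
        self.bias = np.asarray(bias)
        self.sigma = sigma
        self.next_sample_time = 0.0
        self.pending = []                       # (delivery_time, measurement, stamp)

    def update(self, t, true_helipad_pos):
        """Sample at the sensor rate and return measurements whose latency has elapsed."""
        if t >= self.next_sample_time:
            noisy = np.asarray(true_helipad_pos) + self.bias + \
                    np.random.normal(0.0, self.sigma, size=3)
            self.pending.append((t + self.latency, noisy, t))
            self.next_sample_time += self.period
        delivered = [m for m in self.pending if m[0] <= t]
        self.pending = [m for m in self.pending if m[0] > t]
        return delivered                        # list of (delivery_time, measurement, stamp)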

Using this tool, the PID of the control module has been tuned in Software-In-The-Loop (SITL) to meet the requirement of UC3-demo1. After tuning, the component reaches a precision of less than 2 cm in simulation, even under significant simulated noise. The component's performance was then verified and validated with Hardware-In-The-Loop (HITL) and during flight tests, including the final demonstration of UC3-demo1, where the required precision error was less than 1 m.

To meet the needs of UC3-demo2, where the required precision error is less than 10 cm, and also to allow the component to work with a moving landing area, the component needed to be improved, especially in terms of sensor fusion, in order to integrate multiple sensors into the computation of the estimate. To solve the problem of fusing multiple asynchronous sensors with latency and bias, while reducing the noise and extrapolating the movement of the landing area, a Kalman filter has been integrated into the component.

Multiple solutions were studied to find a suitable approach: a state-of-the-art review of existing sensor-fusion solutions was carried out, and two algorithms were implemented, both centered on a Kalman filter. The first used a synchronous fusion algorithm that merged the sensor data at a constant rate, and the second used asynchronous fusion to merge each measurement into the estimate at the time it was received.
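
A minimal sketch of the asynchronous variant is given below, using a linear Kalman filter on a constant-velocity helipad model for simplicity (the component itself relies on an Extended Kalman filter): the state is predicted up to each measurement's timestamp and then corrected, so sensors with different rates and latencies can be merged as they arrive, and a simple innovation gate rejects outliers. Noise intensities and the gate threshold are illustrative assumptions.

import numpy as np

class AsyncHelipadFilter:
    def __init__(self, q=0.5, gate=3.0):
        self.x = np.zeros(6)                    # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6) * 10.0
        self.q = q                              # process noise intensity
        self.gate = gate                        # outlier gate (in standard deviations)
        self.t = 0.0

    def predict(self, t):
        dt = max(t - self.t, 0.0)
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt              # constant-velocity motion model
        Q = np.eye(6) * self.q * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        self.t = t

    def update(self, t_meas, z, sigma):
        """Fuse one position measurement z (3,) with standard deviation sigma."""
        self.predict(t_meas)                    # bring the state to the measurement time
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        R = np.eye(3) * sigma**2
        y = z - H @ self.x                      # innovation
        S = H @ self.P @ H.T + R
        if y @ np.linalg.solve(S, y) > 3 * self.gate**2:
            return                              # reject outlier
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P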

To compare the performance of the new solutions, the simulation tool has been improved to enable the simulation of multiple sensors, each with its own frequency, latency, bias, and noise, and to enable the generation of outliers in the computed measurements in order to challenge the robustness of the new solutions. This tool is parametrizable to match the models of the sensors used in the use cases.
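
A small sketch of how outliers could be injected into the simulated measurements is given below; the probability and magnitude are illustrative assumptions.

import numpy as np

def inject_outlier(measurement, p_outlier=0.05, magnitude=10.0):
    """Occasionally corrupt a simulated measurement to stress the fusion filter."""
    if np.random.rand() < p_outlier:
        return measurement + np.random.uniform(-magnitude, magnitude, size=3)
    return measurement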

The tuning of the filters was done in simulation, and the performance of each solution was assessed. After comparison, the asynchronous solution was chosen because it outperformed the others. Finally, the new fusion algorithm was verified and validated, first in HITL and then during flight tests, where it proved able to land with a precision error of less than 10 cm, meeting the requirements of UC3-demo2.

References

[1] M. Y. B. M. Noor, M. A. Ismail, M. F. b Khyasudeen, A. Shariffuddin, N. I. Kamel, and S. R. Azzuhri, “Autonomous precision landing for commercial UAV: A review.,” in FSDM, 2017, pp. 459–468. doi: 10.3233/978-1-61499-828-0-459.
[2] K. Ling, “Precision Landing of a Quadrotor UAV on a Moving Target Using Low-Cost Sensors,” 2014. [Online]. Available: http://hdl.handle.net/10012/8803
[3] K. Guo, P. Tang, H. Wang, D. Lin, and X. Cui, “Autonomous Landing of a Quadrotor on a Moving Platform via Model Predictive Control,” Aerospace, vol. 9, no. 1, p. 34, Jan. 2022, doi: 10.3390/aerospace9010034.
[4] J. A. Macés-Hernández, F. Defaÿ, and C. Chauffaut, “Autonomous landing of an UAV on a moving platform using Model Predictive Control,” in 2017 11th Asian Control Conference (ASCC), 2017, pp. 2298–2303.
[5] R. Polvara, S. Sharma, J. Wan, A. Manning, and R. Sutton, “Vision-based autonomous landing of a quadrotor on the perturbed deck of an unmanned surface vehicle,” Drones, vol. 2, no. 2, p. 15, 2018.
[6] G. Gillini and F. Arrichiello, “Nonlinear Model Predictive Control for the Landing of a Quadrotor on a Marine Surface Vehicle,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 9328–9333, 2020.
[7] “SITL Simulator (Software in the Loop) — Dev documentation.” https://ardupilot.org/dev/docs/sitl-simulator-software-in-the-loop.html (accessed May 02, 2022).
[8] M. Quigley et al., “ROS: an open-source Robot Operating System,” in ICRA workshop on open source software, 2009, vol. 3, no. 3.2, p. 5.