WP4-2

From COMP4DRONES

EZ_Land Precision landing

ID WP4-2
Contributor SCALIAN
Levels TBD
Require Precision landing sensors, Autopilot interface
Provide Sensor fusion and precision landing on static and moving platforms in GNSS-denied environments
Input
  • MAVLink messages with position from an external positioning system
  • Attitude of the UAV
Output Relative position of the helipad and velocity commands for precision landing
C4D building block Payload Management, Data Management, Flight Guidance
TRL 3

Detailed Description

Precision landing is a frequent subject for autonomous multicopters because it is an essential component of safety and autonomy in a drone system. Indeed, in many use cases a large, static landing site is not guaranteed, and using only the GPS signal and IMU data, a drone can have a landing offset of up to 5 meters when the signal is poor. Such an offset can be very dangerous, as the drone can land outside its designated area, endangering the integrity of the drone or the people and infrastructure surrounding it [1].

The precision landing of a multi-rotor on a static or mobile area has been a popular research topic in recent years, since it involves both control and perception problems. There are several approaches [2]:

  • GNSS techniques use the GPS and the INS as the main data sources for navigation and control during landing. They are usually easy to integrate because all UAVs embed the needed components, but they lack accuracy: the bias error of a GPS receiver accumulates over the time of operation, and attempting a landing with such uncertainties has a very high chance of failure.
  • Ground-based techniques guide the UAV from the ground during landing. Sometimes the drone detection and tracking are done on the ground while the control is done aboard the UAV. These methods are accurate but often require a complex setup and a direct line of sight with the drone, because good communication and low latency are essential for a safe landing, which makes them a less robust solution.
  • Vision-based approaches use onboard sensors to get feedback on the position of the landing area during the autonomous landing. Many low-cost sensors, coupled with computer vision and processing algorithms, allow tracking the position of the landing area, so these techniques are quite easy to implement and integrate into the UAV. However, they are often subject to noise in the estimates, depending on the reliability of the image processing under the prevailing lighting conditions and on the quality of the visual cues.
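The vision-based control loop described above can be sketched as a PID controller that maps the estimated relative position of the landing area to horizontal velocity commands. This is a minimal illustrative sketch, not the SCALIAN implementation; all class names, gains, and limits are assumptions.

```python
# Hypothetical sketch of a vision-based landing loop: a PID controller
# converts the estimated helipad position relative to the UAV
# (body frame, metres) into clamped horizontal velocity commands.
# Gains and limits are illustrative, not taken from any real component.

class PID:
    """Minimal PID controller for a single axis."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def velocity_command(rel_pos, pid_x, pid_y, dt, v_max=1.0):
    """Map the relative helipad position (x, y) to velocity commands,
    saturated at +/- v_max to keep the approach gentle."""
    vx = pid_x.update(rel_pos[0], dt)
    vy = pid_y.update(rel_pos[1], dt)
    clamp = lambda v: max(-v_max, min(v_max, v))
    return clamp(vx), clamp(vy)


# The helipad is estimated 2 m ahead and 1 m to the left of the UAV.
pid_x, pid_y = PID(0.8, 0.0, 0.2), PID(0.8, 0.0, 0.2)
vx, vy = velocity_command((2.0, -1.0), pid_x, pid_y, dt=0.1)
```

In a real system the same loop would run at the camera frame rate, with the output sent to the autopilot as velocity setpoints.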

The autonomous landing of a multirotor on a moving area is even more challenging. A concept for landing a multirotor on a moving target using low-cost sensors was proposed in [2]: a downward-facing camera tracks a target on the landing platform and generates estimates of its position relative to the UAV, which are then used to control the UAV with PIDs. In [3] and [4], the authors developed efficient and reliable autonomous landing systems based on MBPC (Model-Based Predictive Control) that can land accurately in the presence of external disturbances. In [5], the authors focus on the landing of a drone on the deck of a USV. A marker is placed on the deck to easily recover its relative position in six degrees of freedom. To compensate for interruptions of the marker observations, an Extended Kalman Filter (EKF) estimates the current position of the USV relative to the last known position. The results confirmed that the EKF provides sufficiently accurate estimates to direct the UAV into the vicinity of the autonomous vessel so that the marker becomes visible again. The problem of landing on a USV was also addressed in [6], which investigated a predictive control law that takes into account a model of the sea state and wave motion to attempt a landing at the moment the landing zone reaches its vertical peak at the top of a wave. These solutions are very efficient but often highly dependent on the sensors used.

Several technologies already exist that try to satisfy the need for precision during the landing phase:

  • LoLas from Internest is a module based on ultrasound detection. However, the final precision of this type of detection is not satisfactory (20 cm), the detection can easily be disturbed by strong wind, and the module is not compatible with a fleet of drones such as in the METIS use case (UC3-Demo1).
  • IR-Lock is a module based on the detection of an infrared beacon on the helipad. The detection is robust and hard to disturb, but it provides no yaw information and the working range of the solution is small.
  • Computer vision algorithms with fiducial markers such as ArUco markers are very popular because of their good results and simple integration on a drone, but they can easily be disturbed by lighting conditions (shadows, reflections, or very high luminosity) or by minor defects on the marker (e.g., a crease).
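When any of these detections drops out temporarily, the platform position can still be predicted, as in the EKF of [5]. The following is a minimal single-axis sketch under a constant-velocity motion model; it is an illustration of the general technique, not the filter from [5] or the SCALIAN component, and all noise values are assumptions.

```python
# Minimal single-axis Kalman filter with a constant-velocity model,
# illustrating how a moving platform's position can keep being predicted
# while the marker/beacon is temporarily lost. State: [position, velocity].
# Noise parameters q and r are illustrative assumptions.

class ConstantVelocityKF:
    def __init__(self, q=0.05, r=0.1):
        self.x = [0.0, 0.0]                     # position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
        self.q, self.r = q, r                   # process / measurement noise

    def predict(self, dt):
        """Propagate the state with x' = x + v*dt; inflate covariance."""
        x, v = self.x
        self.x = [x + v * dt, v]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [
            [p00 + dt * (p01 + p10) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        """Correct the state with a position measurement z."""
        p00 = self.P[0][0]
        s = p00 + self.r
        k0, k1 = p00 / s, self.P[1][0] / s       # Kalman gains
        innov = z - self.x[0]
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]


# Platform moving at 1 m/s, sampled at 10 Hz while the marker is visible...
kf = ConstantVelocityKF()
for t in range(1, 31):
    kf.predict(0.1)
    kf.update(0.1 * t)

# ...then the marker is lost: predict-only steps keep extrapolating.
lost_1 = kf.predict(0.1)
lost_2 = kf.predict(0.1)
```

Once the marker reappears, `update()` calls resume and the covariance shrinks again, which is exactly the behaviour [5] relies on to reacquire the deck.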

Taken individually, these technologies do not always guarantee an accurate landing. The component proposed by SCALIAN aims at exploiting the strengths of several sensors to ensure a robust, safe, and precise autonomous landing. It contributes to the C4D framework by providing a modular and robust multi-sensor fusion and control algorithm that merges the data coming from a set of sensors and allows a multicopter to land on a static or moving platform more accurately than with GPS alone. The component complies with the modular architecture defined in C4D: the data fusion algorithm is independent of the set of sensors used by the UAV and can work with any type of sensor as long as it provides an estimate of the position of the landing area. Additionally, this allows the UAV to land when communication and GPS are unavailable.
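One simple, sensor-agnostic way to merge several estimates of the landing-area position, in the spirit of the fusion described above, is inverse-variance weighting: each sensor contributes in proportion to its confidence. This is a hedged sketch of the general idea, not the SCALIAN algorithm; the sensor values and variances are made up.

```python
# Illustrative sensor-agnostic fusion for one axis: each sensor reports an
# estimate of the helipad position together with its variance, and the
# estimates are merged by inverse-variance weighting. Any sensor can be
# added or dropped without changing the algorithm. Numbers are hypothetical.

def fuse(estimates):
    """estimates: list of (position, variance) pairs for one axis.
    Returns the fused position and the fused (reduced) variance."""
    if not estimates:
        raise ValueError("at least one sensor estimate is required")
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    position = sum(w * p for (p, _), w in zip(estimates, weights)) / total
    return position, 1.0 / total


# e.g. an accurate IR beacon estimate and a noisier ultrasound estimate:
fused_x, var_x = fuse([(1.00, 0.01), (1.40, 0.04)])
```

The fused variance is always smaller than the best individual one, which is the formal sense in which combining sensors beats relying on any single technology.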

References

[1] M. Y. B. M. Noor, M. A. Ismail, M. F. b Khyasudeen, A. Shariffuddin, N. I. Kamel, and S. R. Azzuhri, “Autonomous precision landing for commercial UAV: A review.,” in FSDM, 2017, pp. 459–468. doi: 10.3233/978-1-61499-828-0-459.
[2] K. Ling, “Precision Landing of a Quadrotor UAV on a Moving Target Using Low-Cost Sensors,” 2014. [Online]. Available: http://hdl.handle.net/10012/8803
[3] K. Guo, P. Tang, H. Wang, D. Lin, and X. Cui, “Autonomous Landing of a Quadrotor on a Moving Platform via Model Predictive Control,” Aerospace, vol. 9, no. 1, p. 34, Jan. 2022, doi: 10.3390/aerospace9010034.
[4] J. A. Macés-Hernández, F. Defaÿ, and C. Chauffaut, “Autonomous landing of an UAV on a moving platform using Model Predictive Control,” in 2017 11th Asian Control Conference (ASCC), 2017, pp. 2298–2303.
[5] R. Polvara, S. Sharma, J. Wan, A. Manning, and R. Sutton, “Vision-based autonomous landing of a quadrotor on the perturbed deck of an unmanned surface vehicle,” Drones, vol. 2, no. 2, p. 15, 2018.
[6] G. Gillini and F. Arrichiello, “Nonlinear Model Predictive Control for the Landing of a Quadrotor on a Marine Surface Vehicle,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 9328–9333, 2020.
[7] “SITL Simulator (Software in the Loop) — Dev documentation.” https://ardupilot.org/dev/docs/sitl-simulator-software-in-the-loop.html (accessed May 02, 2022).
[8] M. Quigley et al., “ROS: an open-source Robot Operating System,” in ICRA workshop on open source software, 2009, vol. 3, no. 3.2, p. 5.