Design and Implementation of a Nonlinear Model Predictive Controller for Preliminary Aerial Physical Interaction Applications

Traditionally, multi-rotor aerial vehicles have been used in a variety of contact-less civil applications, ranging from aerial photography and visual inspection of infrastructure to crop monitoring. In recent years, however, they have started to be used, both in research and in applications, for in-contact operations that involve an exchange of forces and torques with the environment in order to perform physical work.

Starting from a framework targeted at trajectory-following applications, this thesis designs, and then validates through simulations, the steps needed before mixed motion-force control can be achieved with the same framework. First, the framework is extended with a software observer that estimates the disturbance wrench acting on the robot (assumed to act at the center of mass). An observer is chosen instead of a force/torque sensor because a sensor would increase both the cost and the weight of the platform. Since an interaction wrench introduces errors in the robot's position and orientation tracking, the next extension includes the estimated wrench in the control architecture in order to ensure pose control while in physical contact.
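The abstract does not detail how the disturbance wrench is estimated; a common software-only choice for this task is a momentum-based observer, which needs only the vehicle's velocity and the applied (commanded) force. The following is a minimal sketch of such an estimator for the translational part of the wrench; the function name, gain, and interfaces are hypothetical, not taken from the thesis:

```python
import numpy as np

def momentum_observer(mass, K, dt, forces_applied, velocities):
    """Hypothetical first-order momentum-based external-force estimator.

    The residual between the measured linear momentum and the momentum
    predicted from the applied forces is fed back with gain K, so the
    estimate f_hat converges to the true external force with bandwidth K.
    """
    p0 = mass * np.asarray(velocities[0])  # initial linear momentum
    integral = np.zeros(3)                 # integral of (applied + estimated) force
    estimates = []
    for f_app, v in zip(forces_applied, velocities):
        p = mass * np.asarray(v)           # measured linear momentum
        f_hat = K * (p - p0 - integral)    # residual-based force estimate
        integral += (np.asarray(f_app) + f_hat) * dt
        estimates.append(f_hat)
    return estimates
```

The same residual idea extends to the rotational dynamics to recover the external torque, which is why such observers estimate the full disturbance wrench without any added sensor mass.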

The last design step adds the ability to also control the interaction/contact force while performing a trajectory task. The framework can be used up to whatever level is needed: a user who only wants the observer together with the controller, without regulating the interaction force, can use the version before this last step; a user who also wants force regulation will use the final version.
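One common way to expose such an optional force-regulation layer in a predictive controller is to add a weighted force-tracking term to the stage cost, which vanishes when its weight is set to zero. The sketch below illustrates the idea with a hypothetical cost function and weights; it is not the thesis's actual formulation:

```python
import numpy as np

def stage_cost(p, p_ref, f_est, f_ref, w_pos=1.0, w_force=0.0):
    """Hypothetical NMPC stage cost combining trajectory tracking with
    an optional contact-force tracking term.

    With w_force = 0 the controller reduces to pose tracking that only
    compensates the estimated wrench; with w_force > 0 it additionally
    drives the estimated interaction force toward f_ref.
    """
    pos_err = np.asarray(p) - np.asarray(p_ref)
    force_err = np.asarray(f_est) - np.asarray(f_ref)
    return w_pos * pos_err @ pos_err + w_force * force_err @ force_err
```

Keeping both behaviors in one cost means the user selects the level of the framework simply by choosing the weights, rather than by switching controllers.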

As future work, the framework can be extended to also regulate the interaction torque, together with more realistic contact models. Ultimately, the goal would be to target physical human-robot interaction applications. For these, however, the current framework would need to be augmented with a vision perception and control layer, together with intelligent algorithms for understanding human actions and responding accordingly.
