Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that, for convenience of description, the respective portions shown in the drawings are not drawn to scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a flowchart illustrating a visual servo control method according to an embodiment of the disclosure.
In step 110, a ground mark image taken by a vision sensor of the unmanned device at its current position is obtained. The ground mark is, for example, a marker of a specific shape or color, such as a black-and-white checkerboard or an H-shaped landmark, which may be laid on the ground or placed on a movable platform such as a mobile ground robot. The unmanned device is, for example, a drone, in particular a quad-rotor drone, and the vision sensor is, for example, a camera.
At step 120, current visual feature point coordinates are determined based on the ground mark image. After the ground mark image is obtained, image processing is performed to extract corner point information of the ground mark; the corner points are used as visual feature points, and their coordinates in an image coordinate system are then determined.
The image coordinate system can be divided into two categories: the image physical coordinate system and the image pixel coordinate system. In the image physical coordinate system $(x_s, y_s)$, the origin is the intersection of the optical axis with the image plane, and coordinates are expressed in millimeters. In the image pixel coordinate system $(u_s, v_s)$, the origin is the upper-left corner of the image and coordinates are expressed in pixels; $u_s$ indexes the columns of the image and $v_s$ indexes the rows, with the $u_s$ axis corresponding to the $x_s$ axis and the $v_s$ axis corresponding to the $y_s$ axis.
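Although the present disclosure does not state the conversion explicitly, the two coordinate systems are conventionally related through the physical size of a pixel $(dx, dy)$ and the principal point $(u_0, v_0)$; this background relation is given here only for orientation:

$$u_s = \frac{x_s}{dx} + u_0, \qquad v_s = \frac{y_s}{dy} + v_0$$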
In step 130, a cost function of the model predictive controller is constructed from the difference between the visual feature point coordinates at each time in the prediction horizon and the desired visual feature point coordinates, and from the operation parameters of the unmanned device at each time in the control horizon, wherein the visual feature point coordinates at each time in the prediction horizon are obtained from the current visual feature point coordinates and the corresponding operation parameters.
In one embodiment, a kinematic equation of the visual feature points may be established and discretized to give a correspondence between the current visual feature coordinates and the predicted visual feature coordinates at the next time. The discretized kinematic equation shows that the visual feature coordinates at the next time can be predicted from the current visual feature coordinates and the operation parameters of the unmanned device at the current time, so the operation parameters of the unmanned device at each time in the control horizon can serve as the parameters from which the visual feature point coordinates at each time in the prediction horizon are obtained. The operation parameters are, for example, velocity information that reflects the position and attitude of the unmanned device.
In one embodiment, assume that the current time is time k and the current visual feature coordinates are $s_k$. From $s_k$, the visual feature coordinates $s_{k+1}$ at time k+1 are predicted; from $s_{k+1}$, the visual feature coordinates $s_{k+2}$ at time k+2 are predicted; and so on, until the visual feature coordinates $s_{k+i}$ at time k+i are predicted. For example, the visual feature point coordinates at each time in the prediction horizon are calculated according to the formula $s_{k+i} = s_{k+i-1} + T\,L\,V_{k+i-1}$, where i ranges from 1 to $N_p$, $N_p$ is the prediction horizon, T is the sampling period of the vision sensor, L is the image Jacobian matrix, and $V_{k+i-1}$ is the predicted operation parameter at time k+i-1.
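The recursion above is straightforward to implement. The following Python snippet is a minimal sketch, not part of the disclosure; it assumes, for simplicity, that the image Jacobian L is held constant over the horizon and that a velocity sequence V_seq is given:

```python
import numpy as np

def predict_features(s_k, L, V_seq, T):
    """Roll the discretized feature kinematics
    s_{k+i} = s_{k+i-1} + T * L * V_{k+i-1}
    forward over the prediction horizon.

    s_k   : (m,)   current visual feature point coordinates
    L     : (m, 6) image Jacobian (assumed constant here for simplicity)
    V_seq : (Np, 6) operation parameters (velocities) over the horizon
    T     : sampling period of the vision sensor
    """
    s = np.asarray(s_k, dtype=float)
    trajectory = [s]
    for V in V_seq:
        s = s + T * (L @ V)  # one Euler step of the feature kinematics
        trajectory.append(s)
    return np.array(trajectory)  # (Np+1, m): s_k, s_{k+1}, ..., s_{k+Np}
```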
In one embodiment, the current visual feature point coordinates and the desired visual feature point coordinates are input to the model predictive controller, the prediction horizon parameter and the control horizon parameter are set, and a cost function for model predictive control is established that depends on the difference between the visual feature point coordinates at each time in the prediction horizon and the desired visual feature point coordinates, and on the operation parameters of the unmanned device at each time in the control horizon.
In step 140, the operation parameters of the unmanned device at each time in the control horizon are predicted by minimizing the cost function of the model predictive controller.
In step 150, the operation parameter at the first time in the control horizon is taken as the target operation parameter, and visual servo control is performed on the unmanned device so that it moves to the desired position.
In one embodiment, a cascade PID operation parameter tracking controller may be designed so that the actual operation parameters of the unmanned device track the target operation parameters.
In the above embodiment, after the cost function of the model predictive controller is constructed and minimized, the operation parameters of the unmanned device at each time in the control horizon can be predicted, and the operation parameter at the first time in the control horizon is taken as the target operation parameter for visual servo control of the unmanned device. That is, the model predictive control algorithm is combined with image-based visual servo control, so that operation of the unmanned device no longer depends on a GPS signal, the difficult parameter tuning of the traditional image-based visual servo control algorithm is avoided, and the operation control performance of the unmanned device is improved.
Fig. 2 is a flowchart illustrating another embodiment of a visual servo control method according to the present disclosure.
At step 210, ground mark images taken by the camera of the drone at the desired position and at the current position are determined. The current position includes an initial position, and the desired position is, for example, a landing point.
In step 220, the desired visual feature point coordinates and the current visual feature point coordinates are determined based on the ground mark images.
In one embodiment, four points are taken as the visual control points in the visual servo control of the drone, and their projections onto the image plane, obtained through the camera imaging model, serve as the visual feature points, so that the motion of the drone can be obtained through predictive control of the visual feature points. To simplify the analysis, a single visual feature point may be selected for the modeling analysis.
The desired visual feature point coordinates and the current visual feature point coordinates are both expressed in the image coordinate system. For example, the current visual feature point coordinates are $s(t) = [u_1, v_1, \ldots, u_4, v_4]^T$ and the desired visual feature point coordinates are $s^*(t) = [u_1^*, v_1^*, \ldots, u_4^*, v_4^*]^T$.
In step 230, it is determined whether the difference between the current visual feature point coordinates and the desired visual feature point coordinates is zero; if so, the process terminates, otherwise step 240 is executed.
Because the drone's camera captures the ground mark in real time, when the difference between the visual feature point coordinates corresponding to the captured ground mark image and the desired visual feature point coordinates is zero, the drone has reached the desired position; otherwise, the drone continues to capture ground mark images and step 240 is executed.
In step 240, the cost function of the model predictive controller is constructed, its prediction horizon parameter and control horizon parameter are set, and field-of-view and operation constraints are imposed on the cost function of the model predictive controller.
As shown in fig. 3, the model predictive controller includes a model predictor and an optimization model. The model predictor predicts the visual feature point coordinates over the prediction horizon; the difference between the output of the model predictor and the desired visual feature point coordinates is an input to the optimization model, and constraint conditions, such as a field-of-view constraint and a velocity constraint, can also be input to the optimization model.
In one embodiment, a kinematic equation is first constructed. For example, let the coordinates of a visual feature point in the camera coordinate system be $P = [x_c, y_c, z_c]^T$; the mapping from the feature point $P = [x_c, y_c, z_c]^T$ to the feature point $s(t) = [u, v]^T$ in the image coordinate system can then be expressed as equation (1).
where f is the calibrated camera intrinsic parameter, and $z_c$ is the estimated distance from the camera to the ground mark.
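Equation (1) itself is not reproduced in this text. For a pinhole camera with focal length f, the projection it describes conventionally takes the following form, stated here only as background and not as the original equation:

$$u = f\,\frac{x_c}{z_c}, \qquad v = f\,\frac{y_c}{z_c}$$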
In one embodiment, the camera is mounted on the drone. In the camera coordinate system $(O_c X_c Y_c Z_c)$, the camera optical center is the origin; the camera optical axis coincides with $O_c Z_c$, whose positive direction is the shooting direction of the camera; the image x-axis is parallel to $O_c X_c$ and the image y-axis is parallel to $O_c Y_c$. For the body coordinate system $(O_b X_b Y_b Z_b)$, the origin coincides with the center of gravity of the drone; the longitudinal axis of the drone points forward, in the same direction as the $O_b Z_b$ axis, both lying in the plane of symmetry; the $O_b X_b$ axis points to the right; and the $O_b Y_b$ axis points toward the ground, perpendicular to the $O_b X_b$ axis and generally in the plane of symmetry of the aircraft.
When the drone carrying the camera flies, assume that the translational velocity of the feature point P in the camera coordinate system is $T = [T_x, T_y, T_z]^T$ and the rotational velocity is $\omega = [w_x, w_y, w_z]^T$; the kinematic equation of the feature point P in the camera coordinate system can then be expressed as equation (2), where the translational velocity reflects the position of the drone and the rotational velocity reflects its attitude.
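Equation (2) is likewise not reproduced in this text. For a point that is fixed in the world and observed from a camera moving with translational velocity T and rotational velocity $\omega$, the kinematics in standard image-based visual servoing conventionally read (background form only):

$$\dot P = -T - \omega \times P$$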
where $\dot P = [\dot x_c, \dot y_c, \dot z_c]^T$ is the velocity of the feature point P in the camera coordinate system.
Substituting equation (1) into equation (2) yields the kinematic equation of the feature point P with respect to the image coordinates, as shown in equation (3).
Differentiating $s(t) = [u, v]^T$ with respect to time gives the velocity information in the image coordinate system; equation (3) then yields the kinematic equation in the image coordinate system, equation (4).
Suppose the velocity of the camera mounted on the drone is expressed as $V_c = [T_x, T_y, T_z, w_x, w_y, w_z]^T$; equation (4) can then be expressed as equation (5), in which the coefficient matrix relating $\dot s$ to $V_c$ is the image Jacobian matrix defined in equation (6).
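Equations (5) and (6) are not reproduced in this text. In the standard image-based visual servoing formulation, and under the pinhole projection given above for equation (1), they conventionally take the following form, where $L(s, z_c)$ is the interaction (image Jacobian) matrix of a point feature; this is stated as background, not as the original equations:

$$\dot s = L(s, z_c)\,V_c, \qquad L(s, z_c) = \begin{bmatrix} -\dfrac{f}{z_c} & 0 & \dfrac{u}{z_c} & \dfrac{u v}{f} & -\dfrac{f^2 + u^2}{f} & v \\[4pt] 0 & -\dfrac{f}{z_c} & \dfrac{v}{z_c} & \dfrac{f^2 + v^2}{f} & -\dfrac{u v}{f} & -u \end{bmatrix}$$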
Assuming that the visual feature sampling period of each iteration of the model predictive control algorithm is T, discretizing equation (5) yields equation (7).
$$s_{k+1} = s_k + T\,L(s_k, Z_k)\,V_k \tag{7}$$
where $V_k$ is the velocity of the drone at time k, and $L(s_k, Z_k)$ is the image Jacobian matrix at time k.
As can be seen from equation (7), the product of the image Jacobian matrix $L(s_k, Z_k)$ corresponding to the visual feature point at the current time, the drone's operation parameter $V_k$ at the current time, and the sampling period T of the vision sensor is computed, and the result is added to the current visual feature coordinates $s_k$ to predict the visual feature point coordinates $s_{k+1}$ of the drone at the next time. This prediction step is repeated until the visual feature point coordinates $s_{k+i}$ at the last time in the prediction horizon $N_p$ are predicted, where $0 \le i \le N_p$.
In one embodiment, the visual feature point error at each time in the prediction horizon is determined from the difference between the visual feature point coordinates at that time and the desired visual feature point coordinates; a quadratic function of the visual feature point errors is formed over the prediction horizon; a quadratic function of the operation parameters of the drone at each time in the control horizon, which is an energy function, is also formed; and the cost function of the model predictive controller is determined as the sum of the quadratic function of the visual feature point errors and the quadratic function of the operation parameters. For example, the cost function is shown in equation (8).
where $N_p$ is the prediction horizon parameter of the predictive control, $N_c$ is the control horizon parameter, and R and Q are symmetric positive-definite weighting matrices.
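Equation (8) is not reproduced in this text. A quadratic model-predictive cost of the kind described above, consistent with the definitions of $N_p$, $N_c$, Q and R, conventionally takes the following form (background form only):

$$J_k = \sum_{i=1}^{N_p} \left(s_{k+i} - s^*\right)^T Q \left(s_{k+i} - s^*\right) + \sum_{i=0}^{N_c - 1} V_{k+i}^T R\, V_{k+i}$$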
In one embodiment, the visual feature point coordinates and the operation parameters in the cost function are constrained in view of the field-of-view constraint and the velocity constraint of the drone; for example, constraint equations are established as shown in equations (9) and (10).
where $T_{max}$ and $w_{max}$ represent the maximum translational and rotational speeds of the drone, and $u_{min}$, $v_{min}$, $u_{max}$ and $v_{max}$ are the boundary values of the image-plane projection coordinates of feature points within the camera's field of view.
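Equations (9) and (10) are not reproduced in this text; constraints of the kind described above conventionally amount to box constraints of the following form (background sketch only):

$$|T_x|, |T_y|, |T_z| \le T_{max}, \qquad |w_x|, |w_y|, |w_z| \le w_{max}$$
$$u_{min} \le u_{k+i} \le u_{max}, \qquad v_{min} \le v_{k+i} \le v_{max}$$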
In step 250, the desired visual feature point coordinates and the current visual feature point coordinates are input to the constructed model predictive controller, and the flight speed of the drone at each time in the control horizon is obtained by minimizing the cost function of the model predictive controller.
As shown in fig. 4, curve 1 represents the reference trajectory, curve 2 represents the curve formed by the predicted visual feature point coordinates at each time in the prediction horizon, and curve 3 represents the predicted flight speed of the drone at each time in the control horizon.
In step 260, the flight speed at the first time in the control horizon is taken as the target flight speed satisfying the field-of-view and operation constraints of the drone; that is, the output of the optimization model at the first time is taken as the target flight speed at the current time.
In one embodiment, the MATLAB/MPT toolbox can be used to solve for the target flight speed; that is, the equations established above are input to the MATLAB/MPT toolbox, which yields the optimal flight speed of the drone satisfying the constraints.
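For readers without access to MATLAB, the same constrained quadratic program can be posed with an off-the-shelf solver. The following Python/cvxpy snippet is an illustrative sketch only, not part of the disclosure; it assumes a constant image Jacobian over the horizon and, for brevity, a control horizon equal to the prediction horizon:

```python
import cvxpy as cp
import numpy as np

def solve_mpc(s_k, s_star, L, T, Np, Q, R, V_max, s_min, s_max):
    """Minimize sum ||s_i - s*||_Q^2 + sum ||V_i||_R^2 subject to the
    discretized feature kinematics, field-of-view and speed constraints."""
    m, n = L.shape                    # feature dim and velocity dim, e.g. 8 and 6
    S = cp.Variable((Np + 1, m))      # predicted feature coordinates
    V = cp.Variable((Np, n))          # velocities over the horizon
    cost, cons = 0, [S[0] == s_k]
    for i in range(Np):
        cons += [S[i + 1] == S[i] + T * (L @ V[i])]     # equation (7)
        cons += [cp.abs(V[i]) <= V_max]                 # operation constraint
        cons += [S[i + 1] >= s_min, S[i + 1] <= s_max]  # field-of-view constraint
        cost += cp.quad_form(S[i + 1] - s_star, Q) + cp.quad_form(V[i], R)
    cp.Problem(cp.Minimize(cost), cons).solve()
    return V.value[0]   # apply only the first velocity, as in step 260
```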
In step 270, the target flight speed is input to the cascade PID flight speed tracking controller so that the actual flight speed of the drone tracks the target flight speed. As shown in fig. 3, through the cascade PID speed tracking controller in the drone model, the actual flight speed output by the drone is $V_c(k)$; the visual feature point coordinates can then be output through the camera model.
As shown in fig. 5, in one embodiment, the target operation parameter is input to a position controller and an attitude controller connected in series, where the position controller and the attitude controller have proportional, integral and derivative control functions; the position information output by the serial position and attitude controllers is fed back to the position controller, and the output attitude information is fed back to the attitude controller.
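The cascade arrangement can be sketched as follows. This Python snippet is an illustrative sketch only, not the controller of the disclosure; the scalar signals, gains and the position-to-attitude mapping are assumptions made for illustration:

```python
class PID:
    """Textbook PID with proportional, integral and derivative terms."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def cascade_step(pos_pid, att_pid, target_pos, measured_pos, measured_att):
    """Outer loop: the position error (position feedback) yields an attitude
    setpoint; inner loop: the attitude error (attitude feedback) yields the
    actuator command."""
    att_setpoint = pos_pid.step(target_pos - measured_pos)
    return att_pid.step(att_setpoint - measured_att)
```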
In the above embodiment, the cost function of the model predictive controller is constructed and subjected to field-of-view and operation constraints, so that the target flight speed output by the model predictive controller satisfies the field-of-view and operation constraint conditions and the actual flight speed of the drone can track the target flight speed more accurately, thereby reducing the influence of noise and calibration errors on the visual servo control. Moreover, because no precise drone dynamics model or visual feature point dynamics model is required in this embodiment, the anti-interference performance of the system is enhanced, giving the system stronger robustness, better applicability and better operability.
In one embodiment, the desired location of the drone may be a landing location of the drone, thereby enabling the drone to complete autonomous landing without relying on GPS signals.
Fig. 6 is a schematic structural diagram of an embodiment of a visual servo control apparatus according to the present disclosure. The apparatus includes a ground identification image acquisition unit 610, a feature point coordinate determination unit 620, an operation parameter prediction unit 630, and a visual servo control unit 640.
The ground mark image acquisition unit 610 is configured to acquire a ground mark image taken by a vision sensor of the unmanned device at its current position. The ground mark is, for example, a marker of a specific shape or color, such as a black-and-white checkerboard or an H-shaped landmark, which may be laid on the ground or placed on a movable platform such as a mobile ground robot. The unmanned device is, for example, a drone, in particular a quad-rotor drone, and the vision sensor is, for example, a camera.
The feature point coordinate determination unit 620 is configured to determine current visual feature point coordinates based on the ground mark image. After the ground mark image is obtained, image processing is performed to extract corner point information of the ground mark; the corner points are used as visual feature points, and their coordinates in an image coordinate system are then determined.
The operation parameter prediction unit 630 is configured to construct a cost function of the model predictive controller from the difference between the visual feature point coordinates at each time in the prediction horizon and the desired visual feature point coordinates, and from the operation parameters of the unmanned device at each time in the control horizon, where the visual feature point coordinates at each time in the prediction horizon are obtained from the current visual feature point coordinates and the corresponding operation parameters; and to predict the operation parameters of the unmanned device at each time in the control horizon by minimizing the cost function of the model predictive controller. The operation parameters are, for example, velocity information that reflects the position and attitude of the unmanned device.
In one embodiment, a kinematic equation of the visual feature points may be established and discretized to give a correspondence between the current visual feature coordinates and the predicted visual feature coordinates at the next time. Assuming that the current time is time k and the current visual feature coordinates are $s_k$, the visual feature coordinates $s_{k+1}$ at time k+1 are predicted from $s_k$; the visual feature coordinates $s_{k+2}$ at time k+2 are then predicted from $s_{k+1}$; and so on, until the visual feature coordinates $s_{k+i}$ at time k+i are predicted.
In one embodiment, the visual feature point error at each time in the prediction horizon is determined from the difference between the visual feature point coordinates at that time and the desired visual feature point coordinates; a quadratic function of the visual feature point errors is formed over the prediction horizon; a quadratic function of the operation parameters of the unmanned device at each time in the control horizon, which is an energy function, is also formed; and the cost function of the model predictive controller is determined as the sum of the two quadratic functions.
In one embodiment, the operating parameter prediction unit 630 is further configured to constrain the cost function of the model predictive controller to obtain target operating parameters that satisfy the field of view constraints and the operating constraints of the unmanned device.
The visual servo control unit 640 is configured to take the operation parameter at the first time in the control horizon as the target operation parameter and perform visual servo control on the unmanned device so that it moves to the desired position.
In one embodiment, a cascade PID operation parameter tracking controller may be designed so that the actual operation parameters of the unmanned device track the target operation parameters. For example, the target operation parameters are input to a position controller and an attitude controller connected in series, where the position controller and the attitude controller have proportional, integral and derivative control functions; the position information output by the serial controllers is fed back to the position controller, and the output attitude information is fed back to the attitude controller.
In one embodiment, since the camera of the drone captures the ground mark in real time, when the difference between the visual feature point coordinates corresponding to the captured ground mark image and the desired visual feature point coordinates is zero, the drone has reached the desired position.
In this embodiment, after the cost function of the model predictive controller is constructed and minimized, the operation parameters of the unmanned device at each time in the control horizon can be predicted, and the operation parameter at the first time in the control horizon is taken as the target operation parameter for visual servo control of the unmanned device. That is, the model predictive control algorithm is combined with image-based visual servo control, so that operation of the unmanned device no longer depends on a GPS signal, the difficult parameter tuning of the traditional image-based visual servo control algorithm is avoided, the effect of receding-horizon (rolling) optimization is achieved, and the operation control performance of the unmanned device is improved.
In addition, for a nonlinear, strongly coupled drone operating in an environment with multiple constraints and dynamic uncertainty, the control performance of image-based visual servo control in the related art is not sufficiently stable. In this embodiment, field-of-view and operation constraints are imposed on the parameters in the cost function, so that the target operation parameters output by the model predictive controller satisfy the field-of-view and operation constraint conditions, the actual operation parameters of the unmanned device can track the target operation parameters more accurately, the influence of noise and calibration errors on the visual servo control is reduced, the anti-interference performance of the system is enhanced, and the system attains stronger robustness, better applicability and better operability.
Fig. 7 is a schematic structural diagram of another embodiment of the visual servo control apparatus according to the present disclosure. The apparatus comprises a memory 710 and a processor 720, wherein:
the memory 710 may be a magnetic disk, flash memory, or any other non-volatile storage medium. The memory 710 is used for storing instructions in the embodiments corresponding to fig. 1 and 2. Processor 720, coupled to memory 710, may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 720 is configured to execute instructions stored in the memory.
In one embodiment, as also shown in FIG. 8, the apparatus 800 includes a memory 810 and a processor 820. The processor 820 is coupled to the memory 810 by a BUS 830. The device 800 may also be coupled to an external storage device 850 via a storage interface 840 for facilitating retrieval of external data, and may also be coupled to a network or another computer system (not shown) via a network interface 860, which will not be described in detail herein.
In the embodiment, the data instructions are stored in the memory, and the instructions are processed by the processor, so that the operation control performance of the unmanned equipment is improved.
In another embodiment of the present disclosure, an unmanned device, such as a drone, is provided, as shown in fig. 9, which includes the visual servo control apparatus 910 of the above embodiments.
In another embodiment, the drone further includes a vision sensor 920, wherein the vision sensor 920 is, for example, a camera configured to capture ground identification images.
In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the corresponding embodiments of fig. 1, 2. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.