CN111624875A - Visual servo control method and device and unmanned equipment - Google Patents

Visual servo control method and device and unmanned equipment Download PDF

Info

Publication number
CN111624875A
CN111624875A (application CN201910143543.1A)
Authority
CN
China
Prior art keywords
visual
moment
feature point
time domain
visual feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910143543.1A
Other languages
Chinese (zh)
Inventor
Li Mei (李梅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910143543.1A priority Critical patent/CN111624875A/en
Publication of CN111624875A publication Critical patent/CN111624875A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G05B13/042 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 - Simultaneous control of position or course in three dimensions
    • G05D1/101 - Simultaneous control of position or course in three dimensions specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The disclosure provides a visual servo control method and device and unmanned equipment, and relates to the field of unmanned control. The method comprises the following steps: acquiring a ground identification image shot by a visual sensor of the unmanned equipment at the current position; determining the coordinates of the current visual feature points based on the ground identification image; constructing a cost function of a model predictive controller by using the difference between the coordinates of the visual feature points at each moment in the prediction time domain and the coordinates of the expected visual feature points, and the operation parameters of the unmanned equipment at each moment in the control time domain, wherein the coordinates of the visual feature points at each moment in the prediction time domain are obtained according to the current coordinates of the visual feature points and the corresponding operation parameters; predicting the operation parameters of the unmanned equipment at each moment in the control time domain by minimizing the cost function of the model predictive controller; and taking the operation parameter at the first moment in the control time domain as the target operation parameter for visual servo control of the unmanned equipment, which can improve the operation control performance of the unmanned equipment.

Description

Visual servo control method and device and unmanned equipment
Technical Field
The disclosure relates to the field of unmanned control, in particular to a visual servo control method and device and unmanned equipment.
Background
Visual servo algorithms for robots and unmanned aerial vehicles have been widely researched. A visual servo system senses the position and attitude of the robot or unmanned aerial vehicle using one or more cameras, so that its position and orientation can be accurately controlled. According to the form in which the error quantity is constructed from the feedback information, visual servo control systems can be classified into position-based visual servoing (PBVS), image-based visual servoing (IBVS), and hybrid visual servoing.
In recent years, image-based visual servo control has been applied to unmanned aerial vehicles. Its key problem is how to obtain the image Jacobian matrix that relates the image features to the position and attitude of the unmanned aerial vehicle; in addition, image-based visual servo control has difficulty handling constraints on the system, which may degrade the performance of the overall flight control system.
Disclosure of Invention
The technical problem to be solved by the present disclosure is to provide a visual servo control method, device and unmanned equipment, which can improve the operation control performance of the unmanned equipment.
According to an aspect of the present disclosure, a visual servo control method is provided, including: acquiring a ground identification image shot by a visual sensor of the unmanned equipment at the current position; determining the coordinates of the current visual feature points based on the ground identification image; constructing a cost function of a model predictive controller by using the difference between the coordinates of the visual feature points at each moment in the prediction time domain and the coordinates of the expected visual feature points, and the operation parameters of the unmanned equipment at each moment in the control time domain, wherein the coordinates of the visual feature points at each moment in the prediction time domain are obtained according to the current coordinates of the visual feature points and the corresponding operation parameters; predicting the operation parameters of the unmanned equipment at each moment in the control time domain by minimizing the cost function of the model predictive controller; and taking the operation parameter at the first moment in the control time domain as the target operation parameter, and carrying out visual servo control on the unmanned equipment so as to enable the unmanned equipment to operate to a desired position.
In one embodiment, the method further comprises: and constraining the cost function of the model predictive controller to obtain target operation parameters meeting the view field constraint condition and the operation constraint condition of the unmanned equipment.
In one embodiment, constructing the cost function of the model predictive controller comprises: determining the visual characteristic point error of each moment in the prediction time domain according to the difference value between the visual characteristic point coordinate of each moment in the prediction time domain and the expected visual characteristic point coordinate; determining a quadratic function of the visual characteristic point errors according to the visual characteristic point errors at each moment in the prediction time domain; determining a quadratic function of the operation parameters according to the operation parameters of the unmanned equipment at each moment in the control time domain; and determining a cost function of the model predictive controller based on the sum of the quadratic function of the visual feature point error and the quadratic function of the operation parameter.
In one embodiment, obtaining visual feature point coordinates at each time instant in the prediction horizon comprises: predicting the visual feature point coordinates of the unmanned equipment at the next moment according to the current visual feature coordinates and the operation parameters of the unmanned equipment at the current moment; and repeating this prediction for each subsequent moment until the visual feature point coordinates of the unmanned equipment at the last moment in the prediction time domain are predicted.
In one embodiment, predicting the visual feature point coordinates of the drone at the next time instance comprises: acquiring the image Jacobian matrix corresponding to the visual feature point at the current moment; multiplying the operation parameters of the unmanned equipment at the current moment by the image Jacobian matrix and the sampling period of the visual sensor; and predicting the visual feature point coordinates of the unmanned equipment at the next moment from the result of this product and the current visual feature coordinates.
In one embodiment, visually servo-controlling the drone comprises: inputting target operation parameters into a position controller and an attitude controller which are connected in series, wherein the position controller and the attitude controller have proportional, integral and derivative control functions; and feeding back the position information output by the series-connected position controller and attitude controller to the position controller, and feeding back the output attitude information to the attitude controller.
In one embodiment, the operational parameter is velocity information that reflects the position and attitude of the drone.
According to another aspect of the present disclosure, there is also provided a visual servo control apparatus, comprising: a ground identification image acquisition unit configured to acquire a ground identification image photographed by a vision sensor of the unmanned device at a current position; a feature point coordinate determination unit configured to determine current visual feature point coordinates based on the ground identification image; an operation parameter prediction unit configured to construct a cost function of the model predictive controller by using the difference between the visual feature point coordinates at each moment in the prediction time domain and the expected visual feature point coordinates, and the operation parameters of the unmanned equipment at each moment in the control time domain, wherein the visual feature point coordinates at each moment in the prediction time domain are obtained according to the current visual feature point coordinates and the corresponding operation parameters, and to predict the operation parameters of the unmanned equipment at each moment in the control time domain by minimizing the cost function of the model predictive controller; and a visual servo control unit configured to take the operation parameter at the first moment in the control time domain as the target operation parameter and perform visual servo control on the unmanned equipment so as to enable the unmanned equipment to operate to a desired position.
In one embodiment, the operating parameter prediction unit is further configured to constrain a cost function of the model predictive controller to obtain target operating parameters that satisfy field of view constraints and operating constraints of the unmanned device.
According to another aspect of the present disclosure, there is also provided a visual servo control apparatus, comprising: a memory; and a processor coupled to the memory, the processor configured to perform the visual servoing control method as described above based on instructions stored in the memory.
According to another aspect of the present disclosure, there is also provided an unmanned aerial vehicle including the visual servo control apparatus.
In one embodiment, the drone further comprises: a vision sensor configured to capture a ground identification image.
According to another aspect of the present disclosure, a computer-readable storage medium is also proposed, on which computer program instructions are stored, which instructions, when executed by a processor, implement the steps of the above-mentioned visual servo control method.
Compared with the prior art, by constructing the cost function of the model predictive controller and minimizing it, the operation parameters of the unmanned equipment at each moment in the control time domain can be predicted, and the operation parameter at the first moment in the control time domain is used as the target operation parameter for visual servo control of the unmanned equipment. That is, the model predictive control algorithm is combined with image-based visual servo control, so the operation of the unmanned equipment no longer depends on a GPS signal; the difficulty of parameter tuning in the traditional image-based visual servo control algorithm can be overcome, and the operation control performance of the unmanned equipment is improved.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a flowchart illustrating a visual servo control method according to an embodiment of the disclosure.
Fig. 2 is a flowchart illustrating another embodiment of a visual servo control method according to the present disclosure.
FIG. 3 is a schematic diagram of a model predictive control framework in the visual servoing control method according to the disclosure.
Fig. 4 is a schematic diagram of the operating parameters of the drone in the prediction time domain output by the model predictive controller of the present disclosure.
FIG. 5 is a schematic diagram of a cascaded PID speed tracking controller of the present disclosure.
Fig. 6 is a schematic structural diagram of an embodiment of a visual servo control apparatus according to the present disclosure.
Fig. 7 is a schematic structural diagram of another embodiment of the visual servo control apparatus according to the present disclosure.
Fig. 8 is a schematic structural diagram of a visual servo control apparatus according to still another embodiment of the disclosure.
Fig. 9 is a schematic structural diagram of an embodiment of the unmanned aerial device of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a flowchart illustrating a visual servo control method according to an embodiment of the disclosure.
In step 110, a ground identification image taken by the vision sensor of the unmanned device at the current position is obtained. The ground mark is, for example, a marker of a specific shape or color, such as a black-and-white checkerboard or an H-shaped landmark, which may be laid on the ground or placed on a movable platform such as a mobile ground robot. The unmanned device is, for example, an unmanned aerial vehicle, in particular a quad-rotor drone, and the vision sensor is, for example, a camera.
At step 120, current visual feature point coordinates are determined based on the ground identification image. After the ground identification image is obtained, image processing is carried out to extract the corner point information of the ground mark; the corner points are used as visual feature points, and their coordinates in the image coordinate system are then determined.
The image coordinate system can be divided into two main categories: the image physical coordinate system and the image pixel coordinate system. In the image physical coordinate system (x_s, y_s), the origin is the intersection of the optical axis with the image plane, and the coordinates are in millimeters. In the image pixel coordinate system (u_s, v_s), the origin is the upper-left corner of the image, the coordinates are in pixels, the column index of the coordinate system is given by u_s, and the row index by v_s. The u_s axis corresponds to the x_s axis, and the v_s axis corresponds to the y_s axis.
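As a concrete illustration of this step, the sketch below extracts checkerboard corners as visual feature points and converts pixel coordinates to image-physical coordinates. It is a minimal sketch, not the patent's implementation: it assumes the ground mark is a black-and-white checkerboard, that OpenCV is available, and all function and parameter names are ours.

```python
import cv2
import numpy as np

def extract_feature_points(image, pattern=(7, 7)):
    """Detect checkerboard corners (u_s, v_s) to use as visual feature points."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    # Refine the detected corners to sub-pixel accuracy for a stabler feature vector.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)  # one (u_s, v_s) row per corner, in pixels

def pixel_to_physical(uv, principal_point, pixel_size_mm):
    """Map image pixel coordinates (u_s, v_s) to image physical coordinates
    (x_s, y_s): shift the origin to the principal point and scale to mm."""
    return (uv - np.asarray(principal_point)) * pixel_size_mm
```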
In step 130, a cost function of the model predictive controller is constructed by using the difference between the visual feature point coordinates at each time in the prediction time domain and the expected visual feature point coordinates, and the operation parameters of the unmanned aerial vehicle at each time in the control time domain, wherein the visual feature point coordinates at each time in the prediction time domain are obtained according to the current visual feature point coordinates and the corresponding operation parameters.
In one embodiment, a kinematic equation of the visual feature point may be established and discretized, establishing a correspondence between the current visual feature coordinates and the predicted visual feature coordinates at the next moment. The discretized kinematic equation shows that the visual feature coordinates at the next moment can be predicted from the current visual feature coordinates and the operation parameters of the unmanned equipment at the current moment, so the operation parameters of the unmanned equipment at each moment in the control time domain can be used as the parameters for obtaining the visual feature point coordinates at each moment in the prediction time domain. The operation parameter is, for example, velocity information that can reflect the position and attitude of the unmanned device.
In one embodiment, assume that the current time is time k and the current visual feature coordinate is s_k. From s_k, the visual feature coordinate at time k+1 is predicted as s_{k+1}; from s_{k+1}, the visual feature coordinate at time k+2 is predicted as s_{k+2}; and so on, until the visual feature coordinate at time k+i is predicted as s_{k+i}. For example, the visual feature point coordinates at each moment in the prediction time domain are calculated according to the formula

s_{k+i} = s_{k+i-1} + T L V_{k+i-1}, i = 1, ..., N_p,

where N_p is the prediction horizon, T is the sampling period of the vision sensor, L is the image Jacobian matrix, and V_{k+i-1} is the predicted operation parameter at time k+i-1.
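As an illustration only (not from the patent), the recursion above can be rolled out directly in code. The sketch holds the image Jacobian L constant over the horizon, as the simplified analysis here allows; all names are assumptions.

```python
import numpy as np

def predict_features(s_k, V_seq, L, T):
    """Roll out s_{k+i} = s_{k+i-1} + T * L * V_{k+i-1} over the horizon.

    s_k   : current feature vector, shape (2n,) for n feature points
    V_seq : operation parameters (velocities) per step, shape (N_p, 6)
    L     : image Jacobian matrix, shape (2n, 6)
    T     : sampling period of the vision sensor
    """
    s = s_k.copy()
    trajectory = []
    for V in V_seq:
        s = s + T * (L @ V)   # one step of the prediction recursion
        trajectory.append(s.copy())
    return np.array(trajectory)  # predicted coordinates, shape (N_p, 2n)
```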
In one embodiment, the current visual feature point coordinates and the expected visual feature point coordinates are input to a model predictive controller, and a prediction time domain parameter and a control time domain parameter are set, while a cost function for model predictive control is established that is related to the difference between the visual feature point coordinates and the expected visual feature point coordinates at each time in the prediction time domain, and the operating parameters of the unmanned aerial device at each time in the control time domain.
In step 140, the operating parameters of the drone at each time in the control time domain are predicted by minimizing the cost function of the model predictive controller.
In step 150, the operation parameter at the first time in the control time domain is used as a target operation parameter, and the unmanned device is subjected to visual servo control so as to operate to a desired position.
In one embodiment, the cascaded PID operating parameter tracking controller may be designed such that the actual operating parameters of the drone are in accordance with the target operating parameters.
In the above embodiment, by constructing the cost function of the model predictive controller and minimizing it, the operation parameters of the unmanned aerial vehicle at each time in the control time domain can be predicted, and the operation parameter at the first time in the control time domain is taken as the target operation parameter for visual servo control of the unmanned aerial vehicle. That is, the model predictive control algorithm is combined with image-based visual servo control, so the operation of the unmanned aerial vehicle no longer depends on a GPS signal; the difficulty of parameter tuning in the traditional image-based visual servo control algorithm can be overcome, and the operation control performance of the unmanned aerial vehicle is improved.
Fig. 2 is a flowchart illustrating another embodiment of a visual servo control method according to the present disclosure.
At step 210, ground identification images taken by the cameras of the drone at the desired location and the current location are determined. The current location includes an initial location, and the desired location is, for example, a touchdown point.
In step 220, the desired visual feature point coordinates and the current visual feature point coordinates are determined based on the ground identification image.
In one embodiment, for example, four points are taken as the visual control points in the visual servo control of the drone, and their projections onto the image plane, obtained through the camera imaging model, serve as the visual feature points, so that the motion change of the drone can be obtained through predictive control of the visual feature points. To simplify the analysis, a single visual feature point may be selected for the modeling analysis.
The desired visual feature point coordinates and the current visual feature point coordinates are both expressed in the image coordinate system. For example, the current visual feature point coordinates are s(t) = [u_1, v_1, ..., u_4, v_4]^T, and the desired visual feature point coordinates are s^*(t) = [u^*_1, v^*_1, ..., u^*_4, v^*_4]^T.
In step 230, it is determined whether the difference between the current visual feature point coordinates and the expected visual feature point coordinates is 0; if so, the process terminates; otherwise, step 240 is executed.
Because the camera of the drone shoots the ground mark in real time, when the difference between the visual feature point coordinates corresponding to the captured ground mark image and the expected visual feature point coordinates is zero, the drone has reached the expected position; otherwise, the drone continues to shoot ground mark images and step 240 is executed.
In step 240, a cost function of the model predictive controller is constructed, a prediction time domain parameter and a control time domain parameter of the cost function are set, and the cost function of the model predictive controller is subjected to view field constraint and operation constraint.
As shown in fig. 3, the model predictive controller includes a model predictor and an optimization model. The model predictor is used for predicting the coordinates of the visual characteristic points in the prediction time domain, wherein the difference value of the output value of the model predictor and the expected visual characteristic point coordinates is used as an input parameter of the optimization model, and meanwhile, limiting conditions, such as a visual field constraint condition and a speed constraint condition, can also be input into the optimization model.
In one embodiment, a kinematic equation is first constructed. For example, let the coordinates of a visual feature point in the camera coordinate system be P = [x_c, y_c, z_c]^T. The projection of P = [x_c, y_c, z_c]^T to the feature point s(t) = [u, v]^T in the image coordinate system can be expressed as formula (1):

u = f x_c / z_c,  v = f y_c / z_c    (1)

where f is the calibrated camera intrinsic parameter (focal length), and z_c is the estimated distance from the camera to the ground mark.
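A tiny worked example of formula (1), with assumed numbers (ours, not the patent's):

```python
# Pinhole projection of formula (1); all values are illustrative.
f = 800.0                        # calibrated focal length (assumed)
x_c, y_c, z_c = 0.5, -0.2, 4.0   # feature point in the camera frame (assumed)
u = f * x_c / z_c                # -> 100.0
v = f * y_c / z_c                # -> -40.0
```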
In one embodiment, the camera is mounted on the drone. In the camera coordinate system (O_c X_c Y_c Z_c), the camera optical center is the origin; the camera optical axis coincides with O_c Z_c, its forward direction being the shooting direction of the camera; the X axis of the camera is parallel to O_c X_c, and the Y axis is parallel to O_c Y_c. For the body coordinate system (O_b X_b Y_b Z_b), the origin coincides with the center of gravity of the drone; the longitudinal axis of the drone points to its front, in the same direction as O_b Z_b, both lying in the plane of symmetry; the O_b X_b axis points to the right; and the O_b Y_b axis points toward the ground, perpendicular to the O_b X_b axis and generally in the plane of symmetry of the aircraft.
When the drone carrying the camera flies, assume that the translation rate of the feature point P in the camera coordinate system is T = [T_x, T_y, T_z]^T and the rotation rate is Ω = [w_x, w_y, w_z]^T; the translation rate reflects the position of the drone and the rotation rate reflects its attitude. The kinematic equation of the feature point P in the camera coordinate system can then be expressed as formula (2):

\dot{P} = -T - Ω × P    (2)

where \dot{P} = [\dot{x}_c, \dot{y}_c, \dot{z}_c]^T is the velocity of the feature point P in the camera coordinate system.
Substituting equation (1) into equation (2) gives the kinematic equation of the feature point P in the camera coordinate system expressed in terms of the image coordinates, as shown in formula (3):

\dot{x}_c = -T_x - z_c w_y + (v z_c / f) w_z
\dot{y}_c = -T_y + z_c w_x - (u z_c / f) w_z    (3)
\dot{z}_c = -T_z - (v z_c / f) w_x + (u z_c / f) w_y

Differentiating s(t) = [u, v]^T with respect to time gives the velocity in the image coordinate system, \dot{s}(t) = [\dot{u}, \dot{v}]^T. Substituting formula (3) then yields the kinematic equations in the image coordinate system, formula (4):

\dot{u} = -(f / z_c) T_x + (u / z_c) T_z + (u v / f) w_x - (f + u^2 / f) w_y + v w_z
\dot{v} = -(f / z_c) T_y + (v / z_c) T_z + (f + v^2 / f) w_x - (u v / f) w_y - u w_z    (4)
Suppose the velocity of the camera carried on the drone is expressed as V = [T_x, T_y, T_z, w_x, w_y, w_z]^T. Equation (4) can then be written compactly as equation (5):

\dot{s} = L(s, z_c) V    (5)

where L(s, z_c) is the image Jacobian (interaction) matrix of formula (6):

L(s, z_c) = [ -f/z_c    0         u/z_c    u v / f       -(f + u^2/f)    v
               0         -f/z_c    v/z_c    f + v^2/f     -u v / f        -u ]    (6)
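As a sketch of formula (6) in code, for one feature point and for the four stacked feature points; the function names are ours, not the patent's:

```python
import numpy as np

def image_jacobian(u, v, z_c, f):
    """The 2x6 interaction matrix L(s, z_c) of formula (6), mapping the
    camera velocity V = [T_x, T_y, T_z, w_x, w_y, w_z] to [du/dt, dv/dt]."""
    return np.array([
        [-f / z_c, 0.0, u / z_c, u * v / f, -(f + u**2 / f), v],
        [0.0, -f / z_c, v / z_c, f + v**2 / f, -u * v / f, -u],
    ])

def stacked_jacobian(points, z_c, f):
    """Stack the per-point Jacobians of the four feature points into (8, 6)."""
    return np.vstack([image_jacobian(u, v, z_c, f) for u, v in points])
```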
Assuming that the visual feature sampling period of each iteration in the model predictive control algorithm is T, discretizing formula (5) gives formula (7):

s_{k+1} = s_k + T L(s_k, z_k) V_k    (7)

where V_k is the motion velocity of the drone and L(s_k, z_k) is the image Jacobian matrix at time k.

As formula (7) shows, the image Jacobian matrix L(s_k, z_k) corresponding to the visual feature point at the current time is obtained; the operation parameter V_k of the drone at the current time, the image Jacobian matrix L(s_k, z_k) and the sampling period T of the vision sensor are multiplied together; and the visual feature point coordinates s_{k+1} of the drone at the next time are predicted from the result of this product and the current visual feature coordinates s_k. The prediction of the visual feature point coordinates at the next time is executed repeatedly until the visual feature point coordinates s_{k+i} at the last time in the prediction horizon N_p are predicted, where 0 ≤ i ≤ N_p.
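Building on the Jacobian sketch above, a per-step rollout of formula (7) that recomputes L(s_k, z_k) at every step might look as follows; this is illustrative only and assumes the stacked_jacobian helper defined earlier.

```python
import numpy as np

def predict_with_jacobian(s_k, z_k, V_seq, T, f):
    """Predict the stacked feature vector over the horizon using formula (7).

    s_k : stacked features (8,) for four points; V_seq : (N_p, 6) velocities.
    """
    s = s_k.copy()
    out = []
    for V in V_seq:
        pts = s.reshape(-1, 2)              # current (u, v) of each point
        L = stacked_jacobian(pts, z_k, f)   # L(s_k, z_k), recomputed each step
        s = s + T * (L @ V)                 # s_{k+1} = s_k + T L(s_k, z_k) V_k
        out.append(s.copy())
    return np.array(out)
```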
In one embodiment, the visual characteristic point error of each moment in the prediction time domain is determined according to the difference value of the visual characteristic point coordinate of each moment in the prediction time domain and the expected visual characteristic point coordinate; determining a quadratic function of the visual characteristic point errors according to the visual characteristic point errors at each moment in the prediction time domain; determining a quadratic function of the operation parameters according to the operation parameters of the unmanned aerial vehicle at each moment in the control time domain, wherein the quadratic function of the operation parameters is an energy function; and determining a cost function of the model predictive controller based on the sum of the quadratic function of the visual feature point error and the quadratic function of the operation parameter. For example, the cost function is shown in equation (8).
J = Σ_{i=1}^{N_p} (s_{k+i} - s^*)^T Q (s_{k+i} - s^*) + Σ_{j=0}^{N_c-1} V_{k+j}^T R V_{k+j}    (8)

where the parameter N_p denotes the prediction horizon of the predictive control, N_c denotes the control horizon, and R and Q are symmetric positive definite weighting matrices.
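As an illustration of formula (8), the cost can be evaluated as below; a minimal sketch under the same simplifying assumptions as the earlier ones (constant image Jacobian within the rollout, names ours):

```python
import numpy as np

def mpc_cost(V_flat, s_k, s_star, L, T, Q, R, N_p, N_c):
    """Quadratic feature-error terms over N_p plus control-effort terms over N_c."""
    V_seq = V_flat.reshape(N_c, 6)
    cost, s = 0.0, s_k.copy()
    for i in range(N_p):
        V = V_seq[min(i, N_c - 1)]   # hold the last control beyond N_c
        s = s + T * (L @ V)          # predicted s_{k+i+1}
        e = s - s_star               # visual feature point error
        cost += e @ Q @ e            # error term weighted by Q
        if i < N_c:
            cost += V @ R @ V        # control term weighted by R
    return cost
```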
In one embodiment, considering the field-of-view constraint and the speed constraint of the drone, the visual feature point coordinates and the operation parameters in the cost function are constrained; for example, constraint equations are established as shown in formulas (9) and (10):

|T_{k+j}| ≤ T_max,  |w_{k+j}| ≤ w_max,  j = 0, ..., N_c - 1    (9)

u_min ≤ u_{k+i} ≤ u_max,  v_min ≤ v_{k+i} ≤ v_max,  i = 1, ..., N_p    (10)

where T_max and w_max denote the maximum translational and rotational speeds of the drone, and u_min, v_min, u_max and v_max are the boundary values of the projection coordinates, on the image plane, of feature points within the visual range of the camera.
In step 250, the coordinates of the expected visual feature point and the coordinates of the current visual feature point are input to the constructed model prediction controller, and the flight speed of the unmanned aerial vehicle at each moment in the control time domain is obtained through prediction by minimizing the cost function of the model prediction controller.
As shown in fig. 4, a curve 1 represents a reference trajectory, a curve 2 represents a curve formed by visual feature point coordinates at each time in a prediction time domain of prediction output, and a curve 3 represents the flight speed of the unmanned aerial vehicle at each time in a control time domain of prediction output.
In step 260, the flight speed at the first time in the control time domain is taken as the target flight speed satisfying the field-of-view constraint and the operation constraint of the drone; that is, the output of the optimization model is taken as the target flight speed at the current time.
In one embodiment, the MATLAB/MPT toolbox can be used to solve for the target flight speed; that is, the equations established above are all input into the MATLAB/MPT toolbox, so that the optimal solution of the flight speed of the unmanned aerial vehicle satisfying the constraints can be obtained.
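The patent uses the MATLAB/MPT toolbox; purely as an illustrative alternative, the same small constrained problem can be posed with SciPy, with the speed constraints (9) as box bounds and the field-of-view constraints (10) as inequality constraints. All names are assumptions, and mpc_cost is the sketch defined above.

```python
import numpy as np
from scipy.optimize import minimize

def solve_target_velocity(s_k, s_star, L, T, Q, R, N_p, N_c, V_max, s_lo, s_hi):
    """Minimise the formula-(8) cost subject to (9) and (10); returns the
    first control of the optimal sequence (receding-horizon principle)."""
    bounds = [(-V_max, V_max)] * (6 * N_c)          # speed constraints (9)

    def fov_slack(V_flat):                          # field-of-view constraints (10)
        V_seq = V_flat.reshape(N_c, 6)
        s, slack = s_k.copy(), []
        for i in range(N_p):
            s = s + T * (L @ V_seq[min(i, N_c - 1)])
            slack.append(np.concatenate([s - s_lo, s_hi - s]))  # must stay >= 0
        return np.concatenate(slack)

    res = minimize(
        lambda V: mpc_cost(V, s_k, s_star, L, T, Q, R, N_p, N_c),
        x0=np.zeros(6 * N_c), bounds=bounds,
        constraints=[{"type": "ineq", "fun": fov_slack}])
    return res.x.reshape(N_c, 6)[0]  # apply only the first control
```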
In step 270, the target flight speed is input to the cascade PID flight speed tracking controller so that the actual flight speed of the drone tracks the target flight speed. As shown in fig. 3, through the cascade PID speed tracking controller in the drone model, the actual flight speed output by the drone is V_c(k); the visual feature point coordinates can then be output through the camera model.
As shown in fig. 5, in one embodiment, the target operating parameter is input to a position controller and an attitude controller in series, wherein the position controller and the attitude controller have proportional, integral, and derivative control functions; the position information output by the series-connected position controller and attitude controller is fed back to the position controller, and the output attitude information is fed back to the attitude controller.
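A minimal single-axis sketch of this series structure (gains and names are assumed, not from the patent):

```python
class PID:
    """One PID loop with proportional, integral and derivative terms."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

position_pid = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.02)  # outer loop (assumed gains)
attitude_pid = PID(kp=4.0, ki=0.10, kd=0.8, dt=0.02)  # inner loop (assumed gains)

def cascade_step(position_setpoint, measured_position, measured_attitude):
    # Outer loop: position error -> attitude setpoint; position feedback
    # is applied to the position controller.
    attitude_setpoint = position_pid.step(position_setpoint - measured_position)
    # Inner loop: attitude error -> actuator command; attitude feedback
    # is applied to the attitude controller.
    return attitude_pid.step(attitude_setpoint - measured_attitude)
```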
In the above embodiment, by constructing the cost function of the model predictive controller and imposing the field-of-view constraint and the operation constraint on it, the target flight speed output by the model predictive controller satisfies the field-of-view constraint condition and the operation constraint condition, so the actual flight speed of the drone can track the target flight speed more accurately, reducing the influence of noise and calibration errors on the visual servo control. Moreover, because this embodiment requires neither a precise drone dynamic model nor a precise visual feature point dynamic model, the anti-interference performance of the system is enhanced, giving it stronger robustness, better applicability and better operability.
In one embodiment, the desired location of the drone may be a landing location of the drone, thereby enabling the drone to complete autonomous landing without relying on GPS signals.
Fig. 6 is a schematic structural diagram of an embodiment of a visual servo control apparatus according to the present disclosure. The apparatus includes a ground identification image acquisition unit 610, a feature point coordinate determination unit 620, an operation parameter prediction unit 630, and a visual servo control unit 640.
The ground identification image acquisition unit 610 is configured to acquire a ground identification image taken by a vision sensor of the unmanned device at the current position. The ground mark is, for example, a marker of a specific shape or color, such as a black-and-white checkerboard or an H-shaped landmark, which may be laid on the ground or placed on a movable platform such as a mobile ground robot. The unmanned device is, for example, an unmanned aerial vehicle, in particular a quad-rotor drone, and the vision sensor is, for example, a camera.
The feature point coordinate determination unit 620 is configured to determine current visual feature point coordinates based on the ground identification image. After the ground identification image is obtained, image processing is carried out to extract the corner point information of the ground mark; the corner points are used as visual feature points, and their coordinates in the image coordinate system are then determined.
The operation parameter prediction unit 630 is configured to construct a cost function of the model prediction controller by using a difference between the coordinates of the visual feature point at each time in the prediction time domain and the coordinates of the expected visual feature point, and the operation parameters of the unmanned aerial device at each time in the control time domain, wherein the coordinates of the visual feature point at each time in the prediction time domain are obtained according to the current coordinates of the visual feature point and the corresponding operation parameters; and predicting to obtain the operation parameters of the unmanned equipment at each moment in the control time domain by minimizing the cost function of the model prediction controller. The operation parameter is, for example, velocity information that can reflect the position and posture of the unmanned device.
In one embodiment, a kinematic equation of the visual feature point may be established and discretized, establishing a correspondence between the current visual feature coordinates and the predicted visual feature coordinates at the next moment. Assuming that the current time is time k and the current visual feature coordinate is s_k, the visual feature coordinate at time k+1 is predicted as s_{k+1} from s_k; the visual feature coordinate at time k+2 is predicted as s_{k+2} from s_{k+1}; and so on, until the visual feature coordinate at time k+i is predicted as s_{k+i}.
In one embodiment, the visual characteristic point error of each moment in the prediction time domain is determined according to the difference value of the visual characteristic point coordinate of each moment in the prediction time domain and the expected visual characteristic point coordinate; determining a quadratic function of the visual characteristic point errors according to the visual characteristic point errors at each moment in the prediction time domain; determining a quadratic function of the operation parameters according to the operation parameters of the unmanned aerial vehicle at each moment in the control time domain, wherein the quadratic function of the operation parameters is an energy function; and determining a cost function of the model predictive controller based on the sum of the quadratic function of the visual feature point error and the quadratic function of the operation parameter.
In one embodiment, the operating parameter prediction unit 630 is further configured to constrain the cost function of the model predictive controller to obtain target operating parameters that satisfy the field of view constraints and the operating constraints of the unmanned device.
The vision servo control unit 640 is configured to perform vision servo control on the unmanned aerial vehicle so as to operate the unmanned aerial vehicle to a desired position, with the operation parameter at the first time in the control time domain as a target operation parameter.
In one embodiment, the cascaded PID operating parameter tracking controller may be designed such that the actual operating parameters of the drone follow the target operating parameters. For example, the target operating parameters are input to a position controller and an attitude controller connected in series, wherein the position controller and the attitude controller have proportional, integral and derivative control functions; the position information output by the series-connected controllers is fed back to the position controller, and the output attitude information is fed back to the attitude controller.
In one embodiment, since the camera of the drone shoots the ground mark in real time, when the difference between the visual feature point coordinates corresponding to the captured ground mark image and the expected visual feature point coordinates is zero, the drone has reached the expected position.
In the embodiment, by constructing the cost function of the model predictive controller and minimizing it, the operation parameters of the unmanned aerial vehicle at each time in the control time domain can be predicted, and the operation parameter at the first time in the control time domain is taken as the target operation parameter for visual servo control of the unmanned aerial vehicle. That is, the model predictive control algorithm is combined with image-based visual servo control, so the operation of the unmanned aerial vehicle no longer depends on a GPS signal; the difficulty of parameter tuning in the traditional image-based visual servo control algorithm can be overcome, the effect of receding-horizon (rolling) optimization is achieved, and the operation control performance of the unmanned aerial vehicle is improved.
In addition, for a nonlinear, strongly coupled unmanned aerial vehicle operating in an environment with multiple constraints and dynamic uncertainty, the control effect of image-based visual servo control in the related art is not stable enough. In this embodiment, the parameters in the cost function are subjected to the field-of-view constraint and the operation constraint, so the target operation parameters output by the model predictive controller satisfy the field-of-view constraint condition and the operation constraint condition; the actual operation parameters of the unmanned equipment can therefore track the target operation parameters more accurately, reducing the influence of noise and calibration errors on the visual servo control, enhancing the anti-interference performance of the system, and giving it stronger robustness, better applicability and better operability.
Fig. 7 is a schematic structural diagram of another embodiment of the visual servo control apparatus according to the present disclosure. The apparatus comprises a memory 710 and a processor 720, wherein:
the memory 710 may be a magnetic disk, flash memory, or any other non-volatile storage medium. The memory 710 is used for storing instructions in the embodiments corresponding to fig. 1 and 2. Processor 720, coupled to memory 710, may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 720 is configured to execute instructions stored in the memory.
In one embodiment, as also shown in FIG. 8, the apparatus 800 includes a memory 810 and a processor 820. The processor 820 is coupled to the memory 810 by a BUS 830. The device 800 may also be coupled to an external storage device 850 via a storage interface 840 for facilitating retrieval of external data, and may also be coupled to a network or another computer system (not shown) via a network interface 860, which will not be described in detail herein.
In the embodiment, the data instructions are stored in the memory, and the instructions are processed by the processor, so that the operation control performance of the unmanned equipment is improved.
Another embodiment of the present disclosure provides an unmanned device, such as a drone; as shown in fig. 9, it includes the visual servo control means 910 of the above embodiments.
In another embodiment, the drone further includes a vision sensor 920, wherein the vision sensor 920 is, for example, a camera configured to capture ground identification images.
In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the corresponding embodiments of fig. 1, 2. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (13)

1. A visual servo control method, comprising:
acquiring a ground identification image shot by a visual sensor of the unmanned equipment at the current position;
determining current visual feature point coordinates based on the ground identification image;
constructing a cost function of a model predictive controller by using a difference value between a visual feature point coordinate and an expected visual feature point coordinate of each moment in a prediction time domain and an operation parameter of the unmanned equipment at each moment in a control time domain, wherein the visual feature point coordinate of each moment in the prediction time domain is obtained according to the current visual feature point coordinate and the corresponding operation parameter;
predicting to obtain the operation parameters of the unmanned equipment at each moment in the control time domain by minimizing the cost function of the model predictive controller;
and taking the operation parameter at the first moment in the control time domain as a target operation parameter, and carrying out visual servo control on the unmanned equipment so as to enable the unmanned equipment to operate to a desired position.
2. The visual servo control method of claim 1, further comprising:
and constraining the cost function of the model predictive controller to obtain target operation parameters meeting the view field constraint condition and the operation constraint condition of the unmanned equipment.
3. The visual servo control method of claim 1, wherein constructing a cost function of a model predictive controller comprises:
determining the visual characteristic point error of each moment in the prediction time domain according to the difference value between the visual characteristic point coordinate of each moment in the prediction time domain and the expected visual characteristic point coordinate;
determining a quadratic function of the visual characteristic point errors according to the visual characteristic point errors at each moment in the prediction time domain;
determining a quadratic function of the operation parameters according to the operation parameters of the unmanned equipment at each moment in the control time domain;
determining a cost function of the model predictive controller based on a sum of the quadratic function of the visual feature point error and the quadratic function of the operating parameter.
4. The visual servo control method of claim 1, wherein obtaining visual feature point coordinates at each time instant within a predicted time domain comprises:
predicting the visual feature point coordinate of the unmanned equipment at the next moment according to the current visual feature coordinate and the operation parameter of the unmanned equipment at the current moment;
and repeatedly predicting the visual feature point coordinates of the unmanned equipment at the next moment until the visual feature point coordinates of the unmanned equipment at the last moment in the prediction time domain are predicted.
5. The visual servo control method of claim 4, wherein predicting visual feature point coordinates of the drone at a next time instance comprises:
acquiring an image Jacobian matrix corresponding to the visual feature point at the current moment;
performing a product operation on the operation parameters of the unmanned equipment at the current moment, the image Jacobian matrix and the sampling period of the visual sensor;
and predicting the visual feature point coordinate of the unmanned equipment at the next moment according to the result of the product operation and the current visual feature coordinate.
6. The visual servoing control method of any of claims 1-5, wherein visually servoing the unmanned device comprises:
inputting the target operation parameters into a position controller and an attitude controller which are connected in series, wherein the position controller and the attitude controller have proportional, integral and derivative control functions;
and feeding back the position information output by the position controller and the attitude controller which are connected in series to the position controller, and feeding back the output attitude information to the attitude controller.
7. The visual servo control method of any of claims 1-5, wherein the operational parameter is velocity information capable of reflecting a position and an attitude of the unmanned device.
8. A visual servo control device, comprising:
a ground identification image acquisition unit configured to acquire a ground identification image photographed by a vision sensor of the unmanned device at a current position;
a feature point coordinate determination unit configured to determine current visual feature point coordinates based on the ground identification image;
the operation parameter prediction unit is configured to construct a cost function of the model prediction controller by using a difference value between a visual feature point coordinate and an expected visual feature point coordinate of each moment in a prediction time domain and an operation parameter of the unmanned equipment of each moment in a control time domain, wherein the visual feature point coordinate of each moment in the prediction time domain is obtained according to the current visual feature point coordinate and the corresponding operation parameter; predicting to obtain the operation parameters of the unmanned equipment at each moment in the control time domain by minimizing the cost function of the model predictive controller;
and the visual servo control unit is configured to take the operation parameter at the first moment in the control time domain as a target operation parameter, and perform visual servo control on the unmanned equipment so as to enable the unmanned equipment to operate to a desired position.
9. The visual servo control device of claim 8,
the operating parameter prediction unit is further configured to constrain a cost function of the model predictive controller to obtain target operating parameters that satisfy a field of view constraint and an operating constraint of the unmanned aerial device.
10. A visual servo control device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the visual servo control method of any of claims 1 to 7 based on instructions stored in the memory.
11. An unmanned device comprising the visual servo control apparatus of any of claims 8 to 10.
12. The unmanned device of claim 11, further comprising:
a vision sensor configured to capture a ground identification image.
13. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the visual servo control method of any of claims 1 to 7.
CN201910143543.1A 2019-02-27 2019-02-27 Visual servo control method and device and unmanned equipment Pending CN111624875A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910143543.1A CN111624875A (en) 2019-02-27 2019-02-27 Visual servo control method and device and unmanned equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910143543.1A CN111624875A (en) 2019-02-27 2019-02-27 Visual servo control method and device and unmanned equipment

Publications (1)

Publication Number Publication Date
CN111624875A true CN111624875A (en) 2020-09-04

Family

ID=72258740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910143543.1A Pending CN111624875A (en) 2019-02-27 2019-02-27 Visual servo control method and device and unmanned equipment

Country Status (1)

Country Link
CN (1) CN111624875A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880062A (en) * 2012-09-04 2013-01-16 北京化工大学 Intelligent trolley 2.5-dimensional visual servo control method based on nonlinear model prediction
CN104942809A (en) * 2015-06-23 2015-09-30 广东工业大学 Mechanical arm dynamic fuzzy approximator based on visual servo system
WO2016193781A1 (en) * 2015-05-29 2016-12-08 Benemérita Universidad Autónoma De Puebla Motion control system for a direct drive robot through visual servoing
CN106371461A (en) * 2016-09-08 2017-02-01 河海大学常州校区 Visual servo based video tracking flight object control system and method
CN107367943A (en) * 2017-09-01 2017-11-21 嘉应学院 A kind of dimension rotation correlation filtering Visual servoing control method
CN107861501A (en) * 2017-10-22 2018-03-30 北京工业大学 Underground sewage treatment works intelligent robot automatic positioning navigation system
CN109358507A (en) * 2018-10-29 2019-02-19 东北大学 A kind of visual servo adaptive tracking control method of time-varying performance boundary constraint

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880062A (en) * 2012-09-04 2013-01-16 Beijing University of Chemical Technology Intelligent trolley 2.5-dimensional visual servo control method based on nonlinear model prediction
WO2016193781A1 (en) * 2015-05-29 2016-12-08 Benemérita Universidad Autónoma De Puebla Motion control system for a direct drive robot through visual servoing
CN104942809A (en) * 2015-06-23 2015-09-30 Guangdong University of Technology Mechanical arm dynamic fuzzy approximator based on visual servo system
CN106371461A (en) * 2016-09-08 2017-02-01 Changzhou Campus of Hohai University Visual servo based video tracking flight object control system and method
CN107367943A (en) * 2017-09-01 2017-11-21 Jiaying University Scale-rotation correlation filtering visual servo control method
CN107861501A (en) * 2017-10-22 2018-03-30 Beijing University of Technology Automatic positioning and navigation system for an intelligent robot in an underground sewage treatment plant
CN109358507A (en) * 2018-10-29 2019-02-19 Northeastern University Visual servo adaptive tracking control method with time-varying performance boundary constraints

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHONG XUNGAO: "Image-based uncalibrated visual feedback control method for global positioning of robots", Journal of Xiamen University (Natural Science Edition), vol. 57, no. 3, 31 May 2018 (2018-05-31), pages 413-420 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462797A (en) * 2020-11-30 2021-03-09 Shenzhen Technology University Visual servo control method and system using grey prediction model
CN112462797B (en) * 2020-11-30 2023-03-07 Shenzhen Technology University Visual servo control method and system using grey prediction model
CN113591542A (en) * 2021-06-04 2021-11-02 Jianghan University Visual servo control method, device and equipment for robot
CN113591542B (en) * 2021-06-04 2024-01-26 Jianghan University Visual servo control method, device and equipment for robot
CN116512237A (en) * 2022-11-28 2023-08-01 Guangdong Jianshi Technology Co., Ltd. Industrial robot vision servo method, device, electronic equipment and storage medium
CN116512237B (en) * 2022-11-28 2023-09-19 Guangdong Jianshi Technology Co., Ltd. Industrial robot vision servo method, device, electronic equipment and storage medium
CN117506937A (en) * 2024-01-04 2024-02-06 China Railway 14th Bureau Group Large Shield Engineering Co., Ltd. Weldment autonomous placement method based on multi-stage visual servo control
CN117506937B (en) * 2024-01-04 2024-03-12 China Railway 14th Bureau Group Large Shield Engineering Co., Ltd. Weldment autonomous placement method based on multi-stage visual servo control

Similar Documents

Publication Publication Date Title
Jung et al. Perception, guidance, and navigation for indoor autonomous drone racing using deep learning
CN111624875A (en) Visual servo control method and device and unmanned equipment
CN110362098B (en) Unmanned aerial vehicle visual servo control method and device and unmanned aerial vehicle
Lin et al. A robust real-time embedded vision system on an unmanned rotorcraft for ground target following
Ludington et al. Augmenting UAV autonomy
Bošnak et al. Quadrocopter hovering using position-estimation information from inertial sensors and a high-delay video system
Johnson et al. Real-time vision-based relative aircraft navigation
Gurfil et al. Partial aircraft state estimation from visual motion using the subspace constraints approach
Wang et al. Vision-based tracking control of underactuated water surface robots without direct position measurement
Hoang et al. Vision-based target tracking and autonomous landing of a quadrotor on a ground vehicle
CN114387462A (en) Dynamic environment sensing method based on binocular camera
Vela et al. Vision-based range regulation of a leader-follower formation
Wang et al. Precision UAV landing control based on visual detection
Zhang et al. Pose measurement for non-cooperative target based on visual information
Luo et al. Docking navigation method for UAV autonomous aerial refueling
Rogelio et al. Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): A review
Li et al. Robocentric model-based visual servoing for quadrotor flights
Magree et al. Factored extended Kalman filter for monocular vision-aided inertial navigation
Guo et al. Nonlinear vision-based observer for visual servo control of an aerial robot in global positioning system denied environments
Bobkov et al. Vision-based navigation method for a local maneuvering of the autonomous underwater vehicle
CN115311353B (en) Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system
Ryan et al. Probabilistic correspondence in video sequences for efficient state estimation and autonomous flight
CN108733076B (en) Method and device for grabbing target object by unmanned aerial vehicle and electronic equipment
Din et al. Embedded low power controller for autonomous landing of UAV using artificial neural network
De Croon et al. Time-to-contact estimation in landing scenarios using feature scales

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20210311

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

Effective date of registration: 20210311

Address after: Room a1905, 19 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Beijing Jingdong Qianshi Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

SE01 Entry into force of request for substantive examination