CN113721188A - Multi-unmanned aerial vehicle self-positioning and target positioning method in rejection environment

Multi-unmanned aerial vehicle self-positioning and target positioning method in rejection environment

Info

Publication number
CN113721188A
CN113721188A (application CN202110903143.3A; granted as CN113721188B)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
observation
base station
target
Prior art date
Legal status
Granted
Application number
CN202110903143.3A
Other languages
Chinese (zh)
Other versions
CN113721188B (en)
Inventor
张福彪
杨希雯
林德福
王亚凯
陈祺
周天泽
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Priority: CN202110903143.3A
Publication of CN113721188A
Application granted
Publication of CN113721188B
Legal status: Active

Classifications

    • G01S 5/0294: Position-fixing using radio waves; trajectory determination or predictive filtering, e.g. target tracking or Kalman filtering
    • G01S 5/0045: Transmission of position information to remote stations; transmission from base station to mobile station
    • G01S 5/0284: Position-fixing using radio waves; relative positioning
    (all under G: Physics; G01: Measuring, Testing; G01S: Radio direction-finding and radio navigation)

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a method for self-positioning and target positioning of multiple unmanned aerial vehicles (UAVs) in a rejection environment. A base station with a known absolute position is set up on the ground and provides the relative distance and angle information between itself and the UAVs in communication with it. A Kalman filtering algorithm fuses the information provided by the base station, the relative position information measured between the UAVs, and the target information detected by the UAVs, thereby yielding estimates of the UAV and target positions. Heading angles for the observation UAVs and the relay (transfer) UAV are then computed, the UAVs adjust their positions according to these heading angles, and new detection information is obtained at the new positions. This process is repeated continuously, so that the accuracy of the estimated UAV and target positions improves step by step until it meets the usage requirements.

Description

Multi-unmanned aerial vehicle self-positioning and target positioning method in rejection environment
Technical Field
The invention relates to a method by which an unmanned aerial vehicle positions itself and targets, and in particular to a method for self-positioning and target positioning of multiple unmanned aerial vehicles (UAVs) in a rejection environment.
Background
Multi-rotor unmanned aerial vehicles are now widely used in military and civilian fields owing to their good mobility and flexibility. Detection and positioning of ground targets is one of the key technologies in current multi-rotor UAV applications.
With the development of multi-vehicle cooperation technology, positioning based on multiple UAVs has become a current research hotspot. The multi-vehicle positioning problem can be divided into a multi-vehicle self-positioning problem and a multi-vehicle target positioning problem. In the self-positioning problem, each UAV in a rejection environment improves the accuracy of its own absolute position by measuring relative position information with respect to the base station and neighboring UAVs. In the cooperative target positioning problem, several UAVs acquire relative information about the target from different positions so that the target can be located quickly and accurately; in practical application scenarios, however, the navigation systems of the UAVs may fail because of same-frequency signal interference or shielding by buildings, so the target cannot be positioned accurately. The two problems are coupled: in target positioning, the accuracy of the target estimate depends on the positioning accuracy of the UAVs and on the accuracy of the relative measurements, while the cooperative target positioning result in turn improves the self-positioning quality of each UAV.
In the prior art, however, there is no effective solution for positioning the UAVs and the target in a rejection environment, so a method that provides accurate UAV and target positions in such an environment is needed.
For these reasons, the inventors made an intensive study of existing UAV and target positioning methods and designed a method for self-positioning and target positioning of multiple UAVs in a rejection environment that solves the above problems.
Disclosure of Invention
In order to overcome the above problems, the inventors made an intensive study and designed a method for self-positioning and target positioning of multiple UAVs in a rejection environment. A rejection environment is an environment in which satellite signals are shielded, so that a system relying on satellites for positioning cannot receive the satellite signals normally and cannot operate normally. In the method, a base station with a known absolute position is set up on the ground and provides the relative distance and angle information between itself and the UAVs in communication with it. A Kalman filtering algorithm fuses the information provided by the base station, the relative position information measured between the UAVs, and the target information detected by the UAVs, thereby yielding the UAV and target position information. Heading angles for the observation UAVs and the relay UAV are then computed, the UAVs are controlled to adjust their positions accordingly, and new detection information is obtained at the new positions. This process is repeated continuously, gradually improving the accuracy of the estimated UAV and target positions until the usage requirements are met, whereby the invention was completed.
Specifically, the invention provides a method for self-positioning and target positioning of multiple UAVs in a rejection environment. The method employs at least two observation UAVs, at least one relay (transfer) UAV, and a base station of known position, wherein:
the observation UAVs measure the distance between each observation UAV and the target, and the distance between any two observation UAVs;
the relay UAV measures the distance between itself and each observation UAV;
the base station acquires the distance between the base station and the relay UAV, the distance between the base station and each observation UAV, and the line-of-sight angle between the base station and the relay UAV.
The method comprises the following steps:
Step 1: obtain an observation vector from the measurements of the observation UAVs, the relay UAV, and the base station.
Step 2: fuse the observation vector using a Kalman filtering algorithm.
Step 3: solve for the heading angles of the observation UAVs and the relay UAV, and control them to fly to the next waypoints according to these heading angles.
Step 4: repeat steps 1, 2, and 3 until the sum of the diagonal elements of the state estimation error variance matrix falls below a set threshold.
In step 3, the state estimation error variance matrix $P_{k+n}$ at time k+n is predicted at time k; its inverse is the information matrix $J_{k+n}$, and the heading angles of the observation UAVs and the relay UAV are those that maximize $J_{k+n}$.
Specifically, in step 3 the heading angles of the observation UAVs and the relay UAV are obtained from the following formula (1):

$$\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}=\underset{\psi_{k1},\psi_{k2},\psi_{k3}}{\arg\max}\,\det\left(J_{k+n}\right)\qquad(1)$$

where $\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}$ is the vector containing the three heading angles, and $J_{k+n}$ is the information matrix, obtained from the following formula (2):

$$J_{k+n}=P_0^{-1}+\sum_{i=1}^{k+n}I_i\qquad(2)$$
where $P_0$ is the initial value of the state estimation error variance matrix in the Kalman filtering algorithm, and $I_i$ is the amount of information about the state vector to be estimated contained in the observation vector at time i, obtained from the following formula (3):

$$I_i=H_i^{T}R^{-1}H_i\qquad(3)$$

where $H_i$ is the Jacobian matrix obtained by taking the partial derivatives of the observation vector with respect to the state vector, given by the following formula (4):

$$H_i=\frac{\partial h\left(X_i,X_b\right)}{\partial X_i}\qquad(4)$$

The observation vector comprises the distance between each observation UAV and the target, the distance between any two observation UAVs, the distance between the relay UAV and each observation UAV, the distance between the base station and the relay UAV, the distance between the base station and each observation UAV, and the line-of-sight angle between the base station and the relay UAV.
The state vector comprises the position of the target, the position of the relay UAV, and the position of each observation UAV.
$X_i$ is the state vector at time i, containing the UAV and target position information; $X_b$ is the base station position; R is the noise variance matrix of the observation sensors.
In step 4, the error variance P of the state estimate is obtained from the following formula (5):

$$P_k=\left(I-K_kH_k\right)P_k^{-}\qquad(5)$$

where $P_k$ is the error variance of the state estimate at time k, and $P_k^{-}$ is the predicted state error variance matrix at time k:

$$P_k^{-}=AP_{k-1}A^{T}+Q$$

where A is the system state transition matrix, Q is the state error variance matrix, and $P_{k-1}$ is the state error variance matrix at time k-1; $K_k$ is the Kalman gain:

$$K_k=P_k^{-}H_k^{T}\left(H_kP_k^{-}H_k^{T}+R\right)^{-1}$$
In step 4, the set threshold is 50 to 80.
The invention has the following advantages:
(1) the method for self-positioning and target positioning of multiple UAVs in a rejection environment has a simple iteration process, a high iteration speed, and a simple, controllable overall execution flow;
(2) the method provides the target position and the UAV position information in real time during execution, with the estimates becoming more and more accurate, so that reasonably accurate position information is available even before the final accurate estimates are obtained, meeting special information requirements.
Drawings
Fig. 1 is a logic diagram of the overall method for self-positioning and target positioning of multiple UAVs in a rejection environment according to a preferred embodiment of the invention;
Fig. 2 is a schematic diagram illustrating the observation relationships on a two-dimensional plane according to a preferred embodiment of the invention;
Fig. 3 shows the waypoint trajectories of the three UAVs in an embodiment of the invention;
Fig. 4 is a schematic diagram illustrating the variation of the target positioning error in an embodiment of the invention;
Fig. 5 shows the variation of the positioning errors of the three UAVs in an embodiment of the invention;
Fig. 6 is a diagram illustrating the variation of the sum of the diagonal elements of the error variance matrix in an embodiment of the invention.
Detailed Description
The invention is explained in more detail below with reference to the figures and examples. The features and advantages of the present invention will become more apparent from the description.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
According to the method for self-positioning and target positioning of multiple UAVs in a rejection environment provided by the invention, a rejection environment is an environment in which satellite signals are shielded, so that a system relying on satellites for positioning cannot receive the satellite signals normally and cannot operate normally. The method employs at least two observation UAVs, at least one relay UAV, and a base station of known position, wherein:
the observation UAVs measure the distance between each observation UAV and the target, and the distance between any two observation UAVs;
the relay UAV measures the distance between itself and each observation UAV;
the base station acquires the distance between the base station and the relay UAV, the distance between the base station and each observation UAV, and the line-of-sight angle between the base station and the relay UAV.
as shown in fig. 1, the method comprises the steps of:
step 1, obtaining an observation vector Z through observation unmanned aerial vehicle, transfer unmanned aerial vehicle and base station detectionk
Step 2, carrying out fusion processing on the observation vector by using a Kalman filtering algorithm to obtain a state vector
Step 3, solving course angles of the observation unmanned aerial vehicle and the transfer unmanned aerial vehicle, and respectively controlling the observation unmanned aerial vehicle and the transfer unmanned aerial vehicle to reach new waypoints according to the course angles;
and 4, repeating the steps 1, 2 and 3 until the sum of diagonal elements of the error variance matrix of the state estimation is reduced to a set threshold value.
Preferably, each UAV carries a laser range finder for measuring the distance to another UAV or to the target, and each UAV also carries an electro-optical pod that can directly acquire an azimuth angle, from which the line-of-sight angle between the base station and the relay UAV is obtained; the base station is likewise equipped with a laser range finder for distance measurement.
In a preferred embodiment, in step 1 the observation vector $Z_k$ contains both angle and range measurements. Specifically, it contains the distance between each observation UAV and the target, the distance between any two observation UAVs, the distance between the relay UAV and each observation UAV, the distance between the base station and the relay UAV, the distance between the base station and each observation UAV, and the line-of-sight angle between the base station and the relay UAV. When the target, the observation UAVs, the relay UAV, and the base station all lie in the same two-dimensional plane, with two observation UAVs and one relay UAV as shown in fig. 2, the observation vector $Z_k$ can be expressed as:

$$Z_k=\left[\rho_{2T},\ \rho_{3T},\ \phi_{10},\ \rho_{10},\ \rho_{20},\ \rho_{30},\ \rho_{12},\ \rho_{13},\ \rho_{23}\right]^{T}$$

where
$\rho_{2T}$ and $\rho_{3T}$ are the distances to the target measured by the two observation UAVs;
$\phi_{10}$ is the line-of-sight angle between the base station and the relay UAV;
$\rho_{10}$ is the distance between the base station and the relay UAV;
$\rho_{20}$ and $\rho_{30}$ are the distances between the base station and the two observation UAVs;
$\rho_{12}$ and $\rho_{13}$ are the distances between the relay UAV and the two observation UAVs;
$\rho_{23}$ is the distance between the two observation UAVs.
The position of the base station is known, i.e. $X_b=\left[x_0,y_0\right]^{T}$.
The observation vector $Z_k$ satisfies the following observation equation:

$$Z_k=h\left(X_k,X_b\right)+V_k=\begin{bmatrix}\sqrt{(x_2-x_T)^2+(y_2-y_T)^2}\\ \sqrt{(x_3-x_T)^2+(y_3-y_T)^2}\\ \arctan\dfrac{y_1-y_0}{x_1-x_0}\\ \sqrt{(x_1-x_0)^2+(y_1-y_0)^2}\\ \sqrt{(x_2-x_0)^2+(y_2-y_0)^2}\\ \sqrt{(x_3-x_0)^2+(y_3-y_0)^2}\\ \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}\\ \sqrt{(x_1-x_3)^2+(y_1-y_3)^2}\\ \sqrt{(x_2-x_3)^2+(y_2-y_3)^2}\end{bmatrix}+V_k$$

where $V_k$ is the measurement noise, which follows a normal distribution with mean 0 and variance R; $(x_T,y_T)$ is the target position, $(x_1,y_1)$ is the relay UAV position, and $(x_2,y_2)$ and $(x_3,y_3)$ are the observation UAV positions.
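For concreteness, the following is a minimal numerical sketch of this planar observation model, assuming Python with NumPy; the function name observation_h and the argument layout are illustrative choices, not taken from the patent.

```python
import numpy as np

def observation_h(X, Xb):
    """Observation model h(X, Xb) for the planar case of fig. 2.

    X  = [xT, yT, x1, y1, x2, y2, x3, y3]: target, relay UAV, two observation UAVs.
    Xb = [x0, y0]: known base-station position.
    Returns [rho_2T, rho_3T, phi_10, rho_10, rho_20, rho_30, rho_12, rho_13, rho_23].
    """
    xT, yT, x1, y1, x2, y2, x3, y3 = X
    x0, y0 = Xb
    d = lambda ax, ay, bx, by: np.hypot(ax - bx, ay - by)
    return np.array([
        d(x2, y2, xT, yT),             # rho_2T: observation UAV 2 to target
        d(x3, y3, xT, yT),             # rho_3T: observation UAV 3 to target
        np.arctan2(y1 - y0, x1 - x0),  # phi_10: line-of-sight angle, base to relay UAV
        d(x1, y1, x0, y0),             # rho_10: base to relay UAV
        d(x2, y2, x0, y0),             # rho_20: base to observation UAV 2
        d(x3, y3, x0, y0),             # rho_30: base to observation UAV 3
        d(x1, y1, x2, y2),             # rho_12: relay UAV to observation UAV 2
        d(x1, y1, x3, y3),             # rho_13: relay UAV to observation UAV 3
        d(x2, y2, x3, y3),             # rho_23: between the two observation UAVs
    ])
```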
Preferably, in step 1, when the observation UAVs, the relay UAV, and the base station perform detection for the first time, an estimated set of target, relay UAV, and observation UAV positions is supplied to initialize the solution of the observation equation; more accurate position information is then obtained gradually during the subsequent loop iterations. The initial target position is calculated by triangular intersection from the distances measured by the two UAVs that detect the target, and the initial UAV positions are given by the inertial navigation equipment on board each UAV. Without satellite-navigation aiding in a rejection environment, the inertial navigation drift error is large, but the present method can eliminate this error within a short time.
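The triangular (range) intersection used for the initial target estimate can be sketched as follows. Two range circles generally intersect in two points, and since the patent does not spell out the disambiguation, this sketch assumes a rough prior (e.g., the inertial estimate) is used to pick between the candidates; the function name triangulate_target is illustrative.

```python
import numpy as np

def triangulate_target(p2, p3, rho2T, rho3T, prior):
    """Initial target fix from two UAV positions p2, p3 and their measured ranges.

    Returns the circle-circle intersection point closest to `prior`
    (two intersections exist in general; the choice rule here is an assumption).
    """
    p2, p3 = np.asarray(p2, float), np.asarray(p3, float)
    d = np.linalg.norm(p3 - p2)
    if d == 0 or d > rho2T + rho3T or d < abs(rho2T - rho3T):
        raise ValueError("range circles do not intersect; measurements inconsistent")
    a = (rho2T**2 - rho3T**2 + d**2) / (2 * d)          # distance from p2 to chord midpoint
    h = np.sqrt(max(rho2T**2 - a**2, 0.0))              # half chord length
    mid = p2 + a * (p3 - p2) / d
    perp = np.array([-(p3 - p2)[1], (p3 - p2)[0]]) / d  # unit normal to the baseline
    candidates = [mid + h * perp, mid - h * perp]
    return min(candidates, key=lambda c: np.linalg.norm(c - np.asarray(prior, float)))
```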
In the present application, the observation vector is obtained in real time with a detection frequency of 1 Hz: each time a UAV receives a heading-angle command it flies for 1 s, and once the command has been executed a new observation vector is obtained by detection.
When flying along a commanded heading angle, each UAV flies at a preset constant speed, and heading-angle commands are received at a frequency of 1 Hz.
In the present application, the relay UAV and the observation UAVs acquire the data making up the observation vector at the same instant, and the positions they occupy at the moment of detection are the waypoints; at any given instant, each UAV occupies a different waypoint. Each solution yields several heading angles, one per UAV, and the heading angles of the individual UAVs may differ from one another. When a UAV receives its heading angle, it flies along that heading for 1 s, thereby reaching a new spatial position, i.e. a new waypoint, at which it performs detection again to obtain a new observation vector.
Preferably, each UAV carries a data-transmission radio so that the information it observes is transmitted to the base station in real time and the heading-angle commands sent by the base station are received; the base station also carries a data-transmission radio, with which it receives the observation vector in real time and transmits the heading-angle commands to each UAV.
In a preferred embodiment, in step 2 the fusion is performed by an extended Kalman filter algorithm, by which the state vector is obtained.
In a preferred embodiment, in step 3 the state estimation error variance matrix $P_{k+n}$ at time k+n is predicted at time k; its inverse is the information matrix $J_{k+n}$, and the heading angles of the observation UAVs and the relay UAV are those that maximize $J_{k+n}$.
Preferably, the state vector at time k is:

$$X_k=\left[x_T,\,y_T,\,x_1,\,y_1,\,x_2,\,y_2,\,x_3,\,y_3\right]^{T}$$

and the state vector at time k+1 is correspondingly estimated as:

$$X_{k+1}=X_k+\left[0,\,0,\,v_0\cos\psi_{k1},\,v_0\sin\psi_{k1},\,v_0\cos\psi_{k2},\,v_0\sin\psi_{k2},\,v_0\cos\psi_{k3},\,v_0\sin\psi_{k3}\right]^{T}+\omega_k$$

In the present application, optimizing the heading angles requires predicting the value of an objective function at the following n instants (times k+1 to k+n); since the expression of the objective function involves $X_{k+1}$ to $X_{k+n}$, the state vectors at these subsequent instants must be estimated.
Here $\omega_k$ is the process noise; $\psi_{k1}$ is the heading angle of the relay UAV at time k; $\psi_{k2}$ and $\psi_{k3}$ are the heading angles of the two observation UAVs at time k; $v_0$ is the preset constant flight speed.
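The one-step state prediction above is simple enough to sketch directly; a minimal version under the stated assumptions (stationary target, constant speed, 1 s step, process noise omitted) follows, with propagate_states as an illustrative name.

```python
import numpy as np

def propagate_states(X, psi, v0=10.0, dt=1.0):
    """One-step prediction of the state vector (the X_{k+1} formula, noise omitted).

    X   = [xT, yT, x1, y1, x2, y2, x3, y3]; the target (first two entries) is static.
    psi = [psi_k1, psi_k2, psi_k3]: heading angles of the relay UAV and the two
          observation UAVs. Each UAV moves at constant speed v0 for dt seconds.
    """
    Xn = np.array(X, dtype=float)
    for i, angle in enumerate(psi):
        Xn[2 + 2 * i] += v0 * np.cos(angle) * dt   # x of UAV i+1
        Xn[3 + 2 * i] += v0 * np.sin(angle) * dt   # y of UAV i+1
    return Xn
```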
Preferably, in step 3 the heading angles of the observation UAVs and the relay UAV are obtained from the following formula (1):

$$\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}=\underset{\psi_{k1},\psi_{k2},\psi_{k3}}{\arg\max}\,\det\left(J_{k+n}\right)\qquad(1)$$

where $\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}$ is the optimization vector composed of the three heading angles, i.e. $\psi_{k1}$, $\psi_{k2}$, and $\psi_{k3}$ form the optimized vector; $\arg\max$ is the mathematical symbol meaning that the objective function to its right is maximized; and $J_{k+n}$ is the information matrix, obtained from the following formula (2):

$$J_{k+n}=P_0^{-1}+\sum_{i=1}^{k+n}I_i\qquad(2)$$

where $P_0$ is the initial error variance of the state in the Kalman filtering algorithm, taken as:

$$P_0=\mathrm{diag}\left(500,\,500,\,100,\,100,\,100,\,100,\,100,\,100\right)$$
$I_i$ is the amount of information about the state vector to be estimated contained in the observation vector at time i, obtained from the following formula (3):

$$I_i=H_i^{T}R^{-1}H_i\qquad(3)$$

where $H_i$ is the Jacobian matrix obtained by taking the partial derivatives of the observation vector at time i with respect to the state vector, given by the following formula (4):

$$H_i=\frac{\partial h\left(X_i,X_b\right)}{\partial X_i}\qquad(4)$$

The observation vector is:

$$Z_k=\left[\rho_{2T},\ \rho_{3T},\ \phi_{10},\ \rho_{10},\ \rho_{20},\ \rho_{30},\ \rho_{12},\ \rho_{13},\ \rho_{23}\right]^{T}$$

and the state vector is $X_k=\left[x_T,\,y_T,\,x_1,\,y_1,\,x_2,\,y_2,\,x_3,\,y_3\right]^{T}$.
$X_i$ is the state vector at time i, comprising the UAV position information and the target position information (preferably the positions of the three UAVs and of the one target); $X_b$ is the base station position; R is the sensor noise variance matrix.
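Formulas (1) to (4) suggest the following computational sketch: numerically form the Jacobian $H_i$ of the observation model, accumulate the information matrix, and pick the heading triple that maximizes $\det(J_{k+n})$ by a coarse grid search over candidate headings held constant for n steps. This reuses the hypothetical observation_h and propagate_states helpers sketched above; the numerical differentiation, the grid search, and the neglect of the 10°/s turn-rate constraint are simplifying assumptions, not the patent's exact solver.

```python
import itertools
import numpy as np

def jacobian_h(X, Xb, eps=1e-6):
    """Numerical Jacobian H_i = dh/dX of the observation model (formula (4))."""
    z0 = observation_h(X, Xb)
    H = np.zeros((len(z0), len(X)))
    for j in range(len(X)):
        Xp = np.array(X, dtype=float)
        Xp[j] += eps
        H[:, j] = (observation_h(Xp, Xb) - z0) / eps
    return H

def best_headings(X, Xb, J_prev, R, n=3, v0=10.0, grid=12):
    """Grid search for the heading triple maximizing det(J_{k+n}) (formulas (1)-(3)).

    J_prev plays the role of P0^{-1} plus the information of past observations;
    each candidate heading triple is held for n steps and the predicted
    information I_i = H_i^T R^{-1} H_i is accumulated along the way.
    """
    Rinv = np.linalg.inv(R)
    angles = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    best, best_det = None, -np.inf
    for psi in itertools.product(angles, repeat=3):
        J, Xp = J_prev.copy(), np.array(X, dtype=float)
        for _ in range(n):
            Xp = propagate_states(Xp, psi, v0)   # predicted waypoints
            H = jacobian_h(Xp, Xb)
            J = J + H.T @ Rinv @ H               # formula (3) accumulated as in (2)
        d = np.linalg.det(J)
        if d > best_det:
            best, best_det = psi, d
    return np.array(best)
```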
in a preferred embodiment, in step 4, the error variance of the state estimate is obtained by the following equation (five):
Figure BDA0003200513770000112
wherein, PkError representing state estimation at time kVariance;
P- krepresenting a predicted state error variance matrix at time k;
Figure BDA0003200513770000113
a represents the system state transition matrix, Q represents the state error variance matrix,
Figure BDA0003200513770000115
representing a predicted state error variance matrix at time k-1;
Kkrepresenting the kalman gain:
Figure BDA0003200513770000114
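A minimal sketch of the corresponding extended Kalman filter measurement update follows, reusing the hypothetical jacobian_h and observation_h helpers above; it implements the standard EKF equations that formula (5) states, not patent-specific code.

```python
import numpy as np

def ekf_update(X_pred, P_pred, z, Xb, R):
    """EKF measurement update implementing formula (5) and the Kalman gain.

    X_pred, P_pred: predicted state and predicted error variance matrix P_k^-.
    z: measured observation vector Z_k; R: sensor noise variance matrix.
    Angle wrap-around of the phi_10 innovation is ignored for brevity.
    """
    H = jacobian_h(X_pred, Xb)                    # H_k, formula (4)
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # K_k = P^- H^T (H P^- H^T + R)^{-1}
    X = X_pred + K @ (z - observation_h(X_pred, Xb))
    P = (np.eye(len(X_pred)) - K @ H) @ P_pred    # P_k = (I - K_k H_k) P_k^-
    return X, P
```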
in a preferred embodiment, in step 4, the set threshold is 30 to 100, preferably 50 to 80, and more preferably 50, in this application, when the sum of diagonal elements of the error variance matrix of the state estimation is smaller than the threshold, that is, the sum of error variances corresponding to each state is smaller than the threshold, at this time, each state quantity has converged, the positioning error reaches the meter level, and the error at this time is considered to be within the acceptable range.
In a preferred embodiment, when the sum of diagonal elements of the error variance matrix of the state estimation is smaller than the set threshold, the corresponding state vector is an accurate output value, so as to obtain accurate target position information and unmanned aerial vehicle position information.
In a preferred embodiment, when the target, the observation UAVs, the relay UAV, and the base station are all in the same three-dimensional space, the base station and the target are fixed on the ground and immobile, while the observation UAVs and the relay UAV can fly or hover in three-dimensional space; the observation vector, observation equation, and state vector are then:

$$Z_k=\left[\rho_{2T},\ \rho_{3T},\ \phi_{10},\ \rho_{10},\ \rho_{20},\ \rho_{30},\ \rho_{12},\ \rho_{13},\ \rho_{23}\right]^{T}$$

$$Z_k=h\left(X_k,X_b\right)+V_k$$

$$X_k=\left[x_T,\,y_T,\,x_1,\,y_1,\,x_2,\,y_2,\,x_3,\,y_3\right]^{T}$$
Examples
Three identical UAVs are selected, each with a flight speed of 10 m/s, numbered UAV1, UAV2, and UAV3, with initial coordinates [10 m, 10 m], [0 m, -20 m], and [-20 m, 0 m] respectively; the base station coordinates are set to [0 m, 0 m], and the true target coordinates are [200 m, 200 m]. UAV1 is the relay UAV, and UAV2 and UAV3 are the observation UAVs.
UAV1, UAV2, and UAV3 each carry a laser range finder and a data-transmission radio; the base station is equipped with a laser range finder, an electro-optical pod, and a data-transmission radio.
During the control process the turn-rate constraint of the three UAVs is 10°/s, and the initial value of the state vector is:

$$X_0=\left[180,\,220,\,12,\,12,\,6,\,-22,\,-23,\,7\right]^{T}$$

That is, at the initial time of the control process the estimated target coordinates are [180 m, 220 m], the UAV1 coordinates are [12 m, 12 m], the UAV2 coordinates are [6 m, -22 m], and the UAV3 coordinates are [-23 m, 7 m].
The three UAVs are controlled as follows:
Step 1: obtain the observation vector from UAV1, UAV2, UAV3, and the base station, namely the distances between UAV1 and UAV2 and between UAV1 and UAV3, the distances between the base station and UAV1, UAV2, and UAV3, the line-of-sight angle between the base station and UAV1, the distance between UAV2 and the target, the distance between UAV3 and the target, and the distance between UAV2 and UAV3.
Step 2: fuse the observation vector using a Kalman filtering algorithm to obtain the state vector.
The state vector comprises the target position information and the position information of all UAVs, i.e. the coordinates of the target and of UAV1, UAV2, and UAV3.
The fusion is performed by an extended Kalman filter algorithm.
Step 3: solve for the heading angles of UAV1, UAV2, and UAV3, and control UAV1, UAV2, and UAV3 to reach new waypoints according to these heading angles.
The heading angles of the observation UAVs UAV2 and UAV3 and of the relay UAV UAV1 are obtained from the following formula (1):

$$\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}=\underset{\psi_{k1},\psi_{k2},\psi_{k3}}{\arg\max}\,\det\left(J_{k+n}\right)\qquad(1)$$

where $\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}$ is the optimization vector consisting of the three heading angles, and $J_{k+n}$ is obtained from formula (2):

$$J_{k+n}=P_0^{-1}+\sum_{i=1}^{k+n}I_i\qquad(2)$$

where $P_0$ takes the value $\mathrm{diag}\left(500,\,500,\,100,\,100,\,100,\,100,\,100,\,100\right)$, and $I_i$ is obtained from formula (3):

$$I_i=H_i^{T}R^{-1}H_i\qquad(3)$$

where $H_i$ is obtained from formula (4):

$$H_i=\frac{\partial h\left(X_i,X_b\right)}{\partial X_i}\qquad(4)$$
The solution gives: the heading angle of UAV1 is -120°, the heading angle of UAV2 is 50°, and the heading angle of UAV3 is 30°. UAV1, UAV2, and UAV3 are controlled to fly for 1 s at 10 m/s along these heading angles, after which the observation vector is obtained again by detection and a new set of heading angles is computed from it.
Step 4: repeat steps 1, 2, and 3 fifty times. The resulting motion trajectories of UAV1, UAV2, and UAV3, i.e. the waypoints of the UAVs, are shown in fig. 3; the target positioning error at each waypoint is shown in fig. 4; the variation of the positioning errors of the three UAVs is shown in fig. 5.
As can be seen from fig. 4, the target positioning error converges to within 10 m within 15 s.
As can be seen from fig. 5, the position errors of the UAVs remain within 3 m without the absolute coordinates provided by satellite navigation.
Further, in step 4 the error variance of the state estimate is solved in real time from the following formula (5):

$$P_k=\left(I-K_kH_k\right)P_k^{-}\qquad(5)$$

where $P_k$ is the error variance of the state estimate at time k, and $P_k^{-}$ is the predicted state error variance matrix at time k:

$$P_k^{-}=AP_{k-1}A^{T}+Q$$

where A is the system state transition matrix, Q is the state error variance matrix, and $P_{k-1}$ is the state error variance matrix at time k-1; $K_k$ is the Kalman gain:

$$K_k=P_k^{-}H_k^{T}\left(H_kP_k^{-}H_k^{T}+R\right)^{-1}$$
The sum of the diagonal elements of the error variance matrix of the state estimate at each waypoint is shown in fig. 6.
Accordingly, the iterative computation is stopped when the sum of the diagonal elements of the state estimation error variance matrix reaches 50; the corresponding target positioning error is 5 m, and the positioning errors of UAV1, UAV2, and UAV3 are 2.5 m, 0.8 m, and 2.4 m respectively. The position information of the target and of UAV1, UAV2, and UAV3 obtained when the iteration stops meets the control requirement.
The invention has been described above in connection with preferred embodiments, but these embodiments are merely exemplary and illustrative. Various substitutions and modifications can be made to the invention on this basis, and all such substitutions and modifications fall within the protection scope of the invention.

Claims (5)

1. A method for self-positioning and target positioning of multiple unmanned aerial vehicles (UAVs) in a rejection environment, characterized in that:
at least two observation UAVs, at least one relay UAV, and a base station of known position are provided, wherein:
the observation UAVs measure the distance between each observation UAV and the target, and the distance between any two observation UAVs;
the relay UAV measures the distance between itself and each observation UAV;
the base station acquires the distance between the base station and the relay UAV, the distance between the base station and each observation UAV, and the line-of-sight angle between the base station and the relay UAV;
and the method comprises the following steps:
step 1: obtaining an observation vector from the measurements of the observation UAVs, the relay UAV, and the base station;
step 2: fusing the observation vector using a Kalman filtering algorithm to obtain a state vector;
step 3: solving for the heading angles of the observation UAVs and the relay UAV, and controlling each of them to reach a new waypoint according to its heading angle;
step 4: repeating steps 1, 2, and 3 until the sum of the diagonal elements of the state estimation error variance matrix falls below a set threshold.
2. The method for self-positioning and target positioning of multiple UAVs in a rejection environment of claim 1, wherein in step 3 the state estimation error variance matrix $P_{k+n}$ at time k+n is predicted at time k; its inverse is the information matrix $J_{k+n}$, and the heading angles of the observation UAVs and the relay UAV are those that maximize $J_{k+n}$.
3. The method for self-positioning and target positioning of multiple UAVs in a rejection environment of claim 2, wherein in step 3 the heading angles of the observation UAVs and the relay UAV are obtained from the following formula (1):

$$\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}=\underset{\psi_{k1},\psi_{k2},\psi_{k3}}{\arg\max}\,\det\left(J_{k+n}\right)\qquad(1)$$

where $\left[\psi_{k1},\,\psi_{k2},\,\psi_{k3}\right]^{T}$ is the vector containing the three heading angles, and $J_{k+n}$ is the information matrix, obtained from the following formula (2):

$$J_{k+n}=P_0^{-1}+\sum_{i=1}^{k+n}I_i\qquad(2)$$

where $P_0$ is the initial value of the state estimation error variance matrix in the Kalman filtering algorithm, and $I_i$ is the amount of information about the state vector to be estimated contained in the observation vector at time i, obtained from the following formula (3):

$$I_i=H_i^{T}R^{-1}H_i\qquad(3)$$

where $H_i$ is the Jacobian matrix obtained by taking the partial derivatives of the observation vector with respect to the state vector, given by the following formula (4):

$$H_i=\frac{\partial h\left(X_i,X_b\right)}{\partial X_i}\qquad(4)$$
wherein the observation vector comprises the distance between each observation UAV and the target, the distance between any two observation UAVs, the distance between the relay UAV and each observation UAV, the distance between the base station and the relay UAV, the distance between the base station and each observation UAV, and the line-of-sight angle between the base station and the relay UAV;
the state vector comprises the position of the target, the position of the relay UAV, and the position of each observation UAV;
$X_i$ is the state vector at time i, containing the UAV and target position information; $X_b$ is the base station position; and R is the noise variance matrix of the observation sensors.
4. The method for self-positioning and target positioning of multiple UAVs in a rejection environment of claim 1, wherein in step 4 the error variance P of the state estimate is obtained from the following formula (5):

$$P_k=\left(I-K_kH_k\right)P_k^{-}\qquad(5)$$

where $P_k$ is the error variance of the state estimate at time k, and $P_k^{-}$ is the predicted state error variance matrix at time k:

$$P_k^{-}=AP_{k-1}A^{T}+Q$$

where A is the system state transition matrix, Q is the state error variance matrix, and $P_{k-1}$ is the state error variance matrix at time k-1; $K_k$ is the Kalman gain:

$$K_k=P_k^{-}H_k^{T}\left(H_kP_k^{-}H_k^{T}+R\right)^{-1}$$
5. The method for self-positioning and target positioning of multiple UAVs in a rejection environment of claim 1, wherein in step 4 the set threshold is 50 to 80.
CN202110903143.3A, filed 2021-08-06: Multi-unmanned aerial vehicle self-positioning and target positioning method in rejection environment (granted as CN113721188B, Active)

Priority Applications (1)

CN202110903143.3A, priority/filing date 2021-08-06: Multi-unmanned aerial vehicle self-positioning and target positioning method in rejection environment (granted as CN113721188B)
Publications (2)

CN113721188A (publication), 2021-11-30
CN113721188B (grant), 2024-06-11

Family

ID=78675106

Family Applications (1)

CN202110903143.3A (Active), filed 2021-08-06: Multi-unmanned aerial vehicle self-positioning and target positioning method in rejection environment

Country Status (1)

CN: CN113721188B (en)

Cited By (1)

WO2024120187A1, priority 2022-12-05, published 2024-06-13: Method for estimating dynamic target of unmanned aerial vehicle in information rejection environment

Citations (5)

CN107084714A, priority 2017-04-29, published 2017-08-22: A multi-robot cooperative target positioning method based on RoboCup3D
CN109884586A, priority 2019-03-07, published 2019-06-14: UAV positioning method, device, system and storage medium based on ultra-wideband
CN111273687A, priority 2020-02-17, published 2020-06-12: Multi-UAV collaborative relative navigation method based on GNSS observations and inter-UAV ranging
CN111612810A, priority 2020-04-03, published 2020-09-01: Target estimation method based on multi-source information fusion
CN112197761A, priority 2020-07-24, published 2021-01-08: High-precision multi-gyroplane co-location method and system


Non-Patent Citations (3)

X. Yang et al.: "High Accuracy Active Stand-off Target Geolocation Using UAV Platform", 2019 IEEE International Conference on Signal, Information and Data Processing, 2019.
刘重 et al.: "Cooperative target tracking control of multiple UAVs based on joint optimization of communication and observation" (基于通信与观测联合优化的多无人机协同目标跟踪控制), Control and Decision (《控制与决策》), vol. 33, no. 10.
杨健 et al.: "Research on ship target positioning method based on cooperative re-entry vehicles" (基于协同再入飞行器的舰船目标定位方法研究), Computer Simulation (《计算机仿真》), vol. 32, no. 03.


Also Published As

CN113721188B (en), granted 2024-06-11

Similar Documents

Publication Publication Date Title
US11218689B2 (en) Methods and systems for selective sensor fusion
Hening et al. 3D LiDAR SLAM integration with GPS/INS for UAVs in urban GPS-degraded environments
US10914590B2 (en) Methods and systems for determining a state of an unmanned aerial vehicle
CN105184776B (en) Method for tracking target
US8315794B1 (en) Method and system for GPS-denied navigation of unmanned aerial vehicles
US10240930B2 (en) Sensor fusion
EP3158417B1 (en) Sensor fusion using inertial and image sensors
CN107924196B (en) Method for automatically assisting an aircraft landing
EP3454008A1 (en) Survey data processing device, survey data processing method, and survey data processing program
CN109911188A (en) The bridge machinery UAV system of non-satellite navigator fix environment
CN105698762A (en) Rapid target positioning method based on observation points at different time on single airplane flight path
CN111426320B (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
Hosseinpoor et al. Pricise target geolocation and tracking based on UAV video imagery
CN111102981B (en) High-precision satellite relative navigation method based on UKF
CN111983660A (en) System and method for positioning quad-rotor unmanned aerial vehicle in GNSS rejection environment
WO2018214121A1 (en) Method and apparatus for controlling unmanned aerial vehicle
Ouyang et al. Cooperative navigation of UAVs in GNSS-denied area with colored RSSI measurements
Andert et al. Optical-aided aircraft navigation using decoupled visual SLAM with range sensor augmentation
CN109186614B (en) Close-range autonomous relative navigation method between spacecrafts
Mostafa et al. Optical flow based approach for vision aided inertial navigation using regression trees
CN113721188B (en) Multi-unmanned aerial vehicle self-positioning and target positioning method under refusing environment
Hosseinpoor et al. Pricise target geolocation based on integeration of thermal video imagery and rtk GPS in UAVS
Miller et al. Optical Flow as a navigation means for UAV
CN112747745A (en) Target characteristic parameter measuring device and method
Andert et al. Optical aircraft navigation with multi-sensor SLAM and infinite depth features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant