CN113721188B - Multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment - Google Patents


Info

Publication number
CN113721188B
CN113721188B (application CN202110903143.3A)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
observation
base station
transfer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110903143.3A
Other languages
Chinese (zh)
Other versions
CN113721188A (en)
Inventor
张福彪
杨希雯
林德福
王亚凯
陈祺
周天泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110903143.3A priority Critical patent/CN113721188B/en
Publication of CN113721188A publication Critical patent/CN113721188A/en
Application granted granted Critical
Publication of CN113721188B publication Critical patent/CN113721188B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0294 - Trajectory determination or predictive filtering, e.g. target tracking or Kalman filtering
    • G01S5/0009 - Transmission of position information to remote stations
    • G01S5/0045 - Transmission from base station to mobile station
    • G01S5/0284 - Relative positioning

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a multi-unmanned aerial vehicle self-positioning and target positioning method for a denied environment. A base station with a known absolute position is set up on the ground and provides the relative distance and angle information between itself and the unmanned aerial vehicles communicating with it. A Kalman filtering algorithm fuses the information provided by the base station, the relative position information measured between the unmanned aerial vehicles, and the target information detected by the unmanned aerial vehicles, thereby obtaining the unmanned aerial vehicle and target position information. The heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle are then calculated, and the unmanned aerial vehicles are controlled to adjust their positions accordingly and obtain new detection information. This process is repeated continuously, and the accuracy of the obtained unmanned aerial vehicle and target position information gradually improves until it meets the use requirements.

Description

Multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment
Technical Field
The invention relates to a method for locating a target and the vehicle itself by means of unmanned aerial vehicles, and in particular to a self-positioning and target positioning method for multiple unmanned aerial vehicles in a denied environment.
Background
Multi-rotor unmanned aerial vehicles are highly maneuverable and flexible, and are widely used in both military and civil fields. Detection and positioning of ground targets is one of the key technologies for current multi-rotor unmanned aerial vehicle applications.
With the development of multi-vehicle cooperation technology, positioning based on multiple unmanned aerial vehicles has become a research hotspot. The multi-vehicle positioning problem can be divided into the multi-vehicle self-positioning problem and the multi-vehicle target positioning problem. In the multi-unmanned-aerial-vehicle self-positioning problem, each unmanned aerial vehicle in a denied environment improves the accuracy of its own absolute position by measuring relative position information with respect to the base station and neighboring unmanned aerial vehicles. In the multi-unmanned-aerial-vehicle cooperative target positioning problem, relative information between the unmanned aerial vehicles and the target is acquired from different positions so that the target can be located quickly and accurately; in practical application scenarios, however, the navigation system of an unmanned aerial vehicle may fail because of co-channel signal interference, building occlusion or other causes, so that the target cannot be located accurately. The two problems are coupled: the accuracy of target positioning depends on the accuracy of each unmanned aerial vehicle's self-positioning and of the relative measurements, while the result of cooperative target positioning can in turn improve the self-positioning quality of each unmanned aerial vehicle.
However, the prior art offers no effective solution for unmanned aerial vehicle and target positioning in a denied environment, so a method that provides accurate positions of the unmanned aerial vehicles and the target in a denied environment is needed.
For the above reasons, the present inventors have conducted intensive studies on existing unmanned aerial vehicle and target positioning methods, in the expectation of designing a multi-unmanned aerial vehicle self-positioning and target positioning method for a denied environment that can solve the above problems.
Disclosure of Invention
In order to overcome these problems, the inventors have conducted intensive research and designed a multi-unmanned aerial vehicle self-positioning and target positioning method for a denied environment. The denied environment is an environment in which satellite signals are blocked, so that a satellite-based positioning system cannot receive satellite signals normally and cannot work normally. In the method, a base station with a known absolute position is set up on the ground; the base station provides the relative distance and angle information between itself and the unmanned aerial vehicles communicating with it. A Kalman filtering algorithm fuses the information provided by the base station, the relative position information measured between the unmanned aerial vehicles, and the target information detected by the unmanned aerial vehicles, thereby obtaining the unmanned aerial vehicle and target position information. The heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle are then calculated, and the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle are controlled to further adjust their positions, reach new positions, and detect new observation information. This process is repeated continuously, and the accuracy of the obtained unmanned aerial vehicle and target position information gradually improves until it meets the use requirements, thereby completing the present invention.
Specifically, the invention aims to provide a multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment, wherein:
at least two observation unmanned aerial vehicles, at least one transfer unmanned aerial vehicle, and a base station with a known position are provided;
the observation unmanned aerial vehicles are used to detect and obtain: the distance between each observation unmanned aerial vehicle and the target, and the distance between any two observation unmanned aerial vehicles;
the transfer unmanned aerial vehicle is used to detect and obtain: the distance between the transfer unmanned aerial vehicle and each observation unmanned aerial vehicle;
the base station is used to obtain: the distance between the base station and the transfer unmanned aerial vehicle, the distance between the base station and each observation unmanned aerial vehicle, and the line-of-sight angle between the base station and the transfer unmanned aerial vehicle;
The method comprises the following steps:
Step 1, detecting and obtaining an observation vector through the observation unmanned aerial vehicles, the transfer unmanned aerial vehicle and the base station;
Step 2, fusing the observation vector using a Kalman filtering algorithm;
Step 3, solving the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle, and controlling them to reach the next waypoints according to the heading angles;
Step 4, repeating steps 1, 2 and 3 until the sum of the diagonal elements of the state estimation error variance matrix falls below a set threshold.
Wherein, in step 3, the inverse of the state estimation error variance matrix P_{k+n} at time k+n, i.e. the information matrix J_{k+n}, is predicted at time k, and the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle that maximize the information matrix J_{k+n} are solved.
In step 3, the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle are obtained by the following formula (1):

\hat{\Psi}_k = \arg\max_{\Psi_k} J_{k+n}    (1)

wherein \Psi_k represents a vector containing the three heading angles;
J_{k+n} denotes the information matrix, obtained by the following formula (2):

J_{k+n} = P_0^{-1} + \sum_{i=1}^{k+n} I_i    (2)

wherein P_0 represents the initial value of the state estimation error variance matrix in the Kalman filtering algorithm;
I_i represents the amount of information about the state vector to be estimated contained in the observation vector at time i, and is obtained by the following formula (3):

I_i = H_i^T R^{-1} H_i    (3)

wherein H_i is the Jacobian matrix obtained by taking the partial derivatives of the observation vector with respect to the state vector, and is obtained by the following formula (4):

H_i = \partial Z_i / \partial X |_{X=X_i}    (4)

The observation vector comprises the distance between each observation unmanned aerial vehicle and the target, the distance between any two observation unmanned aerial vehicles, the distance between the transfer unmanned aerial vehicle and each observation unmanned aerial vehicle, the distance between the base station and the transfer unmanned aerial vehicle, the distance between the base station and each observation unmanned aerial vehicle, and the line-of-sight angle between the base station and the transfer unmanned aerial vehicle;
the state vector comprises the position of the target, the position of the transfer unmanned aerial vehicle and the position of each observation unmanned aerial vehicle;
X_i represents the state vector at time i and contains the position information of the unmanned aerial vehicles and the target;
X_b represents the base station position;
R represents the noise variance matrix of the observation sensors.
Wherein, in step 4, the error variance P of the state estimate is obtained by the following formula (5):

P_k^- = A P_{k-1} A^T + Q
P_k = (I - K_k H_k) P_k^-    (5)

wherein P_k represents the error variance of the state estimate at time k, and P_k^- represents the predicted state error variance matrix at time k;
A represents the system state transition matrix, Q represents the state error variance matrix, and P_{k-1} represents the state error variance matrix at time k-1;
K_k denotes the Kalman gain:

K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R)^{-1}
In step 4, the set threshold is 50 to 80.
The invention has the beneficial effects that:
(1) The multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment provided by the invention has a simple iteration process, a fast iteration speed, and an overall execution process that is simple and controllable;
(2) With the multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment provided by the invention, the target position and the unmanned aerial vehicle position information are obtained in real time during execution and become more and more accurate; that is, reasonably accurate position information is already available before the fully converged result is obtained, which satisfies special information requirements.
Drawings
Fig. 1 is a logic diagram of the multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment according to a preferred embodiment of the present invention;
FIG. 2 shows a schematic view of an observed relationship in a two-dimensional plane according to a preferred embodiment of the present invention;
FIG. 3 shows the waypoint trajectories of the three unmanned aerial vehicles in an embodiment of the invention;
FIG. 4 is a schematic diagram showing the change of the target positioning error in the embodiment of the invention;
fig. 5 shows a schematic diagram of the change of positioning errors of three unmanned aerial vehicles in an embodiment of the invention;
FIG. 6 is a schematic diagram showing how the sum of the diagonal elements of the error variance matrix varies in the embodiment of the invention.
Detailed Description
The invention is further described in detail below by means of the figures and examples. The features and advantages of the present invention will become more apparent from the description.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
According to the multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment provided by the invention, the denied environment is an environment in which satellite signals are blocked; in such an environment a satellite-based positioning system cannot receive satellite signals normally and cannot work normally. The method is arranged as follows:
at least two observation unmanned aerial vehicles, at least one transfer unmanned aerial vehicle, and a base station with a known position are provided;
the observation unmanned aerial vehicles are used to detect and obtain: the distance between each observation unmanned aerial vehicle and the target, and the distance between any two observation unmanned aerial vehicles;
the transfer unmanned aerial vehicle is used to detect and obtain: the distance between the transfer unmanned aerial vehicle and each observation unmanned aerial vehicle;
the base station is used to obtain: the distance between the base station and the transfer unmanned aerial vehicle, the distance between the base station and each observation unmanned aerial vehicle, and the line-of-sight angle between the base station and the transfer unmanned aerial vehicle;
As shown in fig. 1, the method comprises the following steps:
Step 1, detecting and obtaining an observation vector Z_k through the observation unmanned aerial vehicles, the transfer unmanned aerial vehicle and the base station;
Step 2, fusing the observation vector using a Kalman filtering algorithm to obtain a state vector estimate \hat{X}_k;
Step 3, solving the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle, and controlling them to reach new waypoints according to their respective heading angles;
Step 4, repeating steps 1, 2 and 3 until the sum of the diagonal elements of the state estimation error variance matrix falls below a set threshold.
Preferably, each unmanned aerial vehicle is equipped with a laser range finder for measuring the distance to the other unmanned aerial vehicles or to the target, and an electro-optical pod capable of directly acquiring an azimuth angle is provided so that the line-of-sight angle between the base station and the transfer unmanned aerial vehicle can be acquired; a laser range finder for distance measurement is also provided on the base station.
In a preferred embodiment, in step 1, the observation vector Z_k includes angle measurements and distance measurements. Specifically, the observation vector comprises the distance between each observation unmanned aerial vehicle and the target, the distance between any two observation unmanned aerial vehicles, the distance between the transfer unmanned aerial vehicle and each observation unmanned aerial vehicle, the distance between the base station and the transfer unmanned aerial vehicle, the distance between the base station and each observation unmanned aerial vehicle, and the line-of-sight angle between the base station and the transfer unmanned aerial vehicle. When the target, the observation unmanned aerial vehicles, the transfer unmanned aerial vehicle and the base station are all in the same two-dimensional plane, with two observation unmanned aerial vehicles and one transfer unmanned aerial vehicle as shown in fig. 2, the observation vector Z_k may be expressed as:

Z_k = [\rho_{2T}, \rho_{3T}, \varphi_{10}, \rho_{10}, \rho_{20}, \rho_{30}, \rho_{12}, \rho_{13}, \rho_{23}]^T

wherein \rho_{2T} and \rho_{3T} respectively represent the distances between the two observation unmanned aerial vehicles and the target, obtained by detection by the observation unmanned aerial vehicles;
\varphi_{10} represents the line-of-sight angle between the base station and the transfer unmanned aerial vehicle;
\rho_{10} represents the distance between the base station and the transfer unmanned aerial vehicle;
\rho_{20} and \rho_{30} respectively represent the distances between the base station and the two observation unmanned aerial vehicles;
\rho_{12} and \rho_{13} respectively represent the distances between the transfer unmanned aerial vehicle and the two observation unmanned aerial vehicles;
\rho_{23} represents the distance between the two observation unmanned aerial vehicles.
The position of the base station is known, i.e. X_b = [x_0, y_0]^T;
The observation vector Z_k satisfies the following observation equation:

Z_k = h(X_k, X_b) + V_k

wherein h(·) maps the state to the nine predicted measurements, for example \rho_{2T} = \sqrt{(x_2 - x_T)^2 + (y_2 - y_T)^2} and \varphi_{10} = \arctan((y_1 - y_0)/(x_1 - x_0));
V_k represents the measurement noise, which follows a normal distribution with zero mean and variance R;
(x_T, y_T) represents the target position, (x_1, y_1) represents the transfer unmanned aerial vehicle position, and (x_2, y_2) and (x_3, y_3) respectively represent the observation unmanned aerial vehicle positions.
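For illustration only, a minimal Python/NumPy sketch of this nine-element measurement model is given below. It is an assumed implementation, not code taken from the patent; the element order follows Z_k above, and the arctan2 convention for \varphi_{10} is an assumption.

```python
import numpy as np

def observation_model(X, Xb):
    """Predicted observation h(X) for the 2D case, with
    X = [xT, yT, x1, y1, x2, y2, x3, y3] and Xb = [x0, y0] (known base station)."""
    xT, yT, x1, y1, x2, y2, x3, y3 = X
    x0, y0 = Xb
    dist = lambda ax, ay, bx, by: np.hypot(ax - bx, ay - by)
    return np.array([
        dist(x2, y2, xT, yT),          # rho_2T: observation UAV 2 to target
        dist(x3, y3, xT, yT),          # rho_3T: observation UAV 3 to target
        np.arctan2(y1 - y0, x1 - x0),  # phi_10: line-of-sight angle, base station to transfer UAV
        dist(x1, y1, x0, y0),          # rho_10: base station to transfer UAV
        dist(x2, y2, x0, y0),          # rho_20: base station to observation UAV 2
        dist(x3, y3, x0, y0),          # rho_30: base station to observation UAV 3
        dist(x1, y1, x2, y2),          # rho_12: transfer UAV to observation UAV 2
        dist(x1, y1, x3, y3),          # rho_13: transfer UAV to observation UAV 3
        dist(x2, y2, x3, y3),          # rho_23: between the two observation UAVs
    ])
```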
Preferably, in step 1, when the observation unmanned aerial vehicles, the transfer unmanned aerial vehicle and the base station detect for the first time, a set of estimated positions of the target, the transfer unmanned aerial vehicle and the observation unmanned aerial vehicles is given in order to compute an initial observation equation; more accurate position information is then obtained step by step in the subsequent loop iterations. The initial position of the target is calculated by triangulation from the target distances measured by the observation unmanned aerial vehicles, and the initial positions of the unmanned aerial vehicles are given by the inertial navigation equipment on board each vehicle; in a denied environment without satellite navigation assistance the inertial navigation drift error is relatively large, but the present method can eliminate this error within a short time.
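As an illustration of such a triangulation-based initial guess, the sketch below (an assumed helper, building on the NumPy import above) intersects the two range circles around the approximate observation unmanned aerial vehicle positions; the rule used to pick one of the two intersection points is an arbitrary assumption.

```python
def initial_target_guess(p2, p3, rho_2T, rho_3T):
    """Rough initial target position from two range measurements (circle intersection).
    p2, p3: approximate positions of the two observation UAVs (e.g. from inertial navigation)."""
    p2, p3 = np.asarray(p2, dtype=float), np.asarray(p3, dtype=float)
    d = np.linalg.norm(p3 - p2)
    a = (rho_2T**2 - rho_3T**2 + d**2) / (2.0 * d)   # distance from p2 to the chord midpoint
    h = np.sqrt(max(rho_2T**2 - a**2, 0.0))          # half-length of the chord
    mid = p2 + a * (p3 - p2) / d
    perp = np.array([-(p3[1] - p2[1]), p3[0] - p2[0]]) / d
    candidates = [mid + h * perp, mid - h * perp]
    # Two geometric solutions exist; the one with the larger y-coordinate is returned
    # here, assuming prior knowledge of which side the target lies on.
    return max(candidates, key=lambda p: p[1])
```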
In this application, the observation vector is obtained in real time at a detection frequency of 1 Hz; upon receiving a heading angle command, each unmanned aerial vehicle flies for 1 s according to that heading angle, and after completing the command it detects again to obtain a new observation vector.
When flying according to the heading angle, each unmanned aerial vehicle flies at a preset fixed speed, and heading angle commands are received at 1 Hz.
In this application, at a given moment the transfer unmanned aerial vehicle and the observation unmanned aerial vehicles use their detectors to obtain the data that make up the observation vector; the positions of the transfer unmanned aerial vehicle and the observation unmanned aerial vehicles at the moment of detection are the waypoints, and the waypoints of the different unmanned aerial vehicles at the same moment are different. Each calculation yields several heading angles, one per unmanned aerial vehicle, and the heading angles of the different unmanned aerial vehicles may differ from one another. When an unmanned aerial vehicle receives its heading angle, it flies along that heading for 1 s so as to reach a new spatial position, detection is performed again at the new position to obtain a new observation vector, and this new spatial position is the new waypoint.
Preferably, each unmanned aerial vehicle is provided with a data transmission radio, through which the observed information is transmitted to the base station in real time and the heading angle commands sent by the base station are received; the base station is likewise provided with a data transmission radio, through which the observation vector is received in real time and the heading angle commands are transmitted to each unmanned aerial vehicle.
In a preferred embodiment, in step 2, the fusion is performed by an extended Kalman filtering algorithm, by which the state vector is obtained.
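To make the fusion step concrete, here is a minimal extended-Kalman-filter measurement update in Python (an assumed sketch reusing observation_model above). A finite-difference Jacobian is used purely to keep the sketch short, whereas the description computes the analytic Jacobian H_i of formula (4).

```python
def numerical_jacobian(h, X, eps=1e-6):
    """Finite-difference Jacobian of the measurement function h at state X."""
    z0 = h(X)
    H = np.zeros((z0.size, X.size))
    for j in range(X.size):
        Xp = X.copy()
        Xp[j] += eps
        H[:, j] = (h(Xp) - z0) / eps
    return H

def ekf_update(X_pred, P_pred, Zk, h, R):
    """One EKF measurement update fusing the observation vector Zk.
    (Angle wrap-around of the phi_10 innovation is ignored in this sketch.)"""
    H = numerical_jacobian(h, X_pred)
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain K_k
    X_new = X_pred + K @ (Zk - h(X_pred))        # fused state estimate
    P_new = (np.eye(X_pred.size) - K @ H) @ P_pred
    return X_new, P_new
```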
In a preferred embodiment, in step 3, the inverse of the state estimation error variance matrix P_{k+n} at time k+n, i.e. the information matrix J_{k+n}, is predicted at time k, and the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle are solved such that the information matrix J_{k+n} is maximized.
Preferably, the state vector at time k is:

X_k = [x_T, y_T, x_1, y_1, x_2, y_2, x_3, y_3]_k^T

and the state vector at time k+1 is accordingly estimated as:

X_{k+1} = X_k + [0, 0, v_0\cos\psi_{k1}, v_0\sin\psi_{k1}, v_0\cos\psi_{k2}, v_0\sin\psi_{k2}, v_0\cos\psi_{k3}, v_0\sin\psi_{k3}]^T + \omega_k

In this application, when optimizing the heading angles, the value of the objective function at the next n times (times k+1 to k+n) must be predicted; since the expression of the objective function involves X_{k+1} to X_{k+n}, the state vectors at these later times must be estimated.
Here \omega_k represents the process noise, \psi_{k1} represents the heading angle of the transfer unmanned aerial vehicle at time k, and \psi_{k2} and \psi_{k3} respectively represent the heading angles of the two observation unmanned aerial vehicles at time k.
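This state prediction can be written as a small helper (an assumed sketch; the speed v_0 = 10 m/s and the 1 s step are taken from the embodiment below):

```python
def propagate_state(X, headings, v0=10.0, dt=1.0):
    """Predict the next state: the target stays fixed, each UAV advances for dt seconds
    at speed v0 along its commanded heading (psi1 = transfer UAV, psi2/psi3 = observation UAVs)."""
    psi1, psi2, psi3 = headings
    step = np.array([0.0, 0.0,
                     v0 * np.cos(psi1) * dt, v0 * np.sin(psi1) * dt,
                     v0 * np.cos(psi2) * dt, v0 * np.sin(psi2) * dt,
                     v0 * np.cos(psi3) * dt, v0 * np.sin(psi3) * dt])
    return X + step
```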
Preferably, in step 3, the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle are obtained by the following formula (1):

\hat{\Psi}_k = \arg\max_{\Psi_k} J_{k+n}    (1)

wherein \Psi_k represents the optimization vector composed of the three heading angles, i.e. the optimization vector composed of \psi_{k1}, \psi_{k2} and \psi_{k3};
\arg\max is the mathematical operator indicating that the objective function to its right is maximized;
J_{k+n} denotes the information matrix, obtained by the following formula (2):

J_{k+n} = P_0^{-1} + \sum_{i=1}^{k+n} I_i    (2)

wherein P_0 represents the initial error variance of the state in the Kalman filtering algorithm, taken as the diagonal matrix

P_0 = diag(500, 500, 100, 100, 100, 100, 100, 100)

I_i represents the amount of information about the state vector to be estimated contained in the observation vector at time i, and is obtained by the following formula (3):

I_i = H_i^T R^{-1} H_i    (3)

wherein H_i is the Jacobian matrix obtained by taking the partial derivatives of the observation vector at time i with respect to the state vector, and is obtained by the following formula (4):

H_i = \partial Z_i / \partial X |_{X=X_i}    (4)

The observation vector is:

Z_k = [\rho_{2T}, \rho_{3T}, \varphi_{10}, \rho_{10}, \rho_{20}, \rho_{30}, \rho_{12}, \rho_{13}, \rho_{23}]^T

the state vector is X_k = [x_T, y_T, x_1, y_1, x_2, y_2, x_3, y_3]_k^T;
X_i represents the state vector at time i and contains the unmanned aerial vehicle position information and the target position information, preferably the position information of the three unmanned aerial vehicles and of the one target;
X_b represents the base station position;
R represents the sensor noise variance matrix.
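The following sketch shows how formulas (2) to (4) and the heading search of formula (1) could be realized, reusing the helpers defined above. It is an assumed implementation: the determinant is used as the scalar criterion for "maximizing" J_{k+n}, the optimization is a coarse grid search, and the turning-rate constraint of the embodiment is not enforced.

```python
def information_matrix(P0, predicted_states, h, R):
    """J = P0^{-1} + sum_i H_i^T R^{-1} H_i over the predicted states (formulas (2)-(4))."""
    J = np.linalg.inv(P0)
    R_inv = np.linalg.inv(R)
    for Xi in predicted_states:
        Hi = numerical_jacobian(h, Xi)
        J += Hi.T @ R_inv @ Hi
    return J

def choose_headings(X, P_prior, h, R, n=1, v0=10.0, dt=1.0, n_grid=12):
    """Pick the heading triple (psi1, psi2, psi3) that maximizes det(J_{k+n})
    by brute-force search over a coarse angular grid."""
    angles = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    best_headings, best_score = None, -np.inf
    for psi1 in angles:
        for psi2 in angles:
            for psi3 in angles:
                Xi, predicted = X.copy(), []
                for _ in range(n):                      # predict the next n waypoints
                    Xi = propagate_state(Xi, (psi1, psi2, psi3), v0, dt)
                    predicted.append(Xi)
                score = np.linalg.det(information_matrix(P_prior, predicted, h, R))
                if score > best_score:
                    best_score, best_headings = score, (psi1, psi2, psi3)
    return best_headings
```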
In a preferred embodiment, in step 4, the error variance of the state estimate is obtained by the following formula (5):

P_k^- = A P_{k-1} A^T + Q
P_k = (I - K_k H_k) P_k^-    (5)

wherein P_k represents the error variance of the state estimate at time k;
P_k^- denotes the predicted state error variance matrix at time k;
A represents the system state transition matrix, Q represents the state error variance matrix, and P_{k-1} represents the state error variance matrix at time k-1;
K_k denotes the Kalman gain:

K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R)^{-1}
In a preferred embodiment, in step 4 the set threshold is 30 to 100, preferably 50 to 80, and more preferably 50. In this application, when the sum of the diagonal elements of the state estimation error variance matrix is smaller than the threshold, i.e. when the sum of the error variances corresponding to the individual states is smaller than the threshold, each state quantity has converged and the positioning error reaches the meter level; the error at this point is considered to be within an acceptable range.
In a preferred embodiment, when the sum of the diagonal elements of the state estimation error variance matrix is smaller than the set threshold, the corresponding state vector is taken as the accurate output value, thereby providing accurate target position information and unmanned aerial vehicle position information.
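Tying steps 1 to 4 together, a possible outer loop looks as follows (an assumed sketch reusing the helpers above): true_obs_fn is a hypothetical stand-in for the real sensors and data link, the process-noise covariance Q is an assumed value, and using the current covariance as the prior in the heading search is a simplified recursive form of formula (2).

```python
def cooperative_localization_loop(X0, P0, Xb, R, true_obs_fn, threshold=50.0, max_iters=200):
    """Observe, fuse with the EKF, re-plan headings, and stop once the sum of the
    diagonal elements of P (its trace) falls below the set threshold."""
    h = lambda X: observation_model(X, Xb)
    X, P = X0.copy(), P0.copy()
    Q = 0.01 * np.eye(X0.size)                    # assumed process-noise covariance
    for _ in range(max_iters):
        Zk = true_obs_fn()                        # step 1: detect the observation vector
        X, P = ekf_update(X, P + Q, Zk, h, R)     # step 2: Kalman fusion
        if np.trace(P) < threshold:               # step 4: convergence test
            break
        headings = choose_headings(X, P, h, R)    # step 3: heading angles for the next waypoints
        X = propagate_state(X, headings)          # fly 1 s to the new waypoints
    return X, P
```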
In a preferred embodiment, when the target, the observation unmanned aerial vehicles, the transfer unmanned aerial vehicle and the base station are all in the same three-dimensional space, with the base station and the target fixed on the ground and the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle able to fly or hover in three-dimensional space, the observation vector, the observation equation and the state vector are constructed in the same way, with each position extended to three dimensions.
Examples
Three identical unmanned aerial vehicles with a flight speed of 10 m/s are selected and numbered UAV1, UAV2 and UAV3, with initial coordinates of [10 m, 10 m], [0 m, -20 m] and [-20 m, 0 m] respectively; the base station coordinates are set to [0 m, 0 m], and the true target coordinates are set to [200 m, 200 m]. UAV1 is the transfer unmanned aerial vehicle, and UAV2 and UAV3 are the observation unmanned aerial vehicles.
UAV1, UAV2 and UAV3 are each equipped with a laser range finder and a data transmission radio; the base station is equipped with a laser range finder, an electro-optical pod and a data transmission radio.
During the control process, the turning-rate constraint of the three unmanned aerial vehicles is 10°/s, and the initial value of the state vector is:

X_0 = [180, 220, 12, 12, 6, -22, -23, 7]^T

That is, at the initial time of the control process the estimated target coordinates are [180 m, 220 m], the UAV1 coordinates are [12 m, 12 m], the UAV2 coordinates are [6 m, -22 m], and the UAV3 coordinates are [-23 m, 7 m].
The three unmanned aerial vehicles are controlled as follows:
Step 1, UAV1, UAV2, UAV3 and the base station detect and obtain the observation vector, namely the distances between UAV2 and the target, between UAV3 and the target, between the base station and UAV1, between the base station and UAV2, between the base station and UAV3, between UAV1 and UAV2, between UAV1 and UAV3, and between UAV2 and UAV3, together with the line-of-sight angle between the base station and UAV1;
Step 2, the observation vector is fused using a Kalman filtering algorithm to obtain the state vector;
the state vector contains the target position information and the position information of all unmanned aerial vehicles, namely the coordinate information of the target, UAV1, UAV2 and UAV3;
the fusion is performed by an extended Kalman filtering algorithm;
Step 3, solving the heading angles of UAV1, UAV2 and UAV3, and controlling UAV1, UAV2 and UAV3 to reach new waypoints according to the heading angles;
wherein the heading angles of the observation unmanned aerial vehicles UAV2 and UAV3 and of the transfer unmanned aerial vehicle UAV1 are obtained by formula (1):

\hat{\Psi}_k = \arg\max_{\Psi_k} J_{k+n}    (1)

wherein \Psi_k represents the optimization vector composed of the three heading angles;
J_{k+n} is obtained by formula (2):

J_{k+n} = P_0^{-1} + \sum_{i=1}^{k+n} I_i    (2)

wherein P_0 takes the value diag(500, 500, 100, 100, 100, 100, 100, 100);
I_i is obtained by formula (3):

I_i = H_i^T R^{-1} H_i    (3)

wherein H_i is obtained by formula (4):

H_i = \partial Z_i / \partial X |_{X=X_i}    (4)
This yields: a heading angle of -120° for UAV1, 50° for UAV2 and 30° for UAV3; UAV1, UAV2 and UAV3 are controlled to move at 10 m/s along these heading angles for 1 s, after which detection is performed again to obtain a new observation vector and hence new heading angles;
Step 4, repeating steps 1, 2 and 3 fifty times; the resulting motion trajectories of UAV1, UAV2 and UAV3, i.e. the waypoints of each unmanned aerial vehicle, are shown in fig. 3, and correspondingly the variation of the target positioning error at each waypoint is shown in fig. 4; the variation of the positioning errors of the three unmanned aerial vehicles is shown in fig. 5;
as can be seen from fig. 4, the target positioning error converges to within 10 m within 15 seconds;
as can be seen from fig. 5, the position errors of the unmanned aerial vehicles are kept within 3 meters even though satellite navigation provides no absolute coordinates.
Further, in step 4, the error variance of the state estimate is calculated in real time by formula (5):

P_k^- = A P_{k-1} A^T + Q
P_k = (I - K_k H_k) P_k^-    (5)

wherein P_k represents the error variance of the state estimate at time k, and P_k^- denotes the predicted state error variance matrix at time k;
A represents the system state transition matrix, Q represents the state error variance matrix, and P_{k-1} represents the state error variance matrix at time k-1;
K_k denotes the Kalman gain:

K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R)^{-1}
The sum of the diagonal elements of the state estimation error variance matrix at each waypoint is shown in fig. 6.
It can therefore be seen that the iterative computation stops when the sum of the diagonal elements of the state estimation error variance matrix reaches 50; at that point the corresponding target positioning error is 5 m, and the positioning errors of UAV1, UAV2 and UAV3 are 2.5 m, 0.8 m and 2.4 m respectively, so the position information of the target, UAV1, UAV2 and UAV3 obtained when the iteration stops can meet the control requirements.
The invention has been described above with reference to preferred embodiments, which are, however, exemplary only and serve illustrative purposes. Various substitutions and improvements may be made on this basis, and all of them fall within the protection scope of the invention.

Claims (3)

1. A multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment, characterized in that:
at least two observation unmanned aerial vehicles, at least one transfer unmanned aerial vehicle, and a base station with a known position are provided;
the observation unmanned aerial vehicles are used to detect and obtain: the distance between each observation unmanned aerial vehicle and the target, and the distance between any two observation unmanned aerial vehicles;
the transfer unmanned aerial vehicle is used to detect and obtain: the distance between the transfer unmanned aerial vehicle and each observation unmanned aerial vehicle;
the base station is used to obtain: the distance between the base station and the transfer unmanned aerial vehicle, the distance between the base station and each observation unmanned aerial vehicle, and the line-of-sight angle between the base station and the transfer unmanned aerial vehicle;
The method comprises the following steps:
Step 1, detecting and obtaining an observation vector through the observation unmanned aerial vehicles, the transfer unmanned aerial vehicle and the base station;
Step 2, fusing the observation vector using a Kalman filtering algorithm to obtain a state vector estimate;
Step 3, obtaining the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle, and controlling them to reach new waypoints according to their respective heading angles;
Step 4, repeating steps 1, 2 and 3 until the sum of the diagonal elements of the state estimation error variance matrix falls below a set threshold;
wherein, in step 3, the inverse of the state estimation error variance matrix P_{k+n} at time k+n, i.e. the information matrix J_{k+n}, is predicted at time k, and the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle that maximize the information matrix J_{k+n} are solved;
in step 3, the heading angles of the observation unmanned aerial vehicles and the transfer unmanned aerial vehicle are obtained by the following formula (1):

\hat{\Psi}_k = \arg\max_{\Psi_k} J_{k+n}    (1)

wherein \Psi_k represents a vector containing the three heading angles;
J_{k+n} denotes the information matrix, obtained by the following formula (2):

J_{k+n} = P_0^{-1} + \sum_{i=1}^{k+n} I_i    (2)

wherein P_0 represents the initial value of the state estimation error variance matrix in the Kalman filtering algorithm;
I_i represents the amount of information about the state vector to be estimated contained in the observation vector at time i, and is obtained by the following formula (3):

I_i = H_i^T R^{-1} H_i    (3)

wherein H_i is the Jacobian matrix obtained by taking the partial derivatives of the observation vector with respect to the state vector, and is obtained by the following formula (4):

H_i = \partial Z_i / \partial X |_{X=X_i}    (4)

the observation vector comprises the distance between each observation unmanned aerial vehicle and the target, the distance between any two observation unmanned aerial vehicles, the distance between the transfer unmanned aerial vehicle and each observation unmanned aerial vehicle, the distance between the base station and the transfer unmanned aerial vehicle, the distance between the base station and each observation unmanned aerial vehicle, and the line-of-sight angle between the base station and the transfer unmanned aerial vehicle;
the state vector comprises the position of the target, the position of the transfer unmanned aerial vehicle and the position of each observation unmanned aerial vehicle;
X_i represents the state vector at time i;
X_b represents the base station position;
R represents the noise variance matrix of the observation sensors.
2. The multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment according to claim 1, wherein,
in step 4, the error variance P of the state estimate is obtained by the following formula (5):

P_k^- = A P_{k-1} A^T + Q
P_k = (I - K_k H_k) P_k^-    (5)

wherein P_k represents the error variance of the state estimate at time k, and P_k^- denotes the predicted state error variance matrix at time k;
A represents the system state transition matrix, Q represents the state error variance matrix, and P_{k-1} represents the state error variance matrix at time k-1;
K_k denotes the Kalman gain:

K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R)^{-1}
3. The multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment according to claim 1, wherein,
in step 4, the set threshold is 50 to 80.
CN202110903143.3A 2021-08-06 2021-08-06 Multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment Active CN113721188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110903143.3A CN113721188B (en) 2021-08-06 2021-08-06 Multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110903143.3A CN113721188B (en) 2021-08-06 2021-08-06 Multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment

Publications (2)

Publication Number Publication Date
CN113721188A CN113721188A (en) 2021-11-30
CN113721188B true CN113721188B (en) 2024-06-11

Family

ID=78675106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110903143.3A Active CN113721188B (en) 2021-08-06 2021-08-06 Multi-unmanned aerial vehicle self-positioning and target positioning method in a denied environment

Country Status (1)

Country Link
CN (1) CN113721188B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115790603A (en) * 2022-12-05 2023-03-14 中国科学院深圳先进技术研究院 Unmanned aerial vehicle dynamic target estimation method used in information rejection environment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109884586A (en) * 2019-03-07 2019-06-14 广东工业大学 Unmanned plane localization method, device, system and storage medium based on ultra-wide band

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107084714B (en) * 2017-04-29 2019-10-22 天津大学 A kind of multi-robot Cooperation object localization method based on RoboCup3D
CN111273687A (en) * 2020-02-17 2020-06-12 上海交通大学 Multi-unmanned aerial vehicle collaborative relative navigation method based on GNSS observed quantity and inter-aircraft distance measurement
CN111612810B (en) * 2020-04-03 2023-08-18 国网江西省电力有限公司上饶供电分公司 Target estimation method based on multi-source information fusion
CN112197761B (en) * 2020-07-24 2022-07-19 北京理工大学 High-precision multi-gyroplane co-location method and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109884586A (en) * 2019-03-07 2019-06-14 广东工业大学 Unmanned plane localization method, device, system and storage medium based on ultra-wide band

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on ship target positioning method based on cooperative re-entry vehicles; 杨健 et al.; Computer Simulation (计算机仿真); Vol. 32, No. 03; Sections 2-4 of the paper *
Multi-UAV cooperative target tracking control based on joint optimization of communication and observation; 刘重 et al.; Control and Decision (控制与决策); Vol. 33, No. 10; Sections 2-3 of the paper *

Also Published As

Publication number Publication date
CN113721188A (en) 2021-11-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant