CN109974693B - Unmanned aerial vehicle positioning method and device, computer equipment and storage medium - Google Patents

Unmanned aerial vehicle positioning method and device, computer equipment and storage medium

Info

Publication number
CN109974693B
Authority
CN
China
Prior art keywords
transformation matrix
video frame
pose transformation
target
feature point
Prior art date
Legal status
Active
Application number
CN201910099571.8A
Other languages
Chinese (zh)
Other versions
CN109974693A (en)
Inventor
周翊民
陈雅兰
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201910099571.8A
Publication of CN109974693A
Application granted
Publication of CN109974693B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12 Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an unmanned aerial vehicle positioning method, which comprises the following steps: acquiring measurement data measured by an inertial measurement unit; calculating an initial pose transformation matrix between video frames from the measurement data; acquiring video frame images captured by a camera, extracting feature points from each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs between the video frame images; calculating a target pose transformation matrix between the video frames from the initial pose transformation matrix and the feature point matching pairs between the video frame images; and determining the position of the unmanned aerial vehicle according to the target pose transformation matrix. By fusing inertial measurement and vision to obtain the target pose transformation matrix, the method improves the positioning accuracy of the unmanned aerial vehicle. An unmanned aerial vehicle positioning device, a computer device and a storage medium are also provided.

Description

Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a positioning method and device of an unmanned aerial vehicle, computer equipment and a storage medium.
Background
With the development of science and technology, unmanned aerial vehicles are becoming increasingly miniaturized and intelligent, and their flight space has expanded to jungles, cities and even the interiors of buildings. Because the flight space of an unmanned aerial vehicle is complex and changeable, the mainstream GPS-based integrated navigation systems currently used on unmanned aerial vehicles cannot work normally in indoor or unknown environments without GPS signals, and the positioning accuracy is low.
Disclosure of Invention
Therefore, it is necessary to provide a positioning method and apparatus for an unmanned aerial vehicle, a computer device, and a storage medium with high positioning accuracy.
In a first aspect, an embodiment of the present invention provides an unmanned aerial vehicle positioning method, where the method includes:
acquiring measurement data measured by an inertial measurement unit;
calculating to obtain an initial pose transformation matrix between video frames according to the measurement data;
acquiring video frame images shot by a camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs among the video frame images;
calculating to obtain a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images;
and determining the position of the unmanned aerial vehicle according to the target pose transformation matrix.
In a second aspect, an embodiment of the present invention provides an unmanned aerial vehicle positioning apparatus, where the apparatus includes:
the acquisition module is used for acquiring measurement data measured by the inertial measurement unit;
the initial calculation module is used for calculating an initial pose transformation matrix between video frames according to the measurement data;
the matching module is used for acquiring video frame images shot by the camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs among the video frame images;
the target calculation module is used for calculating to obtain a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images;
and the determining module is used for determining the position of the unmanned aerial vehicle according to the target pose transformation matrix.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the following steps:
acquiring measurement data measured by an inertial measurement unit;
calculating to obtain an initial pose transformation matrix between video frames according to the measurement data;
acquiring video frame images shot by a camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs among the video frame images;
calculating to obtain a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images;
and determining the position of the unmanned aerial vehicle according to the target pose transformation matrix.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the processor is caused to execute the following steps:
acquiring measurement data measured by an inertial measurement unit;
calculating to obtain an initial pose transformation matrix between video frames according to the measurement data;
acquiring video frame images shot by a camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs among the video frame images;
calculating to obtain a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images;
and determining the position of the unmanned aerial vehicle according to the target pose transformation matrix.
According to the unmanned aerial vehicle positioning method and device, the computer device and the storage medium, an initial pose transformation matrix between video frames is first calculated from the measurement data measured by the inertial measurement unit; the feature points between video frame images are then matched to obtain feature point matching pairs, and a target pose transformation matrix is calculated jointly from the matched feature point matching pairs and the initial pose transformation matrix, so that the position of the unmanned aerial vehicle is determined according to the target pose transformation matrix. In this positioning method, the initial pose transformation matrix is obtained from the inertial measurement unit and then optimized according to the feature point matching between the video frame images captured by the camera to obtain the target pose transformation matrix; that is, inertial measurement and vision are fused to obtain the target pose transformation matrix, thereby improving the positioning accuracy of the unmanned aerial vehicle.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from the structures shown in these drawings without creative effort.
Fig. 1 is a flow chart of a method for positioning a drone according to an embodiment;
fig. 2 is a flow chart of a method for positioning a drone according to another embodiment;
FIG. 3A is a schematic diagram illustrating the partitioning of an octree three-dimensional space according to one embodiment;
FIG. 3B is a diagram of an octree data structure in one embodiment;
fig. 4 is a process diagram of a method for positioning a drone according to one embodiment;
fig. 5 is a schematic flow chart of a positioning method for a drone according to an embodiment;
fig. 6 is a block diagram of the positioning apparatus of the drone according to one embodiment;
fig. 7 is a block diagram of a positioning device of a drone according to another embodiment;
fig. 8 is a block diagram of a positioning device of a drone according to a further embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a positioning method for an unmanned aerial vehicle is provided. The method can be applied to an unmanned aerial vehicle, or to a terminal or server connected to the unmanned aerial vehicle; in this embodiment it is described taking the application to the unmanned aerial vehicle as an example, and it specifically includes the following steps:
and 102, acquiring measurement data measured by the inertia measurement unit.
An Inertial Measurement Unit (IMU) is a device that measures the three-axis attitude angles (or angular rates) and accelerations of an object. The inertial measurement unit serves as the inertial parameter measurement device of the unmanned aerial vehicle and comprises a three-axis gyroscope, a three-axis accelerometer and a three-axis magnetometer. The unmanned aerial vehicle can directly read the measurement data measured by the inertial measurement unit, and the measurement data include angular velocity, acceleration, magnetometer data, and the like.
Step 104, calculating an initial pose transformation matrix between video frames according to the measurement data.
After the measurement data measured by the inertial measurement unit are obtained, the pose transformation matrix of the unmanned aerial vehicle can be calculated directly from the measurement data; because the inertial measurement unit has accumulated errors, the pose transformation matrix obtained in this way is not accurate enough. To distinguish it from the pose transformation matrix after subsequent optimization, the pose transformation matrix calculated directly from the measurement data is called the initial pose transformation matrix. The pose transformation matrix includes a rotation matrix R and a translation vector t. In one embodiment, the initial pose transformation matrix corresponding to the measurement data is calculated with a complementary filtering algorithm. In one embodiment, the initial pose transformation matrix between video frames refers to the pose transformation matrix between adjacent video frames, that is, the pose transformation matrix between each pair of adjacent video frames is calculated.
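The patent does not spell out the complementary filter equations, so the following Python sketch is illustrative only. It assumes the common roll/pitch form that fuses gyroscope integration with the gravity direction taken from the accelerometer; the function name, the weight alpha and the use of NumPy are assumptions for exposition and not part of the claimed method.

```python
import numpy as np

def complementary_filter(prev_angles, gyro, accel, dt, alpha=0.98):
    """One update step of a simple complementary filter for (roll, pitch).

    prev_angles: (roll, pitch) from the previous step, in radians.
    gyro:  angular rates (wx, wy, wz) in rad/s from the IMU gyroscope.
    accel: accelerations (ax, ay, az) in m/s^2 from the IMU accelerometer.
    """
    roll_prev, pitch_prev = prev_angles
    ax, ay, az = accel
    # Attitude from the accelerometer (gravity direction): noisy but drift-free.
    roll_acc = np.arctan2(ay, az)
    pitch_acc = np.arctan2(-ax, np.sqrt(ay ** 2 + az ** 2))
    # Attitude integrated from the gyroscope: smooth but drifting.
    roll_gyro = roll_prev + gyro[0] * dt
    pitch_gyro = pitch_prev + gyro[1] * dt
    # Complementary fusion: trust the gyro at high frequency, the accelerometer at low frequency.
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    return roll, pitch
```

The fused roll and pitch (together with yaw, e.g. from the magnetometer, which is not shown) can then be assembled into the rotation part of the initial pose transformation matrix between two adjacent video frames.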
Step 106, acquiring video frame images shot by the camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs among the video frame images.
The camera is an RGB-D camera; the color image and the depth image obtained by shooting are acquired and aligned in time. Feature points are extracted from the color image; a feature point can be simply understood as a relatively prominent point in the image, such as a contour point, a bright point in a darker area, or a dark point in a lighter area. Feature extraction may use ORB features; ORB uses the FAST (Features from Accelerated Segment Test) algorithm to detect feature points. The core idea of FAST is to find salient points, i.e., to compare a point with its surrounding points and consider it a feature point if it differs from most of them. Of course, other features may also be employed, such as HOG features or LBP features. Here, the HOG (Histogram of Oriented Gradients) feature is a feature descriptor used for object detection in computer vision and image processing, constructed by calculating and counting histograms of gradient directions over local regions of an image, and LBP (Local Binary Pattern) is an operator used to describe the local texture features of an image.
In one embodiment, in order to ensure that the feature points are uniformly distributed, the image is divided into small blocks, an 8-layer image pyramid with a scale factor of 1.2 is constructed for the color image, and at least five feature points are extracted for each small block as far as possible. For example, if the reliable distance of the RGB-D camera is in the range of more than 0 and less than 3.5 m, feature points with a depth of 0 and feature points with a depth of more than 3.5 m are rejected, and feature descriptors are then extracted for the remaining feature points (i.e., feature extraction of the feature points is performed). After the feature points in each video frame image are extracted, feature matching is carried out according to the features of the feature points to obtain feature point matching pairs between the video frame images. Because the unmanned aerial vehicle is flying continuously, the same point in real space appears at different positions in different video frame images; these positions are found by acquiring the features of the feature points in the preceding and following video frames and then matching according to those features.
In one embodiment, two adjacent video frame images are acquired, the features of a plurality of feature points are extracted from the previous and the following video frame image, and the features of the feature points are then matched to obtain the matched feature points in the two images, which form feature point matching pairs. For example, the feature points in the previous video frame image are P1, P2, P3, …, Pn, and the corresponding matched feature points in the following video frame image are Q1, Q2, Q3, …, Qn; P1 and Q1 form a feature point matching pair, P2 and Q2 form a feature point matching pair, and P3 and Q3 form a feature point matching pair. Feature matching can adopt a brute-force matching (Brute Force) algorithm or a fast approximate nearest neighbor (FLANN) algorithm; the fast approximate nearest neighbor algorithm checks the ratio of the nearest matching distance to the second-nearest matching distance against a set threshold to decide whether a match is successful, which reduces mismatched point pairs.
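As an illustration of this step, the sketch below uses OpenCV's ORB detector with the 8-level, scale-factor-1.2 pyramid mentioned above, rejects feature points whose depth is 0 or beyond 3.5 m, and applies a ratio test during matching. The grid-based uniform extraction is omitted, a brute-force Hamming matcher stands in for FLANN, and the depth image is assumed to be in meters, so this is a simplified sketch rather than the exact implementation.

```python
import cv2
import numpy as np

# 8-level pyramid with scale factor 1.2, as described above.
orb = cv2.ORB_create(nfeatures=1000, scaleFactor=1.2, nlevels=8)

def extract_features(color, depth, min_depth=0.0, max_depth=3.5):
    """Detect ORB feature points on the color image and reject those whose
    depth (assumed in meters) is 0 or outside the camera's reliable range."""
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return [], np.empty((0, 32), dtype=np.uint8)
    kept_kp, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        z = float(depth[int(kp.pt[1]), int(kp.pt[0])])
        if min_depth < z < max_depth:
            kept_kp.append(kp)
            kept_desc.append(desc)
    return kept_kp, np.array(kept_desc)

def match_features(desc1, desc2, ratio=0.7):
    """Ratio-test matching: keep a pair only when the best match is clearly
    better than the second best, which reduces mismatched point pairs."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(desc1, desc2, k=2)
    return [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
```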
Step 108, calculating a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images.
Taking the initial pose transformation matrix as the initial estimate, the target pose transformation matrix is calculated through the transformation relation between the feature point matching pairs. Using the initial pose transformation matrix as the initial estimate greatly reduces the computational complexity and improves the positioning speed and accuracy.
Step 110, determining the position of the unmanned aerial vehicle according to the target pose transformation matrix.
After the target pose transformation matrix between the video frame images is obtained through calculation, the position of the unmanned aerial vehicle can be obtained through calculation according to the initial position of the unmanned aerial vehicle and the target pose transformation matrix between the video frame images.
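For illustration, a small sketch of this accumulation is given below, assuming homogeneous 4x4 pose matrices and the take-off point as the world origin; the function name is chosen here for exposition only.

```python
import numpy as np

def update_position(T_world, T_rel):
    """Accumulate per-frame target pose transformation matrices to obtain the
    drone pose in the world frame (origin taken at the take-off point).

    T_world: current 4x4 drone/camera pose in the world frame.
    T_rel:   4x4 target pose transformation matrix from the previous video
             frame to the current one.
    Returns the updated world pose and the drone position (x, y, z)."""
    T_world = T_world @ T_rel
    return T_world, T_world[:3, 3]

# Usage sketch: start at the origin and chain the per-frame transforms.
T = np.eye(4)
for T_rel in []:  # replace [] with the computed target pose transformation matrices
    T, position = update_position(T, T_rel)
```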
Traditional unmanned aerial vehicle positioning either relies on tracking image features, which suffers from low matching efficiency and is prone to tracking loss, or relies on a single sensor such as an IMU (inertial measurement unit), whose long-term operation easily causes large accumulated errors. In the embodiment of the invention, the inertial measurement unit and the images acquired by the camera are fused, which improves the positioning efficiency and further improves the positioning accuracy.
In the unmanned aerial vehicle positioning method above, the initial pose transformation matrix between video frames is calculated from the measurement data measured by the inertial measurement unit, the feature points between the video frame images are matched to obtain feature point matching pairs, and the target pose transformation matrix is calculated jointly from the matched feature point matching pairs and the initial pose transformation matrix, so that the position of the unmanned aerial vehicle is determined according to the target pose transformation matrix. The initial pose transformation matrix obtained from the inertial measurement unit is optimized according to the feature point matching between the video frame images captured by the camera to obtain the target pose transformation matrix; that is, inertial measurement and vision are fused to obtain the target pose transformation matrix, thereby improving the positioning accuracy of the unmanned aerial vehicle.
In one embodiment, the calculating a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images includes: acquiring the three-dimensional coordinates of each feature point in the feature point matching pair; calculating a three-dimensional coordinate obtained by converting the three-dimensional coordinate of the feature point in one video frame image into another video frame image by taking the initial pose transformation matrix between the video frames as an initial value; acquiring a target three-dimensional coordinate corresponding to the corresponding matched feature point in the other video frame image; and calculating to obtain a target pose transformation matrix according to the converted three-dimensional coordinates and the target three-dimensional coordinates.
After the feature point matching pairs are determined, the three-dimensional coordinates of each feature point are obtained; the three-dimensional coordinates come from the color image and the depth image captured by the RGB-D camera, the color image giving the x and y values of the feature point and the depth image giving the corresponding z value. For two video frame images, the feature point matching pairs are treated as two point sets: the set of feature points in the first video frame image is $\{P \mid p_i \in \mathbb{R}^3,\ i = 1, 2, \dots, N\}$ and the set of feature points in the second video frame image is $\{Q \mid q_i \in \mathbb{R}^3,\ i = 1, 2, \dots, N\}$. Taking the error between the two point sets as a cost function, the corresponding rotation matrix R and translation vector t are solved by minimizing the cost function, which can be expressed as
$$\min_{R,\,t} E(R, t) = \frac{1}{2}\sum_{i=1}^{N}\left\| q_i - (R\,p_i + t) \right\|^2,$$
wherein, R and t are respectively a rotation matrix and a translation vector. The steps of the iterative closest point algorithm are as follows:
1) for each point $p_i$ in P, find its closest point in Q and denote it $q_i$;
2) solve for the transformation R and t that minimize the above cost function;
3) apply the rigid-body transformation to the point set P using R and t to obtain a new point set
$$P' = \{\, p_i' = R\,p_i + t \mid i = 1, 2, \dots, N \,\},$$
and calculate the error distance between the new point set and the point set Q:
$$E_d = \frac{1}{N}\sum_{i=1}^{N}\left\| q_i - p_i' \right\|^2.$$
In actual operation, the rotation matrix and translation vector with their constraints can be represented by the unconstrained Lie algebra, and the number of feature points whose error distance is smaller than a set threshold, i.e., the number of inner points, is recorded. If the error distance E_d calculated in step 3) is smaller than the threshold and the number of inner points is larger than the set threshold, or if the number of iterations reaches the set limit, the iteration ends; otherwise, return to step 1) for the next iteration. In this method, the initial pose matrix obtained by calculation is used as the initial value of the iteration, which improves the iteration speed and the calculation speed and gives strong robustness.
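A minimal NumPy sketch of one such iteration is given below for illustration: it performs the nearest-neighbour association of step 1) and a closed-form SVD (Kabsch-style) update in place of the Lie-algebra optimization mentioned above, and its brute-force nearest-neighbour search is only suitable for small point sets, so it is a sketch of the idea rather than the exact implementation.

```python
import numpy as np

def icp_step(P, Q, R, t):
    """One iteration of iterative closest point.

    P, Q: (N, 3) arrays of 3D feature points in the two frames.
    R, t: current rotation (3x3) and translation (3,) estimate; on the first
          iteration these come from the IMU-derived initial pose transformation matrix.
    Returns the updated R, t and the mean squared error distance E_d.
    """
    P_trans = P @ R.T + t                               # transform P with the current estimate
    # Step 1): associate each transformed point with its nearest neighbour in Q.
    dists = np.linalg.norm(P_trans[:, None, :] - Q[None, :, :], axis=2)
    Q_match = Q[dists.argmin(axis=1)]
    # Step 2): closed-form update of R, t by SVD alignment of the centred point sets.
    p_mean, q_mean = P.mean(axis=0), Q_match.mean(axis=0)
    H = (P - p_mean).T @ (Q_match - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R_new = Vt.T @ U.T
    if np.linalg.det(R_new) < 0:                        # avoid reflections
        Vt[2, :] *= -1
        R_new = Vt.T @ U.T
    t_new = q_mean - R_new @ p_mean
    # Step 3): error distance of the newly transformed point set.
    Ed = np.mean(np.linalg.norm(Q_match - (P @ R_new.T + t_new), axis=1) ** 2)
    return R_new, t_new, Ed
```

On the first call, R and t would be taken from the IMU-derived initial pose transformation matrix, which is how the initial estimate speeds up the iteration.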
As shown in fig. 2, in an embodiment, the method for positioning a drone further includes:
and step 112, performing loop detection on the current video frame by adopting a loop detection algorithm.
Step 114, optimizing and updating the corresponding target pose transformation matrix according to the result of loop detection to obtain an updated target pose transformation matrix.
Pose estimation is usually a recursive process: the pose of the current video frame is solved from the pose of the previous video frame, so errors are transmitted frame by frame and accumulate. In fact, the pose of a later video frame can also be derived directly from a much earlier video frame; such a connection is called a "loop". For example, the pose error of the fifth frame accumulates the errors of the previous four video frames, but the pose of the fifth frame does not have to be derived from the fourth frame; it can also be derived from the second frame, and the accumulated error of such a calculation is obviously much smaller. However, the pose of a later video frame cannot be calculated from just any earlier frame; the two frames must satisfy a certain relation, generally that their poses are very close. Therefore a video frame meeting this requirement must be found, and pose optimization is then carried out to reduce the error; this process is called loop detection. The closed-loop detection algorithm looks for a historical video frame whose position and attitude are basically identical to those of the current video frame, i.e., the camera starts from a point A and returns near point A after a period of time, forming a closed loop. Loop detection through the closed-loop detection algorithm is helpful for correcting errors. After loop detection, the target pose transformation matrix is updated and optimized according to the loop detection result to obtain a more accurate pose transformation matrix, which is called the "updated target pose transformation matrix" for distinction.
The closed-loop detection method is based on image matching and includes a short-distance matching mode and a long-distance matching mode. Specifically, a minimum spanning tree with limited depth is constructed by taking the key frames that are temporally continuous with the current video frame as root nodes, the n key frames closest in time are removed to avoid repetition, and k earlier key frames are randomly extracted from the minimum spanning tree to be matched with the current video frame; this is called short-distance loop detection. When a key frame cannot be matched with the previous key frames, a specified key frame group is added, and one key frame is randomly extracted from the key frame group to be matched with the current frame, which greatly reduces the number of sampled frames; this is called long-distance loop detection.
In one embodiment, the performing loop detection on the current video frame by using a closed-loop detection algorithm includes: calculating the motion amount between the current video frame and the previous key frame, and if the motion amount is greater than a preset threshold value, taking the current video frame as the key frame; and when the current video frame is a key frame, matching the current video frame with the key frame in the previous key frame library, and if the key frame matched with the current video frame exists in the key frame library, taking the current video frame as a loop frame.
Extracting key frames can reduce the complexity of the subsequent optimization. The captured video frames are relatively dense, for example 30 frames per second, so the similarity between frames is very high, sometimes to the point of being identical, and computing every frame increases the computational complexity; extracting key frames reduces it. Specifically, the first video frame is taken as a key frame, and the motion amount between the current video frame and the previous key frame is then calculated; if the motion amount is within a certain threshold range, the current video frame is selected as a key frame. The motion amount is calculated as follows:
$$E_m = \omega_1\sqrt{t_x^2 + t_y^2 + t_z^2} + \omega_2\sqrt{\phi^2 + \theta^2 + \psi^2},$$
where $E_m$ represents the measure of the motion amount, $t_x, t_y, t_z$ are the three translation components of the translation vector t, $\phi, \theta, \psi$ denote the Euler angles of the inter-frame rotation, which can be converted from the rotation matrix, and $\omega_1, \omega_2$ are the balance weights of the translational and rotational motion amounts respectively. Because rotation tends to bring larger scene changes to the camera's field of view than translation, the value of $\omega_2$ is set larger than $\omega_1$, and the specific values are adjusted according to the specific conditions.
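As an illustration, the sketch below computes such a motion measure as a weighted sum of the translation magnitude and the rotation-angle magnitude; the exact weighting and threshold used in the patent are not reproduced, so the weights and threshold here are placeholders.

```python
import numpy as np

def motion_amount(R, t, w1=1.0, w2=2.0):
    """Motion measure between the current frame and the last key frame.

    R, t: relative rotation matrix (3x3) and translation vector (3,).
    w1, w2: balance weights; w2 > w1 because rotation changes the view more.
    """
    trans = np.linalg.norm(t)                          # translation magnitude
    # Euler angles (roll, pitch, yaw) recovered from the rotation matrix (ZYX convention).
    phi = np.arctan2(R[2, 1], R[2, 2])
    theta = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    psi = np.arctan2(R[1, 0], R[0, 0])
    rot = np.sqrt(phi ** 2 + theta ** 2 + psi ** 2)
    return w1 * trans + w2 * rot

def is_key_frame(R, t, threshold=0.3):
    """The current frame becomes a key frame when the motion amount is large enough."""
    return motion_amount(R, t) > threshold
```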
In an embodiment, the optimizing and updating the corresponding target pose transformation matrix according to the result of the loopback detection to obtain an updated target pose transformation matrix includes: determining the corresponding pose of each video frame image according to the target transformation matrix; according to the pose accumulation calculation corresponding to each video frame image, obtaining an estimated pose transformation matrix corresponding to the loop frame relative to the previously matched key frame; acquiring a corresponding observation pose transformation matrix according to the loop frame; and optimizing a target pose transformation matrix between video frame images according to the estimation pose transformation matrix and the observation pose transformation matrix to obtain an updated target pose transformation matrix.
After the target transformation matrices between video frames are known, the pose corresponding to each video frame image can be determined. The estimated pose transformation matrix of the loop frame relative to the previously matched key frame is obtained by accumulating the poses corresponding to the video frame images. A real observation pose transformation matrix can then be obtained from the loop frame, and by comparing the estimated pose transformation matrix with the observation pose transformation matrix, the target pose transformation matrices between the video frame images can be optimized to finally obtain the updated target pose transformation matrices.
In one embodiment, the poses of the RGB-D camera are taken as the vertices of the graph, denoted $\xi_1, \dots, \xi_n$, and the relative motion estimate between two pose nodes is taken as an edge of the graph; the relative motion estimate refers to the calculated target pose transformation matrix between the two pose nodes. The calculated target pose transformation matrix generally refers to the pose transformation matrix between two adjacent frames, so the pose transformation matrix between two video frames separated by a certain distance is obtained by accumulation. In one embodiment, the poses corresponding to the ith frame and the jth frame are $\xi_i$ and $\xi_j$ respectively, and the corresponding relative pose transformation can be obtained as
$$\Delta\xi_{i,j} = \xi_i^{-1} \circ \xi_j,$$
which expresses the relative motion between the two pose nodes, i.e.
$$\Delta\xi_{i,j} = \ln\!\left(\exp\!\left((-\xi_i)^{\wedge}\right)\exp\!\left(\xi_j^{\wedge}\right)\right)^{\vee}.$$
In the most ideal case the true observation and the pose estimate should be consistent, i.e.
$$T_{i,j}^{-1}\,T_i^{-1}\,T_j = I.$$
However, errors often exist in the actual operation process, so an error function $e_{i,j}$ is constructed to describe the error between the true observation and the estimate:
$$e_{i,j} = \ln\!\left(T_{i,j}^{-1}\,T_i^{-1}\,T_j\right)^{\vee}.$$
In the whole pose estimation process, the motion estimate obtained from each image-frame matching produces errors, so the error-minimization problem can be converted into a least-squares optimization function F(x):
$$F(x) = \sum_{\langle i,j \rangle} e_{i,j}^{T}\,\Omega_{i,j}\,e_{i,j},$$
where i and j are key frame indices, and $\Omega_{i,j}$ is the inter-frame information matrix, a diagonal matrix whose element values represent the importance attached to the corresponding components of the error $e_{i,j}$.
The optimized set of camera poses is obtained by error minimization, and this least-squares problem can be solved with the Levenberg-Marquardt algorithm in the General Graph Optimization library (g2o):
$$x^{*} = \arg\min_{x} F(x).$$
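For illustration, the sketch below evaluates the residual e_{i,j} of a single loop constraint and the total cost F(x). It approximates the se(3) logarithm by stacking the SO(3) rotation vector and the translation, and it only evaluates the cost rather than minimizing it with Levenberg-Marquardt as g2o does, so it is a sketch of the objective, not of the solver.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_error(T_ij_obs, T_i, T_j):
    """Residual e_ij between the observed loop constraint and the estimated relative pose.

    T_ij_obs: observed relative transform from loop closure (4x4 homogeneous matrix).
    T_i, T_j: current absolute pose estimates of key frames i and j (4x4).
    The 6-vector (rotation vector, translation) approximates the se(3) log.
    """
    E = np.linalg.inv(T_ij_obs) @ np.linalg.inv(T_i) @ T_j   # identity in the ideal case
    rot_err = Rotation.from_matrix(E[:3, :3]).as_rotvec()
    trans_err = E[:3, 3]
    return np.concatenate([rot_err, trans_err])

def total_cost(constraints, poses, info=np.eye(6)):
    """F(x) = sum over edges of e_ij^T * Omega_ij * e_ij, the quantity the solver minimizes."""
    F = 0.0
    for (i, j), T_ij_obs in constraints.items():
        e = pose_error(T_ij_obs, poses[i], poses[j])
        F += e @ info @ e
    return F
```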
in one embodiment, when the camera is lost due to tracking caused by image blurring or feature deficiency or camera occlusion caused by too high moving speed, the pose estimation can be performed through the inertial measurement unit, and the pose optimization is performed after loop detection is successful.
In one embodiment, the above-mentioned drone positioning method further includes: determining a three-dimensional coordinate corresponding to each video frame image according to the updated target pose transformation matrix; transforming the three-dimensional coordinates to a world coordinate according to the three-dimensional coordinates and a corresponding target transformation matrix to obtain a three-dimensional point cloud map; and converting the three-dimensional point cloud map into a three-dimensional grid map by adopting an octree.
After an updated target pose transformation matrix among the video frames is obtained, a three-dimensional coordinate corresponding to each video frame image can be determined, the three-dimensional coordinate is in a camera coordinate system, the three-dimensional coordinate is converted into a world coordinate system according to the three-dimensional coordinate, the target transformation matrix and a conversion relation between the camera and the world coordinate system, the world coordinate system is an absolute coordinate system of the system, and a starting point of the unmanned aerial vehicle is generally used as an origin point of the world coordinate system. And converting the three-dimensional coordinates into a world coordinate system to obtain a three-dimensional point cloud map, and then converting the three-dimensional point cloud map into a three-dimensional grid map by adopting an octree.
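A short sketch of this camera-to-world transformation of the point cloud is given below, assuming homogeneous 4x4 pose matrices; the function name is illustrative only.

```python
import numpy as np

def points_to_world(points_cam, T_wc):
    """Transform point-cloud coordinates from the camera frame to the world frame.

    points_cam: (N, 3) 3D points in the camera coordinate system
                (x, y from the color image, z from the depth image).
    T_wc: 4x4 homogeneous camera-to-world transform accumulated from the
          updated target pose transformation matrices (world origin = take-off point).
    """
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (homo @ T_wc.T)[:, :3]
```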
In one embodiment, the onboard computing system of the unmanned aerial vehicle filters the optimized three-dimensional coordinates and then converts the three-dimensional point cloud into the world coordinate system by combining it with the optimized, updated target transformation matrices, obtaining the three-dimensional map information; the global map is then constructed as an octree grid map. As shown in fig. 3, the data structure of the three-dimensional space in the octree grid map is formed by dividing the space into eight blocks (voxels) according to the eight octants of the spatial coordinate system, and then dividing each block in the same way until the set resolution is reached; fig. 3A is a schematic diagram of this division of the octree three-dimensional space. Fig. 3B is a schematic diagram of the octree data structure: from the data-structure point of view, the root node is expanded into eight child nodes and each child node is in turn expanded into eight child nodes, until the leaf nodes at the lowest layer represent cubes of the minimum resolution. Each octree node stores whether it is occupied; when all the child nodes of a node are occupied, or none of them are, the node does not need to be expanded. In an actual three-dimensional space, many objects are connected together and the space has a certain connectivity, so most octree nodes do not need to be expanded down to the leaf level, which saves a large amount of storage space. In addition, converting to the three-dimensional grid map with an octree allows the resolution to be adjusted, enabling flexible space modeling.
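The following is a minimal, illustrative octree sketch in Python, not the probabilistic OctoMap-style implementation typically used in practice: it records only binary occupancy and creates child nodes lazily along the path of each inserted point, down to the target resolution.

```python
class OctreeNode:
    """Minimal occupancy octree node: the cube is split into eight child cubes,
    created lazily only along the path of inserted points, until the edge
    length reaches the target resolution."""

    def __init__(self, center, size):
        self.center = center          # (x, y, z) centre of this cube
        self.size = size              # edge length of this cube
        self.children = {}            # octant index -> child node, created lazily
        self.occupied = False

    def insert(self, point, resolution):
        """Mark the leaf cube containing `point` (at the given resolution) as occupied."""
        self.occupied = True
        if self.size <= resolution:   # reached the minimum-resolution leaf
            return
        octant = tuple(int(point[k] >= self.center[k]) for k in range(3))
        if octant not in self.children:
            offset = self.size / 4.0
            child_center = tuple(
                self.center[k] + (offset if octant[k] else -offset)
                for k in range(3)
            )
            self.children[octant] = OctreeNode(child_center, self.size / 2.0)
        self.children[octant].insert(point, resolution)


# Usage sketch: insert a world-frame point into a 10 m map cube at 5 cm resolution.
root = OctreeNode(center=(0.0, 0.0, 0.0), size=10.0)
root.insert((1.2, -0.7, 0.4), resolution=0.05)
```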
In one embodiment, after acquiring the video frame images captured by the camera, extracting the feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs between the video frame images, the method further includes: when the acquired video frame image is blurred, determining the target pose transformation matrix directly according to the initial pose transformation matrix.
When tracking is lost because of image blurring or occlusion, the initial pose transformation matrix can be obtained by calculating directly from the measurement data measured by the inertial measurement unit, and the target pose transformation matrix is then obtained from the initial pose transformation matrix. The inertial measurement unit is accurate over short periods but prone to accumulated errors, so it can be used to compensate, over a short time, for tracking loss caused by blurred or occluded video frame images. In one embodiment, optimization is subsequently performed through loop detection to obtain a more accurate updated target pose transformation matrix, and the two partial maps before and after the tracking loss are unified into the world coordinate system, which ensures the integrity of the map construction.
In one embodiment, if the motion amount during the loss of vision exceeds a certain threshold, optimization compensation is applied to the pose at restart; otherwise, global pose optimization is performed, and the two partial maps before and after the visual restart are then unified into the world coordinate system. For tracking loss caused by image blurring or occlusion during actual motion, the inertial measurement unit is used for compensation, which improves the robustness and integrity of the map construction; building the point cloud map into an octree grid map greatly saves storage space and provides an effective map for subsequent path planning.
As shown in fig. 4, in one embodiment, the process of the drone positioning method consists of four parts: a visual-inertial odometer, back-end optimization, loop detection, and map construction. The visual-inertial odometer part processes the data of the inertial measurement unit and the image information separately and fuses them to obtain the optimized three-dimensional position and attitude (pose for short) of the camera; the back-end optimization part then obtains the optimal pose estimate (the updated target transformation matrix) through loop detection and optimization; and, considering that the three-dimensional point cloud data are huge and cannot provide effective map information for subsequent path planning, the map construction part builds the three-dimensional point cloud map into an octree grid map. Loop detection is used to detect whether the unmanned aerial vehicle has returned to the origin or is near a visited place, and the loop information is passed to the back-end optimization part for optimization.
Fig. 5 is a schematic flow chart of the positioning method of a drone in one embodiment. On one hand, the RGB-D camera acquires color images and depth images, features are extracted and screened from the acquired images, and the feature points between video frame images are then matched. On the other hand, the inertial measurement unit obtains measurement data, and the initial pose matrix between video frames is calculated from the measurement data. When the feature points are matched successfully, the target pose matrix is calculated by combining the initial pose matrix. Key frames are then extracted, loop detection is performed on the key frames, pose optimization is carried out by combining the loop detection result, and the octree grid map is then constructed from the three-dimensional point coordinates corresponding to the video frame images. When tracking is lost because the image is blurred or occluded, the corresponding image feature point matching fails; the previously built map is kept, and the inertial measurement unit is used for compensation. The visual odometer is then restarted; after a successful restart, the processes of acquisition and fusion by the RGB-D camera and the inertial measurement unit resume, and if the restart of the visual odometer is unsuccessful, restarting continues.
As shown in fig. 6, a positioning device for a drone is proposed, the device comprising:
an obtaining module 602, configured to obtain measurement data obtained by measurement by an inertial measurement unit;
an initial calculation module 604, configured to calculate an initial pose transformation matrix between video frames according to the measurement data;
the matching module 606 is configured to acquire video frame images captured by a camera, extract feature points in each video frame image, and perform feature matching on the feature points to obtain feature point matching pairs between the video frame images;
a target calculation module 608, configured to calculate a target pose transformation matrix between video frames according to the initial pose transformation matrix and feature point matching pairs between the video frame images;
a determining module 610, configured to determine a position of the drone according to the target pose transformation matrix.
In one embodiment, the target calculation module 608 is further configured to obtain three-dimensional coordinates of each feature point in the feature point matching pair; calculating a three-dimensional coordinate obtained by converting the three-dimensional coordinate of the feature point in one video frame image into another video frame image by taking the initial pose transformation matrix between the video frames as an initial value; acquiring a target three-dimensional coordinate corresponding to the corresponding matched feature point in the other video frame image; and calculating to obtain a target pose transformation matrix according to the converted three-dimensional coordinates and the target three-dimensional coordinates.
As shown in fig. 7, in an embodiment, the positioning apparatus for a drone further includes:
a loop detection module 612, configured to perform loop detection on the current video frame by using a closed-loop detection algorithm;
and an optimization updating module 614, configured to perform optimization updating on the corresponding target pose transformation matrix according to the result of the loopback detection, so as to obtain an updated target pose transformation matrix.
In one embodiment, the loop detection module is further configured to calculate an amount of motion between the current video frame and a previous key frame, and if the amount of motion is greater than a preset threshold, take the current video frame as the key frame; and when the current video frame is a key frame, matching the current video frame with the key frame in the previous key frame library, and if the key frame matched with the current video frame exists in the key frame library, taking the current video frame as a loop frame.
In one embodiment, the optimization updating module is further configured to determine a pose corresponding to each video frame image according to the target transformation matrix; according to the pose accumulation calculation corresponding to each video frame image, obtaining an estimated pose transformation matrix corresponding to the loop frame relative to the previously matched key frame; acquiring a corresponding observation pose transformation matrix according to the loop frame; and optimizing a target pose transformation matrix between video frame images according to the estimation pose transformation matrix and the observation pose transformation matrix to obtain an updated target pose transformation matrix.
As shown in fig. 8, in an embodiment, the positioning apparatus for a drone further includes:
a coordinate determining module 616, configured to determine a three-dimensional coordinate corresponding to each video frame image according to the updated object pose transformation matrix;
the transformation module 618 is used for transforming the three-dimensional coordinates to world coordinates according to the three-dimensional coordinates and the corresponding target transformation matrix to obtain a three-dimensional point cloud map;
a converting module 620, configured to convert the three-dimensional point cloud map into a three-dimensional grid map by using an octree.
In one embodiment, the above unmanned aerial vehicle positioning apparatus further includes: a judging module, configured to judge whether the acquired video frame image is blurred or occluded, and, when the acquired video frame image is blurred or occluded, determine the target pose transformation matrix directly according to the initial pose transformation matrix.
FIG. 9 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may be a drone, or a terminal or server connected to a drone. As shown in fig. 9, the computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program, which when executed by the processor, causes the processor to implement the drone positioning method. The internal memory may also have a computer program stored therein, which when executed by the processor, causes the processor to perform the drone positioning method. The network interface is used for communicating with an external device. Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the positioning method for the drone provided by the present application may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 9. The memory of the computer device may store various program templates that make up the drone positioning device. Such as an acquisition module 602, an initial calculation module 604, a matching module 606, a target calculation module 608, and a determination module 610.
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of: acquiring measurement data measured by an inertia measurement unit; calculating to obtain an initial pose transformation matrix between video frames according to the measurement data; acquiring video frame images shot by a camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs among the video frame images; calculating to obtain a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images; and determining the position of the unmanned aerial vehicle according to the target pose transformation matrix.
In one embodiment, the calculating a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images includes: acquiring the three-dimensional coordinates of each feature point in the feature point matching pair; calculating a three-dimensional coordinate obtained by converting the three-dimensional coordinate of the feature point in one video frame image into another video frame image by taking the initial pose transformation matrix between the video frames as an initial value; acquiring a target three-dimensional coordinate corresponding to the corresponding matched feature point in the other video frame image; and calculating to obtain a target pose transformation matrix according to the converted three-dimensional coordinates and the target three-dimensional coordinates.
In one embodiment, the computer program, when executed by the processor, is further configured to perform the steps of: performing loop detection on the current video frame by adopting a closed loop detection algorithm; and optimizing and updating the corresponding target pose transformation matrix according to the loop detection result to obtain an updated target pose transformation matrix.
In one embodiment, the performing loop detection on the current video frame by using a closed-loop detection algorithm includes: calculating the motion amount between the current video frame and the previous key frame, and if the motion amount is greater than a preset threshold value, taking the current video frame as the key frame; and when the current video frame is a key frame, matching the current video frame with the key frame in the previous key frame library, and if the key frame matched with the current video frame exists in the key frame library, taking the current video frame as a loop frame.
In an embodiment, the optimizing and updating the corresponding target pose transformation matrix according to the result of the loopback detection to obtain an updated target pose transformation matrix includes: determining the corresponding pose of each video frame image according to the target transformation matrix; according to the pose accumulation calculation corresponding to each video frame image, obtaining an estimated pose transformation matrix corresponding to the loop frame relative to the previously matched key frame; acquiring a corresponding observation pose transformation matrix according to the loop frame; and optimizing a target pose transformation matrix between video frame images according to the estimation pose transformation matrix and the observation pose transformation matrix to obtain an updated target pose transformation matrix.
In one embodiment, the computer program, when executed by the processor, is further configured to perform the steps of: determining a three-dimensional coordinate corresponding to each video frame image according to the updated target pose transformation matrix; transforming the three-dimensional coordinates to a world coordinate according to the three-dimensional coordinates and a corresponding target transformation matrix to obtain a three-dimensional point cloud map; and converting the three-dimensional point cloud map into a three-dimensional grid map by adopting an octree.
In one embodiment, after the acquiring the video frame images captured by the camera, extracting the feature points in each video frame image, and performing feature matching on the feature points to obtain a feature point matching pair between the video frame images, when the computer program is executed by the processor, the computer program is further configured to perform the following steps: when the acquired video frame image is blurred or occluded, directly determining the target pose transformation matrix according to the initial pose transformation matrix.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring measurement data measured by an inertia measurement unit; calculating to obtain an initial pose transformation matrix between video frames according to the measurement data; acquiring video frame images shot by a camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs among the video frame images; calculating to obtain a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images; and determining the position of the unmanned aerial vehicle according to the target pose transformation matrix.
In one embodiment, the calculating a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images includes: acquiring the three-dimensional coordinates of each feature point in the feature point matching pair; calculating a three-dimensional coordinate obtained by converting the three-dimensional coordinate of the feature point in one video frame image into another video frame image by taking the initial pose transformation matrix between the video frames as an initial value; acquiring a target three-dimensional coordinate corresponding to the corresponding matched feature point in the other video frame image; and calculating to obtain a target pose transformation matrix according to the converted three-dimensional coordinates and the target three-dimensional coordinates.
In one embodiment, the computer program, when executed by the processor, is further configured to perform the steps of: performing loop detection on the current video frame by adopting a closed loop detection algorithm; and optimizing and updating the corresponding target pose transformation matrix according to the loop detection result to obtain an updated target pose transformation matrix.
In one embodiment, the performing loop detection on the current video frame by using a closed-loop detection algorithm includes: calculating the motion amount between the current video frame and the previous key frame, and if the motion amount is greater than a preset threshold value, taking the current video frame as the key frame; and when the current video frame is a key frame, matching the current video frame with the key frame in the previous key frame library, and if the key frame matched with the current video frame exists in the key frame library, taking the current video frame as a loop frame.
In one embodiment, optimizing and updating the corresponding target pose transformation matrix according to the loop detection result to obtain an updated target pose transformation matrix includes: determining the pose corresponding to each video frame image according to the target pose transformation matrix; accumulating the poses corresponding to the video frame images to obtain an estimated pose transformation matrix of the loop frame relative to the previously matched key frame; acquiring a corresponding observed pose transformation matrix from the loop frame; and optimizing the target pose transformation matrices between video frame images according to the estimated pose transformation matrix and the observed pose transformation matrix to obtain an updated target pose transformation matrix.
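A minimal sketch of using the discrepancy between the estimated and observed loop transforms to update the accumulated poses. A production system would run a full pose-graph optimization (for example with g2o or Ceres); linearly distributing the residual over the frames between the matched key frame and the loop frame, as below, is a deliberate simplification, and the function and variable names are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def correct_poses(poses, match_idx, loop_idx, T_observed):
    """poses: list of 4x4 world poses from the accumulated target transforms.
    T_observed: loop-frame transform measured directly against the matched key frame."""
    T_estimated = np.linalg.inv(poses[match_idx]) @ poses[loop_idx]   # from accumulation
    T_err = T_observed @ np.linalg.inv(T_estimated)                   # residual drift
    n = loop_idx - match_idx
    slerp = Slerp([0.0, 1.0], Rotation.from_matrix([np.eye(3), T_err[:3, :3]]))
    corrected = list(poses)
    for k in range(1, n + 1):
        alpha = k / n
        T_frac = np.eye(4)
        T_frac[:3, :3] = slerp(alpha).as_matrix()    # interpolated share of the rotation error
        T_frac[:3, 3] = alpha * T_err[:3, 3]         # interpolated share of the translation error
        corrected[match_idx + k] = poses[match_idx + k] @ T_frac
    return corrected
```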
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: determining the three-dimensional coordinates corresponding to each video frame image according to the updated target pose transformation matrix; transforming the three-dimensional coordinates into the world coordinate system according to the three-dimensional coordinates and the corresponding target pose transformation matrix to obtain a three-dimensional point cloud map; and converting the three-dimensional point cloud map into a three-dimensional grid map by means of an octree.
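A minimal sketch of this mapping step: camera-frame points are moved into the world frame with the corresponding target pose transformation matrix and then discretized into occupied cells. A real system would typically use an octree library such as OctoMap; the flat voxel set below is a simplified stand-in, and the resolution value is an assumption.

```python
import numpy as np

def points_to_world(points_cam, T_target):
    """Transform Nx3 camera-frame points into the world frame with a 4x4 pose."""
    ph = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_target @ ph.T).T[:, :3]

def build_occupancy_grid(world_points, resolution=0.1):
    """Discretize world-frame points into occupied voxel cells (resolution in metres)."""
    occupied = set()
    for p in world_points:
        occupied.add(tuple(np.floor(p / resolution).astype(int)))
    return occupied   # set of (ix, iy, iz) occupied cells
```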
In one embodiment, after acquiring the video frame images captured by the camera, extracting the feature points in each video frame image, and performing feature matching on the feature points to obtain the feature point matching pairs between the video frame images, the computer program, when executed by the processor, further causes the processor to perform the following step: when an acquired video frame image is blurred or occluded, directly determining the target pose transformation matrix from the initial pose transformation matrix.
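A minimal sketch of this fallback, assuming blur is detected from the Laplacian variance and occlusion from a shortage of detectable features; the thresholds and helper names are illustrative, not taken from the patent.

```python
import cv2

def frame_is_unusable(gray, blur_threshold=100.0, min_features=30):
    """Heuristic check for a blurred or largely occluded grayscale frame."""
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_threshold:
        return True                                    # too blurred
    keypoints = cv2.ORB_create().detect(gray, None)
    return len(keypoints) < min_features               # likely occluded or textureless

def select_pose_transform(gray, T_initial, compute_visual_pose):
    """Fall back to the IMU-derived initial transform when the frame is unusable."""
    if frame_is_unusable(gray):
        return T_initial
    return compute_visual_pose(gray, T_initial)
```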
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (8)

1. An unmanned aerial vehicle positioning method, the method comprising:
acquiring measurement data measured by an inertial measurement unit;
calculating an initial pose transformation matrix between video frames according to the measurement data;
acquiring video frame images captured by a camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs between the video frame images;
calculating a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images;
determining the position of the unmanned aerial vehicle according to the target pose transformation matrix;
the method further comprises the following steps:
performing loop detection on the current video frame by adopting a closed loop detection algorithm;
optimizing and updating the corresponding target pose transformation matrix according to the loop detection result to obtain an updated target pose transformation matrix, wherein the result of the loop detection is a determined loop frame;
wherein, the optimizing and updating the corresponding target pose transformation matrix according to the result of the loop detection to obtain an updated target pose transformation matrix comprises:
determining the pose corresponding to each video frame image according to the target pose transformation matrix;
accumulating the poses corresponding to the video frame images to obtain an estimated pose transformation matrix of the loop frame relative to the previously matched key frame;
acquiring a corresponding observed pose transformation matrix from the loop frame;
and optimizing the target pose transformation matrices between video frame images according to the estimated pose transformation matrix and the observed pose transformation matrix to obtain an updated target pose transformation matrix.
2. The method according to claim 1, wherein calculating the target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images comprises:
acquiring the three-dimensional coordinates of each feature point in the feature point matching pairs;
using the initial pose transformation matrix between the video frames as an initial value, calculating the three-dimensional coordinates obtained by transforming the three-dimensional coordinates of the feature points in one video frame image into the other video frame image;
acquiring the target three-dimensional coordinates of the correspondingly matched feature points in the other video frame image;
and calculating the target pose transformation matrix according to the transformed three-dimensional coordinates and the target three-dimensional coordinates.
3. The method of claim 1, wherein the performing loop detection on the current video frame by using a closed-loop detection algorithm comprises:
calculating the amount of motion between the current video frame and the previous key frame, and, if the amount of motion is greater than a preset threshold, taking the current video frame as a key frame;
and, when the current video frame is a key frame, matching the current video frame against the key frames in the existing key frame library, and, if a key frame matching the current video frame exists in the key frame library, taking the current video frame as a loop frame.
4. The method of claim 1, further comprising:
determining a three-dimensional coordinate corresponding to each video frame image according to the updated target pose transformation matrix;
transforming the three-dimensional coordinates into the world coordinate system according to the three-dimensional coordinates and the corresponding target pose transformation matrix to obtain a three-dimensional point cloud map;
and converting the three-dimensional point cloud map into a three-dimensional grid map by means of an octree.
5. The method according to claim 1, wherein after acquiring the video frame images captured by the camera, extracting the feature points in each video frame image, and obtaining the feature point matching pairs between the video frame images by performing feature matching on the feature points, the method further comprises:
and, when an acquired video frame image is blurred or occluded, directly determining the target pose transformation matrix from the initial pose transformation matrix.
6. An unmanned aerial vehicle positioning device, characterized in that the device comprises:
the acquisition module is used for acquiring measurement data measured by the inertial measurement unit;
the initial calculation module is used for calculating an initial pose transformation matrix between video frames according to the measurement data;
the matching module is used for acquiring video frame images captured by the camera, extracting feature points in each video frame image, and performing feature matching on the feature points to obtain feature point matching pairs between the video frame images;
the target calculation module is used for calculating a target pose transformation matrix between video frames according to the initial pose transformation matrix and the feature point matching pairs between the video frame images;
the determining module is used for determining the position of the unmanned aerial vehicle according to the target pose transformation matrix;
the device is also used for performing loop detection on the current video frame by using a closed-loop detection algorithm, wherein the result of the loop detection is a determined loop frame; determining the pose corresponding to each video frame image according to the target pose transformation matrix; accumulating the poses corresponding to the video frame images to obtain an estimated pose transformation matrix of the loop frame relative to the previously matched key frame; acquiring a corresponding observed pose transformation matrix from the loop frame; and optimizing the target pose transformation matrices between video frame images according to the estimated pose transformation matrix and the observed pose transformation matrix to obtain an updated target pose transformation matrix.
7. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 5.
8. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 5.
CN201910099571.8A 2019-01-31 2019-01-31 Unmanned aerial vehicle positioning method and device, computer equipment and storage medium Active CN109974693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910099571.8A CN109974693B (en) 2019-01-31 2019-01-31 Unmanned aerial vehicle positioning method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910099571.8A CN109974693B (en) 2019-01-31 2019-01-31 Unmanned aerial vehicle positioning method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109974693A CN109974693A (en) 2019-07-05
CN109974693B (en) 2020-12-11

Family

ID=67076819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910099571.8A Active CN109974693B (en) 2019-01-31 2019-01-31 Unmanned aerial vehicle positioning method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109974693B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414458B (en) * 2019-08-01 2022-03-08 北京主线科技有限公司 Positioning method and device based on matching of plane label and template
CN110490131B (en) * 2019-08-16 2021-08-24 北京达佳互联信息技术有限公司 Positioning method and device of shooting equipment, electronic equipment and storage medium
CN110648363A (en) * 2019-09-16 2020-01-03 腾讯科技(深圳)有限公司 Camera posture determining method and device, storage medium and electronic equipment
WO2021056438A1 (en) * 2019-09-27 2021-04-01 深圳市大疆创新科技有限公司 Point cloud data processing method, device employing same, lidar, and movable platform
CN110728245A (en) * 2019-10-17 2020-01-24 珠海格力电器股份有限公司 Optimization method and device for VSLAM front-end processing, electronic equipment and storage medium
CN110887461B (en) * 2019-11-19 2021-04-06 西北工业大学 Unmanned aerial vehicle real-time computer vision processing method based on GPS attitude estimation
CN110909691B (en) * 2019-11-26 2023-05-05 腾讯科技(深圳)有限公司 Motion detection method, motion detection device, computer-readable storage medium, and computer device
CN111240321B (en) * 2020-01-08 2023-05-12 广州小鹏汽车科技有限公司 SLAM map-based high-frequency positioning method and vehicle control system
CN111260726A (en) * 2020-02-07 2020-06-09 北京三快在线科技有限公司 Visual positioning method and device
CN111462321B (en) * 2020-03-27 2023-08-29 广州小鹏汽车科技有限公司 Point cloud map processing method, processing device, electronic device and vehicle
CN112699718B (en) * 2020-04-15 2024-05-28 南京工程学院 Scale and illumination self-adaptive structured multi-target tracking method and application thereof
CN111583338B (en) * 2020-04-26 2023-04-07 北京三快在线科技有限公司 Positioning method and device for unmanned equipment, medium and unmanned equipment
CN111797906B (en) * 2020-06-15 2024-03-01 北京三快在线科技有限公司 Method and device for positioning based on vision and inertial mileage
CN111750853B (en) * 2020-06-24 2022-06-07 国汽(北京)智能网联汽车研究院有限公司 Map establishing method, device and storage medium
CN111811501B (en) * 2020-06-28 2022-03-08 鹏城实验室 Trunk feature-based unmanned aerial vehicle positioning method, unmanned aerial vehicle and storage medium
CN112014857B (en) * 2020-08-31 2023-04-07 上海宇航系统工程研究所 Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN112085794B (en) * 2020-09-11 2022-05-17 中德(珠海)人工智能研究院有限公司 Space positioning method and three-dimensional reconstruction method applying same
CN112161567B (en) * 2020-09-28 2022-05-03 北京天玛智控科技股份有限公司 Positioning method and system for fully mechanized coal mining face
CN112270357A (en) * 2020-10-29 2021-01-26 德鲁动力科技(海南)有限公司 VIO vision system and method
CN114518767A (en) * 2020-11-19 2022-05-20 复旦大学 Unmanned aerial vehicle three-dimensional path planning method based on oblique photography model
CN112686950B (en) * 2020-12-04 2023-12-15 深圳市优必选科技股份有限公司 Pose estimation method, pose estimation device, terminal equipment and computer readable storage medium
CN112650422B (en) * 2020-12-17 2022-07-29 咪咕文化科技有限公司 AR interaction method and device for equipment, electronic equipment and storage medium
CN112752028B (en) * 2021-01-06 2022-11-11 南方科技大学 Pose determination method, device and equipment of mobile platform and storage medium
CN112577493B (en) * 2021-03-01 2021-05-04 中国人民解放军国防科技大学 Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance
CN112991449B (en) * 2021-03-22 2022-12-16 华南理工大学 AGV positioning and mapping method, system, device and medium
CN112699854B (en) * 2021-03-22 2021-07-20 亮风台(上海)信息科技有限公司 Method and device for identifying stopped vehicle
CN113096185B (en) * 2021-03-29 2023-06-06 Oppo广东移动通信有限公司 Visual positioning method, visual positioning device, storage medium and electronic equipment
CN113160221B (en) * 2021-05-14 2022-06-28 深圳市奥昇医疗科技有限责任公司 Image processing method, image processing device, computer equipment and storage medium
CN113409388A (en) * 2021-05-18 2021-09-17 深圳市乐纯动力机器人有限公司 Sweeper pose determination method and device, computer equipment and storage medium
CN113418527B (en) * 2021-06-15 2022-11-29 西安微电子技术研究所 Strong real-time double-structure continuous scene fusion matching navigation positioning method and system
CN113506368B (en) * 2021-07-13 2023-03-24 阿波罗智能技术(北京)有限公司 Map data fusion method, map data fusion device, electronic device, map data fusion medium, and program product
CN113587934B (en) * 2021-07-30 2024-03-19 深圳市普渡科技有限公司 Robot, indoor positioning method and device and readable storage medium
CN114331966B (en) * 2021-12-02 2024-02-13 北京斯年智驾科技有限公司 Port station locking method and system based on Gaussian process occupancy map estimation assistance
CN114063655A (en) * 2022-01-17 2022-02-18 四川腾盾科技有限公司 Estimation method, device, equipment and storage medium for real flight trajectory of unmanned aerial vehicle
CN116560394B (en) * 2023-04-04 2024-06-07 武汉理工大学 Unmanned aerial vehicle group pose follow-up adjustment method and device, electronic equipment and medium
CN116506732B (en) * 2023-06-26 2023-12-05 浙江华诺康科技有限公司 Image snapshot anti-shake method, device and system and computer equipment
CN117115414B (en) * 2023-10-23 2024-02-23 西安羚控电子科技有限公司 GPS-free unmanned aerial vehicle positioning method and device based on deep learning


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104154910A (en) * 2014-07-22 2014-11-19 清华大学 Indoor micro unmanned aerial vehicle location method
CN104463953A (en) * 2014-11-11 2015-03-25 西北工业大学 Three-dimensional reconstruction method based on inertial measurement unit and RGB-D sensor
US20180075614A1 (en) * 2016-09-12 2018-03-15 DunAn Precision, Inc. Method of Depth Estimation Using a Camera and Inertial Sensor
CN106873619A (en) * 2017-01-23 2017-06-20 上海交通大学 A kind of processing method in unmanned plane during flying path
CN107504969A (en) * 2017-07-24 2017-12-22 哈尔滨理工大学 Four rotor-wing indoor air navigation aids of view-based access control model and inertia combination
CN107478220A (en) * 2017-07-26 2017-12-15 中国科学院深圳先进技术研究院 Unmanned plane indoor navigation method, device, unmanned plane and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An accelerated image matching technique for UAV orthoimage registration;Chung-Hsien Tsai, Yu-Ching Lin;《ISPRS Journal of Photogrammetry and Remote Sensing》;20170630;全文 *
单目视觉/惯性室内无人机自主导航算法研究 (Research on monocular vision/inertial autonomous navigation algorithms for indoor unmanned aerial vehicles); 庄曈; 《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》 (China Masters' Theses Full-text Database, Engineering Science and Technology II); 20130215 (No. 02); full text *

Also Published As

Publication number Publication date
CN109974693A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
JP7326720B2 (en) Mobile position estimation system and mobile position estimation method
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN110047108B (en) Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
CN112233177B (en) Unmanned aerial vehicle pose estimation method and system
CN110047142A (en) No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN106529538A (en) Method and device for positioning aircraft
CN106780729A (en) A kind of unmanned plane sequential images batch processing three-dimensional rebuilding method
CN109631911B (en) Satellite attitude rotation information determination method based on deep learning target recognition algorithm
JP2012118666A (en) Three-dimensional map automatic generation device
CN111623773B (en) Target positioning method and device based on fisheye vision and inertial measurement
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
CN115115674A (en) Method for estimating satellite image trajectory target direction under single-satellite condition
CN114723811A (en) Stereo vision positioning and mapping method for quadruped robot in unstructured environment
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
CN113761647B (en) Simulation method and system of unmanned cluster system
US10977810B2 (en) Camera motion estimation
CN116147618B (en) Real-time state sensing method and system suitable for dynamic environment
CN113129422A (en) Three-dimensional model construction method and device, storage medium and computer equipment
CN111710039B (en) High-precision map construction method, system, terminal and storage medium
CN113256736B (en) Multi-camera visual SLAM method based on observability optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant