CN110047142A - Unmanned aerial vehicle three-dimensional map construction method and apparatus, computer device and storage medium - Google Patents
Unmanned aerial vehicle three-dimensional map construction method and apparatus, computer device and storage medium
- Publication number
- CN110047142A CN110047142A CN201910209625.1A CN201910209625A CN110047142A CN 110047142 A CN110047142 A CN 110047142A CN 201910209625 A CN201910209625 A CN 201910209625A CN 110047142 A CN110047142 A CN 110047142A
- Authority
- CN
- China
- Prior art keywords
- video frame
- frame images
- module
- dimensional
- transformation matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
This application relates to an unmanned aerial vehicle (UAV) three-dimensional map construction method. The method comprises: obtaining video frame images captured by a camera; extracting the feature points in each video frame image; matching the feature points using a hybrid matching algorithm that combines color histograms with the scale-invariant feature transform to obtain feature point matching pairs; calculating a pose transformation matrix from the feature point matching pairs; determining the three-dimensional coordinates corresponding to each video frame image from the pose transformation matrix; transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system to obtain a three-dimensional point cloud map; using the video frame images as the input of a target detection model to obtain target object information; and combining the three-dimensional point cloud map with the target object information to obtain a three-dimensional point cloud map that includes target object information. The method improves the real-time performance and accuracy of three-dimensional point cloud map construction, and the resulting map is rich in information. A UAV three-dimensional map construction apparatus, a computer device and a storage medium are also proposed.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a UAV three-dimensional map construction method and apparatus, a computer device and a storage medium.
Background art
With the development of science and technology, UAVs have become small and intelligent, and their flight space has extended into jungles, cities and even the gaps between buildings. Environment perception is the basis of a UAV's environment understanding, navigation, planning and behavioral decision-making during operation. The most important goal of environment perception is to construct a complete three-dimensional map, on which path planning and navigation are then based. Traditional three-dimensional map construction suffers from either low accuracy or poor real-time performance, and the constructed map carries little information, which in turn hampers subsequent path planning and navigation.
Summary of the invention
Accordingly, in view of the above problems, it is necessary to provide a UAV three-dimensional map construction method, apparatus, computer device and storage medium that can satisfy good real-time requirements while guaranteeing high accuracy, and that produce a map with a large information content.
In a first aspect, an embodiment of the present invention provides a UAV three-dimensional map construction method, the method comprising:
obtaining the video frame images captured by a camera, and extracting the feature points in each video frame image;
matching the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images;
calculating the pose transformation matrix between video frame images from the feature point matching pairs between the video frame images;
determining the three-dimensional coordinates corresponding to each video frame image from the pose transformation matrix;
transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the three-dimensional coordinates corresponding to the video frame images and the corresponding pose transformation matrices, to obtain a three-dimensional point cloud map;
using the video frame images as the input of a target detection model, and obtaining the target object information in the video frame images detected by the target detection model;
combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map that includes target object information.
In a second aspect, an embodiment of the present invention provides a UAV three-dimensional map construction apparatus, the apparatus comprising:
an extraction module, configured to obtain the video frame images captured by a camera and extract the feature points in each video frame image;
a matching module, configured to match the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images;
a computing module, configured to calculate the pose transformation matrix between video frame images from the feature point matching pairs between the video frame images;
a determining module, configured to determine the three-dimensional coordinates corresponding to each video frame image from the pose transformation matrix;
a conversion module, configured to transform the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the three-dimensional coordinates corresponding to the video frame images and the corresponding pose transformation matrices, to obtain a three-dimensional point cloud map;
a detection module, configured to use the video frame images as the input of a target detection model and obtain the target object information in the video frame images detected by the target detection model;
a combining module, configured to combine the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map that includes target object information.
In a third aspect, an embodiment of the present invention provides a computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following steps:
obtaining the video frame images captured by a camera, and extracting the feature points in each video frame image;
matching the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images;
calculating the pose transformation matrix between video frame images from the feature point matching pairs between the video frame images;
determining the three-dimensional coordinates corresponding to each video frame image from the pose transformation matrix;
transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the three-dimensional coordinates corresponding to the video frame images and the corresponding pose transformation matrices, to obtain a three-dimensional point cloud map;
using the video frame images as the input of a target detection model, and obtaining the target object information in the video frame images detected by the target detection model;
combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map that includes target object information.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to perform the following steps:
obtaining the video frame images captured by a camera, and extracting the feature points in each video frame image;
matching the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images;
calculating the pose transformation matrix between video frame images from the feature point matching pairs between the video frame images;
determining the three-dimensional coordinates corresponding to each video frame image from the pose transformation matrix;
transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the three-dimensional coordinates corresponding to the video frame images and the corresponding pose transformation matrices, to obtain a three-dimensional point cloud map;
using the video frame images as the input of a target detection model, and obtaining the target object information in the video frame images detected by the target detection model;
combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map that includes target object information.
In the above UAV three-dimensional map construction method, apparatus, computer device and storage medium, matching the feature points between video frame images with a hybrid algorithm combining color histograms and the scale-invariant feature transform improves both the accuracy and the real-time performance of feature point matching. In addition, the target objects in the video frame images are recognized and detected by a target detection model, and the target object information is combined with the three-dimensional point cloud map to obtain a three-dimensional point cloud map that includes target object information, so that the constructed map carries richer information. The hybrid color histogram and scale-invariant feature transform matching improves the accuracy of three-dimensional map construction, and combining the map with the target object information recognized by the target detection model gives the three-dimensional point cloud map richer content, providing support for subsequent optimal path planning.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a UAV three-dimensional map construction method in one embodiment;
Fig. 2 is a schematic diagram of a UAV three-dimensional map construction method in one embodiment;
Fig. 3 is a schematic diagram of the combination of color histogram and SIFT feature matching in one embodiment;
Fig. 4 is a schematic diagram of the training and prediction of a deep-learning-based UAV target detection model in one embodiment;
Fig. 5 is a structural block diagram of a UAV three-dimensional map construction apparatus in one embodiment;
Fig. 6 is a structural block diagram of a UAV three-dimensional map construction apparatus in another embodiment;
Fig. 7 is a structural block diagram of a UAV three-dimensional map construction apparatus in yet another embodiment;
Fig. 8 is an internal structure diagram of a computer device in one embodiment.
Specific embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
As shown in Fig. 1, a UAV three-dimensional map construction method is proposed. The method can be applied to a UAV, or to a terminal or server connected to a UAV; in this embodiment it is described as applied to a UAV. Specifically, the method comprises the following steps:
Step 102: obtain the video frame images captured by the camera, and extract the feature points in each video frame image.
A feature point can be loosely understood as a salient point in the image, such as a contour point, a bright point in a darker area or a dark point in a brighter area. In one embodiment, the UAV camera is an RGB-D camera, which captures a color image and a depth image; the acquired color and depth images are aligned in time, and the feature points are then extracted from the color image. Feature extraction can use color histograms together with the scale-invariant feature transform.
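The RGB-D setup above associates each color pixel (x, y) with a depth value, so a 3D point can be recovered via the standard pinhole camera model. A minimal sketch; the intrinsics fx, fy, cx, cy below are illustrative assumptions, not values from this patent:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth z into camera coordinates."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with assumed intrinsics for a VGA RGB-D camera
p = backproject(320.0, 240.0, 2.0, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

A pixel half a pixel right of the principal point at 2 m depth lands slightly off the optical axis, as expected.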
Step 104: match the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images.
The color histogram matching algorithm focuses on matching color features, while the scale-invariant feature transform (SIFT) focuses on matching shape features. Mixing the color histogram matching algorithm with SIFT — that is, combining the "color" of the color histogram with the "shape" of the SIFT algorithm — improves the accuracy of feature recognition and hence of feature point matching, while also helping the real-time performance of recognition, which in turn improves the real-time performance and accuracy of the subsequent three-dimensional point cloud map generation.
After the feature points in each video frame image are extracted, feature matching is performed according to the features of the feature points, yielding the feature point matching pairs between video frame images. Since the UAV is constantly in flight, the same point in real space appears at different positions in different video frame images; by obtaining the features of the feature points in consecutive frames and matching them, the positions of the same real-space point in different video frames are obtained.
In one embodiment, two adjacent video frame images are obtained, the features of multiple feature points are extracted in the previous and the following video frame image, and the features of the feature points are then matched to obtain the feature points in the previous video frame image that match those in the following video frame image, which constitute the feature point matching pairs. For example, if the feature points in the previous video frame image are P1, P2, P3, ..., Pn and the corresponding matched feature points in the following video frame image are Q1, Q2, Q3, ..., Qn, then P1 and Q1 form a matching pair, P2 and Q2 form a matching pair, P3 and Q3 form a matching pair, and so on. Feature matching can use brute-force matching (Brute Force) or the fast approximate nearest neighbor (FLANN) algorithm. The fast approximate nearest neighbor algorithm decides whether a match succeeds by testing the ratio of the nearest matching distance to the second-nearest matching distance against a set threshold, and accepts only matches that pass the test, thereby reducing mismatched point pairs.
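The ratio test described above can be sketched in a few lines. This is a generic Lowe-style ratio test over descriptor distances; the brute-force search and the 0.7 threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.7):
    """Match each descriptor in desc_a to desc_b, keeping a match only when the
    nearest distance is clearly smaller than the second-nearest distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # brute-force distances
        j1, j2 = np.argsort(dists)[:2]              # nearest and second nearest
        if dists[j1] < ratio * dists[j2]:           # Lowe-style ratio test
            matches.append((i, int(j1)))
    return matches

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]])
print(ratio_test_match(a, b))  # → [(0, 0), (1, 1)]
```

Descriptors with two comparably close candidates are rejected outright, which is exactly how the test suppresses ambiguous (and likely wrong) matches.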
Step 106: calculate the pose transformation matrix between video frame images from the feature point matching pairs between the video frame images.
After the positions of the feature points in the video frame images have been determined, the pose transformation matrix between video frame images can be calculated from the correspondence between the positions.
Step 108: determine the three-dimensional coordinates corresponding to each video frame image from the pose transformation matrix.
The three-dimensional coordinates corresponding to a video frame image refer to the three-dimensional coordinates of the camera on the UAV. Once the pose transformation matrices between video frame images are known, the three-dimensional coordinates of any video frame image can be calculated from the transformation relation; the three-dimensional coordinates corresponding to a video frame image are in fact the three-dimensional point coordinates of the position of the camera when it captured that frame.
Step 110: transform the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the three-dimensional coordinates corresponding to the video frame images and the corresponding pose transformation matrices, to obtain a three-dimensional point cloud map.
Since the three-dimensional coordinates corresponding to each video frame image are coordinates in the corresponding camera coordinate system, and the coordinates of the feature points in the video frame images are likewise in the camera coordinate system, the coordinates of the feature points are all converted according to the pose transformation matrices in order to transform them into the world coordinate system; this yields the three-dimensional coordinates of the feature points in world coordinates and hence the three-dimensional point cloud map.
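The camera-to-world conversion described above is a rigid-body transform, p_world = R·p_cam + t. A minimal sketch, assuming the frame's pose (R, t) is already known from the previous steps; the example pose is illustrative:

```python
import numpy as np

def camera_to_world(points_cam, R, t):
    """Transform an Nx3 array of camera-frame points into the world frame:
    p_world = R @ p_cam + t, applied row-wise."""
    return points_cam @ R.T + t

# Illustrative pose: 90-degree rotation about z plus a translation
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])
pts = np.array([[1.0, 0.0, 0.0]])
print(camera_to_world(pts, R, t))  # → [[1. 3. 3.]]
```

Applying this per frame, with each frame's own (R, t), accumulates all feature points into one consistent world-frame cloud.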
Step 112: use the video frame images as the input of a target detection model, and obtain the target object information in the video frame images detected by the target detection model.
The target detection model is trained in advance and is used to detect the target objects appearing in the video frame images, for example automobiles. Since a video frame image may contain multiple target objects, if the category of each target object needs to be identified, multiple target detection models correspondingly need to be trained. After the target detection model is trained, using the video frame images as its input yields the target objects in the video frame images and the positions where they are located.
Step 114: combine the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map that includes target object information.
After the information about the target objects in the video frame images is obtained, it is matched against the feature points on the three-dimensional point cloud map, so that the feature points corresponding to each target object can be determined and the corresponding target object information can be labeled onto the three-dimensional point cloud map, giving the constructed map a richer information content. The target detection model serves local perception, while the construction of the three-dimensional point cloud map is based on global perception; combining global and local perception improves the richness of the three-dimensional point cloud map.
In the above UAV three-dimensional map construction method, matching the feature points between video frame images using the hybrid color histogram and scale-invariant feature transform matching algorithm improves the accuracy and real-time performance of feature point matching. In addition, the target objects in the video frame images are recognized and detected by the target detection model, and the target object information is combined with the three-dimensional point cloud map to obtain a three-dimensional point cloud map that includes target object information, so that the constructed map carries richer information. The hybrid color histogram and scale-invariant feature transform matching improves the accuracy of three-dimensional map construction, and combining the map with the target object information recognized by the target detection model gives the three-dimensional point cloud map richer content, providing support for subsequent optimal path planning and improving the intelligence of UAV environment perception.
As shown in Fig. 2, in one embodiment the UAV three-dimensional map construction method comprises two parts: global perception and local perception. In global perception, matching is performed within a structural framework mixing color histograms and SIFT features, followed by positioning and the construction of the three-dimensional point cloud map. In local perception, the target objects in the video frame images are identified using the target detection model. Finally, the two are combined to obtain a three-dimensional point cloud map that includes target object information.
In one embodiment, matching the feature points between video frame images using the hybrid color histogram and scale-invariant feature transform matching algorithm to obtain the feature point matching pairs between video frame images comprises: matching the feature points between video frame images using a color histogram feature matching algorithm to obtain a first matching set; and further matching the matched points in the first matching set using a scale-invariant feature transform matching algorithm to obtain the target feature point matching pairs.
Preliminary feature point matching is first performed using color histograms to obtain the first matching set, and the scale-invariant feature transform matching algorithm then further matches the points in the first matching set to obtain the target feature point matching pairs. In one embodiment, color histogram matching is computed using the Bhattacharyya distance or a correlation distance. As shown in Fig. 3, a schematic diagram of the combination of color histogram and SIFT feature matching in one embodiment, the two stages form a cascade, one running after the other.
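The Bhattacharyya distance mentioned above compares two normalized histograms. A minimal numpy sketch using the common coefficient-based formulation (an assumption — the patent does not give the exact formula):

```python
import numpy as np

def bhattacharyya_distance(h1, h2):
    """Distance between two histograms, each normalized to sum to 1.
    Identical histograms give ~0; histograms with disjoint support give 1."""
    p = h1 / h1.sum()
    q = h2 / h2.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))   # clamp against rounding error

h = np.array([4.0, 3.0, 2.0, 1.0])
print(bhattacharyya_distance(h, h))      # ≈ 0.0 (identical histograms)
print(bhattacharyya_distance(np.array([1.0, 0.0]),
                             np.array([0.0, 1.0])))  # → 1.0
```

A small distance between the local color histograms of two candidate feature points is what admits the pair into the first matching set for the subsequent SIFT stage.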
In one embodiment, calculating the pose transformation matrix between video frame images from the feature point matching pairs between the video frame images comprises: obtaining the three-dimensional coordinates of each feature point in the feature point matching pairs; calculating the converted three-dimensional coordinates obtained by transforming the three-dimensional coordinates of the feature points in one video frame image into the other video frame image; obtaining the target three-dimensional coordinates corresponding to the matched feature points in the other video frame image; and calculating the pose transformation matrix from the converted three-dimensional coordinates and the target three-dimensional coordinates.
After the feature point matching pairs have been determined, the three-dimensional coordinates of each feature point are obtained. The three-dimensional coordinates are derived from the color image and depth image captured by the RGB-D camera: the color image is used to identify the x and y values of a feature point, and the depth image is used to obtain the corresponding z value. For two video frame images, the feature point matching pairs are taken as two sets: the set of feature points in the first video frame image is {P_i ∈ R³, i = 1, 2, ..., N} and the set of feature points in the second video frame image is {Q_i ∈ R³, i = 1, 2, ..., N}. Taking the error between the two point sets as the cost function, the corresponding rotation matrix R and translation vector t are obtained by minimizing the cost function, which can be expressed by the following formula:
E(R, t) = (1/N) Σ_{i=1..N} ‖Q_i − (R·P_i + t)‖²
where R and t are the rotation matrix and translation vector, respectively. The steps of the iterative closest point algorithm are:
1) for each point P_i, find the corresponding closest point in Q, denoted Q_i;
2) solve for the transformation R and t that minimize the cost function above;
3) apply the rigid-body transform given by R and t to the point set P to obtain a new point set P′, and calculate the error distance between the new point set and the point set Q:
E_d = (1/N) Σ_{i=1..N} ‖P′_i − Q_i‖²
In practice, the constrained rotation matrix and the translation vector can be represented by an unconstrained Lie algebra, and the number of feature points whose error distance is below a given threshold — the inlier count — is recorded. If the error distance E_d calculated in step 3) is below the threshold and the inlier count exceeds a given threshold, or the number of iterations reaches a given limit, the iteration terminates; otherwise, the algorithm returns to step 1) for the next round of iteration.
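Step 2) above — solving for the R and t that minimize the cost once correspondences are fixed — has a closed-form solution via SVD (the Kabsch/Umeyama step used inside most ICP implementations). A sketch of that inner step, not of the patent's full procedure:

```python
import numpy as np

def best_fit_transform(P, Q):
    """Closed-form R, t minimizing sum ||Q_i - (R P_i + t)||^2 for paired Nx3 sets."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids of each set
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 90-degree z-rotation plus translation from paired points
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + t_true
R_est, t_est = best_fit_transform(P, Q)
```

ICP alternates this exact solve with the closest-point reassignment of step 1) until the error distance or iteration-count criterion above is met.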
In one embodiment, the target detection model is obtained by training a deep learning model. Before using the video frame images as the input of the target detection model and obtaining the target objects from its output, the method further comprises: obtaining training video image samples, the training video image samples including positive samples and negative samples, the positive samples containing target objects and the position labels of the target objects in the video images; and training the target detection model with the training video image samples to obtain the trained target detection model.
The target detection model is obtained by training a deep learning model. To train it, training video image samples are first collected, and positive and negative samples are defined; a positive sample is a video image that contains a target object together with the position label of the target object in the image. Training then yields a target detection model capable of detecting target objects. As shown in Fig. 4, in one embodiment the training and prediction of the deep-learning-based UAV target detection model are divided into two parts: preprocessing and real-time detection. To detect targets in real time, the data collected by the UAV is first preprocessed: the collected video stream is split into individual video frame images, the targets in the images are given sample labels, and the data is divided into training and test data sets. The model is trained with a deep learning framework, and the saved model is then applied to the video stream returned to the platform to complete real-time target detection.
Using a small UAV as the carrier, an industrial camera is mounted and a large amount of video data is sampled across a wide range of scenes from the UAV's viewpoint. The targets the UAV needs to recognize are determined and labeled in the collected video data, the neural network model is trained with the preprocessed data, and the model parameters are adjusted until the training results meet the convergence condition. The trained model is saved for subsequent target detection, loaded onto the UAV, and tested in target detection with the UAV, with the model continually adjusted and optimized.
In a specific embodiment, the deep learning model uses the YOLOv3 network structure (with the Darknet-53 backbone), a fully convolutional network that introduces residual structures — the ResNet skip-connection style — and makes extensive use of residual network features. Downsampling is performed with stride-2 convolutions, and upsampling and route operations are also used, so that detection is carried out at 3 scales within a single network structure. Dimension clustering is used to obtain the anchor boxes for bounding-box prediction; a sum of squared-error losses is used during training, and the objectness score of each bounding box is predicted by logistic regression. If a prior bounding box is not the best one but overlaps a ground-truth object by more than a certain threshold, that prediction is ignored and processing continues. With a threshold of 0.5, only one bounding box is assigned to each ground-truth object; a prior bounding box that is not assigned to any ground-truth object contributes no loss to the coordinate or class predictions. Each box predicts the classes it may contain using multi-label classification, and binary cross-entropy loss is used for class prediction during training. Applying the lightweight YOLOv3 target detection neural network structure to the UAV platform improves the ability to recognize targets in real time under the UAV's limited computing power.
In one embodiment, combining the three-dimensional point cloud map with the target object information to obtain a three-dimensional point cloud map that includes target object information comprises: obtaining the target position in the video frame images of each detected target object; determining the matching feature points according to the target position; and labeling the target object information onto the three-dimensional point cloud map according to the feature points.
According to the positions of the detected target objects in the video frame images and the positions of the feature points in the video frame images, the target object information matching the feature points is determined, and the target object information is labeled onto the three-dimensional point cloud map, yielding a three-dimensional point cloud map with richer information content.
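One simple way to realize the matching described above is to attach a detection's label to every map point whose image-plane position falls inside that detection's bounding box. A sketch under that assumption; the box format and names are illustrative, not from the patent:

```python
import numpy as np

def label_points(pixel_xy, labels_out, box, label):
    """Attach `label` to every feature point whose (x, y) pixel position falls
    inside `box` = (x_min, y_min, x_max, y_max). labels_out is mutated in place."""
    x0, y0, x1, y1 = box
    for i, (x, y) in enumerate(pixel_xy):
        if x0 <= x <= x1 and y0 <= y <= y1:
            labels_out[i] = label

pts = np.array([[10.0, 10.0], [50.0, 60.0], [200.0, 200.0]])
labels = [None] * len(pts)
label_points(pts, labels, box=(40, 40, 100, 100), label="car")
print(labels)  # → [None, 'car', None]
```

Because each feature point already carries a world-frame 3D coordinate, labeling its 2D observation this way is enough to propagate the object class onto the point cloud map.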
In one embodiment, the method further comprises: obtaining the measurement data measured by an inertial measurement unit; and calculating an initial pose transformation matrix between video frames from the measurement data. Calculating the pose transformation matrix between video frame images from the feature point matching pairs between the video frame images then comprises: calculating the target pose transformation matrix between video frames from the initial pose transformation matrix and the feature point matching pairs between the video frame images.
Wherein, Inertial Measurement Unit (Inertial measurement unit, IMU) is measurement object triaxial attitude angle
The device of (or angular speed) and acceleration.Using Inertial Measurement Unit as the inertia parameter identification device of unmanned plane, the device
Contain three-axis gyroscope, 3-axis acceleration and three axle magnetometer.Inertial Measurement Unit measurement can be read directly in unmanned plane
Measurement data, measurement data include: angular speed, acceleration and magnetometer data etc..It is measured getting Inertial Measurement Unit
After the measurement data arrived, the module and carriage transformation matrix of unmanned plane can directly be calculated according to measurement data, due to inertia measurement
Unit can have cumulative errors, so the module and carriage transformation matrix of obtained unmanned plane is not accurate enough.In order to after subsequent optimization
Module and carriage transformation matrix distinguishes, and the module and carriage transformation matrix being directly calculated according to measurement data is known as " initial pose change
Change matrix ".Module and carriage transformation matrix includes spin matrix R and translation vector t.In one embodiment, by using complementary filter
The corresponding initial module and carriage transformation matrix of measurement data is calculated in algorithm.After obtaining initial module and carriage transformation matrix, by initial bit
Appearance transformation matrix is as initial matrix, using iteration closest approach (Iterative Closest Point, ICP) algorithm according to view
Feature Points Matching between frequency frame image is to the object pose transformation matrix being calculated between video frame.By by inertia measurement
The initial module and carriage transformation matrix that unit obtains is conducive to improve the speed calculated as initial matrix.
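The closed-form step that ICP iterates — estimating a rigid transform (rotation R, translation t) from one set of matched 3D points to another — can be sketched as follows. This is a minimal Kabsch/SVD sketch under assumed data layouts (points as N×3 arrays), not the patent's implementation:

```python
import numpy as np

def estimate_pose(src, dst):
    """Closed-form rigid alignment (Kabsch/SVD): find R, t with dst ~ R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)        # centroids
    H = (src - cs).T @ (dst - cd)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In a full ICP loop this estimate is recomputed after each re-association of closest points, starting from the IMU-derived initial pose.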
In one embodiment, after computing the pose transformation matrix between video frame images from the feature point matches between them, the method further comprises: computing the motion amount between the current video frame and the previous key frame, and taking the current video frame as a key frame if the motion amount exceeds a preset threshold; when the current video frame is a key frame, matching it against the key frames in the key frame library, and taking it as a loop closure frame if a matching key frame exists in the library; and optimizing and updating the corresponding pose transformation matrix according to the loop closure frame, to obtain an updated pose transformation matrix. Determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix then comprises: determining the three-dimensional coordinates corresponding to each video frame image according to the updated pose transformation matrix.
Here, key frame extraction reduces the complexity of the subsequent optimization. The captured video frames are dense — typically about 30 frames are acquired per second — so the similarity between adjacent frames is very high, sometimes to the point of duplication, and computing every frame would needlessly increase computational complexity. Complexity is therefore reduced by extracting key frames. Specifically, the first video frame is taken as a key frame; afterwards, the motion amount between the current video frame and the previous key frame is computed, and the current frame is selected as a key frame when the motion amount exceeds a threshold. The motion amount is computed as:

Em = ω1·(tx² + ty² + tz²) + ω2·(α² + β² + γ²)

where Em is the measure of the motion amount; tx, ty, tz are the three translation distances of the translation vector t; (α, β, γ) are the inter-frame rotation Euler angles, which can be converted from the rotation matrix; and ω1, ω2 are the weights balancing the translation and rotation amounts. For the camera's field of view, rotation brings larger scene changes than translation does, so the value of ω2 is larger than that of ω1; the specific values are tuned case by case.
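A minimal sketch of this key-frame test; the weights (with ω2 > ω1, as the text requires) and the threshold are illustrative assumptions, not the patent's values:

```python
# Sketch: score inter-frame motion as a weighted sum of squared translation
# and squared rotation (Euler angles), and promote the frame to a key frame
# when the score exceeds a preset threshold. w1, w2, threshold are assumed.

def motion_amount(t, euler, w1=1.0, w2=2.0):
    tx, ty, tz = t
    a, b, g = euler
    return w1 * (tx*tx + ty*ty + tz*tz) + w2 * (a*a + b*b + g*g)

def is_keyframe(t, euler, threshold=0.5):
    return motion_amount(t, euler) > threshold
```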
After the key frames are extracted, the obtained pose transformation matrix is optimized and updated using loop closure detection. In one embodiment, loop closure is found with a closed-loop detection algorithm. After loop closure detection, the target pose transformation matrix is optimized and updated according to the detection result, giving a more accurate pose transformation matrix, called the "updated pose transformation matrix" to distinguish it. The three-dimensional coordinates corresponding to each video frame image are then determined from the updated pose transformation matrix.
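The loop-closure check against the key frame library can be sketched as follows. Representing each key frame by a global descriptor vector and comparing with cosine similarity is purely an illustrative assumption — the patent does not fix a particular closed-loop detection algorithm:

```python
# Sketch: flag a loop closure when the current key frame's descriptor is
# sufficiently similar to a descriptor already in the key frame library.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def find_loop(descriptor, library, threshold=0.95):
    """Return the index of a matching past key frame, or None if no loop is found."""
    for i, past in enumerate(library):
        if cosine(descriptor, past) >= threshold:
            return i
    return None
```

A detected loop then constrains the pose graph, and the pose transformation matrices along the loop are re-optimized.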
As shown in Fig. 5, a UAV three-dimensional map construction apparatus is proposed, comprising:
an extraction module 502, for obtaining the video frame images captured by the camera and extracting the feature points in each video frame image;
a matching module 504, for matching the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images;
a computing module 506, for computing the pose transformation matrix between video frame images from the feature point matches between them;
a determining module 508, for determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix;
a conversion module 510, for transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the corresponding three-dimensional coordinates and pose transformation matrices, to obtain a three-dimensional point cloud map;
a detection module 512, for feeding the video frame images into a target detection model and obtaining the target object information that the model detects in the video frame images; and
a combining module 514, for combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map containing target object information.
In one embodiment, the matching module 504 is further configured to match the feature points between video frame images with a color histogram feature matching algorithm to obtain a first matching pair set, and to further match the point pairs in the first matching pair set with a scale-invariant feature transform matching algorithm to obtain the target feature point matching pairs.
In one embodiment, the computing module 506 is further configured to obtain the three-dimensional coordinates of each feature point in the matching pairs; to compute the converted three-dimensional coordinates obtained by transforming the three-dimensional coordinates of the feature points in one video frame image into another video frame image; to obtain the target three-dimensional coordinates of the correspondingly matched feature points in the other video frame image; and to compute the pose transformation matrix from the converted three-dimensional coordinates and the target three-dimensional coordinates.
In one embodiment, the target detection model is trained from a deep learning model. The above UAV three-dimensional map construction apparatus further comprises: a training module, for obtaining training video image samples comprising positive samples and negative samples, the positive samples containing target objects together with position labels of those objects in the video images, and for training the target detection model on the training video image samples to obtain a trained target detection model.
In one embodiment, the combining module 514 is further configured to obtain the target position, in the video frame image, of the detected target object; to determine matching feature points according to the target position; and to label the target object category information onto the three-dimensional point cloud map according to the feature points.
As shown in Fig. 6, in one embodiment, the above UAV three-dimensional map construction apparatus further comprises:
an initial calculation module 505, for obtaining the measurement data measured by the inertial measurement unit and computing the initial pose transformation matrix between video frames from the measurement data;
the computing module being further configured to compute the target pose transformation matrix between video frames from the initial pose transformation matrix together with the feature point matches between the video frame images.
As shown in Fig. 7, in one embodiment, the above UAV three-dimensional map construction apparatus further comprises:
a key frame determining module 516, for computing the motion amount between the current video frame and the previous key frame, and taking the current video frame as a key frame if the motion amount exceeds a preset threshold;
a loop closure frame determining module 518, for matching the current video frame, when it is a key frame, against the key frames in the key frame library, and taking it as a loop closure frame if a matching key frame exists in the library; and
an optimization module 520, for optimizing and updating the corresponding pose transformation matrix according to the loop closure frame, to obtain an updated pose transformation matrix;
the determining module 508 being further configured to determine the three-dimensional coordinates corresponding to each video frame image according to the updated pose transformation matrix.
Fig. 8 shows the internal structure of the computer device in one embodiment. The computer device may be a UAV, or a terminal or server connected to a UAV. As shown in Fig. 8, the computer device comprises a processor, a memory and a network interface connected by a system bus. The memory comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the UAV three-dimensional map construction method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the UAV three-dimensional map construction method. The network interface is used for communicating with the outside. Those skilled in the art will understand that the structure shown in Fig. 8 is only a block diagram of the parts relevant to the present solution and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, the UAV three-dimensional map construction method provided by the present application may be implemented in the form of a computer program runnable on a computer device as shown in Fig. 8. The memory of the computer device may store the program modules constituting the UAV three-dimensional map construction apparatus, for example the extraction module 502, matching module 504, computing module 506, determining module 508, conversion module 510, detection module 512 and combining module 514.
A computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps: obtaining the video frame images captured by the camera, and extracting the feature points in each video frame image; matching the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images; computing the pose transformation matrix between video frame images from the feature point matches between them; determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix; transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the corresponding three-dimensional coordinates and pose transformation matrices, to obtain a three-dimensional point cloud map; feeding the video frame images into a target detection model, and obtaining the target object information that the target detection model detects in the video frame images; and combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map containing target object information.
In one embodiment, matching the feature points between video frame images using the hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images, comprises: matching the feature points between video frame images using a color histogram feature matching algorithm, to obtain a first matching pair set; and further matching the point pairs in the first matching pair set using a scale-invariant feature transform matching algorithm, to obtain the target feature point matching pairs.
In one embodiment, computing the pose transformation matrix between video frame images from the feature point matches between them comprises: obtaining the three-dimensional coordinates of each feature point in the matching pairs; computing the converted three-dimensional coordinates obtained by transforming the three-dimensional coordinates of the feature points in one video frame image into another video frame image; obtaining the target three-dimensional coordinates of the correspondingly matched feature points in the other video frame image; and computing the pose transformation matrix from the converted three-dimensional coordinates and the target three-dimensional coordinates.
In one embodiment, the target detection model is trained from a deep learning model; before feeding the video frame images into the target detection model and obtaining the target objects detected by the model, the method further comprises: obtaining training video image samples, the training video image samples comprising positive samples and negative samples, the positive samples containing target objects together with position labels of those objects in the video images; and training the target detection model on the training video image samples, to obtain a trained target detection model.
In one embodiment, combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map containing target object information, comprises: obtaining the target position, in the video frame image, of the detected target object; determining matching feature points according to the target position; and labeling the target object category information onto the three-dimensional point cloud map according to the feature points.
In one embodiment, when executed by the processor, the computer program further causes the processor to perform the following steps: obtaining the measurement data measured by the inertial measurement unit; and computing the initial pose transformation matrix between video frames from the measurement data; where computing the pose transformation matrix between video frame images from the feature point matches between them comprises: computing the target pose transformation matrix between video frames from the initial pose transformation matrix together with the feature point matches between the video frame images.
In one embodiment, after computing the pose transformation matrix between video frame images from the feature point matches between them, the computer program, when executed by the processor, further causes the processor to perform the following steps: computing the motion amount between the current video frame and the previous key frame, and taking the current video frame as a key frame if the motion amount exceeds a preset threshold; when the current video frame is a key frame, matching it against the key frames in the key frame library, and taking it as a loop closure frame if a matching key frame exists in the library; and optimizing and updating the corresponding pose transformation matrix according to the loop closure frame, to obtain an updated pose transformation matrix; where determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix comprises: determining the three-dimensional coordinates corresponding to each video frame image according to the updated pose transformation matrix.
A computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the following steps: obtaining the video frame images captured by the camera, and extracting the feature points in each video frame image; matching the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images; computing the pose transformation matrix between video frame images from the feature point matches between them; determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix; transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the corresponding three-dimensional coordinates and pose transformation matrices, to obtain a three-dimensional point cloud map; feeding the video frame images into a target detection model, and obtaining the target object information that the target detection model detects in the video frame images; and combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map containing target object information.
In one embodiment, matching the feature points between video frame images using the hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images, comprises: matching the feature points between video frame images using a color histogram feature matching algorithm, to obtain a first matching pair set; and further matching the point pairs in the first matching pair set using a scale-invariant feature transform matching algorithm, to obtain the target feature point matching pairs.
In one embodiment, computing the pose transformation matrix between video frame images from the feature point matches between them comprises: obtaining the three-dimensional coordinates of each feature point in the matching pairs; computing the converted three-dimensional coordinates obtained by transforming the three-dimensional coordinates of the feature points in one video frame image into another video frame image; obtaining the target three-dimensional coordinates of the correspondingly matched feature points in the other video frame image; and computing the pose transformation matrix from the converted three-dimensional coordinates and the target three-dimensional coordinates.
In one embodiment, the target detection model is trained from a deep learning model; before feeding the video frame images into the target detection model and obtaining the target objects detected by the model, the method further comprises: obtaining training video image samples, the training video image samples comprising positive samples and negative samples, the positive samples containing target objects together with position labels of those objects in the video images; and training the target detection model on the training video image samples, to obtain a trained target detection model.
In one embodiment, combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map containing target object information, comprises: obtaining the target position, in the video frame image, of the detected target object; determining matching feature points according to the target position; and labeling the target object category information onto the three-dimensional point cloud map according to the feature points.
In one embodiment, when executed by the processor, the computer program further causes the processor to perform the following steps: obtaining the measurement data measured by the inertial measurement unit; and computing the initial pose transformation matrix between video frames from the measurement data; where computing the pose transformation matrix between video frame images from the feature point matches between them comprises: computing the target pose transformation matrix between video frames from the initial pose transformation matrix together with the feature point matches between the video frame images.
In one embodiment, after computing the pose transformation matrix between video frame images from the feature point matches between them, the computer program, when executed by the processor, further causes the processor to perform the following steps: computing the motion amount between the current video frame and the previous key frame, and taking the current video frame as a key frame if the motion amount exceeds a preset threshold; when the current video frame is a key frame, matching it against the key frames in the key frame library, and taking it as a loop closure frame if a matching key frame exists in the library; and optimizing and updating the corresponding pose transformation matrix according to the loop closure frame, to obtain an updated pose transformation matrix; where determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix comprises: determining the three-dimensional coordinates corresponding to each video frame image according to the updated pose transformation matrix.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above embodiment methods can be completed by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations contain no contradiction, they shall be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the application, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the patent scope of the application. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the application, and these all fall within the protection scope of the application. Therefore, the protection scope of the application patent shall be subject to the appended claims.
Claims (10)
1. A UAV three-dimensional map construction method, characterized in that the method comprises:
obtaining the video frame images captured by a camera, and extracting the feature points in each video frame image;
matching the feature points between video frame images using a hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images;
computing the pose transformation matrix between video frame images from the feature point matches between them;
determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix;
transforming the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the corresponding three-dimensional coordinates and pose transformation matrices, to obtain a three-dimensional point cloud map;
feeding the video frame images into a target detection model, and obtaining the target object information that the target detection model detects in the video frame images; and
combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map containing target object information.
2. The method according to claim 1, characterized in that matching the feature points between video frame images using the hybrid matching algorithm combining color histograms and the scale-invariant feature transform, to obtain the feature point matching pairs between video frame images, comprises:
matching the feature points between video frame images using a color histogram feature matching algorithm, to obtain a first matching pair set; and
further matching the point pairs in the first matching pair set using a scale-invariant feature transform matching algorithm, to obtain the target feature point matching pairs.
3. The method according to claim 1, characterized in that computing the pose transformation matrix between video frame images from the feature point matches between them comprises:
obtaining the three-dimensional coordinates of each feature point in the matching pairs;
computing the converted three-dimensional coordinates obtained by transforming the three-dimensional coordinates of the feature points in one video frame image into another video frame image;
obtaining the target three-dimensional coordinates of the correspondingly matched feature points in the other video frame image; and
computing the pose transformation matrix from the converted three-dimensional coordinates and the target three-dimensional coordinates.
4. The method according to claim 1, characterized in that the target detection model is trained from a deep learning model; and
before feeding the video frame images into the target detection model and obtaining the target objects detected by the model, the method further comprises:
obtaining training video image samples, the training video image samples comprising positive samples and negative samples, the positive samples containing target objects together with position labels of those objects in the video images; and
training the target detection model on the training video image samples, to obtain a trained target detection model.
5. The method according to claim 1, characterized in that combining the three-dimensional point cloud map with the target object information, to obtain a three-dimensional point cloud map containing target object information, comprises:
obtaining the target position, in the video frame image, of the detected target object;
determining matching feature points according to the target position; and
labeling the target object category information onto the three-dimensional point cloud map according to the feature points.
6. The method according to claim 1, characterized in that the method further comprises:
obtaining the measurement data measured by an inertial measurement unit; and
computing the initial pose transformation matrix between video frames from the measurement data;
wherein computing the pose transformation matrix between video frame images from the feature point matches between them comprises:
computing the target pose transformation matrix between video frames from the initial pose transformation matrix together with the feature point matches between the video frame images.
7. The method according to claim 1, characterized in that, after computing the pose transformation matrix between video frame images from the feature point matches between them, the method further comprises:
computing the motion amount between the current video frame and the previous key frame, and taking the current video frame as a key frame if the motion amount exceeds a preset threshold;
when the current video frame is a key frame, matching it against the key frames in the key frame library, and taking it as a loop closure frame if a matching key frame exists in the library; and
optimizing and updating the corresponding pose transformation matrix according to the loop closure frame, to obtain an updated pose transformation matrix;
wherein determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix comprises: determining the three-dimensional coordinates corresponding to each video frame image according to the updated pose transformation matrix.
8. An unmanned aerial vehicle three-dimensional map construction apparatus, wherein the apparatus comprises:
an extraction module, configured to obtain the video frame images captured by a camera and extract the feature points in each video frame image;
a matching module, configured to match the feature points between video frame images using a hybrid matching algorithm combining color histograms with the scale-invariant feature transform (SIFT), to obtain the feature point matches between video frame images;
a computing module, configured to calculate the pose transformation matrix between video frame images according to the feature point matches between the video frame images;
a determining module, configured to determine the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix;
a conversion module, configured to transform the three-dimensional coordinates of the feature points in the video frame images into the world coordinate system according to the three-dimensional coordinates corresponding to the video frame images and the corresponding pose transformation matrices, to obtain a three-dimensional point cloud map;
a detection module, configured to take the video frame images as the input of a target detection model and obtain the object information in the video frame images detected by the target detection model;
a combining module, configured to combine the three-dimensional point cloud map with the object information to obtain a three-dimensional point cloud map containing the object information.
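The conversion module's core operation, transforming each frame's feature-point coordinates into the world coordinate system with that frame's pose transformation matrix and accumulating them into a point cloud, can be sketched as follows (the function names are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def camera_points_to_world(points_cam, T_world_cam):
    """Transform feature-point 3D coordinates expressed in the camera
    frame into the world coordinate system using the frame's 4x4
    homogeneous pose transformation matrix."""
    pts = np.asarray(points_cam, dtype=float)            # (N, 3)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4) homogeneous
    return (T_world_cam @ homo.T).T[:, :3]               # back to (N, 3)

def build_point_cloud(frames):
    """Accumulate a three-dimensional point cloud map by stacking the
    world-frame points of every frame.

    frames: iterable of (points_cam, T_world_cam) pairs.
    """
    return np.vstack([camera_points_to_world(p, T) for p, T in frames])
```

Attaching the detection module's object information to the points of the corresponding frames would then yield the labeled point cloud map of the final claim element.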
9. A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910209625.1A CN110047142A (en) | 2019-03-19 | 2019-03-19 | No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium |
PCT/CN2019/097745 WO2020186678A1 (en) | 2019-03-19 | 2019-07-25 | Three-dimensional map constructing method and apparatus for unmanned aerial vehicle, computer device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110047142A true CN110047142A (en) | 2019-07-23 |
Family
ID=67273899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910209625.1A Pending CN110047142A (en) | 2019-03-19 | 2019-03-19 | No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110047142A (en) |
WO (1) | WO2020186678A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116033231A (en) * | 2021-10-27 | 2023-04-28 | 海鹰航空通用装备有限责任公司 | Video live broadcast AR label superposition method and device |
CN115375870B (en) * | 2022-10-25 | 2023-02-10 | 杭州华橙软件技术有限公司 | Loop detection optimization method, electronic equipment and computer readable storage device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835115A (en) * | 2015-05-07 | 2015-08-12 | 中国科学院长春光学精密机械与物理研究所 | Imaging method for aerial camera, and system thereof |
CN106097304A (en) * | 2016-05-31 | 2016-11-09 | 西北工业大学 | Real-time online map generation method for unmanned aerial vehicles |
CN106485655A (en) * | 2015-09-01 | 2017-03-08 | 张长隆 | Aerial-photography map generation system and method based on a quadrotor |
CN106595659A (en) * | 2016-11-03 | 2017-04-26 | 南京航空航天大学 | Map merging method for unmanned aerial vehicle visual SLAM in complex urban environments |
CN108648240A (en) * | 2018-05-11 | 2018-10-12 | 东南大学 | Camera pose calibration method for non-overlapping fields of view based on point cloud feature map registration |
CN108692661A (en) * | 2018-05-08 | 2018-10-23 | 深圳大学 | Portable three-dimensional measuring system based on Inertial Measurement Unit and its measurement method |
CN109073385A (en) * | 2017-12-20 | 2018-12-21 | 深圳市大疆创新科技有限公司 | Vision-based positioning method and aircraft |
CN109410316A (en) * | 2018-09-21 | 2019-03-01 | 深圳前海达闼云端智能科技有限公司 | Three-dimensional reconstruction method, tracking method, related apparatus and storage medium for an object |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663391B (en) * | 2012-02-27 | 2015-03-25 | 安科智慧城市技术(中国)有限公司 | Image multifeature extraction and fusion method and system |
US9488492B2 (en) * | 2014-03-18 | 2016-11-08 | Sri International | Real-time system for multi-modal 3D geospatial mapping, object recognition, scene annotation and analytics |
CN108932475B (en) * | 2018-05-31 | 2021-11-16 | 中国科学院西安光学精密机械研究所 | Three-dimensional target identification system and method based on laser radar and monocular vision |
CN108303099B (en) * | 2018-06-14 | 2018-09-28 | 江苏中科院智能科学技术应用研究院 | Indoor autonomous navigation method for unmanned aerial vehicles based on 3D visual SLAM |
CN109146935B (en) * | 2018-07-13 | 2021-03-12 | 中国科学院深圳先进技术研究院 | Point cloud registration method and device, electronic equipment and readable storage medium |
CN110047142A (en) * | 2019-03-19 | 2019-07-23 | 中国科学院深圳先进技术研究院 | No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium |
- 2019-03-19: CN application CN201910209625.1A filed (status: Pending)
- 2019-07-25: PCT application PCT/CN2019/097745 filed (status: Application Filing)
Cited By (78)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020186678A1 (en) * | 2019-03-19 | 2020-09-24 | 中国科学院深圳先进技术研究院 | Three-dimensional map constructing method and apparatus for unmanned aerial vehicle, computer device, and storage medium |
CN110487274B (en) * | 2019-07-30 | 2021-01-29 | 中国科学院空间应用工程与技术中心 | SLAM method and system for weak texture scene, navigation vehicle and storage medium |
CN110487274A (en) * | 2019-07-30 | 2019-11-22 | 中国科学院空间应用工程与技术中心 | SLAM method, system, navigation vehicle and storage medium for weak texture scene |
CN112393720B (en) * | 2019-08-15 | 2023-05-30 | 纳恩博(北京)科技有限公司 | Target equipment positioning method and device, storage medium and electronic device |
CN112393720A (en) * | 2019-08-15 | 2021-02-23 | 纳恩博(北京)科技有限公司 | Target equipment positioning method and device, storage medium and electronic device |
CN110490131A (en) * | 2019-08-16 | 2019-11-22 | 北京达佳互联信息技术有限公司 | Positioning method and device of shooting equipment, electronic equipment and storage medium |
CN110490131B (en) * | 2019-08-16 | 2021-08-24 | 北京达佳互联信息技术有限公司 | Positioning method and device of shooting equipment, electronic equipment and storage medium |
CN110543917B (en) * | 2019-09-06 | 2021-09-28 | 电子科技大学 | Indoor map matching method by utilizing pedestrian inertial navigation track and video information |
CN110543917A (en) * | 2019-09-06 | 2019-12-06 | 电子科技大学 | Indoor map matching method using pedestrian inertial navigation tracks and video information |
CN110580703A (en) * | 2019-09-10 | 2019-12-17 | 广东电网有限责任公司 | Distribution line detection method, device, equipment and storage medium |
CN110580703B (en) * | 2019-09-10 | 2024-01-23 | 广东电网有限责任公司 | Distribution line detection method, device, equipment and storage medium |
CN110602456A (en) * | 2019-09-11 | 2019-12-20 | 安徽天立泰科技股份有限公司 | Display method and system of aerial photography focus |
CN112241010A (en) * | 2019-09-17 | 2021-01-19 | 北京新能源汽车技术创新中心有限公司 | Positioning method, positioning device, computer equipment and storage medium |
CN110660134B (en) * | 2019-09-25 | 2023-05-30 | Oppo广东移动通信有限公司 | Three-dimensional map construction method, three-dimensional map construction device and terminal equipment |
CN110660134A (en) * | 2019-09-25 | 2020-01-07 | Oppo广东移动通信有限公司 | Three-dimensional map construction method, three-dimensional map construction device and terminal equipment |
CN110705574B (en) * | 2019-09-27 | 2023-06-02 | Oppo广东移动通信有限公司 | Positioning method and device, equipment and storage medium |
CN110705574A (en) * | 2019-09-27 | 2020-01-17 | Oppo广东移动通信有限公司 | Positioning method and device, equipment and storage medium |
CN110880187A (en) * | 2019-10-17 | 2020-03-13 | 北京达佳互联信息技术有限公司 | Camera position information determining method and device, electronic equipment and storage medium |
CN110880187B (en) * | 2019-10-17 | 2022-08-12 | 北京达佳互联信息技术有限公司 | Camera position information determining method and device, electronic equipment and storage medium |
CN110728245A (en) * | 2019-10-17 | 2020-01-24 | 珠海格力电器股份有限公司 | Optimization method and device for VSLAM front-end processing, electronic equipment and storage medium |
CN110826448B (en) * | 2019-10-29 | 2023-04-07 | 中山大学 | Indoor positioning method with automatic updating function |
CN110826448A (en) * | 2019-10-29 | 2020-02-21 | 中山大学 | Indoor positioning method with automatic updating function |
CN111105454B (en) * | 2019-11-22 | 2023-05-09 | 北京小米移动软件有限公司 | Method, device and medium for obtaining positioning information |
CN111105454A (en) * | 2019-11-22 | 2020-05-05 | 北京小米移动软件有限公司 | Method, device and medium for acquiring positioning information |
CN111009012B (en) * | 2019-11-29 | 2023-07-28 | 四川沃洛佳科技有限公司 | Unmanned aerial vehicle speed measuring method based on computer vision, storage medium and terminal |
CN111009012A (en) * | 2019-11-29 | 2020-04-14 | 四川沃洛佳科技有限公司 | Unmanned aerial vehicle speed measurement method based on computer vision, storage medium and terminal |
CN111145339B (en) * | 2019-12-25 | 2023-06-02 | Oppo广东移动通信有限公司 | Image processing method and device, equipment and storage medium |
CN111145339A (en) * | 2019-12-25 | 2020-05-12 | Oppo广东移动通信有限公司 | Image processing method and device, equipment and storage medium |
CN111105695A (en) * | 2019-12-31 | 2020-05-05 | 智车优行科技(上海)有限公司 | Map making method and device, electronic equipment and computer readable storage medium |
CN111199584B (en) * | 2019-12-31 | 2023-10-20 | 武汉市城建工程有限公司 | Target object positioning virtual-real fusion method and device |
CN111199584A (en) * | 2019-12-31 | 2020-05-26 | 武汉市城建工程有限公司 | Target object positioning virtual-real fusion method and device |
CN111462029A (en) * | 2020-03-27 | 2020-07-28 | 北京百度网讯科技有限公司 | Visual point cloud and high-precision map fusion method and device and electronic equipment |
CN111462029B (en) * | 2020-03-27 | 2023-03-03 | 阿波罗智能技术(北京)有限公司 | Visual point cloud and high-precision map fusion method and device and electronic equipment |
CN113853577A (en) * | 2020-04-28 | 2021-12-28 | 深圳市大疆创新科技有限公司 | Image processing method and device, movable platform and control terminal thereof, and computer-readable storage medium |
CN111311685A (en) * | 2020-05-12 | 2020-06-19 | 中国人民解放军国防科技大学 | Motion scene reconstruction unsupervised method based on IMU/monocular image |
CN111311685B (en) * | 2020-05-12 | 2020-08-07 | 中国人民解放军国防科技大学 | Motion scene reconstruction unsupervised method based on IMU and monocular image |
CN111586360A (en) * | 2020-05-14 | 2020-08-25 | 佳都新太科技股份有限公司 | Unmanned aerial vehicle projection method, device, equipment and storage medium |
CN111586360B (en) * | 2020-05-14 | 2021-09-10 | 佳都科技集团股份有限公司 | Unmanned aerial vehicle projection method, device, equipment and storage medium |
CN111814731A (en) * | 2020-07-23 | 2020-10-23 | 科大讯飞股份有限公司 | Sitting posture detection method, device, equipment and storage medium |
CN111814731B (en) * | 2020-07-23 | 2023-12-01 | 科大讯飞股份有限公司 | Sitting posture detection method, device, equipment and storage medium |
CN114119885A (en) * | 2020-08-11 | 2022-03-01 | 中国电信股份有限公司 | Image feature point matching method, device and system and map construction method and system |
CN112215714A (en) * | 2020-09-08 | 2021-01-12 | 北京农业智能装备技术研究中心 | Rice ear detection method and device based on unmanned aerial vehicle |
CN112215714B (en) * | 2020-09-08 | 2024-05-10 | 北京农业智能装备技术研究中心 | Unmanned aerial vehicle-based rice spike detection method and device |
CN111968242B (en) * | 2020-09-11 | 2024-05-31 | 国家管网集团西南管道有限责任公司 | Pipe ditch measuring method and system for pipeline engineering construction |
CN111968242A (en) * | 2020-09-11 | 2020-11-20 | 中国石油集团西南管道有限公司 | Pipe ditch measuring method and system for pipeline engineering construction |
CN112419375B (en) * | 2020-11-18 | 2023-02-03 | 青岛海尔科技有限公司 | Feature point matching method and device, storage medium and electronic device |
CN112419375A (en) * | 2020-11-18 | 2021-02-26 | 青岛海尔科技有限公司 | Feature point matching method and device, storage medium and electronic device |
CN112613107A (en) * | 2020-12-26 | 2021-04-06 | 广东电网有限责任公司 | Method and device for determining construction progress of tower project, storage medium and equipment |
CN112819889B (en) * | 2020-12-30 | 2024-05-10 | 浙江大华技术股份有限公司 | Method and device for determining position information, storage medium and electronic device |
CN112819889A (en) * | 2020-12-30 | 2021-05-18 | 浙江大华技术股份有限公司 | Method and device for determining position information, storage medium and electronic device |
CN112634370A (en) * | 2020-12-31 | 2021-04-09 | 广州极飞科技有限公司 | Unmanned aerial vehicle dotting method, device, equipment and storage medium |
WO2022160790A1 (en) * | 2021-02-01 | 2022-08-04 | 华为技术有限公司 | Three-dimensional map construction method and apparatus |
CN112966718B (en) * | 2021-02-05 | 2023-12-19 | 深圳市优必选科技股份有限公司 | Image recognition method and device and communication equipment |
CN112966718A (en) * | 2021-02-05 | 2021-06-15 | 深圳市优必选科技股份有限公司 | Image identification method and device and communication equipment |
CN112819892B (en) * | 2021-02-08 | 2022-11-25 | 北京航空航天大学 | Image processing method and device |
CN112819892A (en) * | 2021-02-08 | 2021-05-18 | 北京航空航天大学 | Image processing method and device |
CN112950667B (en) * | 2021-02-10 | 2023-12-22 | 中国科学院深圳先进技术研究院 | Video labeling method, device, equipment and computer readable storage medium |
WO2022170844A1 (en) * | 2021-02-10 | 2022-08-18 | 中国科学院深圳先进技术研究院 | Video annotation method, apparatus and device, and computer readable storage medium |
CN112950667A (en) * | 2021-02-10 | 2021-06-11 | 中国科学院深圳先进技术研究院 | Video annotation method, device, equipment and computer readable storage medium |
CN112907550A (en) * | 2021-03-01 | 2021-06-04 | 创新奇智(成都)科技有限公司 | Building detection method and device, electronic equipment and storage medium |
CN112907550B (en) * | 2021-03-01 | 2024-01-19 | 创新奇智(成都)科技有限公司 | Building detection method and device, electronic equipment and storage medium |
CN112950715A (en) * | 2021-03-04 | 2021-06-11 | 杭州迅蚁网络科技有限公司 | Visual positioning method and device for unmanned aerial vehicle, computer equipment and storage medium |
CN112950715B (en) * | 2021-03-04 | 2024-04-30 | 杭州迅蚁网络科技有限公司 | Visual positioning method and device of unmanned aerial vehicle, computer equipment and storage medium |
CN112991448A (en) * | 2021-03-22 | 2021-06-18 | 华南理工大学 | Color histogram-based loop detection method and device and storage medium |
CN112991448B (en) * | 2021-03-22 | 2023-09-26 | 华南理工大学 | Loop detection method, device and storage medium based on color histogram |
CN113326769A (en) * | 2021-05-28 | 2021-08-31 | 北京三快在线科技有限公司 | High-precision map generation method, device, equipment and storage medium |
CN113628286B (en) * | 2021-08-09 | 2024-03-22 | 咪咕视讯科技有限公司 | Video color gamut detection method, device, computing equipment and computer storage medium |
CN113673388A (en) * | 2021-08-09 | 2021-11-19 | 北京三快在线科技有限公司 | Method and device for determining position of target object, storage medium and equipment |
CN113628286A (en) * | 2021-08-09 | 2021-11-09 | 咪咕视讯科技有限公司 | Video color gamut detection method and device, computing equipment and computer storage medium |
CN113793414A (en) * | 2021-08-17 | 2021-12-14 | 中科云谷科技有限公司 | Method, processor and device for establishing three-dimensional view of industrial field environment |
WO2023030062A1 (en) * | 2021-09-01 | 2023-03-09 | 中移(成都)信息通信科技有限公司 | Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program |
CN114743116A (en) * | 2022-04-18 | 2022-07-12 | 蜂巢航宇科技(北京)有限公司 | Barracks patrol scene-based unattended special load system and method |
CN114596363A (en) * | 2022-05-10 | 2022-06-07 | 北京鉴智科技有限公司 | Three-dimensional point cloud labeling method and device and terminal |
CN114596363B (en) * | 2022-05-10 | 2022-07-22 | 北京鉴智科技有限公司 | Three-dimensional point cloud marking method and device and terminal |
CN117115414A (en) * | 2023-10-23 | 2023-11-24 | 西安羚控电子科技有限公司 | GPS-free unmanned aerial vehicle positioning method and device based on deep learning |
CN117115414B (en) * | 2023-10-23 | 2024-02-23 | 西安羚控电子科技有限公司 | GPS-free unmanned aerial vehicle positioning method and device based on deep learning |
CN117395377A (en) * | 2023-12-06 | 2024-01-12 | 上海海事大学 | Multi-view fusion-based coastal bridge sea side safety monitoring method, system and medium |
CN117395377B (en) * | 2023-12-06 | 2024-03-22 | 上海海事大学 | Multi-view fusion-based coastal bridge sea side safety monitoring method, system and medium |
Also Published As
Publication number | Publication date |
---|---|
WO2020186678A1 (en) | 2020-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110047142A (en) | No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium | |
Zhao et al. | Detection, tracking, and geolocation of moving vehicle from uav using monocular camera | |
CN109974693A (en) | Unmanned aerial vehicle localization method, device, computer equipment and storage medium | |
CN111156984B (en) | Monocular visual-inertial SLAM method for dynamic scenes | |
CN104062973B (en) | Mobile robot SLAM method based on landmark recognition | |
CN110047108B (en) | Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium | |
CN109102547A (en) | Robot grasping pose estimation method based on an object-recognition deep learning model | |
CN105930819A (en) | Real-time urban traffic light recognition system based on monocular vision and integrated GPS navigation | |
CN109472828B (en) | Positioning method, positioning device, electronic equipment and computer readable storage medium | |
CN112233177B (en) | Unmanned aerial vehicle pose estimation method and system | |
CN108051002A (en) | Transport vehicle spatial localization method and system based on inertial-measurement-aided vision | |
CN106529538A (en) | Method and device for positioning aircraft | |
US10347001B2 (en) | Localizing and mapping platform | |
CN112734852A (en) | Robot mapping method and device and computing equipment | |
KR102308456B1 (en) | Tree species detection system based on LiDAR and RGB camera and Detection method of the same | |
CN112101160B (en) | Binocular semantic SLAM method for automatic driving scene | |
CN109781092A (en) | Mobile robot localization and mapping method in hazardous chemical accident scenarios | |
CN110672088A (en) | Unmanned aerial vehicle autonomous navigation method imitating the terrain-perception homing mechanism of homing pigeons | |
CN109886356A (en) | Target tracking method based on a three-branch neural network | |
US20220237908A1 (en) | Flight mission learning using synthetic three-dimensional (3d) modeling and simulation | |
Jiao et al. | 2-entity random sample consensus for robust visual localization: Framework, methods, and verifications | |
CN111812978B (en) | Cooperative SLAM method and system for multiple unmanned aerial vehicles | |
CN106846367A (en) | Moving object detection method for complex dynamic scenes based on a motion-constrained optical flow method | |
CN112652003A (en) | Three-dimensional point cloud registration method based on RANSAC measure optimization | |
CN117036989A (en) | Miniature unmanned aerial vehicle target recognition and tracking control method based on computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190723 ||