CN107369183A - MAR-oriented tracking registration method and system based on graph-optimized SLAM - Google Patents
MAR-oriented tracking registration method and system based on graph-optimized SLAM
- Publication number
- CN107369183A CN107369183A CN201710581403.3A CN201710581403A CN107369183A CN 107369183 A CN107369183 A CN 107369183A CN 201710581403 A CN201710581403 A CN 201710581403A CN 107369183 A CN107369183 A CN 107369183A
- Authority
- CN
- China
- Prior art keywords
- frame image
- feature point
- current keyframe
- camera
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Probability & Statistics with Applications (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an MAR-oriented tracking registration method and system based on graph-optimized SLAM. The method includes: acquiring an environment depth map; determining a current keyframe image from the environment depth map according to a first preset algorithm; determining the position of the camera according to the current keyframe image and the map already built, and updating the built map according to the position of the camera; extracting feature points from the current keyframe image according to a second preset algorithm; matching the feature points in the current keyframe image against those in the previous keyframe image to obtain matched feature points; and obtaining the pose of the camera according to the matched feature points and the trajectory graph already built, and updating the built trajectory graph according to the pose of the camera. The application realizes tracking registration of natural scenes on mobile terminals and improves MAR tracking registration performance.
Description
Technical field
The present invention relates to the field of AR technology, and in particular to an MAR-oriented tracking registration method and system based on graph-optimized SLAM.
Background art
AR (Augmented Reality) is a technology that grew out of virtual reality. Its purpose is to accurately add computer-generated virtual objects to a real scene, achieving a seamless combination of the real scene and the virtual scene and thereby enhancing the real scene. Three-dimensional tracking registration has always been the core research topic in the AR field; its purpose is to accurately compute the pose and position of the camera so that virtual objects can be correctly placed in the real scene.
MAR (Mobile Augmented Reality) refers to augmented reality systems realized on mobile terminals such as iPads, smartphones, and portable computers. Because conventional AR systems mostly use desktop computers, large workstations, and the like as their operating platforms, they limit the user's range of activity and cannot be applied in outdoor environments. The rapid development of mobile terminals and network technology has made it possible for AR technology to break free of cumbersome equipment such as PCs and workstations, promoting the emergence and development of MAR; the demand for tracking registration of natural scenes on mobile terminals has therefore become increasingly urgent. However, there is as yet no mature three-dimensional tracking registration method for mobile terminals in the prior art.
Therefore, how to provide a scheme that solves the above technical problem is a problem that those skilled in the art currently need to solve.
Summary of the invention
It is an object of the present invention to provide an MAR-oriented tracking registration method and system based on graph-optimized SLAM, realizing tracking registration of natural scenes on mobile terminals and improving MAR tracking registration performance.
In order to solve the above technical problem, the invention provides an MAR-oriented tracking registration method based on graph-optimized SLAM, including:
Acquiring an environment depth map;
Determining a current keyframe image from the environment depth map according to a first preset algorithm;
Determining the position of the camera according to the current keyframe image and the map already built, and updating the built map according to the position of the camera;
Extracting feature points from the current keyframe image according to a second preset algorithm;
Matching the feature points in the current keyframe image against the feature points in the previous keyframe image to obtain matched feature points;
Obtaining the pose of the camera according to the matched feature points and the trajectory graph already built, and updating the built trajectory graph according to the pose of the camera.
Preferably, the method further includes:
Clustering the features in all keyframe images preceding the current keyframe image into a vocabulary tree using K-means clustering, obtaining the visual words corresponding to the features in all preceding keyframe images, and building a bag-of-words model from the visual words;
Obtaining the visual words corresponding to the features in the current keyframe image; computing, via a TF-IDF model, the similarity between each visual word of the current keyframe image and all visual words in the bag-of-words model; determining the highest similarity; and judging whether the highest similarity exceeds a preset value. If so, the position corresponding to the current keyframe image is determined to be the position corresponding to the keyframe image containing its most similar feature.
Preferably, the first preset algorithm is a method combining time-domain selection and visual-content-based selection, and includes the following constraints:
The determined current keyframe image matches at least a first preset number of feature points with the previous keyframe image;
The feature matching rate between the determined current keyframe image and the previous keyframe image does not exceed a first preset threshold;
At least a second preset number of frames lie between the determined current keyframe image and the previous keyframe image.
Preferably, the first preset number is 50, the first preset threshold is 95%, and the second preset number is 20.
Preferably, the second preset algorithm includes:
Uniformly splitting the current keyframe image into M*N grids, all grids being denoted {h11, h12, ..., h1n, h21, h22, ..., hmn}, where M and N are integers not less than 2;
Judging whether feature points can be detected in each grid: if no feature point is detected in grid hik, this grid is no longer considered; otherwise, judging whether the number of feature points in grid hik exceeds a second preset threshold j; if so, the points are ranked by Harris corner detector response and the best j of them are selected as detection points, with the remainder kept as candidate detection points; otherwise, all feature points in grid hik are taken as detection points, where 1 ≤ i ≤ M and 1 ≤ k ≤ N;
When the total number of feature points extracted from all grids meets a third preset number, feature extraction ends; otherwise, feature points making up the required number are extracted at random from the candidate detection points and feature extraction ends.
Preferably, the process of matching the feature points in the current keyframe image against the feature points in the previous keyframe image to obtain matched feature points is specifically:
Judging whether the Hamming distance between a feature point in the current keyframe image and a feature point in the previous keyframe image is less than a third preset threshold; if so, they form a matched feature point pair.
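As a rough illustration of the criterion above, the following sketch matches integer-encoded binary descriptors (such as ORB produces) by Hamming distance; the threshold of 64 bits is an assumed placeholder, not a value given in this document.

```python
def hamming(d1: int, d2: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")

def match_features(curr_desc, prev_desc, threshold=64):
    """For each descriptor of the current keyframe, find the nearest
    descriptor of the previous keyframe and keep the pair when its
    Hamming distance falls below the threshold (the 'third preset
    threshold' of the text)."""
    matches = []
    for i, d1 in enumerate(curr_desc):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(prev_desc)),
                      key=lambda t: t[1])
        if dist < threshold:
            matches.append((i, j, dist))
    return matches
```

For example, `match_features([0b1111, 0b0000], [0b1110, 0b1000], threshold=2)` pairs each descriptor with its one-bit-away neighbour.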
In order to solve the above technical problem, the invention further provides an MAR-oriented tracking registration system based on graph-optimized SLAM, including:
A camera for acquiring an environment depth map;
A keyframe determining unit for determining a current keyframe image from the environment depth map according to a first preset algorithm;
A position determining unit for determining the position of the camera according to the current keyframe image and the map already built, and updating the built map according to the position of the camera;
A feature point extraction unit for extracting feature points from the current keyframe image according to a second preset algorithm;
A matching unit for matching the feature points in the current keyframe image against the feature points in the previous keyframe image to obtain matched feature points;
A pose determining unit for obtaining the pose of the camera according to the matched feature points and the trajectory graph already built, and updating the built trajectory graph according to the pose of the camera.
Preferably, the camera is a Kinect camera.
Preferably, the system further includes:
A word setting unit for clustering the features in all keyframe images preceding the current keyframe image into a vocabulary tree using K-means clustering, obtaining the visual words corresponding to the features in all preceding keyframe images, and building a bag-of-words model from the visual words;
A closed-loop detection unit for obtaining the visual words corresponding to the features in the current keyframe image, computing via a TF-IDF model the similarity between each visual word of the current keyframe image and all visual words in the bag-of-words model, determining the highest similarity, and judging whether the highest similarity exceeds a preset value; if so, the position corresponding to the current keyframe image is determined to be the position corresponding to the keyframe image containing its most similar feature.
Preferably, the first preset algorithm is a method combining time-domain selection and visual-content-based selection, and includes the following constraints:
The determined current keyframe image matches at least a first preset number of feature points with the previous keyframe image;
The feature matching rate between the determined current keyframe image and the previous keyframe image does not exceed a first preset threshold;
At least a second preset number of frames lie between the determined current keyframe image and the previous keyframe image.
The invention provides an MAR-oriented tracking registration method based on graph-optimized SLAM, including: acquiring an environment depth map; determining a current keyframe image from the environment depth map according to a first preset algorithm; determining the position of the camera according to the current keyframe image and the map already built, and updating the built map according to the position of the camera; extracting feature points from the current keyframe image according to a second preset algorithm; matching the feature points in the current keyframe image against those in the previous keyframe image to obtain matched feature points; and obtaining the pose of the camera according to the matched feature points and the trajectory graph already built, and updating the built trajectory graph according to the pose of the camera. This application provides a graph-optimized SLAM method suitable for MAR and uses it as the three-dimensional tracking registration method of MAR, meeting the real-time and rendering requirements of mobile terminals, realizing tracking registration of natural scenes on mobile terminals, and improving MAR tracking registration performance.
The invention further provides an MAR-oriented tracking registration system based on graph-optimized SLAM, which has the same beneficial effects as the above method.
Brief description of the drawings
In order to more clearly illustrate the technical schemes in the embodiments of the present invention, the accompanying drawings needed in the prior art and the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a process flow chart of an MAR-oriented tracking registration method based on graph-optimized SLAM provided by the invention;
Fig. 2 is a structural schematic diagram of an MAR-oriented tracking registration system based on graph-optimized SLAM provided by the invention.
Detailed description of the embodiments
The core of the present invention is to provide an MAR-oriented tracking registration method and system based on graph-optimized SLAM, realizing tracking registration of natural scenes on mobile terminals and improving MAR tracking registration performance.
To make the purpose, technical schemes, and advantages of the embodiments of the present invention clearer, the technical schemes in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Please refer to Fig. 1, which is a process flow chart of an MAR-oriented tracking registration method based on graph-optimized SLAM provided by the invention. The method includes:
Step S11: Acquire an environment depth map;
It first needs to be explained that SLAM (simultaneous localization and mapping) algorithms combine autonomous map building with self-localization: the main idea is to localize through the map information already created and to update the map according to the localization result. This application uses a graph-optimized SLAM algorithm, regarding the camera poses as the nodes of a trajectory graph and expressing the spatial constraint relations between keyframes as edges, building a trajectory graph based on relative camera pose estimation, so that the pose of the camera can subsequently be determined through the built trajectory graph.
Specifically, to carry out three-dimensional tracking registration, an environment depth map must first be acquired. In the AR field, the environment depth map is collected by a camera and consists of individual frames. This application acquires the environment depth map with a Kinect camera, which includes an RGB camera for collecting RGB images and an IR camera for collecting infrared depth images; the environment depth map is the superposition of the RGB image and the infrared depth image. Specifically, the SLAM algorithm requires the RGB image and the infrared depth image collected by the cameras to carry identical timestamps, and the external parameters between the RGB camera and the IR camera take the form of a rotation and translation relation, so the RGB image and the infrared depth image must be registered, such that indexing any pixel of the RGB image accurately yields the depth value at its position.
Specifically, the OpenNI (Open Natural Interaction, an open-source software library for processing depth-camera data) library can first be used for the synchronized acquisition of the infrared depth map and the RGB image and for their registration, and the frame data in video format is converted into the image format of OpenCV (an open-source vision library). After the depth of each pixel is obtained through calibration, the camera itself is calibrated; the purpose of camera calibration is to find the camera's intrinsic matrix, so that mutual conversion between image pixel points and three-dimensional points can be carried out.
Of course, other depth cameras can also be used here to acquire the environment depth map; this application places no particular limit on this, which is determined according to actual conditions.
The Kinect camera is calibrated through RGB-camera calibration, IR-camera calibration, and calibration of the rigid-body transformation between the IR camera and the RGB camera. The flow for computing the three-dimensional coordinates of a spatial point P in the color-camera coordinate system is as follows: the disparity d of point P in the infrared-camera coordinate system is obtained from the Kinect.
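The pixel-to-3D conversion that this calibration enables can be sketched with the standard pinhole camera model; the intrinsic parameters below are made-up placeholder values, not Kinect calibration results from this document.

```python
def back_project(u, v, depth, fx, fy, cx, cy):
    """Map an image pixel (u, v) with known depth (metres) to a 3D
    point in the camera coordinate system via the pinhole model and
    the intrinsic parameters (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics for a 640x480 sensor (placeholder values).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
point = back_project(319.5, 239.5, 2.0, fx, fy, cx, cy)  # principal point
```

A pixel at the principal point back-projects onto the optical axis, so `point` here is (0.0, 0.0, 2.0).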
Step S12: Determine the current keyframe image from the environment depth map according to the first preset algorithm;
Specifically, in the graph-optimized SLAM algorithm the constraint relations between the chosen keyframe images serve as the edges of the graph, so which keyframe images are chosen has a great influence on data association. In mobile augmented reality, real-time performance directly affects the user's experience, so the number of images must be chosen rationally. Considering that adjacent frames are likely images of the same scene, the similarity between them is high; this strong local temporal similarity produces redundant data in the collected images, and computing over all of these redundant images would waste considerable resources.
Therefore, while meeting the real-time requirements of MAR, this application does not select all scene images but determines certain keyframe images from all the environment depth maps, so that the amount of computation is reduced and resources are saved.
Step S13: Determine the position of the camera according to the current keyframe image and the map already built, and update the built map according to the position of the camera;
Specifically, after the current keyframe image is determined, the current position of the camera can be determined from the keyframe image and the map already built, and the built map is updated according to the camera's current position, realizing the SLAM algorithm.
Step S14: Extract feature points from the current keyframe image according to the second preset algorithm;
After the current keyframe image is determined, feature points are extracted from it. Natural-environment point features are used as landmarks to describe the map, without auxiliary means such as hand-made markers, which suits most MAR scenes in large-scale complex environments.
Step S15: Match the feature points in the current keyframe image against the feature points in the previous keyframe image to obtain matched feature points;
Specifically, after the feature points of the current keyframe image are determined, the current keyframe image is matched. For the subsequent camera pose estimation and keyframe extraction, feature matching and tracking must be performed on the extracted feature points; this application matches the feature points in the current keyframe image against those in the previous keyframe image to obtain the matched feature points.
Step S16: Obtain the pose of the camera according to the matched feature points and the trajectory graph already built, and update the built trajectory graph according to the pose of the camera.
Specifically, the trajectory graph regards the camera poses as its nodes and expresses the spatial constraint relations between keyframes as its edges; the current camera pose is estimated through the trajectory graph already built and the matched feature points.
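The node-and-edge representation described above can be sketched as a minimal data structure; this is a general illustration of the pose-graph idea, with poses simplified to (x, y, heading) rather than the full 6-DoF transforms a real system would store.

```python
class PoseGraph:
    """Minimal pose graph: nodes are keyframe camera poses (here
    simplified to (x, y, heading)), and each edge stores the
    relative-motion constraint measured between two keyframes."""

    def __init__(self):
        self.nodes = {}   # keyframe id -> pose
        self.edges = []   # (id_from, id_to, relative_motion)

    def add_keyframe(self, kf_id, pose):
        self.nodes[kf_id] = pose

    def add_constraint(self, id_from, id_to, relative_motion):
        # A spatial constraint between keyframes becomes a graph edge.
        self.edges.append((id_from, id_to, relative_motion))

graph = PoseGraph()
graph.add_keyframe(0, (0.0, 0.0, 0.0))
graph.add_keyframe(1, (1.0, 0.0, 0.0))
graph.add_constraint(0, 1, (1.0, 0.0, 0.0))  # odometry edge between keyframes
```

A graph optimizer would then adjust the node poses to best satisfy all edge constraints at once.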
The invention provides an MAR-oriented tracking registration method based on graph-optimized SLAM, including: acquiring an environment depth map; determining a current keyframe image from the environment depth map according to a first preset algorithm; determining the position of the camera according to the current keyframe image and the map already built, and updating the built map according to the position of the camera; extracting feature points from the current keyframe image according to a second preset algorithm; matching the feature points in the current keyframe image against those in the previous keyframe image to obtain matched feature points; and obtaining the pose of the camera according to the matched feature points and the trajectory graph already built, and updating the built trajectory graph according to the pose of the camera. This application provides a graph-optimized SLAM method suitable for MAR and uses it as the three-dimensional tracking registration method of MAR, meeting the real-time and rendering requirements of mobile terminals, realizing tracking registration of natural scenes on mobile terminals, and improving MAR tracking registration performance.
As a preferred embodiment, the method further includes:
Clustering the features in all keyframe images preceding the current keyframe image into a vocabulary tree using K-means clustering, obtaining the visual words corresponding to the features in all preceding keyframe images, and building a bag-of-words model from the visual words;
Obtaining the visual words corresponding to the features in the current keyframe image; computing via a TF-IDF model the similarity between each visual word of the current keyframe image and all visual words in the bag-of-words model; determining the highest similarity; and judging whether the highest similarity exceeds a preset value. If so, the position corresponding to the current keyframe image is determined to be the position corresponding to the keyframe image containing its most similar feature.
Specifically, it is considered that during simultaneous localization and mapping, owing to the cumulative error of the camera, relying on pose estimation alone will cause the loop to fail to close, i.e., it cannot be judged whether the camera has returned to a region already explored. This problem is particularly important, and particularly hard to detect accurately, during outdoor large-scale tracking registration.
Correct closed-loop information can reduce the accumulated error of the system and thereby yield a consistent optimized map, while erroneous closed-loop information seriously interferes with the subsequent graph optimization. Based on this, this application provides closed-loop detection based on a bag-of-words model.
Specifically, the main idea of the bag-of-words model is that, after image feature extraction, a vocabulary tree is built by K-means clustering to obtain the visual words of the images, turning continuously varying features into discretized "words", and similarity judgment and matching strategies between images are then carried out to complete closed-loop detection. The workflow of the algorithm is:
Step1: Arbitrarily select k objects from the n binary-descriptor vectors as initial cluster centres;
Step2: Assign each remaining object to the nearest cluster according to its similarity (Hamming distance) to the cluster centres;
Step3: Repeat the above steps until the sum-of-squared-error criterion function E = Σ_i Σ_{x_j ∈ S_i} ||x_j − u_i||² satisfies the stopping condition, splitting the data into 4 classes, where x_j is a data vector, S_i is the cluster in which x_j resides, and u_i is the mean of cluster S_i.
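The three steps above can be sketched as follows for binary descriptors compared by Hamming distance; this is a minimal illustration in which the first k descriptors serve as the "arbitrary" initial centres and cluster centres are recomputed by a per-bit majority vote, an assumption the document does not spell out.

```python
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def kmeans_binary(descriptors, k, bits=8, iters=10):
    """K-means over binary descriptors (integers) using Hamming
    distance as the similarity measure (Step2); each new centre takes
    the majority bit of its cluster (Step3)."""
    centres = list(descriptors[:k])               # Step1: initial centres
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for d in descriptors:                     # Step2: nearest centre
            i = min(range(k), key=lambda c: hamming(d, centres[c]))
            clusters[i].append(d)
        for i, cl in enumerate(clusters):         # Step3: recompute centres
            if not cl:
                continue
            centre = 0
            for b in range(bits):                 # majority vote per bit
                if 2 * sum((d >> b) & 1 for d in cl) > len(cl):
                    centre |= 1 << b
            centres[i] = centre
    return centres, clusters
```

With descriptors such as 0b00000000, 0b11111111 and their one-bit variants, the two clusters separate the low-bit and high-bit groups.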
After a new image is added to the database, the vocabulary tree is grown. Once the vocabulary tree is built, the TF-IDF (Term Frequency-Inverse Document Frequency) model is used to judge the similarity between images. The main idea of TF-IDF is: if some visual word occurs frequently in one keyframe image but seldom in other keyframes, this visual word is considered to have good class discrimination ability and to be suitable for classification.
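The TF-IDF weighting just described can be sketched as follows; the cosine similarity used here to compare two keyframes is a common choice assumed for illustration, not specified by the document.

```python
import math

def tfidf_vector(words, keyframes):
    """TF-IDF weight of each visual word in one keyframe: term
    frequency inside the keyframe times log inverse document frequency
    over the keyframe database. A word appearing in every keyframe
    gets weight 0 (no discrimination ability)."""
    weights = {}
    for w in set(words):
        tf = words.count(w) / len(words)
        df = sum(1 for kf in keyframes if w in kf)
        idf = math.log(len(keyframes) / max(df, 1))
        weights[w] = tf * idf
    return weights

def similarity(a, b):
    """Cosine similarity between two sparse TF-IDF vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0
```

A word that is rare across the database ("tree" below) ends up weighted higher than a common one ("car"), which is exactly the discrimination property the text describes.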
It can be understood that each visual word is labeled with the keyframe images where it occurs, for use in subsequent closed-loop detection.
Specifically, suppose here that the current keyframe image contains the visual word "car" and it is found that an earlier keyframe image also contains the visual word "car"; it can then be judged whether these two labels correspond to one and the same feature, similar to the matched feature points mentioned above. If so, it shows that the user is in a region that was visited before.
In addition, whether the two labels correspond to one feature can be judged here by Hamming distance: if the Hamming distance between the two labels is less than a certain preset value, the two labels correspond to one feature.
As a preferred embodiment, the first preset algorithm is a method combining time-domain selection and visual-content-based selection, and includes the following constraints:
The determined current keyframe image matches at least a first preset number of feature points with the previous keyframe image;
The feature matching rate between the determined current keyframe image and the previous keyframe image does not exceed a first preset threshold;
At least a second preset number of frames lie between the determined current keyframe image and the previous keyframe image.
Specifically, when determining keyframe images, this application selects the method that combines time-domain selection with visual-content-based selection: whether to extract the image of the current moment as the scene representation is decided by computing the content change between images, with the time-domain selection method used as an aid.
Specifically, if F_{k-1} is the previous keyframe image, the selection criterion for the current keyframe image F_k may be defined as D(F_k, F_{k-1}) > T, with at least n frames between the two keyframes, where k denotes the current keyframe image, D is the defined metric function of image-content difference, T is the set similarity threshold, and n is defined as the minimum number of frames between two keyframe images. The selection steps for the current keyframe image are then as follows:
Step1: If the extracted feature points are evenly distributed and sufficient in number, initialization succeeds and the frame serves as the first keyframe image;
Step2: To estimate the camera's motion model, a certain number of matched points must be guaranteed: the current keyframe image matches at least a first preset number (for example, 50) of feature points with the previous keyframe image;
Step3: To ensure that enough new information is extracted, the feature matching rate between the current keyframe image and the previous keyframe image must not exceed a first preset threshold (for example, 95%);
Step4: To ensure a certain dissimilarity between two keyframes, by the time-domain selection method, the chosen current keyframe image must lie at least a second preset number (for example, 20) of frames after the insertion of the previous keyframe image, under the condition that matching is satisfied.
As a preferred embodiment, the first preset number is 50, the first preset threshold is 95%, and the second preset number is 20.
Of course, the first preset number, the first preset threshold, and the second preset number here can also take other values; this application places no particular limit on them.
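Under the example values above (50 matched points, 95% matching rate, 20 frames), the three keyframe constraints can be sketched as a single test; the function name and argument layout are illustrative, not taken from the patent.

```python
def is_keyframe(n_matches, match_rate, frames_since_last,
                min_matches=50, max_match_rate=0.95, min_gap=20):
    """Keyframe test combining visual-content and time-domain
    constraints: enough matches for motion estimation (Step2), low
    enough overlap to add new information (Step3), and a minimum
    frame gap since the previous keyframe (Step4)."""
    return (n_matches >= min_matches
            and match_rate <= max_match_rate
            and frames_since_last >= min_gap)
```

A frame failing any one of the three constraints is rejected, e.g. `is_keyframe(60, 0.99, 25)` is False because the 99% overlap adds too little new content.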
As a preferred embodiment, the second preset algorithm includes:
Uniformly splitting the current keyframe image into M*N grids, all grids being denoted {h11, h12, ..., h1n, h21, h22, ..., hmn}, where M and N are integers not less than 2;
Judging whether feature points can be detected in each grid: if no feature point is detected in grid hik, this grid is no longer considered; otherwise, judging whether the number of feature points in grid hik exceeds a second preset threshold j; if so, the points are ranked by Harris corner detector response and the best j of them are selected as detection points, with the remainder kept as candidate detection points; otherwise, all feature points in grid hik are taken as detection points, where 1 ≤ i ≤ M and 1 ≤ k ≤ N;
When the total number of feature points extracted from all grids meets a third preset number, feature extraction ends; otherwise, feature points making up the required number are extracted at random from the candidate detection points and feature extraction ends.
Specifically, MAR must satisfy real-time requirements. In theory, the more feature points provided, the more accurate the resulting motion estimate, while too few feature points may make the estimate inaccurate or even cause the algorithm to fail. In practice, however, an excessive number of feature points inflates the computational load and severely degrades the system's real-time performance. Therefore, while guaranteeing the number of feature points, they should cover the whole image region as much as possible, so that the SLAM algorithm makes full use of the acquired image information. To meet practical application needs, the present invention uses an ORB feature extraction algorithm based on region segmentation:
Step 1: Uniformly split the determined current keyframe image into sub-regions of a specified size, called grids (Grid), so that the image is divided into M*N grid regions and the feature points are distributed across these regions. Numbering the generated grids in order, all grids can be denoted {h11, h12, ..., h1N, h21, h22, ..., hMN};
Step 2: If no feature point is detected inside a grid hik, mark it as a region of no interest and no longer consider it. If nik candidate feature points are detected inside grid hik, mark it as a region of interest. If nik ≤ j (the threshold j usually needs to be set by the user), all candidate feature points in that grid are taken as detection points; if nik > j, the keypoints are ranked with the Harris corner detector, the best j are selected as detection points, and the rest are kept as candidate detection points kik;
Step 3: When the number of selected feature points reaches the required quantity, feature selection ends. If the number of selected features is insufficient, feature points are drawn at random from the candidate detection points kik to make up the required number, which ends the feature extraction process.
In summary, the present application uniformly splits the determined current keyframe image into multiple grids and extracts feature points on that basis. On the one hand, extraction is convenient and efficient; on the other hand, the extracted feature points are more uniformly distributed, which improves the precision of the subsequent motion estimation.
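The three steps of the region-segmentation selection can be sketched as follows. This is a minimal illustrative sketch, not the patent's code: each point carries a precomputed corner response standing in for the Harris score, and all names and defaults are assumptions:

```python
import random

def grid_select_features(points, img_w, img_h, M=4, N=4, j=5, target=60, seed=0):
    """Select features spread over an M*N grid.

    points -- list of (x, y, response) tuples; `response` stands in for the
              Harris corner score used to rank keypoints within a grid cell.
    """
    # Step 1: assign each point to its grid cell h_ik
    cells = {}
    for x, y, r in points:
        i = min(int(y * M / img_h), M - 1)   # row index of the grid cell
        k = min(int(x * N / img_w), N - 1)   # column index
        cells.setdefault((i, k), []).append((x, y, r))

    # Step 2: per cell, keep the best j points; the rest become candidates
    selected, candidates = [], []
    for pts in cells.values():
        pts.sort(key=lambda p: p[2], reverse=True)  # rank by corner response
        selected.extend(pts[:j])                    # best j become detection points
        candidates.extend(pts[j:])                  # remainder become candidates

    # Step 3: if the total falls short, top up at random from the candidates
    if len(selected) < target:
        random.Random(seed).shuffle(candidates)
        selected.extend(candidates[:target - len(selected)])
    return selected[:target]
```

Empty cells simply never appear in `cells`, which matches discarding the "regions of no interest" above.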
As a preferred embodiment, the process of matching the feature points in the current keyframe image against the feature points in the previous keyframe image to obtain matched feature points is specifically:
judging whether the Hamming distance between a feature point in the current keyframe image and a feature point in the previous keyframe image is less than a third predetermined threshold; if so, the pair is a matched feature point.
Specifically, the application uses the Hamming distance as the feature-point matching criterion. ORB's binary-string descriptors are represented in Hamming space, and likewise the pair with the smallest Hamming distance is selected as the most similar pair. After obtaining ORB's n-dimensional binary descriptors (here n is 256), let K1 = x0 x1 ... x255 and K2 = y0 y1 ... y255 denote the descriptors of the two images.
The similarity of two ORB feature descriptors is characterized by the Hamming distance, i.e. the sum of the bitwise XORs, denoted D(K1, K2):
D(K1, K2) = Σ(i=0..255) (xi ⊕ yi)
The smaller D(K1, K2) is, the higher the similarity. As for matching complexity, the Hamming distance only requires XOR operations on corresponding bits, so its complexity is lower than that of the Euclidean distance.
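The distance D(K1, K2) and the threshold test can be written directly over the descriptor bytes. A minimal sketch: a 256-bit ORB descriptor is taken as 32 bytes, and the default threshold of 64 bits is an illustrative stand-in for the patent's unspecified "third predetermined threshold":

```python
def hamming_distance(d1: bytes, d2: bytes) -> int:
    """Hamming distance between two binary descriptors (e.g. 256-bit ORB
    descriptors stored as 32 bytes): XOR each byte pair and count set bits."""
    assert len(d1) == len(d2)
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def is_match(d1: bytes, d2: bytes, threshold: int = 64) -> bool:
    # smaller distance means higher similarity; `threshold` is illustrative
    return hamming_distance(d1, d2) < threshold
```

For instance, descriptors differing in a single byte `0x0f` are 4 bits apart and would match under this threshold.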
In summary, the present invention proposes a graph-optimization-based SLAM three-dimensional tracking registration method for MAR. The application scene is an unmarked natural environment; the depth information of each frame is obtained by a Kinect depth camera, and the proposed ORB feature extraction and matching obtains the relative pose relation of the camera at two moments from the same feature points in the image coordinates at those moments. Camera poses are treated as nodes in the map, the inter-frame control constraints are expressed as edges, and a trajectory map based on the camera's relative pose estimates is constructed, yielding the inter-frame data association and thereby completing simultaneous localization and map construction.
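The graph structure just described (poses as nodes, inter-frame relative-pose constraints as edges) can be sketched in a few lines. This is an illustrative data structure only, with made-up names; a real system would hand such a graph to a nonlinear solver such as g2o for optimization:

```python
class PoseGraph:
    """Trajectory graph: keyframe poses are nodes, relative-pose constraints are edges."""

    def __init__(self):
        self.nodes = {}   # keyframe id -> pose (placeholder representation)
        self.edges = []   # (id_from, id_to, relative_pose) constraints

    def add_keyframe(self, kf_id, pose):
        self.nodes[kf_id] = pose

    def add_constraint(self, i, j, relative_pose):
        # an edge encodes the measured transform between keyframes i and j;
        # a loop-closure edge is simply a constraint between non-adjacent frames
        self.edges.append((i, j, relative_pose))
```

Sequential edges come from frame-to-frame matching, while a detected closed loop adds an extra edge between distant keyframes, which is what the graph optimization then exploits.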
Referring to Fig. 2, Fig. 2 is a structural representation of a tracking registration system based on graph-optimization SLAM for MAR provided by the present invention. The system includes:
a video camera 1, for obtaining an environment depth map;
a keyframe determining unit 2, for determining the current keyframe image from the environment depth map according to a first preset algorithm;
a position determination unit 3, for determining the position of the camera according to the current keyframe image and the built map, and updating the built map according to the position of the camera;
a feature point extraction unit 4, for extracting feature points from the current keyframe image according to a second preset algorithm;
a matching unit 5, for matching the feature points in the current keyframe image with the feature points in the previous keyframe image to obtain matched feature points;
a pose determining unit 6, for obtaining the pose of the camera according to the matched feature points and the built trajectory map, and updating the built trajectory map according to the pose of the camera.
As a preferred embodiment, the video camera 1 is a Kinect camera.
As a preferred embodiment, the system also includes:
a word setting unit, for clustering the features in all keyframe images before the current keyframe image into a vocabulary tree by K-means clustering, obtaining the visual words corresponding to the features in the previous keyframe images, and obtaining a bag-of-words model from the visual words;
a closed-loop detection unit, for obtaining the visual words corresponding to the features in the current keyframe image, calculating via a TF-IDF model the similarity between the visual words corresponding to the features in the current keyframe image and each of the visual words in the bag-of-words model, determining the highest similarity, and judging whether the highest similarity exceeds a preset value; if so, the position corresponding to the current keyframe image is determined to be the position corresponding to the keyframe image containing its most similar feature.
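The TF-IDF scoring used by the closed-loop detection unit can be sketched as follows. A minimal illustration under assumptions: visual words are plain tokens, the inverse-document-frequency table `idf` is precomputed from the vocabulary tree, and keyframes are compared by cosine similarity of their TF-IDF vectors (the patent does not fix the similarity measure):

```python
import math
from collections import Counter

def tfidf_vector(words, idf):
    """TF-IDF weights for the visual words of one keyframe image."""
    tf = Counter(words)
    n = len(words)
    return {w: (c / n) * idf.get(w, 0.0) for w, c in tf.items()}

def cosine_similarity(v1, v2):
    """Similarity of two TF-IDF vectors; the candidate keyframe with the
    highest score above a preset value is taken as the loop-closure match."""
    dot = sum(x * v2.get(w, 0.0) for w, x in v1.items())
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

A keyframe scored against itself yields similarity 1.0, and unrelated keyframes score near 0, which is the gap the preset threshold exploits.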
As a preferred embodiment, the first preset algorithm is a method combining a temporal selection method and a vision-content-based selection method, and includes the following constraints:
the determined current keyframe image matches at least a first predetermined number of feature points with the previous keyframe image;
the feature-match rate between the determined current keyframe image and the previous keyframe image does not exceed a first predetermined threshold;
at least a second predetermined number of frames lie between the determined current keyframe image and the previous keyframe image.
For an introduction to the tracking registration system based on graph-optimization SLAM for MAR provided by the present invention, reference is made to the above embodiments, which will not be repeated here.
It should be noted that, in this specification, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "comprising" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device comprising that element.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
- 1. A tracking registration method based on graph-optimization SLAM for MAR, characterized by comprising: obtaining an environment depth map; determining a current keyframe image from the environment depth map according to a first preset algorithm; determining the position of a camera according to the current keyframe image and the built map, and updating the built map according to the position of the camera; extracting feature points from the current keyframe image according to a second preset algorithm; matching the feature points in the current keyframe image with the feature points in a previous keyframe image to obtain matched feature points; and obtaining the pose of the camera according to the matched feature points and the built trajectory map, and updating the built trajectory map according to the pose of the camera.
- 2. The method according to claim 1, characterized in that the method further comprises: clustering the features in all keyframe images before the current keyframe image into a vocabulary tree by K-means clustering, obtaining the visual words corresponding to the features in the previous keyframe images, and obtaining a bag-of-words model from the visual words; obtaining the visual words corresponding to the features in the current keyframe image, calculating via a TF-IDF model the similarity between the visual words corresponding to the features in the current keyframe image and each of the visual words in the bag-of-words model, determining the highest similarity, and judging whether the highest similarity exceeds a preset value; if so, determining that the position corresponding to the current keyframe image is the position corresponding to the keyframe image containing its most similar feature.
- 3. The method according to claim 1, characterized in that the first preset algorithm is a method combining a temporal selection method and a vision-content-based selection method, and includes the following constraints: the determined current keyframe image matches at least a first predetermined number of feature points with the previous keyframe image; the feature-match rate between the determined current keyframe image and the previous keyframe image does not exceed a first predetermined threshold; and at least a second predetermined number of frames lie between the determined current keyframe image and the previous keyframe image.
- 4. The method according to claim 3, characterized in that the first predetermined number is 50, the first predetermined threshold is 95%, and the second predetermined number is 20.
- 5. The method according to claim 3, characterized in that the second preset algorithm comprises: splitting the current keyframe image uniformly into M*N grids, all grids being denoted {h11, h12, ..., h1N, h21, h22, ..., hMN}, M and N being integers not less than 2; judging whether each grid contains detectable feature points: if no feature point is detected inside a grid hik, the grid is no longer considered; otherwise, judging whether the number of feature points in grid hik exceeds a second predetermined threshold j; if so, ranking the keypoints with the Harris corner detector, selecting the best j as detection points and keeping the rest as candidate detection points; otherwise, taking all feature points in grid hik as detection points, where 1≤i≤M and 1≤k≤N; when the total number of feature points extracted over all grids reaches a third predetermined number, ending feature extraction; otherwise, drawing feature points at random from the candidate detection points to make up the required number and ending feature extraction.
- 6. The method according to any one of claims 1-5, characterized in that the process of matching the feature points in the current keyframe image with the feature points in the previous keyframe image to obtain matched feature points is specifically: judging whether the Hamming distance between a feature point in the current keyframe image and a feature point in the previous keyframe image is less than a third predetermined threshold; if so, the pair is a matched feature point.
- 7. A tracking registration system based on graph-optimization SLAM for MAR, characterized by comprising: a video camera, for obtaining an environment depth map; a keyframe determining unit, for determining a current keyframe image from the environment depth map according to a first preset algorithm; a position determination unit, for determining the position of the camera according to the current keyframe image and the built map, and updating the built map according to the position of the camera; a feature point extraction unit, for extracting feature points from the current keyframe image according to a second preset algorithm; a matching unit, for matching the feature points in the current keyframe image with the feature points in a previous keyframe image to obtain matched feature points; and a pose determining unit, for obtaining the pose of the camera according to the matched feature points and the built trajectory map, and updating the built trajectory map according to the pose of the camera.
- 8. The system according to claim 7, characterized in that the video camera is a Kinect camera.
- 9. The system according to claim 7, characterized in that the system further comprises: a word setting unit, for clustering the features in all keyframe images before the current keyframe image into a vocabulary tree by K-means clustering, obtaining the visual words corresponding to the features in the previous keyframe images, and obtaining a bag-of-words model from the visual words; and a closed-loop detection unit, for obtaining the visual words corresponding to the features in the current keyframe image, calculating via a TF-IDF model the similarity between the visual words corresponding to the features in the current keyframe image and each of the visual words in the bag-of-words model, determining the highest similarity, and judging whether the highest similarity exceeds a preset value; if so, determining that the position corresponding to the current keyframe image is the position corresponding to the keyframe image containing its most similar feature.
- 10. The system according to claim 7, characterized in that the first preset algorithm is a method combining a temporal selection method and a vision-content-based selection method, and includes the following constraints: the determined current keyframe image matches at least a first predetermined number of feature points with the previous keyframe image; the feature-match rate between the determined current keyframe image and the previous keyframe image does not exceed a first predetermined threshold; and at least a second predetermined number of frames lie between the determined current keyframe image and the previous keyframe image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710581403.3A CN107369183A (en) | 2017-07-17 | 2017-07-17 | Towards the MAR Tracing Registration method and system based on figure optimization SLAM |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710581403.3A CN107369183A (en) | 2017-07-17 | 2017-07-17 | Towards the MAR Tracing Registration method and system based on figure optimization SLAM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107369183A true CN107369183A (en) | 2017-11-21 |
Family
ID=60308377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710581403.3A Pending CN107369183A (en) | 2017-07-17 | 2017-07-17 | Towards the MAR Tracing Registration method and system based on figure optimization SLAM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107369183A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108021921A (en) * | 2017-11-23 | 2018-05-11 | 塔普翊海(上海)智能科技有限公司 | Image characteristic point extraction system and its application |
CN108615246A (en) * | 2018-04-19 | 2018-10-02 | 浙江大承机器人科技有限公司 | It improves visual odometry system robustness and reduces the method that algorithm calculates consumption |
CN108735052A (en) * | 2018-05-09 | 2018-11-02 | 北京航空航天大学青岛研究院 | A kind of augmented reality experiment with falling objects method based on SLAM |
CN109947886A (en) * | 2019-03-19 | 2019-06-28 | 腾讯科技(深圳)有限公司 | Image processing method, device, electronic equipment and storage medium |
CN110059651A (en) * | 2019-04-24 | 2019-07-26 | 北京计算机技术及应用研究所 | A kind of camera real-time tracking register method |
CN110148167A (en) * | 2019-04-17 | 2019-08-20 | 维沃移动通信有限公司 | A kind of distance measurement method and terminal device |
CN110245639A (en) * | 2019-06-10 | 2019-09-17 | 北京航空航天大学 | A kind of bag of words generation method and device based on characteristic matching |
GB2572795A (en) * | 2018-04-11 | 2019-10-16 | Nokia Technologies Oy | Camera registration |
CN110727265A (en) * | 2018-06-28 | 2020-01-24 | 深圳市优必选科技有限公司 | Robot repositioning method and device and storage device |
CN111046698A (en) * | 2018-10-12 | 2020-04-21 | 锥能机器人(上海)有限公司 | Visual positioning method and system for visual editing |
CN111239761A (en) * | 2020-01-20 | 2020-06-05 | 西安交通大学 | Method for indoor real-time establishment of two-dimensional map |
CN111274847A (en) * | 2018-12-04 | 2020-06-12 | 上海汽车集团股份有限公司 | Positioning method |
CN111310654A (en) * | 2020-02-13 | 2020-06-19 | 北京百度网讯科技有限公司 | Map element positioning method and device, electronic equipment and storage medium |
CN111339228A (en) * | 2020-02-18 | 2020-06-26 | Oppo广东移动通信有限公司 | Map updating method, device, cloud server and storage medium |
CN111583331A (en) * | 2020-05-12 | 2020-08-25 | 北京轩宇空间科技有限公司 | Method and apparatus for simultaneous localization and mapping |
CN111784775A (en) * | 2020-07-13 | 2020-10-16 | 中国人民解放军军事科学院国防科技创新研究院 | Identification-assisted visual inertia augmented reality registration method |
CN111795704A (en) * | 2020-06-30 | 2020-10-20 | 杭州海康机器人技术有限公司 | Method and device for constructing visual point cloud map |
CN112556695A (en) * | 2020-11-30 | 2021-03-26 | 北京建筑大学 | Indoor positioning and three-dimensional modeling method and system, electronic equipment and storage medium |
CN112614185A (en) * | 2020-12-29 | 2021-04-06 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium |
CN112634395A (en) * | 2019-09-24 | 2021-04-09 | 杭州海康威视数字技术股份有限公司 | Map construction method and device based on SLAM |
CN112926593A (en) * | 2021-02-20 | 2021-06-08 | 温州大学 | Image feature processing method and device for dynamic image enhancement presentation |
CN113094457A (en) * | 2021-04-15 | 2021-07-09 | 成都纵横自动化技术股份有限公司 | Incremental generation method of digital orthographic image map and related components |
CN113532431A (en) * | 2021-07-15 | 2021-10-22 | 贵州电网有限责任公司 | Visual inertia SLAM method for power inspection and operation |
WO2022262152A1 (en) * | 2021-06-18 | 2022-12-22 | 深圳市商汤科技有限公司 | Map construction method and apparatus, electronic device, storage medium and computer program product |
WO2023216918A1 (en) * | 2022-05-09 | 2023-11-16 | 北京字跳网络技术有限公司 | Image rendering method and apparatus, electronic device, and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103177468A (en) * | 2013-03-29 | 2013-06-26 | 渤海大学 | Three-dimensional motion object augmented reality registration method based on no marks |
CN103530881A (en) * | 2013-10-16 | 2014-01-22 | 北京理工大学 | Outdoor augmented reality mark-point-free tracking registration method applicable to mobile terminal |
CN103854283A (en) * | 2014-02-21 | 2014-06-11 | 北京理工大学 | Mobile augmented reality tracking registration method based on online study |
Non-Patent Citations (3)
Title |
---|
LIN Cheng: "Research on Tracking Registration Technology for Mobile Augmented Reality", China Master's Theses Full-text Database, Information Science and Technology *
ZHAO Yue et al.: "IEKF-SLAM-based Augmented Reality Tracking Registration Algorithm for Unknown Scenes", Computer Engineering *
ZHENG Shunkai: "Research on Graph-Optimization-based Monocular Visual SLAM in Natural Environments", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108021921A (en) * | 2017-11-23 | 2018-05-11 | 塔普翊海(上海)智能科技有限公司 | Image characteristic point extraction system and its application |
GB2572795A (en) * | 2018-04-11 | 2019-10-16 | Nokia Technologies Oy | Camera registration |
CN108615246A (en) * | 2018-04-19 | 2018-10-02 | 浙江大承机器人科技有限公司 | It improves visual odometry system robustness and reduces the method that algorithm calculates consumption |
CN108615246B (en) * | 2018-04-19 | 2021-02-26 | 浙江大承机器人科技有限公司 | Method for improving robustness of visual odometer system and reducing calculation consumption of algorithm |
CN108735052A (en) * | 2018-05-09 | 2018-11-02 | 北京航空航天大学青岛研究院 | A kind of augmented reality experiment with falling objects method based on SLAM |
CN110727265A (en) * | 2018-06-28 | 2020-01-24 | 深圳市优必选科技有限公司 | Robot repositioning method and device and storage device |
CN111046698A (en) * | 2018-10-12 | 2020-04-21 | 锥能机器人(上海)有限公司 | Visual positioning method and system for visual editing |
CN111046698B (en) * | 2018-10-12 | 2023-06-20 | 锥能机器人(上海)有限公司 | Visual positioning method and system for visual editing |
CN111274847A (en) * | 2018-12-04 | 2020-06-12 | 上海汽车集团股份有限公司 | Positioning method |
CN109947886B (en) * | 2019-03-19 | 2023-01-10 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN109947886A (en) * | 2019-03-19 | 2019-06-28 | 腾讯科技(深圳)有限公司 | Image processing method, device, electronic equipment and storage medium |
CN110148167A (en) * | 2019-04-17 | 2019-08-20 | 维沃移动通信有限公司 | A kind of distance measurement method and terminal device |
CN110059651A (en) * | 2019-04-24 | 2019-07-26 | 北京计算机技术及应用研究所 | A kind of camera real-time tracking register method |
CN110059651B (en) * | 2019-04-24 | 2021-07-02 | 北京计算机技术及应用研究所 | Real-time tracking and registering method for camera |
CN110245639B (en) * | 2019-06-10 | 2021-03-02 | 北京航空航天大学 | Bag-of-words generation method and device based on feature matching |
CN110245639A (en) * | 2019-06-10 | 2019-09-17 | 北京航空航天大学 | A kind of bag of words generation method and device based on characteristic matching |
CN112634395A (en) * | 2019-09-24 | 2021-04-09 | 杭州海康威视数字技术股份有限公司 | Map construction method and device based on SLAM |
CN112634395B (en) * | 2019-09-24 | 2023-08-25 | 杭州海康威视数字技术股份有限公司 | Map construction method and device based on SLAM |
CN111239761A (en) * | 2020-01-20 | 2020-06-05 | 西安交通大学 | Method for indoor real-time establishment of two-dimensional map |
CN111310654A (en) * | 2020-02-13 | 2020-06-19 | 北京百度网讯科技有限公司 | Map element positioning method and device, electronic equipment and storage medium |
CN111310654B (en) * | 2020-02-13 | 2023-09-08 | 北京百度网讯科技有限公司 | Map element positioning method and device, electronic equipment and storage medium |
CN111339228A (en) * | 2020-02-18 | 2020-06-26 | Oppo广东移动通信有限公司 | Map updating method, device, cloud server and storage medium |
CN111339228B (en) * | 2020-02-18 | 2023-08-11 | Oppo广东移动通信有限公司 | Map updating method, device, cloud server and storage medium |
CN111583331B (en) * | 2020-05-12 | 2023-09-01 | 北京轩宇空间科技有限公司 | Method and device for simultaneous localization and mapping |
CN111583331A (en) * | 2020-05-12 | 2020-08-25 | 北京轩宇空间科技有限公司 | Method and apparatus for simultaneous localization and mapping |
CN111795704A (en) * | 2020-06-30 | 2020-10-20 | 杭州海康机器人技术有限公司 | Method and device for constructing visual point cloud map |
CN111784775A (en) * | 2020-07-13 | 2020-10-16 | 中国人民解放军军事科学院国防科技创新研究院 | Identification-assisted visual inertia augmented reality registration method |
CN111784775B (en) * | 2020-07-13 | 2021-05-04 | 中国人民解放军军事科学院国防科技创新研究院 | Identification-assisted visual inertia augmented reality registration method |
CN112556695A (en) * | 2020-11-30 | 2021-03-26 | 北京建筑大学 | Indoor positioning and three-dimensional modeling method and system, electronic equipment and storage medium |
CN112556695B (en) * | 2020-11-30 | 2023-09-19 | 北京建筑大学 | Indoor positioning and three-dimensional modeling method, system, electronic equipment and storage medium |
CN112614185A (en) * | 2020-12-29 | 2021-04-06 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium |
CN112614185B (en) * | 2020-12-29 | 2022-06-21 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium |
CN112926593A (en) * | 2021-02-20 | 2021-06-08 | 温州大学 | Image feature processing method and device for dynamic image enhancement presentation |
CN113094457A (en) * | 2021-04-15 | 2021-07-09 | 成都纵横自动化技术股份有限公司 | Incremental generation method of digital orthographic image map and related components |
CN113094457B (en) * | 2021-04-15 | 2023-11-03 | 成都纵横自动化技术股份有限公司 | Incremental generation method of digital orthophoto map and related components |
WO2022262152A1 (en) * | 2021-06-18 | 2022-12-22 | 深圳市商汤科技有限公司 | Map construction method and apparatus, electronic device, storage medium and computer program product |
CN113532431A (en) * | 2021-07-15 | 2021-10-22 | 贵州电网有限责任公司 | Visual inertia SLAM method for power inspection and operation |
WO2023216918A1 (en) * | 2022-05-09 | 2023-11-16 | 北京字跳网络技术有限公司 | Image rendering method and apparatus, electronic device, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107369183A (en) | Towards the MAR Tracing Registration method and system based on figure optimization SLAM | |
WO2020259481A1 (en) | Positioning method and apparatus, electronic device, and readable storage medium | |
CN103530881B (en) | Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal | |
US9626585B2 (en) | Composition modeling for photo retrieval through geometric image segmentation | |
CN104715471B (en) | Target locating method and its device | |
EP3274964B1 (en) | Automatic connection of images using visual features | |
CN104781849A (en) | Fast initialization for monocular visual simultaneous localization and mapping (SLAM) | |
TWI745818B (en) | Method and electronic equipment for visual positioning and computer readable storage medium thereof | |
Tang et al. | ESTHER: Joint camera self-calibration and automatic radial distortion correction from tracking of walking humans | |
Tau et al. | Dense correspondences across scenes and scales | |
JP2014515530A (en) | Planar mapping and tracking for mobile devices | |
CN101976461A (en) | Novel outdoor augmented reality label-free tracking registration algorithm | |
CN105069809A (en) | Camera positioning method and system based on planar mixed marker | |
Garg et al. | Where's Waldo: matching people in images of crowds | |
CN111709317B (en) | Pedestrian re-identification method based on multi-scale features under saliency model | |
CN109063549A (en) | High-resolution based on deep neural network is taken photo by plane video moving object detection method | |
CN112163588A (en) | Intelligent evolution-based heterogeneous image target detection method, storage medium and equipment | |
CN108961385A (en) | A kind of SLAM patterning process and device | |
Shalaby et al. | Algorithms and applications of structure from motion (SFM): A survey | |
Liu et al. | Stereo video object segmentation using stereoscopic foreground trajectories | |
Revaud et al. | Did it change? learning to detect point-of-interest changes for proactive map updates | |
Zhu et al. | Large-scale architectural asset extraction from panoramic imagery | |
Park et al. | Estimating the camera direction of a geotagged image using reference images | |
Wang et al. | Tc-sfm: Robust track-community-based structure-from-motion | |
CN111402429B (en) | Scale reduction and three-dimensional reconstruction method, system, storage medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20171121 |