CN108765326A - Simultaneous localization and mapping method and device - Google Patents

Simultaneous localization and mapping method and device

Info

Publication number
CN108765326A
Authority
CN
China
Prior art keywords
key point
frame
point
class
pose
Prior art date
2018-05-18
Legal status
Pending
Application number
CN201810479742.5A
Other languages
Chinese (zh)
Inventor
路通 (Lu Tong)
李志凯 (Li Zhikai)
巫义锐 (Wu Yirui)
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
2018-05-18
Filing date
2018-05-18
Publication date
2018-11-06
Application filed by Nanjing University
Priority to CN201810479742.5A
Publication of CN108765326A

Classifications

    • G06T 5/73
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30181: Earth observation

Abstract

The present invention proposes a simultaneous localization and mapping method. The method includes: in the current frame image, extracting first-class keypoints and second-class keypoints separately, where the first-class keypoints are used for feature matching against neighboring frames to obtain an initial pose, and the second-class keypoints are used for block matching on the basis of the initial pose to produce a stable pose, thereby completing simultaneous localization and mapping; the extraction methods of the first-class and second-class keypoints differ. The invention performs feature matching with only a small number of feature points to obtain a reliable initial pose, which effectively reduces the computational scale and guarantees real-time performance; it then matches keypoints with a more efficient block-matching algorithm to refine the initial pose, improving the accuracy of the map.

Description

Simultaneous localization and mapping method and device
Technical field
The invention belongs to the field of three-dimensional reconstruction in machine vision, and in particular relates to a simultaneous localization and mapping method and device.
Background technology
Simultaneous localization and mapping is one of the most central machine vision algorithms in robotic systems; it mainly helps a robot answer the questions "Where am I?" and "What does the surrounding environment look like?".
Many excellent visual SLAM (Simultaneous Localization and Mapping) methods have emerged recently. Mainstream visual SLAM methods can be roughly divided into two classes: BA (Bundle Adjustment) methods based on feature points and keyframes, and direct tracking methods based on matching. The main difference between the two is that feature-point methods estimate the pose and build the map by computing and matching feature points, and are more robust; direct tracking methods based on block matching do not need to extract and match feature points and are therefore more efficient than feature-point methods, but block matching is extremely sensitive to illumination and ambiguity, so without an accurate initial pose estimate its reliability is hard to guarantee.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the deficiencies of the prior art, to propose a simultaneous localization and mapping method that can run in real time in an embedded environment while still guaranteeing reliability.
The present invention proposes a simultaneous localization and mapping method, the method comprising:
in the current frame image, extracting first-class keypoints and second-class keypoints separately, where the first-class keypoints are used for feature matching against neighboring frames to obtain an initial pose, and the second-class keypoints are used for block matching on the basis of the initial pose to produce a stable pose, thereby completing simultaneous localization and mapping; wherein the extraction methods of the first-class keypoints and the second-class keypoints are different.
As a preferred technical solution of the present invention: the first-class keypoints are the set of feature points extracted by a feature extraction method.
As a preferred technical solution of the present invention: the second-class keypoints are extracted according to the pixel gradients in the frame image, specifically: the set of pixels whose gradient exceeds a threshold serves as the second-class keypoints.
As a preferred technical solution of the present invention: the method further includes obtaining keyframes and updating the map according to the keyframes.
The present invention also proposes a simultaneous localization and mapping device, the device comprising:
an image capture module for acquiring frame images at different moments;
a keypoint extraction module for extracting, in the current frame image, first-class keypoints and second-class keypoints separately, where the first-class keypoints are used for feature matching against neighboring frames to obtain an initial pose, and the second-class keypoints are used for block matching on the basis of the initial pose to produce a stable pose, thereby completing simultaneous localization and mapping, the extraction methods of the two classes of keypoints being different;
an update module for obtaining keyframes and updating the map according to the keyframes.
As a preferred technical solution of the present invention: the keypoint extraction module includes:
a first extraction unit for extracting feature points with a feature extraction method, the set of feature points serving as the first-class keypoints;
a second extraction unit for extracting the set of pixels whose gradient exceeds a threshold as the second-class keypoints.
Compared with the prior art, the invention has the following advantages:
the invention performs feature matching with only a small number of feature points to obtain a reliable initial pose, effectively reducing the computational scale so as to guarantee real-time performance; the keypoints are then matched by a more efficient block-matching algorithm to refine the initial pose, improving the accuracy of the map.
Description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is the flow chart of the feature-point-based simultaneous localization and mapping method;
Fig. 2 is the map construction result after inserting a new keyframe;
Fig. 3 is the construction result of the complete map after all frames have been processed;
Fig. 4 and Fig. 5 compare the generated complete keyframe path against the ground truth.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the invention, fall within the protection scope of the invention.
The terms used in the present invention are explained as follows:
Frame: in the field of machine vision, a single image is customarily called a frame. For example, the image obtained by the camera at the previous moment is called the previous frame, the image obtained by the camera at the current moment is called the current frame, and two consecutive images obtained by the camera are called adjacent frames;
Keyframe: since the frame rate of current cameras is high, the pose change between adjacent frames is often small. To enhance the accuracy of pose estimation, a keyframe strategy is generally adopted: within a certain range of pose change, a newly obtained image is aligned only against one specific frame to estimate the current pose, and only when that range has been exceeded is a new specific frame taken for the next stage of image alignment. These specific frames used for image alignment are called keyframes;
Reference frame: the frame used to align the current image is called the reference frame of the current image;
Map: in the field of machine vision, the known environment information (for example the positions of computed points, the images obtained, and so on) is saved and called the map.
The technical solution of the present invention is described in detail below with reference to the accompanying drawings:
A feature-point-based simultaneous localization and mapping method, as shown in Figure 1, is divided into three parts: initial pose estimation, iterative pose optimization, and a recent frame queue. In the first part, initial pose estimation based on ORB feature points is executed, yielding the initial pose of the frame and the corresponding keypoints as input; the second part is responsible for iteratively optimizing these inputs; the third part is responsible for determining keyframes in the recent frame queue and finding, by a greedy search algorithm, the matched features between keyframes used for map reconstruction.
The specific implementation includes the following steps:
Step 1: input a video frame sequence for which a map is to be constructed, and process each frame image in temporal order;
Step 2: extract keypoints for the current frame, where the first-class keypoint extraction method is as follows:
First, the current frame image is converted into a grayscale image, denoted I_gray;
Next, the grayscale image I_gray is scaled over multiple levels to build an image pyramid, denoted I_1, I_2, ..., I_s, ..., I_l, where s indicates the scaling level in the image pyramid and l is the number of pyramid levels;
Then, to ensure that the keypoints are evenly distributed, grids are divided on each level of the image pyramid and ORB keypoints are extracted from each grid cell separately. The keypoint extraction method is:
For a pixel p in the image, if on the circle of radius r centered at p there exist n consecutive pixels whose gray difference from p exceeds a threshold, then p is a keypoint; in the experiments r = 3 and n = 11. The keypoint set K_f can be defined as:

K_f = { p | there exist n consecutive pixels q in c(p) such that |I(q) - I(p)| > ε_{p,s} }

where c(p) is the set of pixels on the circle centered at p, and ε_{p,s} is a region-adaptive threshold based on the average gray value of that set, computed as:

ε_{p,s} = (α / n) · Σ_{q ∈ c(p)} I(q)

where n is the number of pixels in the set and the parameter α controls the number of detected points.
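For illustration, a minimal Python sketch of this detector follows; the 16-pixel circle pattern, the value α = 0.2, the split into all-brighter/all-darker arcs (the FAST convention), and the omission of the image pyramid and grid division are assumptions for brevity rather than details fixed by the text.

```python
import numpy as np

# Bresenham circle of radius 3 (16 pixels) around the candidate pixel, matching r = 3.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def first_class_keypoints(gray, n=11, alpha=0.2):
    """FAST-style detection with the region-adaptive threshold described above:
    eps = (alpha / 16) * sum of gray values on the circle c(p).
    alpha = 0.2 is a placeholder; the patent does not state its value."""
    h, w = gray.shape
    gray = gray.astype(np.float32)
    keypoints = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            ring = np.array([gray[y + dy, x + dx] for dx, dy in CIRCLE])
            eps = alpha * ring.mean()  # region-adaptive threshold eps_{p,s}
            for mask in (ring > gray[y, x] + eps, ring < gray[y, x] - eps):
                # look for n consecutive ring pixels passing the test (wrap-around
                # handled by tiling the boolean mask twice)
                run, best = 0, 0
                for v in np.concatenate([mask, mask]):
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= n:
                    keypoints.append((x, y))
                    break
    return keypoints
```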
In another embodiment of the invention, the first-class keypoints can also be extracted with the following feature extraction algorithms: SIFT, SURF, BRISK, FREAK, and the like.
The remaining keypoints are supplemented as second-class keypoints according to the pixel gradients in the image. The decision method is: if the pixel gradient at a point q exceeds a threshold, q is considered a keypoint. The supplementary keypoint set K_g can then be expressed as:

K_g = { q | ‖∇I(q)‖ > ε_{q,s} }

where b(q) is the set of pixels in a square window centered at q, m is the number of pixels in the set, and ε_{q,s} is a region-adaptive threshold based on the average gradient of that set, computed as:

ε_{q,s} = (β / m) · Σ_{r ∈ b(q)} ‖∇I(r)‖

where the parameter β controls the number of detected points.
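A corresponding sketch of the second-class keypoint supplement follows; the values β = 1.5 and the 5-pixel window are placeholders, since the text fixes neither.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def second_class_keypoints(gray, beta=1.5, win=5):
    """Supplementary keypoints: pixels whose gradient magnitude exceeds the
    window-adaptive threshold eps_{q,s} = (beta / m) * sum of gradient
    magnitudes over the square window b(q)."""
    gy, gx = np.gradient(gray.astype(np.float32))
    mag = np.hypot(gx, gy)
    local_mean = uniform_filter(mag, size=win)  # mean of |grad| over b(q)
    ys, xs = np.where(mag > beta * local_mean)
    return list(zip(xs.tolist(), ys.tolist()))
```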
The finally obtained keypoint set K is the union of the two sets above, expressed as:
K=Kf∪Kg
Step 3: compute descriptors and complete matching. The specific implementation for obtaining the initial pose is as follows:
First, the ORB descriptors of the keypoints in the set K_f obtained in step 2 are computed; the result is a binary feature descriptor string. The specific implementation is: in the neighborhood of a feature point, 256 pixel pairs (p_i, q_i), i = 1, 2, ..., 256, are selected, and the gray values of each pair are compared; if I(p_i) > I(q_i), the i-th bit of the generated binary string is set to 1, otherwise it is 0. The final result is a binary string of length 256, which serves as the final descriptor.
Performing the above operation on every keypoint yields the descriptor set D_f.
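A self-contained sketch of such a binary descriptor is given below; the random sampling pattern is only a stand-in for ORB's fixed, orientation-steered pattern, which the text does not reproduce, and keypoints are assumed to lie at least 15 pixels from the image border.

```python
import numpy as np

# Random 256-pair sampling pattern inside a 31x31 patch (stand-in for ORB's pattern).
_rng = np.random.default_rng(0)
PAIRS = _rng.integers(-15, 16, size=(256, 4))

def orb_like_descriptor(gray, kp):
    """256-bit binary descriptor: bit i is 1 iff I(p_i) > I(q_i), packed into
    32 bytes so that matching reduces to XOR + popcount (Hamming distance)."""
    x, y = kp
    bits = np.empty(256, dtype=np.uint8)
    for i, (pdx, pdy, qdx, qdy) in enumerate(PAIRS):
        bits[i] = gray[y + pdy, x + pdx] > gray[y + qdy, x + qdx]
    return np.packbits(bits)

def hamming(d1, d2):
    """Hamming distance between two packed descriptors."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())
```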
Then, the recent frame queue, which contains several neighboring frames, is searched for the set of frames F_m(i) that can be matched successfully with the i-th frame. The specific implementation is:
For each frame, the ORB feature points are projected into the current frame using the transfer matrix generated by a constant-velocity motion model; the image is then divided into grids and matching is performed grid by grid, and a frame whose final number of matched points exceeds a threshold is considered successfully matched. The set of points matched with the j-th successfully matched frame is denoted M_{i,j}. A weighting scheme is then built: the initial pose P_i of the current frame is computed from the poses of the neighboring frames by applying the BA optimization function f_v(·) to the matched ORB feature point sets M_{i,j}, j ∈ F_m(i), each weighted by ω_{i,j}, where F_m(i) is the set of neighboring frames that match the i-th frame successfully, and ω_{i,j} is the weight between the i-th and j-th frames, generated by a one-dimensional Gaussian weighting function that assigns larger weights to nearer neighboring frames.
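For illustration, a minimal Python sketch of the one-dimensional Gaussian weighting follows; the parameter σ is an assumption, since the text does not give its value.

```python
import numpy as np

def frame_weights(frame_ids, current_id, sigma=2.0):
    """One-dimensional Gaussian weighting omega_{i,j} over temporal distance:
    neighboring frames closer to the current frame receive larger weights."""
    d = np.abs(np.asarray(frame_ids, dtype=np.float64) - current_id)
    w = np.exp(-0.5 * (d / sigma) ** 2)
    return w / w.sum()

# e.g. frame_weights([6, 7, 8, 9], 10) -> weights increasing toward frame 9
```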
The above steps are repeated continuously; initialization is complete when a stable pose estimate satisfying the following criterion is obtained:

P_init = { P_i : |P_i - P_{i-1}| < γ · |P_{i-1} - P_{i-2}| }

where γ is the threshold that controls outlier detection.
Step 4: the specific implementation of pose optimization includes:

First, the optimization target of the pose optimization task is in fact the transfer matrix T_i from the reference frame to the current frame. Taking the initial pose obtained in step 3 as the initial value, the keypoint set K_g generated in the previous frame is re-projected into the current frame and its re-projection error is computed; then, by iteration, a small disturbance variable is applied to continuously minimize the gray error of the re-projection, achieving the final goal of optimizing the pose. The process can be described as:

T_i* = argmin_{T_i} Σ_{c ∈ C_{i,i-1}} ‖ δI(T_i, c) ‖²

where C_{i,i-1} denotes the set of corresponding keypoint pairs between the i-th frame and the (i-1)-th frame, the function δI(·) computes the gray error, and c is one keypoint pair. Since the above formula is nonlinear, the Gauss-Newton method can be used to solve it.
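A generic Gauss-Newton iteration of the kind used here can be sketched as follows; the residual and Jacobian callbacks (the stacked gray errors over C_{i,i-1} and their derivatives with respect to the pose parameters) are assumed to be supplied by the caller, since the text does not spell them out.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=10, tol=1e-8):
    """Gauss-Newton for the nonlinear least-squares problem min_x 0.5*||r(x)||^2:
    at each step solve (J^T J) dx = -J^T r and update x <- x + dx."""
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -(J.T @ r))  # normal equations
        x = x + dx
        if np.linalg.norm(dx) < tol:  # converged: the update is negligible
            break
    return x
```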
Next, Lucas-Kanade tracking is executed: the positions of the corresponding keypoint pairs C_{i,i-1} in the current frame image are roughly estimated, and the keypoint coordinates in the current frame are then optimized by minimizing the gray error between the corresponding blocks of the two frames. The process can be expressed as:

c'* = argmin_{c'} Σ_{u ∈ b(c)} ‖ I_i(c' + u) - I_{i-1}(A_i·u + c) ‖²

where c and c' are the coordinates of a corresponding keypoint in the previous frame and in the current frame respectively, and A_i is an affine transformation matrix that warps the pixel block in the previous frame into the current frame;
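In practice, this block alignment can be carried out with OpenCV's pyramidal Lucas-Kanade tracker, as sketched below; note that OpenCV's model is translational per pyramid level, so the affine warp A_i is only approximated.

```python
import numpy as np
import cv2

def refine_keypoints(prev_gray, cur_gray, prev_pts):
    """Pyramidal Lucas-Kanade tracking to refine the current-frame coordinates
    of the corresponding keypoints. Returns matched (previous, current) pairs."""
    p0 = np.float32(prev_pts).reshape(-1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, p0, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1  # keep only successfully tracked points
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
```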
Finally, based on the corresponding keypoint pairs after optimization, the re-projection error of the coordinates is minimized to optimize the camera pose T_i and the spatial point coordinates of the keypoints simultaneously. The process is as follows:

{T_i, p_c} = argmin Σ_{c ∈ C_{i,i-1}} ‖ c' - π(T_i, p_c) ‖²

where p_c is the three-dimensional spatial point corresponding to keypoint pair c, and π(T_i, p_c) is the projection equation that maps the spatial point p_c into the current frame image after transformation by the transfer matrix T_i. Its form is as follows:

π(T_i, p_c) = ( f_x·x/z + c_x , f_y·y/z + c_y ), where (x, y, z) are the coordinates of p_c after transformation by T_i

where (f_x, f_y) is the focal length and (c_x, c_y) is the principal point; these parameters can be obtained directly from the camera intrinsics.
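The projection equation translates directly into code, assuming T_i is represented as a 4x4 homogeneous matrix.

```python
import numpy as np

def project(T, p, fx, fy, cx, cy):
    """Pinhole projection pi(T_i, p_c): apply the 4x4 pose matrix T to the 3-D
    point p, then project with focal length (fx, fy) and principal point (cx, cy)."""
    x, y, z = (T @ np.append(p, 1.0))[:3]  # transform into the camera frame
    return np.array([fx * x / z + cx, fy * y / z + cy])
```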
Step 5: the main tasks of keyframe decision and map update are:
First, it is determined whether the current frame is a keyframe. The keyframe F_n is decided in the recent frame queue N as follows: the function f_δ(·) computes the similarity of the extracted feature points between two keypoint sets, v_i denotes the resulting similarity of the i-th frame (with keypoint set K_i) to the keyframes in the keyframe set F_ref, and a frame whose similarity v_i to F_ref is sufficiently low is promoted to a new keyframe;
Then, if a new keyframe is produced, the spatial map is updated with it: the newly matched three-dimensional spatial points not yet present in the map and the newly generated keyframe are added to the map.
Through the recent frame queue, keyframe decisions are assisted, the number of keyframe culling operations is reduced, the computational burden is lightened, and the influence of camera shake on the map construction result can be mitigated to some extent.
The map construction result after step 5 is completed is shown in Fig. 3;
The above steps are repeated until all frames have been processed, yielding the complete three-dimensional point-cloud map and the keyframe path; the comparison of the keyframe path with the ground truth is shown in Fig. 4 and Fig. 5.

Claims (6)

1. A simultaneous localization and mapping method, characterized in that the method comprises:
in a current frame image, extracting first-class keypoints and second-class keypoints separately, the first-class keypoints being used for feature matching against neighboring frames to obtain an initial pose, and the second-class keypoints being used for block matching on the basis of the initial pose to produce a stable pose, thereby completing simultaneous localization and mapping; wherein the extraction methods of the first-class keypoints and the second-class keypoints are different.
2. The method according to claim 1, characterized in that the first-class keypoints are the set of feature points extracted by a feature extraction method.
3. The method according to claim 1, characterized in that the second-class keypoints are extracted according to the pixel gradients in the frame image, specifically: the set of pixels whose gradient exceeds a threshold serves as the second-class keypoints.
4. The simultaneous localization and mapping method according to claim 1, characterized in that the method further comprises: obtaining keyframes and updating the map according to the keyframes.
5. A simultaneous localization and mapping device, characterized in that the device comprises:
an image capture module for acquiring frame images at different moments;
a keypoint extraction module for extracting, in a current frame image, first-class keypoints and second-class keypoints separately, the first-class keypoints being used for feature matching against neighboring frames to obtain an initial pose, and the second-class keypoints being used for block matching on the basis of the initial pose to produce a stable pose, thereby completing simultaneous localization and mapping, the extraction methods of the first-class keypoints and the second-class keypoints being different;
an update module for obtaining keyframes and updating the map according to the keyframes.
6. The simultaneous localization and mapping device according to claim 5, characterized in that the keypoint extraction module comprises:
a first extraction unit for extracting feature points with a feature extraction method, the set of feature points serving as the first-class keypoints;
a second extraction unit for extracting the set of pixels whose gradient exceeds a threshold as the second-class keypoints.
CN201810479742.5A 2018-05-18 2018-05-18 Simultaneous localization and mapping method and device Pending CN108765326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810479742.5A CN108765326A (en) 2018-05-18 2018-05-18 Simultaneous localization and mapping method and device

Publications (1)

Publication Number Publication Date
CN108765326A true CN108765326A (en) 2018-11-06

Family

ID=64007336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810479742.5A Pending CN108765326A (en) 2018-05-18 Simultaneous localization and mapping method and device

Country Status (1)

Country Link
CN (1) CN108765326A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732518A * 2015-01-19 2015-06-24 Beijing University of Technology (北京工业大学) PTAM improvement method based on ground features for an intelligent robot
CN107025668A * 2017-03-30 2017-08-08 South China University of Technology (华南理工大学) Design method of a visual odometer based on a depth camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JAKOB et al.: "Direct Sparse Odometry", IEEE Transactions on Pattern Analysis and Machine Intelligence *
XU Haonan (徐浩楠) et al.: "Dense 3D reconstruction system for large scenes based on semi-direct SLAM", Pattern Recognition and Artificial Intelligence (模式识别与人工智能) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020134082A1 * 2018-12-28 2020-07-02 Goertek Inc. (歌尔股份有限公司) Path planning method and apparatus, and mobile device
US11709058B2 2018-12-28 2023-07-25 Goertek Inc. Path planning method and device and mobile device
CN109901207A * 2019-03-15 2019-06-18 Wuhan University (武汉大学) High-precision outdoor positioning method combining the BeiDou satellite system and feature information
CN111985268A * 2019-05-21 2020-11-24 Sogou (Hangzhou) Intelligent Technology Co., Ltd. (搜狗(杭州)智能科技有限公司) Method and device for driving animation with a human face

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181106)