CN109902532A - A visual loop-closure detection method - Google Patents

A visual loop-closure detection method

Info

Publication number
CN109902532A
CN109902532A (application number CN201711286052.XA)
Authority
CN
China
Prior art keywords
scene
feature
closed loop
loop detection
visual feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711286052.XA
Other languages
Chinese (zh)
Inventor
覃争鸣
周健
李康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Original Assignee
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority to CN201711286052.XA
Publication of CN109902532A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a visual loop-closure detection method, the method comprising: S1, acquiring a scene image with a laser sensor and preprocessing it; S2, extracting geometric features of the scene; S3, extracting visual features of the scene; S4, fusing the geometric features and visual features with a data dimensionality-reduction method; S5, performing loop-closure detection with the fused features. By fusing the visual features of the scene appearance with its geometric features, the method of the invention obtains a robust scene representation, and the geometric-space constraint on feature matching reduces "perceptual aliasing" during scene matching and lowers the false-detection rate; the scheme of the invention therefore has good extensibility and robustness.

Description

A visual loop-closure detection method
Technical field
The present invention relates to loop-closure detection techniques, and in particular to a visual loop-closure detection method.
Background art
With the improvement of their capabilities, service robots can complete more and more tasks in people's daily lives, such as cleaning and moving objects. To complete these tasks more smoothly, a robot must perceive and recognize its surroundings in greater detail and with greater accuracy.
Map representation is the basis of robot localization and mapping: certain specific points, lines and surfaces in the scene, or certain visual features in the scene image, are used to characterize the robot's pose, and by matching and comparing these features the robot's current pose can be inferred.
Loop-closure detection plays an extremely important role in improving the stability of robot-mapping algorithms. Its basic definition is that the robot continuously checks, during exploration, whether it has returned to a position it visited in the past. Such detection improves the accuracy of the robot's pose estimate, and confirming whether a region has been visited before is also highly beneficial to the global-localization problem and even to solving the kidnapped-robot problem.
Mainstream loop-closure detection methods rely on visual features: a camera captures the visual features of objects and background in the indoor environment, and loop closures are detected by matching those features. However, the many repeated visual scenes in indoor environments, such as doors and windows, lead to "perceptual aliasing" in visual-feature matching. In addition, methods based only on visual features cannot fully exploit the structured and semi-structured geometric features of indoor scenes.
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art, in particular the problem that the many repeated visual scenes in indoor environments, such as doors and windows, cause "perceptual aliasing" in visual-feature matching, and the problem that methods based only on visual features cannot fully exploit the structured and semi-structured geometric features of indoor scenes.
To solve the above technical problems, the present invention provides a visual loop-closure detection method, the method comprising: S1, acquiring a scene image with a laser sensor and preprocessing it; S2, extracting geometric features of the scene; S3, extracting visual features of the scene; S4, fusing the geometric features and visual features with a data dimensionality-reduction method; S5, performing loop-closure detection with the fused features.
The beneficial effect of the scheme of the invention is that, by fusing the visual features of the scene appearance with its geometric features, a robust scene representation is obtained, and the geometric-space constraint on feature matching reduces "perceptual aliasing" during scene matching and lowers the false-detection rate; the scheme therefore has good extensibility and robustness.
Brief description of the drawings
Fig. 1 is a flowchart of the visual loop-closure detection method of the embodiment of the present invention.
Specific embodiment
The present invention is described in further detail and more completely below with reference to the accompanying drawing and specific embodiments. It should be understood that the specific embodiments described herein serve only to explain the present invention and are not a limitation of the invention.
As shown in Fig. 1, the visual loop-closure detection method of the embodiment of the present invention comprises: S1, acquiring a scene image with a laser sensor and preprocessing it; S2, extracting the geometric features of the scene; S3, extracting the visual features of the scene; S4, fusing the geometric features and visual features with a data dimensionality-reduction method; S5, performing loop-closure detection with the fused features.
In addition, the geometric-feature extraction of S2 may use histograms of oriented gradients (HOG features), invariant-moment features, or other suitable geometric-feature representations. HOG features describe the edges of objects by computing and accumulating gradient-orientation histograms over local image regions; invariant-moment features express the geometry of an image region through region-based moments that are insensitive to transformation, used as shape features with rotation, translation and scale invariance. The present embodiment is illustrated below with invariant-moment features.
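For the HOG alternative mentioned above, a minimal sketch using scikit-image's `hog` function is shown below; the image path and the cell and block sizes are illustrative assumptions, and the embodiment itself proceeds with invariant-moment features.

```python
import cv2
from skimage.feature import hog

# Illustrative HOG descriptor for a grayscale scene image; "scene.png" and the
# cell/block parameters are assumed values, not taken from the patent.
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
hog_vector = hog(
    image,
    orientations=9,            # number of gradient-orientation bins
    pixels_per_cell=(8, 8),    # local region over which each histogram is accumulated
    cells_per_block=(2, 2),    # block used for contrast normalization
    feature_vector=True,       # return a flat descriptor vector
)
```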
The visual-feature extraction of S3 may use scale-invariant feature transform (SIFT) features, speeded-up robust features (SURF), or other suitable visual-feature representations. SIFT is a common local image descriptor: stable keypoints are computed with a difference-of-Gaussians pyramid over a multi-scale space, and a local feature descriptor is built from each keypoint and its neighboring region. SURF is an accelerated variant of SIFT and likewise has scale and rotation invariance. The present embodiment is illustrated below with SIFT features.
The similarity calculation within the loop-closure detection of S5 may use k-means clustering, fuzzy C-means clustering, or other suitable clustering methods. K-means clustering is a common distance-based clustering method in which the smaller the distance between two objects, the greater their similarity. Fuzzy C-means clustering uses fuzzy theory to describe the uncertainty of a sample's class membership, which reflects the real world more objectively. The present embodiment is illustrated below with k-means clustering.
Specifically, the present embodiment is described as follows:
Step S1: acquire a scene image with the laser sensor and preprocess it. After the laser sensor scans the surrounding scene, it returns a set of discrete laser scan points. Because the raw laser data carry noise, the scanned scene does not fully match the true scene, so the laser scan needs to be preprocessed. An adaptive neighbor-point clustering method is used to group neighboring points, as shown in formula (1):

Δl = m · ρ_{k-1} · Δφ   (1)

where ρ_{k-1} is the range measurement of the previous point, Δl is the distance threshold for consecutive points, Δφ is the angular offset between two adjacent scan points, and m is an empirical coefficient. Two adjacent points are assigned to the same cluster if their actual distance is less than Δl.
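A minimal sketch of this adaptive neighbor-point clustering is given below, assuming the scan arrives as an array of range readings ρ_k with a fixed angular increment Δφ; the value of the empirical coefficient m is an assumption.

```python
import numpy as np

def cluster_scan_points(ranges, delta_phi, m=2.5):
    """Group consecutive laser points into clusters using formula (1).

    Two consecutive points stay in the same cluster when their Euclidean
    distance is below the adaptive threshold delta_l = m * rho_{k-1} * delta_phi.
    `ranges` is a 1-D NumPy array of range readings; `m` is an assumed value.
    """
    angles = np.arange(len(ranges)) * delta_phi
    points = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

    clusters, current = [], [points[0]]
    for k in range(1, len(points)):
        delta_l = m * ranges[k - 1] * delta_phi            # adaptive threshold of formula (1)
        if np.linalg.norm(points[k] - points[k - 1]) < delta_l:
            current.append(points[k])                      # neighbor point: same cluster
        else:
            clusters.append(np.array(current))             # gap too large: start a new cluster
            current = [points[k]]
    clusters.append(np.array(current))
    return clusters
```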
Step S2: extract the geometric features of the scene. Represent the image as a distribution function f(x, y). The zeroth-order moment M_00 gives the total gray-level mass and the first-order moments (M_01, M_10) give the centroid (x_c, y_c); taking the centroid as the origin, the central moments of the image are defined as:

M_pq = ∫∫ (x − x_c)^p (y − y_c)^q f(x, y) dx dy   (2)

where p and q are the orders of the moment. The invariant-moment features {I_1, I_2, I_3, I_4, I_5} are formed by combining several higher-order central moments, each computed from the central moments as follows:

I_1 = M_20 + M_02   (3)
I_2 = (M_20 − M_02)^2 + (2 M_11)^2   (4)
I_3 = (M_30 − 3 M_12)^2 + (3 M_21 − M_03)^2   (5)
I_4 = (M_30 + M_12)^2 + (M_21 + M_03)^2   (6)
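A minimal NumPy sketch of equations (2) to (6), treating a grayscale scene image as the distribution f(x, y) and evaluating the central moments on the discrete pixel grid, is shown below; it illustrates the formulas rather than reproducing the patent's exact implementation.

```python
import numpy as np

def central_moment(f, p, q):
    """Discrete central moment M_pq of equation (2) for a grayscale image f."""
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m00 = f.sum()
    xc = (x * f).sum() / m00            # centroid from the first-order moments
    yc = (y * f).sum() / m00
    return (((x - xc) ** p) * ((y - yc) ** q) * f).sum()

def invariant_moments(f):
    """Invariant-moment features I1 to I4 of equations (3) to (6)."""
    f = f.astype(float)
    M = {(p, q): central_moment(f, p, q)
         for p in range(4) for q in range(4) if p + q <= 3}
    i1 = M[2, 0] + M[0, 2]
    i2 = (M[2, 0] - M[0, 2]) ** 2 + (2 * M[1, 1]) ** 2
    i3 = (M[3, 0] - 3 * M[1, 2]) ** 2 + (3 * M[2, 1] - M[0, 3]) ** 2
    i4 = (M[3, 0] + M[1, 2]) ** 2 + (M[2, 1] + M[0, 3]) ** 2
    return np.array([i1, i2, i3, i4])
```

For the scale invariance claimed above, the central moments are usually normalized by a power of M_00 before forming the invariants (Hu's normalized central moments); the sketch follows the formulas as written.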
Step S3: extract the visual features of the scene. This step comprises the following operations:

S31: compute the horizontal and vertical image gradients. Each pixel (x, y) of the image is filtered with horizontal and vertical difference operators to obtain the horizontal gradient l_x and the vertical gradient l_y, as shown in formula (8).

S32: compute the Harris corners of the image. The Harris corner response c(x, y) of each pixel (x, y) is given by formula (9). When c(x, y) exceeds a given threshold, the pixel is considered a Harris corner.

S33: build the multi-scale image space. For an image, sub-images of different sizes are obtained by downsampling, and each sub-image is convolved with a Gaussian kernel, yielding the multi-scale image space.

S34: find the extreme points of the scale space. Each Harris corner is compared with all of its neighbors to determine whether it is larger or smaller than the neighboring points in both the image domain and the scale domain: the sample point is compared with its 8 neighbors at the same scale and with the 9 corresponding points at each of the two adjacent scales, 26 points in total, ensuring that extreme points are detected in both scale space and image space. If a Harris corner is the maximum or minimum among these 26 neighbors in its own layer and the two adjacent layers of the multi-scale image space, it is taken as an extreme point of the image at that scale.

S35: compute the SIFT features. Using the gradient-orientation distribution of the pixels in the keypoint neighborhood, an orientation parameter is assigned to each keypoint, and the magnitude and orientation of the gradient at the keypoint are computed. Gradient-orientation histograms with 8 bins are computed over a 4 × 4 grid of blocks in the keypoint neighborhood, and the accumulated value of each orientation bin is recorded, forming a 4 × 4 × 8 = 128-dimensional histogram, i.e. the SIFT feature.
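Where an off-the-shelf detector is acceptable, OpenCV's SIFT implementation yields the same kind of 128-dimensional descriptors as the Harris-plus-scale-space pipeline of S31 to S35; the sketch below is an assumed substitute for that pipeline, not the patent's own detector, and "scene.png" is a placeholder path.

```python
import cv2

# Placeholder path for the preprocessed scene image.
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                          # DoG keypoint detector + SIFT descriptor
keypoints, descriptors = sift.detectAndCompute(image, None)

# `descriptors` is an (N, 128) array: one 4 x 4 x 8 gradient-orientation
# histogram per keypoint, matching the descriptor size described in S35.
```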
Step S4: fuse the geometric features and visual features with a data dimensionality-reduction method. The geometric features and visual features, which have different dimensions, are normalized and concatenated into a feature vector X. The autocorrelation matrix R of the feature vectors is computed and eigen-decomposed, yielding the eigenvalue matrix Λ and the corresponding eigenvector matrix U; each column vector of U captures part of the information in X and can be regarded as a fused feature of X. The eigenvectors corresponding to the m largest eigenvalues form the transformation matrix T, and the Karhunen-Loeve transform Y = T × X is applied to the normalized original features, giving the reduced-dimension scene-appearance fusion feature Y.
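A minimal NumPy sketch of this fusion step is shown below. It assumes each scene has already been summarized as one fixed-length geometric vector and one fixed-length visual vector (one scene per row), and the number m of retained eigenvectors is an assumed parameter.

```python
import numpy as np

def fuse_features(geom, visual, m):
    """Karhunen-Loeve fusion of step S4.

    `geom` and `visual` are (n_scenes, d_geom) and (n_scenes, d_visual) arrays.
    Each block is normalized, the blocks are concatenated, and the eigenvectors
    of the m largest eigenvalues of the autocorrelation matrix R form the
    transform T, giving the fused features Y = T X.
    """
    def normalize(f):
        return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-12)

    X = np.hstack((normalize(geom), normalize(visual))).T   # one scene per column
    R = X @ X.T / X.shape[1]                                # autocorrelation matrix
    eigvals, eigvecs = np.linalg.eigh(R)                    # R is symmetric
    order = np.argsort(eigvals)[::-1]                       # largest eigenvalues first
    T = eigvecs[:, order[:m]].T                             # m x d transformation matrix
    return T @ X                                            # fused features Y, one per column
```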
Step S5: perform loop-closure detection with the fused features. This step comprises vocabulary training and similarity calculation.

S51: vocabulary training. A feature word list is built with the k-means clustering method: several feature vectors are randomly chosen as center vectors, every other feature in the feature set is assigned to its nearest center vector, the center vectors are recomputed and updated as the mean of their assigned features according to formula (10), and the criterion function E is computed according to formula (11). Clustering stops when E satisfies the threshold requirement.

Here x denotes a feature vector, x̄_i the center vector, C_i the set of features assigned to that cluster center, and E the criterion function.

S52: similarity calculation. For a scene acquired by the robot during motion, every feature in the scene is assigned to the nearest cluster center in the feature word list, and the word of that cluster center is used as the word representation of the feature. By comparing the degree of word matching between the current scene and historic scenes, the similarity of the scenes can be obtained quickly. If the scene similarity exceeds a threshold, the loop closure is considered established.
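A minimal sketch of S51 and S52 is shown below, using scikit-learn's k-means as an assumed substitute for the clustering loop of formulas (10) and (11), and the inner product of normalized bag-of-words histograms as the scene-similarity score; the vocabulary size and similarity threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_vocabulary(all_features, n_words=500):
    """S51: cluster fused feature vectors into a word list (the cluster centers)."""
    return KMeans(n_clusters=n_words, n_init=10).fit(all_features)

def bow_histogram(scene_features, vocabulary):
    """Assign each feature to its nearest word and build a normalized word histogram."""
    words = vocabulary.predict(scene_features)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def detect_loop_closure(current_features, historic_features, vocabulary, threshold=0.8):
    """S52: the loop closure holds when the best match against history exceeds the threshold."""
    if not historic_features:
        return False, -1
    h_cur = bow_histogram(current_features, vocabulary)
    scores = [float(h_cur @ bow_histogram(h, vocabulary)) for h in historic_features]
    best = int(np.argmax(scores))
    return scores[best] > threshold, best
```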
The above is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited to the foregoing. Any other changes, modifications, substitutions, combinations and simplifications made without departing from the spirit and principles of the present invention are equivalent substitutions and are included within the protection scope of the present invention.

Claims (1)

1. A visual loop-closure detection method, characterized by comprising the following steps:
S1, acquiring a scene image with a laser sensor and preprocessing it;
S2, extracting geometric features of the scene;
S3, extracting visual features of the scene;
S4, fusing the geometric features and visual features with a data dimensionality-reduction method;
S5, performing loop-closure detection with the fused features.
CN201711286052.XA 2017-12-07 2017-12-07 A visual loop-closure detection method Pending CN109902532A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711286052.XA CN109902532A (en) 2017-12-07 2017-12-07 A visual loop-closure detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711286052.XA CN109902532A (en) 2017-12-07 2017-12-07 A visual loop-closure detection method

Publications (1)

Publication Number Publication Date
CN109902532A (en) 2019-06-18

Family

ID=66939260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711286052.XA Pending CN109902532A (en) 2017-12-07 2017-12-07 A kind of vision closed loop detection method

Country Status (1)

Country Link
CN (1) CN109902532A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276348A (en) * 2019-06-20 2019-09-24 腾讯科技(深圳)有限公司 A kind of image position method, device, server and storage medium
CN110276348B (en) * 2019-06-20 2022-11-25 腾讯科技(深圳)有限公司 Image positioning method, device, server and storage medium
CN112800833A (en) * 2019-08-09 2021-05-14 河海大学 Method for realizing overall object identification based on mechanism model for water environment monitoring
CN112800833B (en) * 2019-08-09 2022-02-25 河海大学 Method for realizing overall object identification based on mechanism model for water environment monitoring
CN110672628A (en) * 2019-09-27 2020-01-10 中国科学院自动化研究所 Method, system and device for positioning edge-covering joint of plate
CN111241986A (en) * 2020-01-08 2020-06-05 电子科技大学 Visual SLAM closed loop detection method based on end-to-end relationship network
CN111241986B (en) * 2020-01-08 2021-03-30 电子科技大学 Visual SLAM closed loop detection method based on end-to-end relationship network


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 2019-06-18