CN106408022B - Binary descriptor construction method based on a simple sampling pattern and a ternarization strategy - Google Patents

Binary descriptor construction method based on a simple sampling pattern and a ternarization strategy Download PDF

Info

Publication number
CN106408022B
CN106408022B CN201610832220.XA CN201610832220A CN106408022B
Authority
CN
China
Prior art keywords
point
sampled
groups
valued
characteristic point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610832220.XA
Other languages
Chinese (zh)
Other versions
CN106408022A (en
Inventor
王志衡
李璐
刘艳
李广武
刘红敏
霍占强
贾利琴
姜国权
王静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN201610832220.XA priority Critical patent/CN106408022B/en
Publication of CN106408022A publication Critical patent/CN106408022A/en
Application granted granted Critical
Publication of CN106408022B publication Critical patent/CN106408022B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a binary descriptor construction method based on a simple sampling pattern and a ternarization strategy, comprising: acquiring two images of the same scene from different viewpoints and inputting them into a computer; converting the color images to grayscale and applying Gaussian smoothing; extracting feature points in both images with the Harris corner detection algorithm; computing the principal direction of each feature point; obtaining sample-point pairs in the sampling region of each feature point and smoothing the sampled points; selecting 256 sample-point pairs from the 400 candidate pairs; constructing a binary descriptor for each feature point from the 256 selected pairs; and matching feature points based on the binary descriptors. The method is computationally simple, achieves good matching performance, and is suitable for image feature matching tasks on mobile devices.

Description

Binary descriptor construction method based on a simple sampling pattern and a ternarization strategy
Technical field
The present invention relates to the field of feature point matching in image processing, and in particular to binary descriptor construction and feature point matching methods for digital images.
Background technique
Feature matching is a major problem in image processing and computer vision, and feature point matching technology is widely applied in scenarios such as target recognition, target tracking, and scene stitching. The basic principle of image feature description and matching is to select a local region centered on a feature point and to construct a matching descriptor from the texture information in that region. Mainstream feature point matching techniques are based on floating-point descriptors; representative examples include SIFT [1], SURF [2], and DAISY [3]. With the spread of mobile smart devices, binary descriptors, which require little storage and can be processed efficiently, have become an urgently needed technology.
Existing binary descriptors mainly include BRISK [4], FREAK [5], and BRIEF [6]. BRISK and FREAK sample with a fixed pattern, obtain the gray values at the sampled points, compare the gray values of each sampled pair, binarize the comparison results, and take the resulting bit string as the descriptor. The main problem with both descriptors is that, because the sampling template positions are fixed, only the gray information at specific positions can be obtained; more useful information cannot be gathered according to local image characteristics, so their descriptive power is limited. BRIEF determines the sampling positions by random sampling, but the points obtained by direct random sampling are highly redundant, which degrades matching performance. Moreover, all three descriptors binarize the gray-value comparisons with a polarized rule thresholded at zero (any positive difference gives 1, otherwise 0), which is highly unstable in flat regions where gray-value differences are small, making the resulting binary descriptors unstable as well. More effective and more stable binary descriptor construction and feature matching methods are therefore needed.
References:
1. D. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 2004, 60(2): 91–110.
2. H. Bay, T. Tuytelaars and L. V. Gool, Speeded up robust features (SURF), Computer Vision and Image Understanding, 2008, 110: 346–359.
3. E. Tola, V. Lepetit and P. Fua, DAISY: An efficient dense descriptor applied to wide-baseline stereo, IEEE Trans. on Pattern Analysis and Machine Intelligence, 2010, 32(5): 815–830.
4. S. Leutenegger, M. Chli and R. Siegwart, BRISK: Binary robust invariant scalable keypoints, International Conference on Computer Vision, 2011, 2548–2555.
5. A. Alahi, R. Ortiz and P. Vandergheynst, FREAK: Fast retina keypoint, IEEE Conference on Computer Vision and Pattern Recognition, 2012, 2069–2076.
6. M. Calonder, V. Lepetit, M. Ozuysal, et al., BRIEF: Computing a local binary descriptor very fast, IEEE Trans. on Pattern Analysis and Machine Intelligence, 2012, 34(7): 1281–1298.
Summary of the invention
To address the shortcomings of existing image binary descriptors, such as weak descriptive power and unstable performance, the present invention proposes a binary descriptor construction method based on a simple sampling pattern and a ternarization strategy, mainly comprising the following steps:
Step S1: acquire two images of the same scene from different viewpoints and input them into a computer;
Step S2: convert the color images to grayscale and apply Gaussian smoothing;
Step S3: extract feature points in both images with the Harris corner detection algorithm;
Step S4: compute the principal direction of each feature point;
Step S5: obtain sample-point pairs in the sampling region of each feature point, and smooth the sampled points;
Step S6: select 256 sample-point pairs from the 400 candidate pairs;
Step S7: construct a binary descriptor for each feature point from the 256 selected pairs;
Step S8: match feature points based on the binary descriptors.
Compared with current methods that use a fixed sampling pattern, the binary descriptor construction method provided by the invention uses a simple sampling scheme: sample points are first generated randomly from a Gaussian distribution and then re-selected, so that pairs with strong descriptive power are chosen adaptively according to image content and redundant information is discarded while effective information is retained, improving the matching performance of the descriptor. A ternarization strategy is introduced when binarizing the comparison results, overcoming the instability of traditional polarized binarization in flat image regions. The proposed method is therefore more accurate and more stable than existing methods.
Detailed description of the invention
Fig. 1 is a flow chart of the binary descriptor construction method based on a simple sampling pattern and a ternarization strategy according to the present invention.
Specific embodiment
Fig. 1 shows the flow chart of the binary descriptor construction method based on a simple sampling pattern and a ternarization strategy according to the present invention. The method mainly comprises the following steps: acquire two images of the same scene from different viewpoints and input them into a computer; convert the color images to grayscale and apply Gaussian smoothing; extract feature points in both images with the Harris corner detection algorithm; determine the point-pair sampling pattern; compute the principal direction of each feature point; obtain sample-point pairs in the sampling region of each feature point and smooth the sampled points; select 256 of the 400 sample-point pairs; construct a binary descriptor for each feature point from the 256 selected pairs; and match feature points based on the binary descriptors. The implementation details of each step are as follows:
Step S1: Acquire two images of the same scene from different viewpoints and input them into a computer.
Step S2: Convert the color images to grayscale and apply Gaussian smoothing.
Step S3: Extract feature points in both images with the Harris corner detection algorithm.
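Steps S1 to S3 end with a Harris response map over the smoothed grayscale image. As a rough sketch, the Harris response can be computed in numpy as below; the parameter k and the 3x3 averaging window are conventional illustrative choices, and the box filter stands in for the Gaussian window, so this is not the exact detector configuration of the patent.

```python
import numpy as np

def harris_response(gray, k=0.04, window_radius=1):
    """Harris corner response of a grayscale float image.
    k and window_radius are illustrative values, not taken from the patent."""
    Iy, Ix = np.gradient(gray)                 # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a, r):                             # box filter as the window average
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2

    Sxx, Syy, Sxy = (box(m, window_radius) for m in (Ixx, Iyy, Ixy))
    det = Sxx * Syy - Sxy * Sxy                # determinant of the structure matrix
    trace = Sxx + Syy
    return det - k * trace * trace             # R > 0 at corners, R < 0 at edges
```

Feature points would then be taken as local maxima of the response above a threshold.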
Step S4: Compute the principal direction of each feature point. Specifically, for any feature point F in either image, define the circular region of radius 23 centered on F as the sampling region of F, denoted G(F). Compute the gradient of every pixel in G(F) to obtain the mean gradient [dx, dy] over G(F); the corresponding direction θ = atan2(dy, dx) is taken as the principal direction of F.
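Step S4 can be sketched directly from its definition: average the gradient over the disc G(F) and take the two-argument arctangent. This is a minimal numpy sketch of that computation.

```python
import numpy as np

def principal_direction(gray, fy, fx, radius=23):
    """Step S4 sketch: average the image gradient over the circular
    sampling region G(F) of radius 23 around F = (fy, fx) and return
    theta = atan2(dy, dx) as the principal direction of F."""
    Iy, Ix = np.gradient(gray.astype(float))
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - fy) ** 2 + (xx - fx) ** 2 <= radius ** 2
    dy = Iy[mask].mean()          # gradient mean [dx, dy] over G(F)
    dx = Ix[mask].mean()
    return np.arctan2(dy, dx)
```

For a feature point near the image border, G(F) would need clipping; the sketch assumes the full disc lies inside the image.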
Step S5: Obtain the sample-point pairs in the sampling region of each feature point, and smooth the sampled points, as follows:
Step S51: Align the sampling region with the principal direction. Specifically, for any feature point F in either image, rotate the sampling region G(F) clockwise by the angle of the principal direction of F.
Step S52: Obtain the sample-point pairs in the sampling region. Specifically, randomly generate 400 Gaussian-distributed sample-point pairs in the sampling region obtained in step S51.
Step S53: Smooth the sampled points. Specifically, for the 800 sampled points obtained, denote the set of sampled points whose distance to F is less than 11 as the near-center point set, and the set of the remaining sampled points as the far-center point set. Smooth the near-center points with a mean filter of radius 1.5, and the far-center points with a mean filter of radius 2.5.
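Steps S52 and S53 can be sketched as follows: draw 400 Gaussian point pairs in coordinates relative to F, keep them inside the sampling disc, and assign each point its mean-filter radius by the near/far split at distance 11. The Gaussian scale radius/3 is an assumption; the patent states only that the pairs follow a Gaussian distribution.

```python
import numpy as np

def sample_pairs(rng, n_pairs=400, radius=23, near=11.0):
    """Steps S52/S53 sketch: 400 Gaussian-distributed sample-point pairs
    inside the circular sampling region (relative to the feature point F),
    plus the smoothing radius of each of the 800 points: 1.5 for
    near-center points (distance to F < 11), 2.5 for far-center points.
    The scale radius/3 is an assumed value."""
    pts = rng.normal(0.0, radius / 3.0, size=(n_pairs, 2, 2))
    # project any point that fell outside the disc back onto its boundary
    norms = np.linalg.norm(pts, axis=-1, keepdims=True)
    pts = np.where(norms > radius, pts * (radius / norms), pts)
    dist = np.linalg.norm(pts.reshape(-1, 2), axis=1)
    smooth_radius = np.where(dist < near, 1.5, 2.5)
    return pts, smooth_radius
```

In a full implementation, each point's gray value would then be read after applying the mean filter of the indicated radius.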
Step S6: Select 256 sample-point pairs from the 400 candidate pairs, as follows:
Step S61: Binarize the comparison results of the sample-point pairs. Specifically, for any pair (pi, pj) among the 400 pairs obtained in step S5, compare the gray values I(pi) and I(pj) of the sampled points pi and pj; record the comparison result of the pair as 1 if I(pi) > I(pj), and as 0 otherwise.
Step S62: Store the comparison results of the sample-point pairs. Specifically, create a table in which each column corresponds to one sample-point pair, 400 columns in total; the values in each column represent the comparison results of that pair at different feature points, so the number of rows of the table equals the number of feature points in the two images.
Step S63: Compute variances and select 256 sample-point pairs. Specifically, compute the variance of each column of the table, sort the columns by variance in descending order, and select the 256 top-ranked pairs.
Step S7: Construct a binary descriptor for each feature point from the 256 selected pairs. Specifically, for any feature point, use the 256 pairs obtained in step S6 and compare the gray values of the two sampled points of each pair as follows to obtain a 3-dimensional binary vector:
where Δ takes a value between 10 and 15. Concatenating the binary vectors of the 256 pairs yields the 768-dimensional binary descriptor of the feature point.
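The ternarization formula of step S7 appears as an image in the original publication and is not reproduced in this text. A natural reading of "3-dimensional binary vector" with threshold Δ is a one-hot encoding of the three cases (brighter by more than Δ, approximately equal, darker by more than Δ); this encoding is an assumption, chosen to be consistent with the stated 256 x 3 = 768 dimensions, not a quotation of the patent's formula.

```python
import numpy as np

def ternary_bits(i_p, i_q, delta=12):
    """Assumed one-hot ternarization of a gray-value comparison.
    delta = 12 sits in the patent's stated range of 10-15."""
    d = float(i_p) - float(i_q)
    if d > delta:
        return (1, 0, 0)       # p clearly brighter than q
    if d < -delta:
        return (0, 0, 1)       # p clearly darker than q
    return (0, 1, 0)           # flat region: |difference| <= delta

def describe(gray_at_p, gray_at_q, delta=12):
    """Concatenate the 3-bit codes of the 256 pairs into a 768-bit vector."""
    bits = []
    for ip, iq in zip(gray_at_p, gray_at_q):
        bits.extend(ternary_bits(ip, iq, delta))
    return np.array(bits, dtype=np.uint8)
```

Under this encoding, small gray-value differences in flat regions consistently map to the middle state instead of flipping between 0 and 1, which is the stability argument made for the ternarization strategy.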
Step S8: Match feature points based on the binary descriptors. Specifically, for any feature point Fi in the first image, let Fi1 be the feature point in the second image whose binary descriptor has the smallest Hamming distance to that of Fi, with distance d1, and let Fi2 be the feature point with the second-smallest Hamming distance, with distance d2. If d1/d2 is less than a threshold T, output (Fi, Fi1) as a pair of matched points, where T takes a value between 0.6 and 0.85.
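The nearest/second-nearest ratio test of step S8 can be sketched as below; T = 0.7 is an illustrative choice within the stated 0.6 to 0.85 range, and the brute-force distance computation stands in for whatever indexing a production implementation would use.

```python
import numpy as np

def match(desc1, desc2, t=0.7):
    """Step S8 sketch: Hamming-distance ratio test.
    desc1, desc2: (n, 768) uint8 arrays of 0/1 bits, one row per
    feature point of each image. Returns index pairs (i, j)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.count_nonzero(desc2 != d, axis=1)   # Hamming distances
        order = np.argsort(dists)
        d1, d2 = dists[order[0]], dists[order[1]]
        if d2 > 0 and d1 / d2 < t:                     # ratio test
            matches.append((i, int(order[0])))
    return matches
```

Packing the bits into 64-bit words and using popcount would make the distance computation far faster; the array comparison above keeps the sketch short.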
Compared with current methods that use a fixed sampling pattern, the binary descriptor construction method provided by the invention uses a simple sampling scheme: sample points are first generated randomly from a Gaussian distribution and then re-selected, so that pairs with strong descriptive power are chosen adaptively according to image content and redundant information is discarded while effective information is retained, improving the matching performance of the descriptor. A ternarization strategy is introduced when binarizing the comparison results, overcoming the instability of traditional polarized binarization in flat image regions. The proposed method is therefore more accurate and more stable than existing methods.

Claims (1)

1. A binary descriptor construction method based on a simple sampling pattern and a ternarization strategy, characterized in that the method comprises the following steps:
Step S1: acquiring two images of the same scene from different viewpoints and inputting them into a computer;
Step S2: converting the color images to grayscale and applying Gaussian smoothing;
Step S3: extracting feature points in both images with the Harris corner detection algorithm;
Step S4: computing the principal direction of each feature point; specifically, for any feature point F in either image, defining the circular region of radius 23 centered on F as the sampling region of F, denoted G(F), computing the gradient of every pixel in G(F) to obtain the mean gradient [dx, dy] over G(F), and taking the corresponding direction θ = atan2(dy, dx) as the principal direction of F;
Step S5: obtaining the sample-point pairs in the sampling region of each feature point and smoothing the sampled points, as follows:
Step S51: aligning the sampling region with the principal direction; specifically, for any feature point F in either image, rotating the sampling region G(F) clockwise by the angle of the principal direction of F;
Step S52: obtaining the sample-point pairs in the sampling region; specifically, randomly generating 400 Gaussian-distributed sample-point pairs in the sampling region obtained in step S51;
Step S53: smoothing the sampled points; specifically, for the 800 sampled points obtained, denoting the set of sampled points whose distance to F is less than 11 as the near-center point set and the set of the remaining sampled points as the far-center point set, smoothing the near-center points with a mean filter of radius 1.5, and smoothing the far-center points with a mean filter of radius 2.5;
Step S6: selecting 256 sample-point pairs from the 400 candidate pairs, as follows:
Step S61: binarizing the comparison results of the sample-point pairs; specifically, for any pair (pi, pj) among the 400 pairs obtained in step S5, comparing the gray values I(pi) and I(pj) of the sampled points pi and pj, and recording the comparison result of the pair as 1 if I(pi) > I(pj) and as 0 otherwise;
Step S62: storing the comparison results of the sample-point pairs; specifically, creating a table in which each column corresponds to one sample-point pair, 400 columns in total, the values in each column representing the comparison results of that pair at different feature points, the number of rows of the table being equal to the number of feature points in the two images;
Step S63: computing variances and selecting 256 sample-point pairs; specifically, computing the variance of each column of the table, sorting the columns by variance in descending order, and selecting the 256 top-ranked pairs;
Step S7: constructing a binary descriptor for each feature point from the 256 selected pairs; specifically, for any feature point, using the 256 pairs obtained in step S6 and comparing the gray values of the two sampled points of each pair as follows to obtain a 3-dimensional binary vector:
where Δ takes a value between 10 and 15, and concatenating the binary vectors of the 256 pairs to obtain the 768-dimensional binary descriptor of the feature point;
Step S8: matching feature points based on the binary descriptors; specifically, for any feature point Fi in the first image, denoting the feature point in the second image whose binary descriptor has the smallest Hamming distance to that of Fi as Fi1 with distance d1, and the feature point with the second-smallest Hamming distance as Fi2 with distance d2; if d1/d2 is less than a threshold T, determining (Fi, Fi1) to be a pair of matched points and outputting it, where T takes a value between 0.6 and 0.85.
CN201610832220.XA 2016-09-20 2016-09-20 Binary descriptor construction method based on simple sampling pattern and ternarization strategy Expired - Fee Related CN106408022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610832220.XA CN106408022B (en) 2016-09-20 2016-09-20 Binary descriptor construction method based on simple sampling pattern and ternarization strategy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610832220.XA CN106408022B (en) 2016-09-20 2016-09-20 Binary descriptor construction method based on simple sampling pattern and ternarization strategy

Publications (2)

Publication Number Publication Date
CN106408022A CN106408022A (en) 2017-02-15
CN106408022B true CN106408022B (en) 2019-05-17

Family

ID=57996630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610832220.XA Expired - Fee Related CN106408022B (en) 2016-09-20 2016-09-20 Binary descriptor construction method based on simple sampling pattern and ternarization strategy

Country Status (1)

Country Link
CN (1) CN106408022B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461285B (en) * 2019-01-21 2024-03-05 京东科技控股股份有限公司 Method and device for detecting electric equipment
CN114783002B (en) * 2022-06-22 2022-09-13 中山大学深圳研究院 Object intelligent matching method applied to scientific and technological service field
CN118396994B (en) * 2024-06-26 2024-10-18 东莞市中钢模具有限公司 Die-casting die adaptation degree detection method and system based on three-dimensional model

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101819636A (en) * 2010-03-30 2010-09-01 河南理工大学 Irregular area automatic matching method in the digital picture
CN104616300A (en) * 2015-02-03 2015-05-13 清华大学 Sampling mode separation based image matching method and device
CN104851094A (en) * 2015-05-14 2015-08-19 西安电子科技大学 Improved method of RGB-D-based SLAM algorithm
CN105740899A (en) * 2016-01-29 2016-07-06 长安大学 Machine vision image characteristic point detection and matching combination optimization method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9158992B2 (en) * 2013-03-14 2015-10-13 Here Global B.V. Acceleration of linear classifiers

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN101819636A (en) * 2010-03-30 2010-09-01 河南理工大学 Irregular area automatic matching method in the digital picture
CN104616300A (en) * 2015-02-03 2015-05-13 清华大学 Sampling mode separation based image matching method and device
CN104851094A (en) * 2015-05-14 2015-08-19 西安电子科技大学 Improved method of RGB-D-based SLAM algorithm
CN105740899A (en) * 2016-01-29 2016-07-06 长安大学 Machine vision image characteristic point detection and matching combination optimization method

Non-Patent Citations (2)

Title
Binary Keypoint Descriptor for Accelerated Matching; Zhengguang Xu et al.; 2012 Fourth International Symposium on Information Science and Engineering; 2013-04-11; pp. 78–82 *
An irregular region matching algorithm based on distance transform; 霍占强 et al.; Computer Engineering & Science; July 2016; vol. 38, no. 7, pp. 1471–1478 *

Also Published As

Publication number Publication date
CN106408022A (en) 2017-02-15

Similar Documents

Publication Publication Date Title
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
CN110032983B (en) Track identification method based on ORB feature extraction and FLANN rapid matching
CN106408022B (en) Binary descriptor construction method based on simple sampling pattern and ternarization strategy
CN105654421B (en) Based on the projective transformation image matching method for converting constant low-rank texture
Lange et al. Dld: A deep learning based line descriptor for line feature matching
CN103400384A (en) Large viewing angle image matching method capable of combining region matching and point matching
CN106650580B (en) Goods shelf quick counting method based on image processing
CN111126412A (en) Image key point detection method based on characteristic pyramid network
CN110569861A (en) Image matching positioning method based on point feature and contour feature fusion
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
CN113095385B (en) Multimode image matching method based on global and local feature description
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN109086350B (en) Mixed image retrieval method based on WiFi
CN106408023B (en) Image characteristic point two valued description and matching process based on group comparison strategy
CN113554036A (en) Characteristic point extraction and matching method for improving ORB algorithm
CN110246165B (en) Method and system for improving registration speed of visible light image and SAR image
CN114066954B (en) Feature extraction and registration method for multi-modal image
Huang et al. FAST and FLANN for feature matching based on SURF
CN104036494A (en) Fast matching computation method used for fruit picture
CN108492256B (en) Unmanned aerial vehicle video fast splicing method
Liu et al. Extend point descriptors for line, curve and region matching
Chen et al. Geometric and non-linear radiometric distortion robust multimodal image matching via exploiting deep feature maps
CN107330436A (en) A kind of panoramic picture SIFT optimization methods based on dimensional criteria
CN106327423B (en) Remote sensing image registration method and system based on directed line segment
Wang et al. Deep homography estimation based on attention mechanism

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190517