CN107423772A - A kind of new binocular image feature matching method based on RANSAC - Google Patents
- Publication number: CN107423772A (application CN201710668389.0A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G06T5/80—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
Abstract
The invention discloses a binocular image feature matching method based on RANSAC. The method applies random sample consensus (RANSAC) to select matches among the binocular features extracted by the SIFT operator, which effectively improves matching accuracy and greatly reduces false matches. It specifically comprises the following steps. First, image calibration: the calibration data of the two cameras of the binocular rig are obtained using the Zhang Zhengyou chessboard calibration method. Second, image distortion correction: the binocular images are read in and distortion-corrected using the calibration data. Then, feature detection and extraction: the features of the two images are detected and extracted with the SIFT operator. Finally, RANSAC image matching: matching points are selected using the RANSAC method. The principle of the method is simple and its accuracy is high; it significantly reduces false matches in SIFT image matching and strengthens the matching result.
Description
Technical field
The invention belongs to the field of image matching technology, and in particular relates to a binocular image feature matching method based on RANSAC.
Background technology
In recent years, computer vision has been widely applied in numerous areas such as mobile robot navigation, remote sensing, medical imaging, industrial inspection and face recognition, showing good development prospects. Stereo vision is one of the important branches of computer vision; its major function is to recover real three-dimensional scene information from two-dimensional images.
At present, many mobile robots use binocular stereo vision navigation technology: the DEMO III unmanned autonomous vehicle developed by the U.S. Jet Propulsion Laboratory (JPL), the "Jade Rabbit" lunar rover independently developed by China's Chang'e team, the wide-baseline stereo vision navigation instrument developed by Microsoft, the fully autonomous football-locating instrument developed by Harbin Institute of Technology, and the non-contact human-body three-dimensional measuring instrument already put into production all apply binocular stereo vision technology. Moreover, today's most popular VR technology combines binocular stereo vision with the newest frontier sciences such as bionics and digital holography to bring people lifelike 360° scene experiences; it is a disruptive technology that changes our way of life and has brought a wave of transformative reform to many key areas such as medicine, military aerospace, industrial simulation, education and entertainment. Although stereo vision research has achieved remarkable results in recent years, the research level is still not mature, applications have not reached full intelligence, and many problems and bottlenecks remain. In fact, it is very difficult for a computer to accurately recognize and understand environmental information, and much improvement is still needed to build a more practical stereo vision system. Among these topics, camera calibration and stereo matching are the most important research modules in the stereo vision field: the accuracy of the calibration result is the basic guarantee of three-dimensional information recovery, while the robustness, accuracy and density of stereo matching are the foundation of three-dimensional reconstruction. However, at present there is no universal calibration and stereo matching method that attains practicality, robustness and accuracy with perfect effect.
Content of the invention
The technical problem solved by the invention is to provide a new binocular image feature matching method based on RANSAC.
The technical solution for realizing the object of the invention is: a new binocular image feature matching method based on RANSAC, comprising the following steps:
Step 1: Binocular image acquisition;
Step 2: The binocular images are effectively calibrated using the Zhang Zhengyou calibration method. 23 black-and-white chessboard pictures of known accurate dimensions are shot from different angles, and the 20 clearest pictures are chosen from them. The feature points in each image are found by corner detection, and the five intrinsic parameters and all extrinsic parameters of each camera are then calculated respectively. The radial distortion coefficients are next estimated initially by least squares, and finally all parameters are optimized by minimization. The obtained parameters are represented with the following matrices. The intrinsic parameter matrix is:

A = \begin{bmatrix} \alpha_x & \gamma & \mu_0 \\ 0 & \alpha_y & \nu_0 \\ 0 & 0 & 1 \end{bmatrix}

and the extrinsic parameters are the rotation matrix R and the translation matrix T.
Step 3: The input binocular images are distortion-corrected using the calibration data obtained in step 2, and the two images are then stereo-rectified. First the binocular images and the calibration data of step 2 are read in; then each image is distortion-corrected and the corrected image is saved; finally the two undistorted images are stereo-rectified according to the epipolar constraint.
Step 4: Image feature points are detected and extracted using the SIFT operator. First, SIFT feature points are detected in each image and saved as keypoints; then a 128-dimensional feature vector is computed for every detected keypoint;
Step 5: Binocular matching. Falsely matched points are removed using the RANSAC method. The two images are first coarsely matched to obtain matching pairs, and an affine transformation matrix is computed from the matching pairs; then, given an error threshold, the set of correct matching pairs is obtained. Whether the number of iterations exceeds a threshold is judged: if not, the affine matrix is recalculated; if so, the group with the most correct matching pairs among all iterations is taken, the equation formed by the resulting affine matrix is solved by least squares, and high-accuracy feature matching pairs are finally obtained.
Compared with the prior art, the remarkable advantages of the invention are: 1) the method of the invention detects and extracts the features of the binocular images with the SIFT method, which has good robustness; 2) the invention further screens the matching pairs with the RANSAC method, which greatly improves the matching accuracy.
The present invention is described in further detail below in conjunction with the accompanying drawings.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is the flow chart of the algorithm of the invention.
Fig. 3 shows the left and right camera images after stereo rectification, where figure (a) is the rectified left view and figure (b) is the rectified right view.
Fig. 4 shows the result of an embodiment of the invention, where figure (a) is the left image and figure (b) is the right image.
Embodiment
With reference to the accompanying drawings, the binocular image feature matching method based on RANSAC of the invention comprises the following steps:
Step 1, acquire images using a binocular camera;
Step 2, calibrate the binocular images collected in step 1 using the Zhang Zhengyou calibration method to obtain the intrinsic and extrinsic parameters of the binocular cameras; specifically:
Step 2-1, shoot M black-and-white chessboard pictures of known accurate dimensions from different angles, and choose the N clearest pictures from them; where M > N > 15;
Step 2-2, find the feature points in each image by corner detection;
Step 2-3, calculate the five intrinsic parameters of each camera respectively; the result is:

A = \begin{bmatrix} \alpha_x & \gamma & \mu_0 \\ 0 & \alpha_y & \nu_0 \\ 0 & 0 & 1 \end{bmatrix}

where \alpha_x and \alpha_y are the camera focal lengths, (\mu_0, \nu_0) are the principal point coordinates, and \gamma is the axis skew parameter;
the extrinsic parameters are the rotation matrix R and the translation matrix T;
Step 2-4, calculate the radial distortion parameters of the pictures corresponding to each camera using least squares;
Step 2-5, correct every picture using the distortion parameters, then recalculate the intrinsic and extrinsic parameters of the cameras using the corrected images.
Step 3, read the binocular images and correct them using the calibration parameters; specifically:
Step 3-1, read in the binocular images together with their intrinsic and extrinsic parameters;
Step 3-2, perform distortion correction on the binocular images using the intrinsic and extrinsic parameters and save the corrected images; the correction formulas are:

\bar{u} = u + (u - u_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right]
\bar{v} = v + (v - v_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right]

where k_1 and k_2 are the radial distortion coefficients, (u_0, v_0) are the principal point coordinates, (u, v) and (\bar{u}, \bar{v}) are the ideal and actual pixel coordinates, and (x, y) and (\bar{x}, \bar{y}) are the ideal and actual image coordinates;
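The update formulas of step 3-2 transcribe directly into code; a minimal sketch of the radial model above, with all numeric inputs chosen only for illustration:

```python
def radial_distort(u, v, u0, v0, x, y, k1, k2):
    """Shift pixel (u, v) away from the principal point (u0, v0) by the
    radial factor k1*r^2 + k2*r^4, where r^2 = x^2 + y^2 is evaluated in
    ideal image coordinates."""
    r2 = x * x + y * y
    factor = k1 * r2 + k2 * r2 * r2
    return u + (u - u0) * factor, v + (v - v0) * factor

# 10 px right of and 10 px below the principal point, mild distortion k1 = 0.1
u_bar, v_bar = radial_distort(110.0, 250.0, 100.0, 240.0, 0.1, 0.0, 0.1, 0.0)
```

The displacement grows with the distance from the principal point, which is why distortion is most visible at the image corners.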
Step 3-3, stereo-rectify the two undistorted images according to the epipolar constraint, finally obtaining the rectified images; the rectification formulas are:

R_1' = R_{rect} \times R_1
R_2' = R_{rect} \times R_2

where R_1 and R_2 are the synthesized rotation matrices of the binocular images, and R_1' and R_2' are the corresponding two overall rotation matrices; R_{rect} is the transformation matrix obtained by transforming the translation matrix T, where e_3 = e_1 \times e_2.
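The transformation that turns the translation matrix T into R_rect appears only as a figure in the original; the sketch below uses the standard construction consistent with the stated relation e_3 = e_1 × e_2, with e_1 along the baseline and e_2 orthogonal to e_1 and the optical axis. That this is exactly the patent's construction is an assumption.

```python
import numpy as np

def rect_rotation(T):
    """Rectifying rotation R_rect built from the translation vector T:
    row e1 along the baseline, row e2 orthogonal to e1 and the optical
    axis, row e3 = e1 x e2 as stated in the text."""
    e1 = T / np.linalg.norm(T)
    e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])
    e3 = np.cross(e1, e2)
    return np.vstack([e1, e2, e3])

# A purely horizontal baseline needs no rotation: R_rect is the identity.
R_rect = rect_rotation(np.array([1.0, 0.0, 0.0]))
```

Applying R_1' = R_rect @ R_1 and R_2' = R_rect @ R_2 then brings the epipolar lines of both views onto common image rows.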
Step 4, detect the feature points in the binocular images using the SIFT operator and extract them; specifically:
Step 4-1, detect SIFT feature points in each image respectively and save them as keypoints;
Step 4-2, compute a 128-dimensional feature vector for every keypoint detected in step 4-1.
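Step 4-2 ends with one 128-dimensional descriptor per keypoint, and the coarse matching of step 5-1 pairs descriptors by nearest-neighbour Euclidean distance. The sketch below uses 3-dimensional toy descriptors in place of 128-dimensional ones, and the ratio test is the usual SIFT matching criterion rather than something the patent specifies.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def match_descriptors(left, right, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: keep a match only when
    the best distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(left):
        ranked = sorted((euclidean(d, r), j) for j, r in enumerate(right))
        (d1, j), (d2, _) = ranked[0], ranked[1]
        if d1 < ratio * d2:
            matches.append((i, j))
    return matches

# Toy 3-D descriptors standing in for the 128-D SIFT vectors.
matches = match_descriptors([(0, 0, 1), (1, 0, 0)],
                            [(1, 0, 0.1), (0, 0, 0.9), (5, 5, 5)])
# -> [(0, 1), (1, 0)]
```

These coarse matches form the set P on which RANSAC operates in step 5.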
Step 5, match the binocular images, remove falsely matched feature points using the RANSAC method, and generate the result image, completing the binocular image feature matching based on RANSAC. Specifically:
Step 5-1, coarsely match the two images; the set of matches is:
P = {(x_i, y_i), (x_i', y_i')}, i = 1, 2, …, N, where N is the number of initial matching pairs; choose any three matching pairs m, n and o in the initial set P; their set is expressed as:
S1={ ((xm,ym);(x'm,y'm)),((xn,yn);(x'n,y'n)),((xo,yo);(x'o,y'o))}
Substitute the coordinate data of the 3 matching pairs into the following affine transformation equation:

\begin{bmatrix} x_m & y_m & 0 & 0 & 1 & 0 \\ 0 & 0 & x_m & y_m & 0 & 1 \\ x_n & y_n & 0 & 0 & 1 & 0 \\ 0 & 0 & x_n & y_n & 0 & 1 \\ x_o & y_o & 0 & 0 & 1 & 0 \\ 0 & 0 & x_o & y_o & 0 & 1 \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ h_{21} \\ h_{22} \\ t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x_m' \\ y_m' \\ x_n' \\ y_n' \\ x_o' \\ y_o' \end{bmatrix}

Let X = (h_{11}, h_{12}, h_{21}, h_{22}, t_x, t_y)^T; the above can be written as AX = b, from which the affine transformation matrix is obtained;
Step 5-2, given the error threshold T, obtain the subset of P meeting the condition, i.e., the set of correct matches:

S_1^* = \left\{ ((x_i, y_i); (x_i', y_i')) : \left| \begin{bmatrix} x_i' \\ y_i' \end{bmatrix} - \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} - \begin{bmatrix} t_x \\ t_y \end{bmatrix} \right| < T \right\}
Step 5-3, judge whether the number of iterations exceeds the threshold H; if not, jump to step 5-1; if so, jump to step 5-4;
Step 5-4, take the subset with the most elements among the subsets of the K iterations;
Step 5-5, use this subset to solve the equations AX = b; the least-squares solution is X = [A^T A]^{-1} A^T b. The matching point set with a high correct rate is finally obtained, completing the matching.
On the one hand, the method of the invention selects the highly stable SIFT method for feature extraction and detection, giving good robustness; on the other hand, it further screens the matching pairs with the RANSAC method, which substantially improves the matching accuracy.
It is described in more detail below.
With reference to Fig. 2, the detailed process of the invention is as follows:
Step 1: Binocular image acquisition;
Step 2: The binocular images are effectively calibrated using the Zhang Zhengyou calibration method. 23 black-and-white chessboard pictures of known accurate dimensions are shot from different angles, and the 20 clearest pictures are chosen from them. The feature points in each image are found by corner detection, and the five intrinsic parameters and all extrinsic parameters of each camera are then calculated respectively. The radial distortion coefficients are next estimated initially by least squares, and finally all parameters are optimized by minimization. The obtained parameters are represented with the following matrices. The intrinsic parameter matrix is:

A = \begin{bmatrix} \alpha_x & \gamma & \mu_0 \\ 0 & \alpha_y & \nu_0 \\ 0 & 0 & 1 \end{bmatrix}

and the extrinsic parameters are the rotation matrix R and the translation matrix T.
Step 3: Perform distortion correction on the input binocular images using the image calibration data obtained in step 2, then stereo-rectify the two images.
First read in the binocular images and the calibration data of step 2, then perform distortion correction on each image using the following formulas and save the corrected images:

\bar{u} = u + (u - u_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right]
\bar{v} = v + (v - v_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right]
The two undistorted images are stereo-rectified according to the epipolar constraint; the rectification formulas are:

R_1' = R_{rect} \times R_1
R_2' = R_{rect} \times R_2

where R_1 and R_2 are the synthesized rotation matrices of the binocular images, and R_1' and R_2' are the corresponding two overall rotation matrices. R_{rect} is the transformation matrix obtained by transforming the translation matrix T, where e_3 = e_1 \times e_2.
The finally obtained rectified images are shown in Fig. 3.
Step 4: Image feature points are detected and extracted using the SIFT operator. First, SIFT feature points are detected in each image and saved as keypoints; then a 128-dimensional feature vector is computed for every detected keypoint;
Step 5: Binocular matching. Falsely matched points are removed using the RANSAC method. First coarsely match the two images; the set of matches is:
P = {(x_i, y_i), (x_i', y_i')}, i = 1, 2, …, N,
where N is the number of initial matching pairs. Choose any three matching pairs m, n and o in the initial set P; their set is expressed as:
S_1 = {((x_m, y_m); (x_m', y_m')), ((x_n, y_n); (x_n', y_n')), ((x_o, y_o); (x_o', y_o'))}
Substitute the coordinate data of the 3 matching pairs into the following affine transformation equation:

\begin{bmatrix} x_m & y_m & 0 & 0 & 1 & 0 \\ 0 & 0 & x_m & y_m & 0 & 1 \\ x_n & y_n & 0 & 0 & 1 & 0 \\ 0 & 0 & x_n & y_n & 0 & 1 \\ x_o & y_o & 0 & 0 & 1 & 0 \\ 0 & 0 & x_o & y_o & 0 & 1 \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ h_{21} \\ h_{22} \\ t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x_m' \\ y_m' \\ x_n' \\ y_n' \\ x_o' \\ y_o' \end{bmatrix}

Let X = (h_{11}, h_{12}, h_{21}, h_{22}, t_x, t_y)^T; the above can be written as AX = b, from which the affine transformation matrix is obtained.
Given the error threshold T (taken as 4), the subset of P meeting the condition (the set of correct matches) is obtained:

S_1^* = \left\{ ((x_i, y_i); (x_i', y_i')) : \left| \begin{bmatrix} x_i' \\ y_i' \end{bmatrix} - \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} - \begin{bmatrix} t_x \\ t_y \end{bmatrix} \right| < T \right\}
Then judge whether the number of iterations exceeds the threshold (chosen as 30); if not, return to the start; if so, continue. Take the subset with the most elements among the subsets of the K iterations.
Finally, use this subset to solve the equations AX = b; the least-squares solution is X = [A^T A]^{-1} A^T b, and the high-accuracy matching point set is finally obtained.
As verified by the embodiment, the RANSAC binocular image feature matching method of the invention not only has good robustness but also very high accuracy.
Claims (5)
1. A binocular image feature matching method based on RANSAC, characterized by comprising the following steps:
Step 1, acquiring images using a binocular camera;
Step 2, calibrating the binocular images collected in step 1 using the Zhang Zhengyou calibration method to obtain the intrinsic and extrinsic parameters of the binocular cameras;
Step 3, reading the binocular images and correcting the images using the calibration parameters;
Step 4, detecting the feature points in the binocular images using the SIFT operator and extracting the feature points;
Step 5, matching the binocular images, removing falsely matched feature points using the RANSAC method, and generating the result image, completing the binocular image feature matching based on RANSAC.
2. The binocular image feature matching method based on RANSAC according to claim 1, wherein in step 2 the binocular images are calibrated using the Zhang Zhengyou calibration method to obtain the intrinsic and extrinsic parameters, specifically:
Step 2-1, shooting M black-and-white chessboard pictures of known accurate dimensions from different angles, and choosing the N clearest pictures from them; where M > N > 15;
Step 2-2, finding the feature points in each image by corner detection;
Step 2-3, calculating the five intrinsic parameters of each camera respectively; the result is:
A = \begin{bmatrix} \alpha_x & \gamma & \mu_0 \\ 0 & \alpha_y & \nu_0 \\ 0 & 0 & 1 \end{bmatrix}
where \alpha_x and \alpha_y are the camera focal lengths, (\mu_0, \nu_0) are the principal point coordinates, and \gamma is the axis skew parameter;
the extrinsic parameters are: the rotation matrix R and the translation matrix T;
Step 2-4, calculating the radial distortion parameters of the pictures corresponding to each camera using least squares;
Step 2-5, correcting every picture using the distortion parameters, then recalculating the intrinsic and extrinsic parameters of the cameras using the corrected images.
3. The binocular image feature matching method based on RANSAC according to claim 1, wherein step 3 reads the binocular images and corrects the images using the calibration parameters, specifically:
Step 3-1, reading in the binocular images together with their intrinsic and extrinsic parameters;
Step 3-2, performing distortion correction on the binocular images using the intrinsic and extrinsic parameters and saving the corrected images; the correction formulas are:
\bar{u} = u + (u - u_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right]
\bar{v} = v + (v - v_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right]
where k_1 and k_2 are the radial distortion coefficients, (u_0, v_0) are the principal point coordinates, (u, v) and (\bar{u}, \bar{v}) are the ideal and actual pixel coordinates, and (x, y) and (\bar{x}, \bar{y}) are the ideal and actual image coordinates;
Step 3-3, stereo-rectifying the two undistorted images according to the epipolar constraint, finally obtaining the rectified images; the rectification formulas are:
R_1' = R_{rect} \times R_1
R_2' = R_{rect} \times R_2
where R_1 and R_2 are the synthesized rotation matrices of the binocular images, and R_1' and R_2' are the corresponding two overall rotation matrices; R_{rect} is the transformation matrix obtained by transforming the translation matrix T, where e_3 = e_1 \times e_2.
4. The binocular image feature matching method based on RANSAC according to claim 1, wherein step 4 detects the image feature points using the SIFT operator and extracts the feature points, specifically:
Step 4-1, detecting SIFT feature points in each image respectively and saving them as keypoints;
Step 4-2, computing a 128-dimensional feature vector for every keypoint detected in step 4-1.
5. The binocular image feature matching method based on RANSAC according to claim 1, wherein step 5 matches the binocular images and removes falsely matched feature points using the RANSAC method, specifically:
Step 5-1, coarsely matching the two images; the set of matches is:
P = {(x_i, y_i), (x_i', y_i')}, i = 1, 2, …, N, where N is the number of initial matching pairs; choosing any three matching pairs m, n and o in the initial set P, their set expressed as:
S_1 = {((x_m, y_m); (x_m', y_m')), ((x_n, y_n); (x_n', y_n')), ((x_o, y_o); (x_o', y_o'))}
Substituting the coordinate data of the 3 matching pairs into the following affine transformation equation:
\begin{bmatrix} x_m & y_m & 0 & 0 & 1 & 0 \\ 0 & 0 & x_m & y_m & 0 & 1 \\ x_n & y_n & 0 & 0 & 1 & 0 \\ 0 & 0 & x_n & y_n & 0 & 1 \\ x_o & y_o & 0 & 0 & 1 & 0 \\ 0 & 0 & x_o & y_o & 0 & 1 \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ h_{21} \\ h_{22} \\ t_x \\ t_y \end{bmatrix} = \begin{bmatrix} x_m' \\ y_m' \\ x_n' \\ y_n' \\ x_o' \\ y_o' \end{bmatrix}
Let X = (h_{11}, h_{12}, h_{21}, h_{22}, t_x, t_y)^T; the above can be written as AX = b, from which the affine transformation matrix is obtained;
Step 5-2, given the error threshold T, obtaining the subset of P meeting the condition, i.e., the set of correct matches:
S_1^* = \left\{ ((x_i, y_i); (x_i', y_i')) : \left| \begin{bmatrix} x_i' \\ y_i' \end{bmatrix} - \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} - \begin{bmatrix} t_x \\ t_y \end{bmatrix} \right| < T \right\}
Step 5-3, judging whether the number of iterations exceeds the threshold H; if not, jumping to step 5-1; if so, jumping to step 5-4;
Step 5-4, taking the subset with the most elements among the subsets of the K iterations;
Step 5-5, using this subset to solve the equations AX = b; the least-squares solution is: X = [A^T A]^{-1} A^T b; the matching point set with high accuracy is finally obtained, completing the matching.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710668389.0A CN107423772A (en) | 2017-08-08 | 2017-08-08 | A kind of new binocular image feature matching method based on RANSAC |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710668389.0A CN107423772A (en) | 2017-08-08 | 2017-08-08 | A kind of new binocular image feature matching method based on RANSAC |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107423772A true CN107423772A (en) | 2017-12-01 |
Family
ID=60437600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710668389.0A Pending CN107423772A (en) | 2017-08-08 | 2017-08-08 | A kind of new binocular image feature matching method based on RANSAC |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107423772A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103278138A (en) * | 2013-05-03 | 2013-09-04 | 中国科学院自动化研究所 | Method for measuring three-dimensional position and posture of thin component with complex structure |
CN105528785A (en) * | 2015-12-03 | 2016-04-27 | 河北工业大学 | Binocular visual image stereo matching method |
Non-Patent Citations (2)
Title |
---|
CHUGUI, Y ET AL: "A Novel Point Matching Method for Stereovision Measurement Using RANSAC Affine Transformation", 《MEASUREMENT TECHNOLOGY AND INTELLIGENT INSTRUMENTS IX》 * |
ZHENG, KAIPENG: "Research on Camera Calibration and Stereo Matching Technology", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109003307A (en) * | 2018-06-11 | 2018-12-14 | 西北工业大学 | Fishing mesh sizing method based on underwater Binocular vision photogrammetry |
CN109003307B (en) * | 2018-06-11 | 2021-10-22 | 西北工业大学 | Underwater binocular vision measurement-based fishing mesh size design method |
CN109166127A (en) * | 2018-07-17 | 2019-01-08 | 同济大学 | A kind of wearable plant phenotype sensory perceptual system |
CN109166127B (en) * | 2018-07-17 | 2021-05-11 | 同济大学 | Wearable plant phenotype sensing system |
CN109407547A (en) * | 2018-09-28 | 2019-03-01 | 合肥学院 | Multi-cam assemblage on-orbit test method and system towards panoramic vision perception |
CN111383281A (en) * | 2018-12-29 | 2020-07-07 | 天津大学青岛海洋技术研究院 | Video camera calibration method based on RBF neural network |
CN109919247A (en) * | 2019-03-18 | 2019-06-21 | 北京石油化工学院 | Characteristic point matching method, system and equipment in harmful influence stacking binocular ranging |
CN111062990A (en) * | 2019-12-13 | 2020-04-24 | 哈尔滨工程大学 | Binocular vision positioning method for underwater robot target grabbing |
CN111062990B (en) * | 2019-12-13 | 2023-06-02 | 哈尔滨工程大学 | Binocular vision positioning method for underwater robot target grabbing |
CN111160298A (en) * | 2019-12-31 | 2020-05-15 | 深圳市优必选科技股份有限公司 | Robot and pose estimation method and device thereof |
CN111160298B (en) * | 2019-12-31 | 2023-12-01 | 深圳市优必选科技股份有限公司 | Robot and pose estimation method and device thereof |
CN111292239A (en) * | 2020-01-21 | 2020-06-16 | 天目爱视(北京)科技有限公司 | Three-dimensional model splicing equipment and method |
CN111429493A (en) * | 2020-03-20 | 2020-07-17 | 青岛联合创智科技有限公司 | Method for matching feature points among multiple images |
CN112509034A (en) * | 2020-11-26 | 2021-03-16 | 江苏集萃未来城市应用技术研究所有限公司 | Large-range accurate detection method for body temperature of pedestrian based on image pixel point matching |
CN112509035A (en) * | 2020-11-26 | 2021-03-16 | 江苏集萃未来城市应用技术研究所有限公司 | Double-lens image pixel point matching method for optical lens and thermal imaging lens |
CN112581542A (en) * | 2020-12-24 | 2021-03-30 | 北京百度网讯科技有限公司 | Method, device and equipment for evaluating automatic driving monocular calibration algorithm |
CN113674407A (en) * | 2021-07-15 | 2021-11-19 | 中国地质大学(武汉) | Three-dimensional terrain reconstruction method and device based on binocular vision image and storage medium |
CN113674407B (en) * | 2021-07-15 | 2024-02-13 | 中国地质大学(武汉) | Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107423772A (en) | A kind of new binocular image feature matching method based on RANSAC | |
CN108010085B (en) | Target identification method based on binocular visible light camera and thermal infrared camera | |
Wang et al. | 360sd-net: 360 stereo depth estimation with learnable cost volume | |
CN105678742B (en) | A kind of underwater camera scaling method | |
CN108534782B (en) | Binocular vision system-based landmark map vehicle instant positioning method | |
US10334168B2 (en) | Threshold determination in a RANSAC algorithm | |
CN103971378B (en) | A kind of mix the three-dimensional rebuilding method of panoramic picture in visual system | |
CN109242954B (en) | Multi-view three-dimensional human body reconstruction method based on template deformation | |
WO2016037486A1 (en) | Three-dimensional imaging method and system for human body | |
CN109272570A (en) | A kind of spatial point three-dimensional coordinate method for solving based on stereoscopic vision mathematical model | |
CN105654476B (en) | Binocular calibration method based on Chaos particle swarm optimization algorithm | |
CN101750029B (en) | Characteristic point three-dimensional reconstruction method based on trifocal tensor | |
CN107660336A (en) | For the image obtained from video camera, possess the image processing apparatus and its method of automatic compensation function | |
CN110992263B (en) | Image stitching method and system | |
CN106570899B (en) | Target object detection method and device | |
CN106447766A (en) | Scene reconstruction method and apparatus based on mobile device monocular camera | |
CN106919944A (en) | A kind of wide-angle image method for quickly identifying based on ORB algorithms | |
CN113298934B (en) | Monocular visual image three-dimensional reconstruction method and system based on bidirectional matching | |
CN104268876A (en) | Camera calibration method based on partitioning | |
EP3185212B1 (en) | Dynamic particle filter parameterization | |
CN110264527A (en) | Real-time binocular stereo vision output method based on ZYNQ | |
Shen et al. | Distortion-tolerant monocular depth estimation on omnidirectional images using dual-cubemap | |
CN112465796B (en) | Light field feature extraction method integrating focal stack and full-focus image | |
Jin | A three-point minimal solution for panoramic stitching with lens distortion | |
KR101673144B1 (en) | Stereoscopic image registration method based on a partial linear method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20171201 |