CN103491361A - Method for improving sparse corresponding point image matching accuracy and correcting stereo image - Google Patents
- Publication number: CN103491361A
- Application number: CN201310460007.7A
- Authority
- CN
- China
- Legal status: Granted
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a method for improving the matching accuracy of sparse corresponding points between images and for rectifying a stereo image. The method comprises: (1) improving the matching accuracy of the sparse corresponding points, namely magnifying local image blocks and finding the minimum SSD value and the corresponding center-point coordinates with a block SSD matching method; (2) rectifying the stereo image, namely preprocessing the corresponding-point coordinates, solving the last six coefficients of the Hg matrix by SVD and filling in the first three, applying the inverse preprocessing to Hg to obtain the H matrix, and using H to correct one image so that only horizontal parallax remains between it and the other image. The advantages are that the matching accuracy of sparse corresponding points is improved without a noticeable increase in search time; the rectification procedure is simple to implement, requires no calibration, does not require the corresponding points to be coplanar, and shortens the search time of the subsequent dense corresponding-point matching.
Description
Technical field
The present invention relates to image processing and computer vision, and in particular to a method for improving the matching accuracy of sparse corresponding points between images and rectifying stereo images.
Background technology
For matching sparse feature points between two images there are two main approaches: (1) manually selecting and matching corresponding points; (2) automatically detecting and matching feature points, using corner detectors such as Harris or SUSAN, or descriptor methods such as SIFT or SURF.
In computer vision, stereo image rectification (also called epipolar rectification) corrects one (or both) of the images so that all corresponding points of the two images differ only in horizontal parallax, shortening the search in the subsequent dense corresponding-point matching. There are three main approaches: (1) calibrate the cameras and combine the intrinsic matrix K with the extrinsic parameters R and T of the different viewpoints to obtain the homography H used for rectification (this requires camera calibration, but the corresponding points need not be coplanar); (2) solve the homography H for an image containing only a single plane or the plane at infinity and rectify the image of that plane (no calibration is needed, but points off the plane are not rectified correctly, so the corresponding points must be restricted to one plane); (3) build an objective function of the fundamental matrix F and the corresponding points, optimize its parameters by nonlinear optimization, and finally solve for the homography H (no calibration is needed and the corresponding points need not be coplanar).
In practice, existing sparse corresponding-point matching suffers from two problems: manually matched points are only accurate to about ±2 pixels, and automatically matched feature points may be mismatched outright. Existing epipolar rectification methods either require camera calibration or require the corresponding points to lie in a common plane.
Summary of the invention
The object of the invention is to overcome these shortcomings of the prior art by providing a method that, for cameras in a parallel-optical-axis configuration, improves the matching accuracy of sparse corresponding points and rectifies the stereo image without calibration and without requiring the corresponding points to be coplanar.
This object is achieved by the following technical solution. A method for improving the matching accuracy of sparse corresponding points and rectifying a stereo image comprises the following steps:
Step 1: capture at least two images with a binocular camera of parallel-optical-axis structure.
Step 2: detect and match at least 5 groups of sparse corresponding points, either with an automatic feature detection and matching algorithm or manually, and record the corresponding-point coordinates.
Step 3: improve the matching accuracy of the sparse corresponding points: magnify the local image blocks, and use block SSD matching to find the minimum SSD value and the corresponding center-point coordinates.
Step 4: rectify the stereo image: preprocess the corresponding-point coordinates, solve the last six coefficients of the Hg matrix by SVD and fill in the first three, apply the inverse preprocessing to Hg to obtain the H matrix, and then use H to correct one image so that only horizontal parallax remains with respect to the other image.
Step 3 comprises the following sub-steps:
A. Magnification of the local image blocks: apply bilinear interpolation to the brightness or color values of the image blocks centered on the corresponding points of the left and right images, magnifying those local regions.
B. Block SSD matching: traverse the interpolated left-image block with the interpolated right-image block, compute the SSD at every position, find its minimum, and record the center-point coordinates. The left and right images are interchangeable; without loss of generality, the corresponding point of the right image is taken as the reference and the accuracy of the corresponding point in the left image is improved.
Step A comprises the following sub-steps:
A1. Take the i*i (e.g. 11*11) brightness or color block of the right image centered on the corresponding point, and use bilinear interpolation to obtain a (2i-1)*(2i-1) (e.g. 21*21) block, i.e. a 2x magnification in width and height.
A2. Extract the (i+2j)*(i+2j) (e.g. 21*21) block of the left image centered on the selected corresponding point, and use bilinear interpolation to obtain a (2i+4j-1)*(2i+4j-1) (e.g. 41*41) block, again magnified 2x in width and height; here the range of i is 5 ≤ i ≤ 15 and the range of j is 2 ≤ j ≤ 10.
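The 2x magnification of steps A1/A2 keeps the original samples on the even grid positions of the enlarged block and fills the odd positions with neighbour averages. A minimal Python sketch of this bilinear magnification (the function name and NumPy usage are illustrative, not from the patent):

```python
import numpy as np

def magnify_2x(block):
    """Bilinearly interpolate an i*i block to (2i-1)*(2i-1):
    original samples stay on even grid positions; odd positions
    are averages of their neighbours."""
    i = block.shape[0]
    out = np.zeros((2 * i - 1, 2 * i - 1), dtype=float)
    out[::2, ::2] = block                                     # original pixels
    out[1::2, ::2] = (block[:-1, :] + block[1:, :]) / 2.0     # vertical midpoints
    out[::2, 1::2] = (block[:, :-1] + block[:, 1:]) / 2.0     # horizontal midpoints
    out[1::2, 1::2] = (block[:-1, :-1] + block[1:, :-1] +
                       block[:-1, 1:] + block[1:, 1:]) / 4.0  # cell centres
    return out

patch = np.arange(9, dtype=float).reshape(3, 3)  # toy 3*3 brightness block
big = magnify_2x(patch)
print(big.shape)  # (5, 5)
```

With i = 11 this yields the 21*21 block of step A1; applied to the (i+2j)-sized left-image block it yields the (2i+4j-1)-sized block of step A2.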
Step B comprises the following sub-steps:
B1. Slide the interpolated right-image block over the larger interpolated left-image block in raster order, comparing windows of the same size (2i-1)*(2i-1), and find the minimum of the SSD.
B2. Record the center-point coordinates of that minimum.
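The raster traversal of steps B1/B2 is an exhaustive SSD search. A sketch (function name and the toy arrays are illustrative assumptions):

```python
import numpy as np

def refine_by_ssd(left_big, right_tpl):
    """Slide the interpolated right-image template over the larger
    interpolated left-image block; return the (row, col) of the
    window with minimum SSD, plus that SSD value."""
    th, tw = right_tpl.shape
    H, W = left_big.shape
    best_pos, best_ssd = (0, 0), float("inf")
    for r in range(H - th + 1):          # raster-order traversal
        for c in range(W - tw + 1):
            win = left_big[r:r + th, c:c + tw]
            ssd = float(np.sum((win - right_tpl) ** 2))
            if ssd < best_ssd:
                best_pos, best_ssd = (r, c), ssd
    return best_pos, best_ssd

# toy example: plant the template inside a larger block and recover it
tpl = np.arange(9, dtype=float).reshape(3, 3)
left = np.zeros((10, 10))
left[3:6, 4:7] = tpl
(r, c), ssd = refine_by_ssd(left, tpl)
print(r, c, ssd)  # 3 4 0.0
```

The recorded center point is the window offset plus half the template size; because both blocks were magnified 2x, dividing that center by 2 gives the refined (half-pixel) position in the original image.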
Step 4 comprises the following sub-steps:
(1) preprocess the corresponding-point coordinates;
(2) solve the last six coefficients of the Hg matrix by SVD and fill in the first three;
(3) apply the inverse preprocessing to Hg to obtain the H matrix;
(4) compute the corresponding-point coordinates of all image points in the common region by bilinear interpolation based on the H matrix;
(5) use the corresponding-point coordinates and bilinear interpolation to obtain the rectified version of one of the images.
In step (1), the preprocessing of the corresponding-point coordinates has the following three choices:
(a) no processing;
(b) center translation: x_zl = x_l - E(x_l), y_zl = y_l - E(y_l), x_zr = x_r - E(x_r), y_zr = y_r - E(y_r);
(c) normalization: x_gl = (x_l - E(x_l))/D(x_l), y_gl = (y_l - E(y_l))/D(y_l), x_gr = (x_r - E(x_r))/D(x_r), y_gr = (y_r - E(y_r))/D(y_r);
where subscript l denotes the left image, subscript r the right image, subscript z center-translation processing, and subscript g normalization; x is the abscissa and y the ordinate of a point; E(x) and E(y) are the means, and D(x) and D(y) the variances, of the abscissas and ordinates before preprocessing.
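The three preprocessing choices can be sketched as follows. The translation says "variance" for D(x); Hartley-style normalization ordinarily divides by the standard deviation, and this sketch assumes that reading:

```python
import numpy as np

def preprocess(pts, mode="normalize"):
    """pts: (N, 2) array of corresponding-point coordinates for one image.
    mode: 'none' | 'center' | 'normalize'.
    Returns the transformed points plus the (mean, scale) needed to
    undo the transform later (the 'inverse preprocessing' of step (3))."""
    pts = np.asarray(pts, dtype=float)
    if mode == "none":
        return pts.copy(), np.zeros(2), np.ones(2)
    mean = pts.mean(axis=0)            # E(x), E(y)
    if mode == "center":
        return pts - mean, mean, np.ones(2)
    scale = pts.std(axis=0)            # D(x), D(y): std assumed, see lead-in
    return (pts - mean) / scale, mean, scale
```

The returned mean and scale are exactly what the later inverse-preprocessing step needs to recover H from Hg.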
In step (2), either the left or the right image may be rectified; suppose the left image is rectified. From the requirement that the y coordinates be identical after rectification, construct the equation Ah = 0, apply the SVD decomposition A = U D V^T to the A matrix, and obtain the preprocessed homography matrix.
In step (3), undo whichever preprocessing was applied to the Hg matrix to obtain the required homography matrix H:
(I) no processing;
(II) inverse center translation;
(III) denormalization.
If the right image is rectified instead, the procedure is the same as for the left image.
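The exact layout of the A matrix was a figure lost in extraction; under the stated constraint that the corrected left-image y coordinate equals the right-image y coordinate, each correspondence yields one linear row in the last six coefficients h = [h4..h9]. The following reconstruction is therefore an assumption derived from that constraint, not the patent's own listing:

```python
import numpy as np

def solve_hg(left_pts, right_pts):
    """Solve the last six coefficients h = [h4..h9] of Hg from A h = 0.
    Each correspondence (x, y) <-> (x', y') contributes one row via
        (h4*x + h5*y + h6) / (h7*x + h8*y + h9) = y'
     => h4*x + h5*y + h6 - y'*(h7*x + h8*y + h9) = 0.
    The first row of Hg is filled in as [1, 0, 0] (one of the two
    choices the patent names)."""
    A = np.array([[x, y, 1.0, -yr * x, -yr * y, -yr]
                  for (x, y), (_, yr) in zip(left_pts, right_pts)])
    _, _, Vt = np.linalg.svd(A)   # A = U D V^T
    h = Vt[-1]                    # last column of V: (least-squares) null vector
    return np.vstack([[1.0, 0.0, 0.0], h[:3], h[3:]])
```

With at least 5 correspondences A has a one-dimensional (least-squares) null space, and the last singular vector minimizes ||Ah|| under ||h|| = 1, as in the normalized direct linear transformation.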
Step (4) comprises the following sub-steps:
(4-1) Bilinear interpolation is carried out box by box on 16*16 boxes; f(1), f(2), f(3), f(4) denote the H-mapped points of the four vertices of a box. Inside each box, first interpolate the point coordinates on the four edges with the formulas:
f_12(k,1) = (f(1)*(16-k) + f(2)*k) >> 2,
f_23(16,k) = (f(2)*(16-k) + f(3)*k) >> 2,
f_34(k,16) = (f(3)*(16-k) + f(4)*k) >> 2,
f_41(1,k) = (f(4)*(16-k) + f(1)*k) >> 2;
(4-2) then interpolate the point coordinates of the interior of the box with the formula:
f(x,y) = (f(1)*(16-x)*(16-y) + f(2)*x*(16-y) + f(3)*(16-x)*y + f(4)*x*y) >> 3;
where the subscripts of f_12 etc. identify the first and second vertices of the interpolated edge; k indexes the point on that edge at a distance of k-1 pixels from its first vertex, with k ranging over 2~15; x and y are coordinates inside the box, each ranging over 2~15. The meaning of the formulas is to compute, by interpolation based on the H matrix, the corresponding-point coordinates of all points in the common region of the images.
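The box-wise coordinate interpolation can be sketched directly in the 2-D form. One caveat: the 2-D weights (16-k), k, (16-m), m sum to 16*16 = 256, so the normalizing shift is >> 8; the shifts of 2 and 3 printed in the text would not normalize these sums, so this sketch uses the normalizing shift — an assumption about the intended fixed-point scaling:

```python
def interp_box(f1, f2, f3, f4):
    """Map every (k, m) lattice point of a 16x16 box by integer
    bilinear interpolation of the four H-mapped corner coordinates
    f1..f4 (each an (x, y) pair of integers)."""
    out = {}
    for k in range(17):
        for m in range(17):
            out[(k, m)] = tuple(
                (f1[c] * (16 - k) * (16 - m) +
                 f2[c] * k * (16 - m) +
                 f3[c] * (16 - k) * m +
                 f4[c] * k * m) >> 8      # weight sum 256 -> shift 8
                for c in range(2))
    return out

# toy corners: the mapped box is an axis-aligned square
corners = ((0, 0), (160, 0), (0, 160), (160, 160))
grid = interp_box(*corners)
print(grid[(8, 8)])  # (80, 80): the box centre maps to the average
```

Only the 4 corner correspondences per box are computed with the homography; everything else is these cheap shift-and-add interpolations, which is the speed-up the patent describes.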
In step (5), the corresponding-point coordinates and bilinear interpolation yield the rectified left image. The brightness or color value at each required integer coordinate is obtained by bilinear interpolation of the brightness or color values of the 4 corresponding-point coordinates nearest the target point, using the formula:
f(x,y) = (f(1)*(64-x)*(64-y) + f(2)*x*(64-y) + f(3)*(64-x)*y + f(4)*x*y) >> 12,
where f(1), f(2), f(3) and f(4) are the brightness or color values of the 4 nearest coordinates.
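The >> 12 formula is a Q6 fixed-point bilinear sample: the weights (64-x), x, (64-y), y sum to 64*64 = 4096 = 2^12, so the shift normalizes exactly. A sketch, with the sub-pixel coordinate supplied as coordinate*64 (names are illustrative):

```python
def sample_bilinear_q6(img, xq, yq):
    """Sample img (list of rows) at a sub-pixel location given in
    Q6 fixed point (true coordinate * 64).  f1..f4 are the four
    nearest pixels; the weight products sum to 4096, hence >> 12."""
    xi, yi = xq >> 6, yq >> 6   # integer part of the coordinate
    fx, fy = xq & 63, yq & 63   # fractional part, in units of 1/64
    f1 = img[yi][xi]            # top-left neighbour
    f2 = img[yi][xi + 1]        # top-right
    f3 = img[yi + 1][xi]        # bottom-left
    f4 = img[yi + 1][xi + 1]    # bottom-right
    return (f1 * (64 - fx) * (64 - fy) + f2 * fx * (64 - fy) +
            f3 * (64 - fx) * fy + f4 * fx * fy) >> 12

img = [[0, 64], [0, 64]]
print(sample_bilinear_q6(img, 32, 0))  # 32: halfway between 0 and 64
```

Integer shifts instead of floating-point division keep the resampling loop cheap, matching the fixed-point style of the patent's formulas.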
The scheme can be described in detail as follows:
First, corresponding points are selected in the two images, either manually or by automatic feature-point detection and matching, yielding more than 5 groups of sparse corresponding points. Then the local images are interpolated and the corresponding points are finely matched. Finally, stereo rectification is performed without calibration and without requiring the corresponding points to be coplanar.
The main steps are:
(1) Manually or automatically choose no fewer than 5 groups of corresponding points anywhere in the images and record their coordinates.
(2) Bilinearly interpolate the brightness or color values of the local images and finely match the corresponding points. The position of the right-image corresponding point is taken as the reference; the point found in the left image must already be very close to the true left-image corresponding point (e.g. within ±2 pixels of error), and, because the cameras have parallel optical axes, block SSD matching can be used.
(2.1) First take the i*i (e.g. 11*11) brightness or color block of the right image centered on the corresponding point, and use bilinear interpolation to obtain a (2i-1)*(2i-1) (e.g. 21*21) block magnified 2x in width and height. In the original left image the search uses a block of the same size i*i within the j*j (e.g. 5*5) neighborhood of the corresponding point, i.e. within the 2j*2j (e.g. 10*10) neighborhood after magnification.
(2.2) Then extract the (i+2j)*(i+2j) (e.g. 21*21) block of the left image centered on the selected corresponding point, and use bilinear interpolation to obtain a (2i+4j-1)*(2i+4j-1) (e.g. 41*41) block magnified 2x in width and height.
(2.3) Slide the interpolated right-image block over the large interpolated left-image block in raster order, comparing windows of the same size (2i-1)*(2i-1); find the minimum of the SSD and record the center-point coordinates. The accuracy of the corresponding points is now ±0.5 pixel. The SSD compared is the sum, over the window, of the squared differences between the two blocks.
(3) Perform stereo rectification without calibration and without requiring coplanar corresponding points. Either the left or the right image may be rectified; suppose the left image is rectified (the procedure for the right image is the same). Let the left-image corresponding points be [x_l, y_l] and the right-image corresponding points [x_r, y_r].
(3.1) Preprocess the corresponding-point coordinates: (a) no processing; (b) center translation: x_zl = x_l - E(x_l), y_zl = y_l - E(y_l), x_zr = x_r - E(x_r), y_zr = y_r - E(y_r); (c) normalization: x_gl = (x_l - E(x_l))/D(x_l), y_gl = (y_l - E(y_l))/D(y_l), x_gr = (x_r - E(x_r))/D(x_r), y_gr = (y_r - E(y_r))/D(y_r). Here E(x) is the mean and D(x) the variance of the coordinates before normalization.
(3.2) Solve the last 6 coefficients of the H matrix by SVD. From the requirement that the y coordinates be identical after rectification, construct the equation Ah = 0, apply the SVD decomposition A = U D V^T to the A matrix, and take the last column of V as the last 6 coefficients of the homography matrix Hg; the first 3 coefficients are added as [h_g1 h_g2 h_g3] = [1 0 0] or [h_g1 h_g2 h_g3] = [h_g5 -h_g4 0].
(3.3) Undo whichever preprocessing was applied to the Hg matrix to obtain the homography matrix H. Corresponding to the three preprocessing choices of step (3.1):
(a) no processing;
(b) inverse center translation;
(c) denormalization.
(3.4) Compute all corresponding points of the image in the common region by interpolation based on the H matrix. To improve speed, the homography is applied only to obtain the 4 corner correspondences of every 16*16 box; the remaining corresponding-point coordinates are obtained by bilinear interpolation from those 4 groups of corner correspondences. The interpolation is carried out box by box; f(1), f(2), f(3), f(4) denote the mapped points of the four vertices of a box. Inside each box, first interpolate the four edges with
f_12(k,1) = (f(1)*(16-k) + f(2)*k) >> 2,
f_23(16,k) = (f(2)*(16-k) + f(3)*k) >> 2,
f_34(k,16) = (f(3)*(16-k) + f(4)*k) >> 2,
f_41(1,k) = (f(4)*(16-k) + f(1)*k) >> 2,
then interpolate the interior of the box with
f(x,y) = (f(1)*(16-x)*(16-y) + f(2)*x*(16-y) + f(3)*(16-x)*y + f(4)*x*y) >> 3.
The subscripts of f_12 etc. identify the first and second vertices of the interpolated edge; k indexes the point on that edge at a distance of k-1 pixels from its first vertex, with k ranging over 2~15; x and y are coordinates inside the box, each ranging over 2~15.
(3.5) Transform the other image so that, over the common region, it has elements corresponding to the reference image. The brightness or color value of each new-image element is fetched from the original image via the corresponding-point coordinate; since that coordinate may be fractional, bilinear interpolation over the pixel values of the 4 coordinates nearest the target point yields the pixel value at the required integer coordinate, using the formula:
f(x,y) = (f(1)*(64-x)*(64-y) + f(2)*x*(64-y) + f(3)*(64-x)*y + f(4)*x*y) >> 12,
where f(1), f(2), f(3) and f(4) are the pixel values of the 4 nearest coordinates.
Working principle of the invention:
1. For sparse corresponding points that have already been matched, a locally magnified image is obtained near each corresponding point by bilinear interpolation, and a block-matching search then selects the block with the most similar brightness or color values in the magnified image, raising the matching accuracy to ±0.5 pixel.
2. Stereo rectification solves for the homography matrix H from the matched corresponding points; the solution adapts the normalized direct linear transformation by introducing the constraint that the y coordinates remain equal, and H is then used to map and resample one of the images.
Compared with the prior art, the present invention has the following advantages and effects:
1. The fine corresponding-point matching method adopted for parallel-optical-axis camera images improves the matching accuracy of corresponding points, raising the roughly ±2 pixel accuracy of manual matching to ±0.5 pixel.
2. Magnifying only the local image near each corresponding point and searching by block matching, rather than matching after magnifying the entire image, greatly reduces the search time and at the same time avoids mismatches.
3. The stereo rectification procedure adopted is simple to implement; compared with some existing rectification methods, it requires neither camera calibration nor coplanar corresponding points.
4. The stereo rectification method retains the property of existing rectification methods that all corresponding points of the rectified images have only horizontal parallax, shortening the search in the dense corresponding-point matching process, which is of practical engineering significance.
The accompanying drawing explanation
Fig. 1 is the overall structure schematic diagram of an embodiment of the present invention.
Fig. 2 is the general flow chart of the specific embodiment of the invention.
Fig. 3 is the flow chart of the raising corresponding point matching precision of the specific embodiment of the invention.
Fig. 4 is the flow chart of the stereo image correction of the specific embodiment of the invention.
Embodiment
The invention is described in further detail below with reference to the embodiment and the drawings, but implementations of the invention are not limited thereto.
Embodiment
The parallel-optical-axis structure of this embodiment is a binocular camera. As shown in Fig. 1, the overall structure of one embodiment comprises binocular camera lenses 1; the binocular camera has a first lens and a second lens mounted in parallel, guaranteeing that the optical axes 4 of the two cameras are parallel; each camera has a corresponding sensor 3; the light source 2 guarantees sufficient illumination while the images are captured.
The overall flow of the method for improving corresponding-point accuracy and rectifying the stereo image is shown in Fig. 2. The concrete steps are: first take two pictures with parallel-optical-axis cameras; then find no fewer than 5 groups of corresponding points with an automatic feature detection and matching algorithm or manually, preferably distributed evenly over the image; then bilinearly interpolate the local images and finely match the corresponding points; finally perform stereo rectification without calibration and without requiring coplanar corresponding points.
Fig. 3 shows the flow of improving corresponding-point matching accuracy. In the first step, the right-image block centered on the corresponding point is bilinearly interpolated into a block magnified 2x in width and height; in the second step, the left-image block centered on the corresponding point, enlarged by the required search range, is bilinearly interpolated into a block magnified 2x in width and height; in the third step, the right-image block is compared against same-size windows of the left-image block in raster order, the minimum SSD value is found, and the center-point coordinates are recorded; in the fourth step, the above operations are repeated until all corresponding points have been finely matched.
Fig. 4 shows the flow of stereo rectification. In the first step, the corresponding points are preprocessed; one of three modes may be chosen (no processing, center translation, or normalization), with normalization giving the best results. In the second step, after preprocessing, the last 6 coefficients [h4, h5, h6, h7, h8, h9] of the Hg matrix are solved by SVD, all coefficients are scaled together so that h9 = 1, and the first 3 coefficients are added as [1, 0, 0] or [h5, -h4, 0]. In the third step, the inverse preprocessing (none, inverse center translation, or denormalization) is applied to Hg to obtain the required homography matrix H. In the fourth step, the image is divided into a number of 16*16 boxes, the homography H gives the corresponding-point coordinates of the 4 corners of every box, and bilinear interpolation gives all remaining corresponding-point coordinates in the common region. In the fifth step, the corresponding-point coordinates and bilinear interpolation yield the stereo-rectified version of one of the images.
The above embodiment is a preferred implementation of the invention, but implementations of the invention are not restricted to it; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the invention is an equivalent replacement and falls within the scope of protection of the invention.
Claims (10)
1. A method for improving the matching accuracy of sparse corresponding points between images and rectifying a stereo image, characterized by comprising the following steps:
Step 1: capture at least two images with a binocular camera of parallel-optical-axis structure.
Step 2: detect and match at least 5 groups of sparse corresponding points, either with an automatic feature detection and matching algorithm or manually, and record the corresponding-point coordinates.
Step 3: improve the matching accuracy of the sparse corresponding points: magnify the local image blocks, and use block SSD matching to find the minimum SSD value and the corresponding center-point coordinates.
Step 4: rectify the stereo image: preprocess the corresponding-point coordinates, solve the last six coefficients of the Hg matrix by SVD and fill in the first three, apply the inverse preprocessing to Hg to obtain the H matrix, and then use H to correct one image so that only horizontal parallax remains with respect to the other image.
2. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 1, characterized in that step 3 comprises the following sub-steps:
A. magnification of the local image blocks: apply bilinear interpolation to the brightness or color values of the image blocks centered on the corresponding points of the left and right images, magnifying those local regions;
B. block SSD matching: traverse the interpolated left-image block with the interpolated right-image block, compute the SSD at every position, find its minimum, and record the center-point coordinates.
3. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 2, characterized in that step A comprises the following sub-steps:
A1. take the i*i brightness or color block of the right image centered on the corresponding point and use bilinear interpolation to obtain a (2i-1)*(2i-1) block magnified 2x in width and height;
A2. extract the (i+2j)*(i+2j) block of the left image centered on the selected corresponding point, and use bilinear interpolation to obtain a (2i+4j-1)*(2i+4j-1) block magnified 2x in width and height, where the range of i is 5 ≤ i ≤ 15 and the range of j is 2 ≤ j ≤ 10.
4. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 2, characterized in that step B comprises the following sub-steps:
B1. slide the interpolated right-image block over the large interpolated left-image block in raster order, comparing windows of the same size (2i-1)*(2i-1), and find the minimum of the SSD;
B2. record the center-point coordinates.
5. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 1, characterized in that step 4 comprises the following sub-steps:
(1) preprocess the corresponding-point coordinates;
(2) solve the last six coefficients of the Hg matrix by SVD and fill in the first three;
(3) apply the inverse preprocessing to Hg to obtain the H matrix;
(4) compute the corresponding-point coordinates of all image points in the common region by bilinear interpolation based on the H matrix;
(5) use the corresponding-point coordinates and bilinear interpolation to obtain the rectified version of one of the images.
6. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 5, characterized in that in step (1) the preprocessing of the corresponding-point coordinates has the following three choices:
(a) no processing;
(b) center translation: x_zl = x_l - E(x_l), y_zl = y_l - E(y_l), x_zr = x_r - E(x_r), y_zr = y_r - E(y_r);
(c) normalization: x_gl = (x_l - E(x_l))/D(x_l), y_gl = (y_l - E(y_l))/D(y_l), x_gr = (x_r - E(x_r))/D(x_r), y_gr = (y_r - E(y_r))/D(y_r);
where subscript l indicates that the data belong to the left image, subscript r that they belong to the right image, subscript z center-translation processing, and subscript g normalization; x is the abscissa and y the ordinate of a point; E(x) and E(y) are the means, and D(x) and D(y) the variances, of the abscissas and ordinates before preprocessing.
7. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 5, characterized in that in step (2) either the left or the right image may be rectified; if the left image is rectified, the equation Ah = 0 is constructed from the requirement that the y coordinates be identical after rectification, and the SVD decomposition A = U D V^T is applied to the A matrix to obtain the preprocessed homography matrix; where subscript g denotes preprocessed data, subscript l the left image, and subscript r the right image, and Hg denotes the preprocessed homography matrix; the equation Ah = 0 is the expansion of the requirement that the y coordinate of the rectified left image equal the y coordinate of the right image.
8. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 5, characterized in that in step (3), according to the preprocessing that was carried out, the inverse preprocessing is applied to the Hg matrix to obtain the required homography matrix H:
(I) no processing;
(II) inverse center translation;
(III) denormalization;
where subscript l indicates the left image, subscript r the right image, subscript z center-translation processing, and subscript g normalization; x is the abscissa and y the ordinate of a point; E(x) and E(y) are the means, and D(x) and D(y) the variances, of the abscissas and ordinates before preprocessing.
9. The method for improving sparse corresponding-point image matching accuracy and rectifying a stereo image according to claim 5, characterized in that step (4) comprises the following sub-steps:
(4-1) bilinear interpolation is carried out box by box on 16*16 boxes; inside each box, first interpolate the point coordinates of the four edges with the formulas:
f_12(k,1) = (f(1)*(16-k) + f(2)*k) >> 2,
f_23(16,k) = (f(2)*(16-k) + f(3)*k) >> 2,
f_34(k,16) = (f(3)*(16-k) + f(4)*k) >> 2,
f_41(1,k) = (f(4)*(16-k) + f(1)*k) >> 2;
where k indexes the point on the interpolated edge at a distance of k-1 pixels from its first vertex, with k ranging over 2~15; x is the abscissa and y the ordinate inside the box, each ranging over 2~15; the formulas compute, by interpolation based on the H matrix, the corresponding-point coordinates of all points in the common region of the images;
(4-2) then interpolate the point coordinates of the interior of the box with the formula:
f(x,y) = (f(1)*(16-x)*(16-y) + f(2)*x*(16-y) + f(3)*(16-x)*y + f(4)*x*y) >> 3.
10. The method for improving sparse corresponding-point image matching accuracy and correcting stereo images according to claim 5, wherein in step (5), the corrected left image is obtained from the corresponding-point coordinates by bilinear interpolation; from the brightness or color values of the 4 corresponding-point coordinates nearest the target point, the bilinear interpolation formula
f(x, y) = (f(1) × (64 − x) × (64 − y) + f(2) × x × (64 − y) + f(3) × (64 − x) × y + f(4) × x × y) >> 12
gives the brightness or color value at the required integer coordinates;
where f(1), f(2), f(3) and f(4) are respectively the brightness or color values of the 4 nearest coordinates.
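A sketch of the pixel-value interpolation of step (5), assuming x and y are the fractional offsets of the target point from its top-left neighbor expressed in 64ths of a pixel, so the four weights sum to 64 × 64 = 4096 and are normalized by >> 12 (the function name `interp_pixel` is illustrative):

```python
def interp_pixel(f1, f2, f3, f4, x, y):
    """Bilinear interpolation of the brightness/color values f1..f4 of the
    4 nearest neighbors; x, y in 0..64 are subpixel offsets in 64ths of a pixel."""
    return (f1 * (64 - x) * (64 - y)
            + f2 * x * (64 - y)
            + f3 * (64 - x) * y
            + f4 * x * y) >> 12  # divide by 4096
```

The 1/64-pixel grid gives subpixel accuracy while keeping all arithmetic in integers, which suits the fixed-point style of the block formulas in step (4).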
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310460007.7A CN103491361B (en) | 2013-09-30 | 2013-09-30 | A kind of method improving sparse corresponding points images match precision and stereo image correction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310460007.7A CN103491361B (en) | 2013-09-30 | 2013-09-30 | A kind of method improving sparse corresponding points images match precision and stereo image correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103491361A true CN103491361A (en) | 2014-01-01 |
CN103491361B CN103491361B (en) | 2015-09-02 |
Family
ID=49831283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310460007.7A Expired - Fee Related CN103491361B (en) | 2013-09-30 | 2013-09-30 | A kind of method improving sparse corresponding points images match precision and stereo image correction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103491361B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100231802A1 (en) * | 2009-03-12 | 2010-09-16 | Sony Corporation | Method and system for carrying out reliability classification for motion vectors in a video |
CN102034235A (en) * | 2010-11-03 | 2011-04-27 | 山西大学 | Rotary model-based fisheye image quasi dense corresponding point matching diffusion method |
CN103106659A (en) * | 2013-01-28 | 2013-05-15 | 中国科学院上海微系统与信息技术研究所 | Open area target detection and tracking method based on binocular vision sparse point matching |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106412441A (en) * | 2016-11-04 | 2017-02-15 | 珠海市魅族科技有限公司 | Video anti-shake control method and terminal |
CN106412441B (en) * | 2016-11-04 | 2019-09-27 | 珠海市魅族科技有限公司 | A kind of video stabilization control method and terminal |
CN113449742A (en) * | 2021-07-08 | 2021-09-28 | 中国工商银行股份有限公司 | Image processing method, device and equipment |
CN115272491A (en) * | 2022-08-12 | 2022-11-01 | 哈尔滨工业大学 | Binocular PTZ camera dynamic self-calibration method |
Also Published As
Publication number | Publication date |
---|---|
CN103491361B (en) | 2015-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109615652B (en) | Depth information acquisition method and device | |
CN111784778B (en) | Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization | |
CN110969668A (en) | Stereoscopic calibration algorithm of long-focus binocular camera | |
CN111080709B (en) | Multispectral stereo camera self-calibration algorithm based on track feature registration | |
CN108648264B (en) | Underwater scene reconstruction method based on motion recovery and storage medium | |
CN109509156B (en) | Image defogging processing method based on generation countermeasure model | |
CN110910456B (en) | Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching | |
CN113129430B (en) | Underwater three-dimensional reconstruction method based on binocular structured light | |
CN103945207B (en) | A kind of stereo-picture vertical parallax removing method based on View Synthesis | |
CN112929626B (en) | Three-dimensional information extraction method based on smartphone image | |
CN115880344B (en) | Binocular stereo matching data set parallax true value acquisition method | |
CN110440761B (en) | Processing method of aerial photogrammetry data of unmanned aerial vehicle | |
CN110517309A (en) | A kind of monocular depth information acquisition method based on convolutional neural networks | |
CN103491361B (en) | A kind of method improving sparse corresponding points images match precision and stereo image correction | |
CN114119987A (en) | Feature extraction and descriptor generation method and system based on convolutional neural network | |
CN114463521A (en) | Building target point cloud rapid generation method for air-ground image data fusion | |
CN113936047A (en) | Dense depth map generation method and system | |
Do | Application of neural networks for stereo-camera calibration | |
CN110910457B (en) | Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics | |
CN109859263B (en) | Wide-view angle positioning method based on fisheye lens | |
CN110599416B (en) | Non-cooperative target image blind restoration method based on spatial target image database | |
CN104794680A (en) | Multi-camera image mosaicking method and multi-camera image mosaicking device based on same satellite platform | |
CN113487516B (en) | Defogging processing method for image data | |
CN112700504B (en) | Parallax measurement method of multi-view telecentric camera | |
CN115272569A (en) | Generation method and generation device of deviation and color difference graph of three-dimensional data and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20150902; Termination date: 20210930