CN106023230B - A dense matching method suitable for deformed images - Google Patents

A dense matching method suitable for deformed images

Info

Publication number
CN106023230B
Authority
CN
China
Prior art keywords
pixel; image; right image; new; matching
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201610390400.7A
Other languages
Chinese (zh)
Other versions
CN106023230A
Inventor
徐辛超
徐爱功
车丽娜
Current Assignee
Liaoning Technical University
Original Assignee
Liaoning Technical University
Application filed by Liaoning Technical University
Priority to CN201610390400.7A
Publication of CN106023230A
Application granted
Publication of CN106023230B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

A dense matching method suitable for deformed images, belonging to the technical field of image matching. The method comprises: Step 1: manually extract matching feature point pairs from the left image and the right image respectively; Step 2: correct the relative deformation of the original right image with a quadratic polynomial to obtain the corrected right image; Step 3: perform dense matching between the left image and the corrected right image; Step 4: determine, for each pixel of the left image, the pixel coordinates of its matching pixel in the original right image. The invention applies polynomial correction and a coordinate-correspondence preservation mechanism to the dense matching of deformed images. It can be used for image matching in a variety of situations and obtains good matching results even when the images undergo large deformation, while offering a new approach to the dense matching of multi-source images. A relative-parallax-constrained search region is used in the corrected image, which reduces the search range for matching points, improves matching efficiency, and increases the reliability of the matching result.

Description

A dense matching method suitable for deformed images
Technical field
The invention belongs to the technical field of image matching, and in particular relates to a dense matching method suitable for deformed images.
Background technology
For matching between pairs of deformed images, domestic scholars have carried out considerable research on feature matching. Yang Heng et al. proposed a new local invariant feature detection and description algorithm in 2010: Harris corners are first extracted on each layer of the scale image; extrema are then searched in the three-dimensional scale space within a fixed-size window centered on each Harris corner to obtain the position and characteristic scale of each local feature point; finally a principal direction is computed for each feature point, the local feature is described with a histogram of gradient distances and orientations, and feature matching is completed. Chen Mengting et al. proposed a high-resolution remote sensing image matching algorithm based on Harris corners and SIFT descriptors in 2012: Harris corners are extracted first, SIFT descriptors are then computed for the extracted points, and matching is completed with the nearest-neighbor to second-nearest-neighbor ratio test. Zhao Xian et al. proposed an automatic stereo image matching method with scale and rotation invariance in 2012: a three-scale feature point operator is first constructed from a directional wavelet transform and two-scale matching is performed to ensure scale invariance; a 64-dimensional description vector is then constructed for each feature point to give the matching rotation invariance, and feature matching is completed. Ye et al. proposed a multi-source remote sensing image matching method combining SIFT and edge information in 2013: feature points are first detected in a difference-of-Gaussians scale space and reliable edge information is extracted using phase congruency; feature points are then described by combining improved SIFT with shape context, and corresponding points are obtained with Euclidean distance as the similarity measure, completing feature matching. Zhang Zhengpeng et al. proposed a vehicle-mounted panoramic sequence image matching method based on optical-flow feature clustering in 2014: using the nonparametric mean-shift feature clustering idea, the positions of multi-scale SIFT matching points and their optical-flow vectors construct the spatial domain and range domain of the image feature space; with the corresponding image optical-flow features in feature space as the clustering condition, matching of panoramic sequence images is achieved, and mismatches are rejected using the epipolar geometry constraint. Xu Qiuhui et al. proposed an image matching method with improved DCCD and SIFT descriptors in 2015: key points on the image are quickly detected with improved DCCD, their principal directions are determined to generate feature points, and the feature points are described with SIFT descriptors; after the feature matching result is obtained, coarse feature point matching is performed with the BBF algorithm and mismatched feature points are rejected with the random sample consensus algorithm (RANSAC). Zhang Zhengpeng et al. also proposed a vehicle-mounted panoramic sequence image matching method with adaptive motion structure features in 2015: an adaptive bandwidth matrix is first defined from the local spatial structure of the sample points in the spatial domain and the optical flow field; the relaxation diffusion process of motion-similarity structure features on the optical flow field is described with a distance-weighted method over local optical-flow feature vectors; the expression of the adaptive multivariate kernel density function is then given, together with the solution of the mean-shift vector, the termination condition, and the seed-point selection method; finally SIFT descriptive features and motion structure features are fused into a unified panoramic image matching framework. Xiao Xiongwu et al. proposed a fast matching method for oblique images with affine invariance in 2015: an initial affine matrix is calculated by estimating the camera axis orientation parameters of the image, a rectified image is obtained by the inverse affine transformation, SIFT matching is performed on the rectified images, and matching precision and reliability are improved through multiple constraints. Yan Li et al. proposed a block-rectified spherical panoramic image matching method in 2015: the equidistant-projection panoramic image is first divided by longitude and latitude into several sub-image blocks; each block is projectively transformed to obtain a distortion-rectified image, on which SIFT features are described and extracted; merging all results yields the feature set of the whole panoramic image, completing feature matching between panoramic images. Zhao Baowei et al. proposed an improved Hough transform multispectral image matching method with disparity constraints in 2015: a pyramid image matching strategy is adopted; the top pyramid level is matched with a scale-invariant feature operator to provide the initial disparity constraint; the left and right images of each remaining pyramid level are matched with the improved Hough transform method, with the matching result of the level above providing the disparity constraint for the current level, finally completing feature matching.
Although the above studies all achieve good matching results, feature matching alone can only support low-precision three-dimensional surface reconstruction: it yields the frame of the three-dimensional surface, but reconstruction of higher precision must be completed by further dense matching. General dense matching methods typically use some match measure as the criterion for whether two pixels match, such as the sum of absolute differences, the sum of squared differences, truncated absolute differences, normalized cross-correlation, or zero-mean normalized cross-correlation. Such methods are sensitive to noise and can only complete dense matching on ideal images. For images that undergo large deformation, the pixels that should form a match are deformed when the match measure is computed over a matching window, biasing the match measure and thus affecting the final matching result.
Chen Wang et al. proposed a dense matching algorithm based on region boundary constraints and graph-cut optimization in 2009: an energy function is built from the constraints between region boundaries and boundary pixels so that a globally optimal solution can be obtained and the final reconstructed surface is smoother; good dense matching results are obtained on classical computer-vision images, but the window-based region matching deviates when the image is scaled. Ge Liang et al. proposed an improved dense matching algorithm for stereo image pairs in 2010: texture-homogeneous regions are first found with region growing and used as matching units to obtain a dense disparity map of the whole region; tests on standard international test images demonstrate the feasibility and accuracy of the method, but when the image deforms, the texture-homogeneous regions also deform, reducing the efficiency of the algorithm. Li Haibin et al. proposed a three-dimensional scene reconstruction method based on candidate-point dense matching in 2012: grid nodes representing depth information are established in space and the node distribution along the depth direction is planned rationally, improving matching efficiency; a trinocular vision system replaces the binocular one, and a second judgment improves matching reliability. However, the method mainly handles classical binocular stereo images; when the extrinsic parameter matrix is unknown and relative rotation occurs between the images, epipolar rectification cannot be completed, so the subsequent dense matching fails. Hu Chunhai et al. proposed multi-baseline dense matching combining disparity growing with tensors in 2013: preliminary feature matching uses the SIFT operator, the feature matching results serve as root points for disparity growing, and a third view provides the matching constraint, improving dense matching precision; but with only two images the proposed constraint cannot be applied, affecting the reliability of the final matching result. Zhang Jielin et al. proposed a layered matching method in 2014: feature points are first detected with the heat kernel signature function and the feature matching result is optimized with a local-fusion strategy and farthest point sampling; heat kernel signature descriptors are constructed on the feature point set, initial-layer matching proceeds by saliency ordering using entropy, and coarse-to-fine local matching of each layer's neighborhoods finally achieves dense matching; however, regions of repeated texture cause confusion of topological relations.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a dense matching method suitable for deformed images.
The technical scheme of the present invention is as follows:
A dense matching method suitable for deformed images, comprising the following steps:
Step 1: manually extract matching feature point pairs from the left image and the right image respectively;
The left image and right image are a pair of original images to be matched with relative deformation; there are more than 5 matching feature point pairs; the matching feature points should cover the overlapping region of the left image and the original right image;
Step 2: correct the relative deformation of the original right image using a quadratic polynomial to obtain the corrected right image;
Step 2-1: convert the pixel coordinates of the matching feature points in the left image to image-plane rectangular coordinates, and convert the pixel coordinates of all pixels in the original right image to image-plane rectangular coordinates;
Step 2-2: substitute the image-plane rectangular coordinates of the matching feature points into the quadratic polynomial shown in formula (1) and solve for the coefficients of the quadratic polynomial;
where (X', Y') are the image-plane rectangular coordinates of a matching feature point in the original right image; (x', y') are the image-plane rectangular coordinates of the matching feature point in the left image; and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the quadratic polynomial coefficients;
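Formula (1) appears only as an image in the original patent document and is not reproduced in this text. A second-order polynomial mapping consistent with the coordinates and coefficients named above would take the standard form (the direction of the mapping, from left-image coordinates (x', y') to right-image coordinates (X', Y'), is one reading of the translated text):

```latex
X' = a_0 + a_1 x' + a_2 y' + a_3 x'^2 + a_4 x' y' + a_5 y'^2
Y' = b_0 + b_1 x' + b_2 y' + b_3 x'^2 + b_4 x' y' + b_5 y'^2
```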
Step 2-3: perform the following operation on all pixels of the original right image to obtain the corrected right image: first, substitute the image-plane rectangular coordinates (X, Y) corresponding to the pixel whose pixel coordinates in the original right image are (I, J) into the quadratic polynomial shown in formula (2) to obtain the corrected image-plane rectangular coordinates (Xnew, Ynew) of the pixel, and then convert (Xnew, Ynew) to pixel coordinates (Inew, Jnew);
where (X, Y) are the image-plane rectangular coordinates of the original right-image pixel; (Xnew, Ynew) are the image-plane rectangular coordinates of the pixel in the corrected right image; a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the polynomial coefficients solved in step 2-2;
Step 2-4: classify the pixels of the corrected right image according to the correspondence between the pixel coordinates (I, J) of pixels in the original right image and the pixel coordinates (Inew, Jnew) of pixels in the corrected right image, and determine the gray value at position (Inew, Jnew) of each corrected-right-image pixel according to its class. The specific method is: judge the correspondence between the pixel coordinates (I, J) of pixels in the original right image and the pixel coordinates (Inew, Jnew) of corrected-right-image pixels. If the pixel coordinates (I, J) of a pixel in the original right image and the pixel coordinates (Inew, Jnew) of a pixel in the corrected right image are in one-to-one correspondence, the corrected-right-image pixel is recorded as a class-a pixel; the correspondence between (Inew, Jnew) and (I, J) is recorded, and the gray value at position (I, J) of the pixel in the original right image is assigned to position (Inew, Jnew) of the corrected-right-image pixel. If the pixel coordinates (I, J) of several pixels in the original right image correspond to the pixel coordinates (Inew, Jnew) of the same pixel in the corrected right image, the corrected-right-image pixel is recorded as a class-b pixel; the correspondence between (Inew, Jnew) and the pixel coordinates (I, J) of the several pixels in the original right image is recorded, and the average of the gray values at all positions (I, J) in the original right image corresponding to (Inew, Jnew) is assigned to position (Inew, Jnew). If no pixel of the original right image corresponds to the pixel coordinates (Inew, Jnew) of a corrected-right-image pixel, the pixel is recorded as a class-c pixel, and the gray value at position (Inew, Jnew) is calculated using bilinear interpolation;
Step 3: perform dense matching between the left image and the corrected right image;
Step 4: according to the class of each left-image pixel's matching pixel in the corrected right image, determine the pixel coordinate position of its matching pixel in the original right image. The specific method is to perform the following operations on each pixel of the left image in turn: if the matching pixel of the left-image pixel in the corrected right image is a class-a pixel, replace the pixel coordinates of the matching pixel in the corrected right image with the pixel coordinates in the original right image, according to the coordinate correspondence saved for class-a pixels in step 2-4; if the matching pixel is a class-b pixel, generate a feature descriptor with the SIFT operator for the left-image pixel and for each of the several pixels in the original right image corresponding to the matching pixel, perform SIFT feature matching, and determine the pixel coordinates of the matching pixel of the left-image pixel in the original right image; if the matching pixel is a class-c pixel, convert the pixel coordinates of the matching pixel in the corrected right image to image-plane rectangular coordinates, apply the inverse transform of the quadratic polynomial shown in formula (2) to solve for the image-plane rectangular coordinates of the matching pixel in the original right image, then generate feature descriptors with the SIFT operator for the left-image pixel and the corresponding pixel in the original right image and perform SIFT feature matching to determine the pixel coordinates of the matching pixel of the left-image pixel in the original right image.
Advantageous effects: the dense matching method for deformed images proposed by the present invention for the problem of matching images that deform relative to one another has the following advantages:
1. Compared with existing methods for the deformed-image matching problem, the present invention obtains a dense matching result, providing the conditions for fine three-dimensional surface reconstruction and overcoming the limitation that previous solutions to the deformed-image matching problem could use feature matching only;
2. Compared with existing dense matching methods, the present invention adapts to image matching in more situations, and obtains better matching results especially when the images undergo large deformation;
3. Polynomial correction and a coordinate-correspondence preservation mechanism are applied to the dense matching of deformed images for the first time, providing a new approach to the dense matching of multi-source images;
4. The deformation between the images is corrected with the polynomial, and matching-point search in the corrected right image uses a disparity-constrained search region, which reduces the search range for matching points, shortens the matching time, and improves matching efficiency;
5. The position after polynomial correction serves as a constraint on the matching-point search, which increases the reliability of the matching process, especially for regions with repeated texture.
Description of the drawings
Fig. 1 is a flow diagram of the dense matching method for deformed images of one embodiment of the present invention;
Fig. 2(a) is a schematic diagram of the left image of the image pair to be matched of one embodiment of the present invention;
Fig. 2(b) is a schematic diagram of the right image of the image pair to be matched of one embodiment of the present invention;
Fig. 3 is a schematic diagram of the corrected right image of one embodiment of the present invention;
Fig. 4(a) is a schematic diagram of the 3 × 3 matching template of one embodiment of the present invention;
Fig. 4(b) is a schematic diagram of the 3 × 3 search region of one embodiment of the present invention.
Detailed description of the embodiments
An embodiment of the present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the dense matching method for deformed images of this embodiment comprises the following steps:
Step 1: manually extract 9 matching feature point pairs from the left image and the right image respectively; the left image and right image are a pair of original images to be matched with relative deformation. The matching feature points cover the overlapping region of the left and right images, as shown in Fig. 2, where Fig. 2(a) is the left image and Fig. 2(b) is the right image; the dark border in Fig. 2(b) marks the overlapping region of the right and left images, and the white round frames mark the positions where the matching feature points are distributed;
Step 2: correct the relative deformation of the original right image using a quadratic polynomial to obtain the corrected right image;
Step 2-1: convert the pixel coordinates of the matching feature points in the left image and the pixel coordinates of all pixels in the original right image to image-plane rectangular coordinates;
Let the pixel coordinates of a pixel in the left or right image be (I, J), where the origin of the pixel coordinate system is at the upper-left corner of the image, the X axis is positive to the right along the horizontal direction, and the Y axis is positive downward along the vertical direction. Combining the image pixel size and the principal point coordinates, the pixel coordinates are converted to image-plane rectangular coordinates (X, Y), where the origin of the image-plane rectangular coordinate system is at the image center, the X axis is positive to the right, and the Y axis is positive upward. The conversion formula is as follows:
where W is the width of the image, H is the height of the image, P is the pixel size, x0 is the abscissa of the principal point, and y0 is the ordinate of the principal point;
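The conversion formula itself appears only as an image in the original patent. A plausible implementation consistent with the stated conventions (origin at the image center, X positive to the right, Y positive upward, row index I, column index J) is sketched below; the exact treatment of the principal point offsets (x0, y0) is an assumption:

```python
def pixel_to_plane(I, J, W, H, P, x0, y0):
    """Convert pixel coordinates (row I, column J) to image-plane
    rectangular coordinates (X, Y).  One common form consistent with
    the conventions described in the text; the patent's exact formula
    is not reproduced there."""
    X = (J - W / 2.0) * P - x0
    Y = (H / 2.0 - I) * P - y0
    return X, Y

def plane_to_pixel(X, Y, W, H, P, x0, y0):
    """Inverse conversion from image-plane coordinates back to
    (row, column) pixel coordinates."""
    J = (X + x0) / P + W / 2.0
    I = H / 2.0 - (Y + y0) / P
    return I, J
```

The two functions are exact inverses of each other, which is what steps 2-3 and 2-4 rely on when moving between the two coordinate systems.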
Step 2-2: substitute the 9 pairs of image-plane rectangular coordinates of the matching feature points into the quadratic polynomial shown in formula (2) and solve for the coefficients of the quadratic polynomial:
where (X', Y') are the image-plane rectangular coordinates of a matching feature point in the original right image; (x', y') are the image-plane rectangular coordinates of the matching feature point in the left image; and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the quadratic polynomial coefficients. In this embodiment the quadratic polynomial coefficients are solved as follows:
Step 2-2-1: since in the practical solution process the values on the two sides of formula (2) cannot be exactly equal, formula (2) is rewritten as an error equation:
where vx' and vy' are the residuals of the quadratic polynomial solution;
Step 2-2-2: the error equation is rewritten in matrix and vector form, V = AZ - l, specifically:
where:
The image-plane rectangular coordinates of the 9 matching feature point pairs are substituted in, and each coefficient of the polynomial is computed by least squares: Z = (A^T A)^(-1) A^T l.
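The explicit matrices A, Z, and l appear only as images in the original. A minimal NumPy sketch of the least-squares solve, assuming each row of A uses the usual second-order basis [1, x', y', x'^2, x'y', y'^2]:

```python
import numpy as np

def fit_quadratic_polynomial(left_pts, right_pts):
    """Fit X' = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 (and the
    analogous Y' with b-coefficients) by least squares, equivalent to
    Z = (A^T A)^-1 A^T l.  left_pts, right_pts: (N, 2) arrays of
    image-plane coordinates of the matched feature points (N >= 6)."""
    left_pts = np.asarray(left_pts, float)
    right_pts = np.asarray(right_pts, float)
    x, y = left_pts[:, 0], left_pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    # The X' and Y' equations share A but are otherwise independent.
    a, *_ = np.linalg.lstsq(A, right_pts[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, right_pts[:, 1], rcond=None)
    return a, b

def apply_quadratic_polynomial(a, b, x, y):
    """Evaluate the fitted mapping at a single point (x, y)."""
    basis = np.array([1.0, x, y, x * x, x * y, y * y])
    return basis @ a, basis @ b
```

With 9 point pairs and 6 unknowns per coordinate the system is overdetermined, which is why the patent formulates it as an error equation and solves it by least squares rather than by direct inversion.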
Step 2-3: perform the following operation on all pixels of the original right image to obtain the corrected right image shown in Fig. 3: first, substitute the image-plane rectangular coordinates (X, Y) corresponding to the pixel whose pixel coordinates in the original right image are (I, J) into the quadratic polynomial shown in formula (2) to obtain the corrected image-plane rectangular coordinates (Xnew, Ynew) of the pixel, and then convert (Xnew, Ynew) to pixel coordinates (Inew, Jnew);
where (X, Y) are the image-plane rectangular coordinates of the original right-image pixel; (Xnew, Ynew) are the corrected image-plane rectangular coordinates of the pixel whose image-plane rectangular coordinates in the original right image are (X, Y); a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the polynomial coefficients solved in step 2-2;
The rounding of the initial pixel coordinates consists of adding 0.5 to the abscissa and to the ordinate of the initial pixel coordinates and rounding down, i.e., rounding to the nearest integer;
Step 2-4: for the right image shown in Fig. 3, the rounding operation leaves three kinds of correspondence between the original pixel coordinates (I, J) and the pixel coordinates (Inew, Jnew). First classify the pixels of the corrected right image according to the correspondence between the pixel coordinates (I, J) of pixels in the original right image and the pixel coordinates (Inew, Jnew) of pixels in the corrected right image, then determine the gray value at position (Inew, Jnew) according to the class of each corrected-right-image pixel. The specific method is: judge the correspondence between the pixel coordinates (I, J) of pixels in the original right image and the pixel coordinates (Inew, Jnew) of corrected-right-image pixels. If the pixel coordinates (I, J) of a pixel in the original right image and the pixel coordinates (Inew, Jnew) of a pixel in the corrected right image are in one-to-one correspondence, the corrected-right-image pixel is recorded as a class-a pixel; the correspondence between (Inew, Jnew) and (I, J) is recorded, and the gray value at position (I, J) in the original right image is assigned to position (Inew, Jnew) of the corrected-right-image pixel. If the pixel coordinates (I, J) of several pixels in the original right image correspond to the pixel coordinates (Inew, Jnew) of the same pixel in the corrected right image, the corrected-right-image pixel is recorded as a class-b pixel; the correspondence between (Inew, Jnew) and the pixel coordinates (I, J) of the several pixels in the original right image is recorded, and the average of the gray values at all positions (I, J) in the original right image corresponding to (Inew, Jnew) is assigned to position (Inew, Jnew). If no pixel of the original right image corresponds to the pixel coordinates (Inew, Jnew) of a corrected-right-image pixel, the pixel is recorded as a class-c pixel, and the gray value at position (Inew, Jnew) is calculated using bilinear interpolation;
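Steps 2-3 and 2-4 together amount to a forward warp with bookkeeping of source pixels. The sketch below, with hypothetical helper names `to_plane`/`to_pixel` standing in for the coordinate conversions of step 2-1, shows one way to build the corrected image and the a/b/c classification; class-c holes would afterwards be filled by bilinear interpolation as the patent specifies:

```python
import numpy as np

def warp_and_classify(right_img, a, b, to_plane, to_pixel):
    """Forward-map every pixel of the original right image through the
    fitted quadratic polynomial, then classify corrected pixels as
    class a (exactly one source pixel), class b (several sources, gray
    values averaged) or class c (no source; to be filled later)."""
    H, W = right_img.shape
    corrected = np.zeros_like(right_img, dtype=float)
    sources = {}  # (Inew, Jnew) -> list of source (I, J), the saved correspondence
    for I in range(H):
        for J in range(W):
            X, Y = to_plane(I, J)
            Xn = a[0] + a[1]*X + a[2]*Y + a[3]*X*X + a[4]*X*Y + a[5]*Y*Y
            Yn = b[0] + b[1]*X + b[2]*Y + b[3]*X*X + b[4]*X*Y + b[5]*Y*Y
            In, Jn = to_pixel(Xn, Yn)
            In, Jn = int(In + 0.5), int(Jn + 0.5)  # add 0.5, round down
            if 0 <= In < H and 0 <= Jn < W:
                sources.setdefault((In, Jn), []).append((I, J))
    classes = np.full((H, W), 'c')  # default: no source pixel (class c)
    for (In, Jn), src in sources.items():
        classes[In, Jn] = 'a' if len(src) == 1 else 'b'
        corrected[In, Jn] = np.mean([right_img[i, j] for i, j in src])
    return corrected, classes, sources
```

The `sources` dictionary is exactly the coordinate-correspondence record that step 4 later consults when mapping matches back to the original right image.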
Step 3: perform dense matching between the left image and the corrected right image;
Step 3-1: select a region of odd size (an odd number greater than 1 × an odd number greater than 1) in the left image as the matching template, with the pixel at the center of the region as the pixel to be matched. In this embodiment, the pixel whose pixel coordinates in the left image are (3, 3) is selected as the pixel to be matched, and a region of size 3 × 3 is selected as the matching template, as shown by the dark-gray solid box in Fig. 4(a);
Step 3-2: first, a pair of matching points is manually and arbitrarily selected from the left image and the corrected right image, and the horizontal parallax p and vertical parallax q between the left image and the corrected right image are computed. Then, centered on the pixel at position (3+q, 3+p) in the corrected right image, a region of odd size (an odd number greater than 1 × an odd number greater than 1) is selected as the search region, i.e., the region containing pixel positions (2+q, 2+p), (2+q, 3+p), (2+q, 4+p), (3+q, 2+p), (3+q, 3+p), (3+q, 4+p), (4+q, 2+p), (4+q, 3+p), (4+q, 4+p). In this embodiment p = 566 and q = 370, so the pixel at position (373, 569) is the center of the search region, and a region of size 3 × 3 is selected as the search region, as shown by the dotted box in Fig. 4(b);
Step 3-3: take each pixel of the search region in the corrected right image in turn as a center pixel, and form a target region of the same size as the matching template from the center pixel and its surrounding pixels. For example, the pixel at position (2+q, 2+p) in the dotted box of Fig. 4(b), together with its 8 surrounding pixels, forms a target region of the same size as the 3 × 3 matching template of Fig. 4(a), as shown by the gray solid box in Fig. 4(b). Compute the correlation coefficient between the pixels of each target region and the pixels of the matching template using formula (6); then select the pixel with the largest correlation coefficient as the matching pixel, in the corrected right image, of the pixel to be matched;
The correlation coefficient is calculated as follows:

ρ(c, r) = [ Σ g_{i,j}·g′_{i+r,j+c} − (Σ g_{i,j} · Σ g′_{i+r,j+c})/(m·n) ] / √( [ Σ g_{i,j}² − (Σ g_{i,j})²/(m·n) ] · [ Σ g′_{i+r,j+c}² − (Σ g′_{i+r,j+c})²/(m·n) ] )    (6)

where every Σ runs over i = 1, …, m and j = 1, …, n. In the formula: ρ is the correlation coefficient; (c, r) is the difference between the pixel coordinate of the center pixel and the pixel coordinate of the pixel to be matched; g_{i,j} is the gray value at pixel coordinate (i, j) when the upper-left corner of the left-image matching template is taken as the origin of the pixel coordinate system; g′_{i,j} is the gray value at pixel coordinate (i, j) when the upper-left corner of the target region in the corrected right image is taken as the origin of the pixel coordinate system; m and n are the height and width of the matching template, respectively; g_{i+r,j+c} is the gray value at pixel coordinate (i+r, j+c) with the upper-left corner of the left-image matching template as the origin; g′_{i+r,j+c} is the gray value at pixel coordinate (i+r, j+c) with the upper-left corner of the target region in the corrected right image as the origin. Height and width are expressed in pixels; for example, a 3 × 3 matching region is 3 pixels high and 3 pixels wide;
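By way of illustration only (not the patented code; NumPy and the function names `ncc` and `best_match` are assumptions of this sketch), the correlation search of steps 3-1 to 3-3 can be written as a sliding-window comparison. The code uses the mean-centered form of the correlation coefficient, which is algebraically equivalent to formula (6):

```python
import numpy as np

def ncc(template, target):
    """Correlation coefficient between two equal-size gray-value windows
    (mean-centered form, equivalent to formula (6))."""
    t = template.astype(float) - template.mean()
    g = target.astype(float) - target.mean()
    denom = np.sqrt((t * t).sum() * (g * g).sum())
    return 0.0 if denom == 0 else (t * g).sum() / denom

def best_match(template, region):
    """Slide the template over every position of the search region and
    return (max correlation, (row, col) offset of the best window)."""
    m, n = template.shape
    best_rho, best_rc = -2.0, (0, 0)
    for r in range(region.shape[0] - m + 1):
        for c in range(region.shape[1] - n + 1):
            rho = ncc(template, region[r:r + m, c:c + n])
            if rho > best_rho:
                best_rho, best_rc = rho, (r, c)
    return best_rho, best_rc
```

In the 3 × 3 embodiment above, `template` would be the left-image patch centered on the pixel to be matched and `region` the search window centered at (373, 569) in the corrected right image.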
Step 4:According to the class of the matched pixel in the corrected right image for each pixel in the left image, determine the pixel coordinate of its matched pixel in the original right image. Specifically, the following operations are performed on each pixel in the left image in turn: If the matched pixel of a left-image pixel in the corrected right image is a class-a pixel, then, according to the coordinate correspondence saved for class-a pixels in step 2-4, replace the pixel coordinate of the matched pixel in the corrected right image with the pixel coordinate in the original right image. If the matched pixel in the corrected right image is a class-b pixel, generate a feature descriptor for the left-image pixel using the SIFT operator, generate feature descriptors using the SIFT operator for the multiple pixels in the original right image corresponding to the matched pixel in the corrected right image, perform SIFT feature matching, and determine the pixel coordinate of the matched pixel in the original right image. If the matched pixel in the corrected right image is a class-c pixel, convert the pixel coordinate of the matched pixel in the corrected right image to image-plane rectangular coordinates, then apply the inverse of the quadratic polynomial shown in formula (5) to solve for the image-plane rectangular coordinates of the matched pixel in the original right image; then generate feature descriptors using the SIFT operator for the left-image pixel and for the corresponding pixels in the original right image, perform SIFT feature matching, and determine the pixel coordinate of the matched pixel in the original right image. Here, the "corresponding pixels in the original right image" refers to the 4 pixels surrounding the non-integer pixel coordinate obtained by converting the image-plane rectangular coordinates in the original right image to a (non-rounded) pixel coordinate, i.e. the pixels obtained by rounding the abscissa and the ordinate of that non-integer coordinate each down and up.
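The 4-neighbor rule at the end of step 4 (floor and ceiling of each component of the non-integer coordinate) can be sketched as follows (illustrative only; the helper name `four_neighbors` is an assumption of this example):

```python
import math

def four_neighbors(x, y):
    """Return the integer pixel positions surrounding a non-integer
    coordinate (x, y): each component rounded down and up. When a
    component is already an integer, floor == ceil, so fewer than
    4 positions are returned."""
    xs = {math.floor(x), math.ceil(x)}
    ys = {math.floor(y), math.ceil(y)}
    return sorted((i, j) for i in xs for j in ys)
```

For example, a back-projected coordinate of (2.3, 5.7) yields the candidate pixels (2, 5), (2, 6), (3, 5) and (3, 6), to which SIFT descriptors would then be attached.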

Claims (1)

1. A dense matching method suitable for deformed images, characterized in that the method comprises:
Step 1:Manually extract matching feature point pairs in the left image and the right image, respectively;
The left image and the right image are a pair of original images to be matched that exhibit relative deformation; there are more than 5 matching feature point pairs; and the matching feature points cover the overlap region of the left image and the right image;
Step 2:Correct the relative deformation of the original right image using a quadratic polynomial to obtain the corrected right image; this specifically includes the following steps:
Step 2-1:Convert the pixel coordinates of the matching feature points in the left image to image-plane rectangular coordinates, and convert the pixel coordinates of all pixels in the original right image to image-plane rectangular coordinates;
Step 2-2:Substitute the image-plane rectangular coordinates of the matching feature points into the quadratic polynomial shown in formula (1), and solve for the coefficients of the polynomial;

x′ = a0 + a1·X′ + a2·Y′ + a3·X′² + a4·X′·Y′ + a5·Y′²
y′ = b0 + b1·X′ + b2·Y′ + b3·X′² + b4·X′·Y′ + b5·Y′²    (1)

where (X′, Y′) are the image-plane rectangular coordinates of a matching feature point in the original right image; (x′, y′) are the image-plane rectangular coordinates of the corresponding matching feature point in the left image; and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the quadratic polynomial coefficients;
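As an illustrative sketch of step 2-2 (not part of the claim; NumPy and the function name `fit_quadratic` are assumptions of this example), the twelve coefficients can be solved by linear least squares from the more-than-5 matched feature points:

```python
import numpy as np

def fit_quadratic(src, dst):
    """Fit x' = a0 + a1*X + a2*Y + a3*X^2 + a4*X*Y + a5*Y^2 (and the
    analogous y' polynomial with the b coefficients) by least squares.
    src, dst are (N, 2) arrays of matched coordinates, N >= 6."""
    X, Y = src[:, 0], src[:, 1]
    # Design matrix: one row of quadratic monomials per feature point.
    A = np.column_stack([np.ones_like(X), X, Y, X**2, X * Y, Y**2])
    a, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return a, b
```

With exactly 6 point pairs the system is determined; the more-than-5 requirement of claim 1 guarantees a solution, and extra points are absorbed by the least-squares fit.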
Step 2-3:Perform the following operation on every pixel in the original right image to obtain the corrected right image: first, substitute the image-plane rectangular coordinates (X, Y) corresponding to the pixel with pixel coordinate (I, J) in the original right image into the quadratic polynomial shown in formula (2), obtaining the corrected image-plane rectangular coordinates (Xnew, Ynew) of the pixel; then convert (Xnew, Ynew) back to a pixel coordinate (Inew, Jnew);

Xnew = a0 + a1·X + a2·Y + a3·X² + a4·X·Y + a5·Y²
Ynew = b0 + b1·X + b2·Y + b3·X² + b4·X·Y + b5·Y²    (2)

where (X, Y) are the image-plane rectangular coordinates of a pixel in the original right image; (Xnew, Ynew) are the image-plane rectangular coordinates of the corresponding pixel in the corrected right image; and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the polynomial coefficients solved in step 2-2;
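Step 2-3 can likewise be sketched: each original-right-image coordinate is pushed through the fitted polynomial of formula (2) and then rounded back to a pixel coordinate. This is illustrative only (the function names are assumptions, and the plane-to-pixel conversion is reduced to simple rounding, whereas in practice it depends on the sensor model):

```python
import numpy as np

def apply_quadratic(coords, a, b):
    """Map (N, 2) image-plane coordinates (X, Y) through the quadratic
    polynomial of formula (2):
        Xnew = a0 + a1*X + a2*Y + a3*X^2 + a4*X*Y + a5*Y^2
        Ynew = b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2"""
    X, Y = coords[:, 0], coords[:, 1]
    A = np.column_stack([np.ones_like(X), X, Y, X**2, X * Y, Y**2])
    return np.column_stack([A @ a, A @ b])

def to_pixel(coords):
    """Round corrected image-plane coordinates to integer pixel
    coordinates (Inew, Jnew); simplified stand-in for the full
    plane-to-pixel conversion."""
    return np.rint(coords).astype(int)
```

With identity coefficients (a = [0, 1, 0, 0, 0, 0], b = [0, 0, 1, 0, 0, 0]) the mapping leaves coordinates unchanged, which is a convenient sanity check.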
Step 2-4:According to the correspondence between the pixel coordinate (I, J) of a pixel in the original right image and the pixel coordinate (Inew, Jnew) of a pixel in the corrected right image, classify the pixels of the corrected right image, and determine the gray value at each position (Inew, Jnew) in the corrected right image according to the class of the pixel. The specific method is:

Examine the correspondence between the pixel coordinates (I, J) of pixels in the original right image and the pixel coordinates (Inew, Jnew) of pixels in the corrected right image. If a pixel coordinate (I, J) in the original right image corresponds one-to-one with a pixel coordinate (Inew, Jnew) in the corrected right image, the corrected-image pixel is recorded as a class-a pixel; the correspondence between (Inew, Jnew) and (I, J) is recorded, and the gray value at position (I, J) in the original right image is assigned to position (Inew, Jnew) in the corrected right image. If the pixel coordinates (I, J) of multiple pixels in the original right image correspond to the same pixel coordinate (Inew, Jnew) in the corrected right image, the corrected-image pixel is recorded as a class-b pixel; the correspondence between (Inew, Jnew) and the multiple pixel coordinates (I, J) is recorded, and the gray values at all positions (I, J) in the original right image corresponding to (Inew, Jnew) are averaged, the average being assigned to position (Inew, Jnew) in the corrected right image. If no pixel in the original right image corresponds to a pixel coordinate (Inew, Jnew) in the corrected right image, the corrected-image pixel is recorded as a class-c pixel, and the gray value at position (Inew, Jnew) is computed using the bilinear interpolation method;
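The a/b/c bookkeeping of step 2-4 can be sketched as building a reverse map from corrected pixel positions to source pixels: positions hit exactly once are class a, positions hit several times are class b (gray values averaged), and positions never hit are class c. This is illustrative only; `classify_and_fill` and `bilinear` are hypothetical names, and the bilinear helper assumes an interior sample point:

```python
import numpy as np
from collections import defaultdict

def classify_and_fill(src_img, mapping, out_shape):
    """mapping: dict {(I, J) in src_img -> (Inew, Jnew) in the corrected
    image}. Returns the corrected image and the a/b/c class of every
    corrected pixel; class-c gray values are left at 0 here and would be
    filled by bilinear interpolation of the inverse-mapped position."""
    hits = defaultdict(list)
    for (i, j), (ni, nj) in mapping.items():
        if 0 <= ni < out_shape[0] and 0 <= nj < out_shape[1]:
            hits[(ni, nj)].append(src_img[i, j])
    out = np.zeros(out_shape)
    cls = np.full(out_shape, 'c', dtype='<U1')
    for (ni, nj), vals in hits.items():
        cls[ni, nj] = 'a' if len(vals) == 1 else 'b'
        out[ni, nj] = np.mean(vals)  # class a: the value itself; class b: average
    return out, cls

def bilinear(img, x, y):
    """Bilinear interpolation of img at the non-integer position
    (row x, col y); assumes (x, y) lies strictly inside the image."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x0 + 1, y0]
            + (1 - dx) * dy * img[x0, y0 + 1] + dx * dy * img[x0 + 1, y0 + 1])
```

The per-class coordinate correspondences recorded here are exactly what step 4 later consults to map matches back to the original right image.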
Step 3:Perform dense matching between the left image and the corrected right image;
Step 4:Determine the pixel coordinate of the matched pixel in the original right image for each pixel in the left image. The specific method is:

Perform the following operations on each pixel in the left image in turn: If the matched pixel of a left-image pixel in the corrected right image is a class-a pixel, then, according to the coordinate correspondence saved for class-a pixels in step 2-4, replace the pixel coordinate of the matched pixel in the corrected right image with the pixel coordinate in the original right image. If the matched pixel in the corrected right image is a class-b pixel, generate a feature descriptor for the left-image pixel using the SIFT operator; according to the coordinate correspondence saved for class-b pixels in step 2-4, generate feature descriptors using the SIFT operator for the multiple pixels in the original right image corresponding to the matched pixel in the corrected right image; perform SIFT feature matching, and determine the pixel coordinate of the matched pixel in the original right image. If the matched pixel in the corrected right image is a class-c pixel, convert the pixel coordinate of the matched pixel in the corrected right image to image-plane rectangular coordinates, then apply the inverse of the quadratic polynomial shown in formula (2) to solve for the image-plane rectangular coordinates of the matched pixel in the original right image; then generate feature descriptors using the SIFT operator for the left-image pixel and for the corresponding pixels in the original right image, perform SIFT feature matching, and determine the pixel coordinate of the matched pixel in the original right image.
CN201610390400.7A 2016-06-02 2016-06-02 A kind of dense matching method of suitable deformation pattern Active CN106023230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610390400.7A CN106023230B (en) 2016-06-02 2016-06-02 A kind of dense matching method of suitable deformation pattern

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610390400.7A CN106023230B (en) 2016-06-02 2016-06-02 A kind of dense matching method of suitable deformation pattern

Publications (2)

Publication Number Publication Date
CN106023230A CN106023230A (en) 2016-10-12
CN106023230B true CN106023230B (en) 2018-07-24

Family

ID=57090727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610390400.7A Active CN106023230B (en) 2016-06-02 2016-06-02 A kind of dense matching method of suitable deformation pattern

Country Status (1)

Country Link
CN (1) CN106023230B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10453204B2 (en) * 2016-12-06 2019-10-22 Adobe Inc. Image alignment for burst mode images
CN107590502B (en) * 2017-09-18 2020-05-22 西安交通大学 Full-field dense point fast matching method
CN108021886B (en) * 2017-12-04 2021-09-14 西南交通大学 Method for matching local significant feature points of repetitive texture image of unmanned aerial vehicle
CN108364013B (en) * 2018-03-15 2021-10-29 苏州大学 Image key point feature descriptor extraction method and system based on neighborhood Gaussian differential distribution
CN108961322B (en) * 2018-05-18 2021-08-10 辽宁工程技术大学 Mismatching elimination method suitable for landing sequence images
CN108986150B (en) * 2018-07-17 2020-05-22 南昌航空大学 Image optical flow estimation method and system based on non-rigid dense matching
CN113034556B (en) * 2021-03-19 2024-04-16 南京天巡遥感技术研究院有限公司 Frequency domain correlation semi-dense remote sensing image matching method
CN117036488B (en) * 2023-10-07 2024-01-02 长春理工大学 Binocular vision positioning method based on geometric constraint

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702056A (en) * 2009-11-25 2010-05-05 安徽华东光电技术研究所 Stereo image displaying method based on stereo image pairs
CN103136750A (en) * 2013-01-30 2013-06-05 广西工学院 Stereo matching optimization method of binocular visual system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An Optimized Method for Terrain Reconstruction Based on Descent Images;Xu Xinchao et al.;《Journal of Engineering and Technological Sciences》;20160229;Vol. 48, No. 1;31-48 *
Image Matching Using the SIFT Operator and Image Interpolation;Bu Fanyan et al.;《Computer Engineering and Applications》;20111231;Vol. 47, No. 16;156-158, 162 *
Fast Polynomial-Based Geometric Correction of Remote Sensing Images;Cao Lingling et al.;《Computer Development and Applications》;20111231;Vol. 24, No. 1;5-7, 34 *
Research on Change Detection Techniques for Earthquake-Damage Remote Sensing Images;Dou Aixia;《China Master's Theses Full-text Database, Basic Sciences》;20031215;Vol. 2003, No. 4;Sections 3.1-3.4, Figs. 3.2-3.5 *

Also Published As

Publication number Publication date
CN106023230A (en) 2016-10-12

Similar Documents

Publication Publication Date Title
CN106023230B (en) A kind of dense matching method of suitable deformation pattern
CN106780590B (en) Method and system for acquiring depth map
JP7159057B2 (en) Free-viewpoint video generation method and free-viewpoint video generation system
CN107578404B (en) View-based access control model notable feature is extracted complete with reference to objective evaluation method for quality of stereo images
CN104346608B (en) Sparse depth figure denseization method and apparatus
CN107909640B (en) Face relighting method and device based on deep learning
CN106023303B (en) A method of Three-dimensional Gravity is improved based on profile validity and is laid foundations the dense degree of cloud
CN103345736B (en) A kind of virtual viewpoint rendering method
CN109598754B (en) Binocular depth estimation method based on depth convolution network
CN104240289B (en) Three-dimensional digitalization reconstruction method and system based on single camera
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN104299228B (en) A kind of remote sensing image dense Stereo Matching method based on Accurate Points position prediction model
CN109447919B (en) Light field super-resolution reconstruction method combining multi-view angle and semantic texture features
CN106856012B (en) A kind of real-time large scale scene 3-D scanning modeling method and system
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
US10771776B2 (en) Apparatus and method for generating a camera model for an imaging system
CN109345502B (en) Stereo image quality evaluation method based on disparity map stereo structure information extraction
CN108648264A (en) Underwater scene method for reconstructing based on exercise recovery and storage medium
CN111683221B (en) Real-time video monitoring method and system for natural resources embedded with vector red line data
CN112184547A (en) Super-resolution method of infrared image and computer readable storage medium
CN110691236B (en) Panoramic video quality evaluation method
CN114429555A (en) Image density matching method, system, equipment and storage medium from coarse to fine
Zhou et al. PADENet: An efficient and robust panoramic monocular depth estimation network for outdoor scenes
Wan et al. Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant