CN106780574B - Image matching method for texture-free regions - Google Patents

Image matching method for texture-free regions

Info

Publication number
CN106780574B
CN106780574B
Authority
CN
China
Prior art keywords
pixel
value
template
image
centroid distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201611033115.6A
Other languages
Chinese (zh)
Other versions
CN106780574A (en)
Inventor
贾迪
朱红
宋伟东
孙劲光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Technical University
Original Assignee
Liaoning Technical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Technical University filed Critical Liaoning Technical University
Priority to CN201611033115.6A priority Critical patent/CN106780574B/en
Publication of CN106780574A publication Critical patent/CN106780574A/en
Application granted granted Critical
Publication of CN106780574B publication Critical patent/CN106780574B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The present invention provides an image matching method for texture-free regions, belonging to the technical field of image processing. The invention exploits invariants under affine transformation to construct a DTTC map (Distance Transform Towards Centroid), on the basis of which texture-free regions can be matched with a template matching method. Notably, the method is applicable to color images: when matching natural images, the generated color texture further helps distinguish and match texture-free regions of different colors.

Description

Image matching method for texture-free regions
Technical field
The invention belongs to the technical field of image processing, and in particular relates to an image matching method for texture-free regions.
Background technique
With the continuous development of science and technology, cameras have also developed rapidly; their types and functions are increasingly varied, and people enjoy the many conveniences this development brings. For example, cameras can be mounted on vehicles, and environmental information about a parking lot can be obtained from the images they capture.
Current cameras, such as wide-angle cameras, need their parameters calibrated before use in order to obtain more accurate parameters. The most common calibration technique at present is the Zhang Zhengyou calibration algorithm; however, this calibration approach applies only to planar calibration boards and requires several distorted calibration-board images. In the case where only a single distorted image is available, this calibration approach cannot be applied, and its calibration accuracy is also not high.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides an image matching method for texture-free regions. The method exploits two properties of affine transformation: 1) an affine transformation maps straight lines to straight lines, and 2) an affine transformation keeps the linear relations of vectors invariant. A texture-free region is thereby constructed into a textured region with affine-invariant properties, with the aim of improving the accuracy of sampled template matching.
An image matching method for texture-free regions, comprising the following steps:
Step 1: binarize the original image; pixels inside the texture-free region are marked 1 in the binary image, and pixels in all other regions are marked 0;
Step 2: compute the centroid of the texture-free region;
Step 3: traverse each pixel in the texture-free region and construct a vector matrix;
Each vector unit of the matrix consists of the Euclidean distance between the pixel and the centroid, the azimuth of the vector between the pixel and the centroid, and the abscissa and ordinate of the pixel;
Step 4: set an angle k with value range [0, 359], traverse k with a step size of 1, and construct the centroid distance transform map, specifically as follows:
Step 4-1:
(1) initialize k = 0;
(2) traverse each vector unit in the vector matrix and collect all units whose azimuth equals k into a unit construction set;
(3) increment k by 1; if k is not greater than 359, re-execute (2); otherwise execute step 4-2;
Step 4-2: obtain the Euclidean distance of each pixel in each unit construction set and divide it by the maximum Euclidean distance in that set, obtaining the ratio value of each pixel in the centroid distance transform map;
Step 4-3: judge whether the original image is a grayscale image; if so, execute step 4-4, otherwise execute step 4-5;
Step 4-4: multiply the gray value of each pixel of the original grayscale image by the ratio value at the corresponding position of the centroid distance transform map, and take the product as the final value at that position of the map, completing the construction of the centroid distance transform map; execute step 5;
Step 4-5: multiply the R, G and B channel values of each pixel of the original color image by the ratio value at the corresponding position of the centroid distance transform map, and take the products as the final R, G and B channel values at that position of the map, completing the construction of the centroid distance transform map; execute step 5;
Step 5: select a template and apply an affine transformation to it within the centroid distance transform map, transforming the template's side lengths, position and angle; according to the transformed position, obtain the transformed region in the second image;
Step 6: randomly select pixels of the template in the centroid distance transform map and pixels of the transformed region in the second image, and obtain the similarity difference value between the template pixel coordinates in the centroid distance transform map and the pixel coordinates of the transformed region in the second image;
Step 7: repeatedly execute steps 5 to 6, applying multiple affine transformations to the template and obtaining the similarity difference value after each transformation; select the affine transformation matrix corresponding to the minimum similarity difference value, apply it to the template, and obtain the corresponding region in the second image, completing the matching.
In step 4-2 described above, the Euclidean distance of each pixel in each unit construction set is divided by the maximum Euclidean distance in that set, giving the ratio value of each pixel in the centroid distance transform map; the specific formula is as follows:
DTTC(x, y) = Vk(x, y)[d] / max(Vk[d]) (1)
Wherein DTTC(x, y) denotes the ratio value of pixel (x, y) in the centroid distance transform map, Vk(x, y)[d] denotes the Euclidean distance of pixel (x, y) in the unit construction set Vk, and max(Vk[d]) denotes the maximum Euclidean distance in the unit construction set Vk.
In step 5 described above, a template is selected and affinely transformed within the centroid distance transform map, transforming the template's side lengths, position and angle; according to the transformed position, the transformed region is obtained in the second image;
The affine transformation formula is as follows:
[x y 1] = pT (2)
Wherein [x y 1] denotes the coordinates after the affine transformation, and p = [X Y 1] denotes the coordinates of a template pixel in the centroid distance transform map;
T denotes the transformation matrix; in the row-vector convention of formula (2), a form consistent with the parameter definitions below is:
T = [λx·cosθ  λx·sinθ  0; −λy·sinθ  λy·cosθ  0; xo  yo  1] (3)
Wherein λx denotes the lateral scaling coefficient with value range [0.1, 10], λy denotes the longitudinal scaling coefficient with value range [0.1, 10], θ denotes the rotation angle with value range 0° to 359°, and xo and yo denote the template center pixel coordinates.
In step 6 described above, pixels of the template in the centroid distance transform map and pixels of the transformed region in the second image are randomly selected, and the similarity difference value between the template pixel coordinates in the centroid distance transform map and the pixel coordinates of the transformed region in the second image is obtained;
The similarity difference value ΔT(I1, I2) is calculated as follows:
ΔT(I1, I2) = (1/n1) Σp ( |I1^a(p) − I2^a(T(p))| + |I1^b(p) − I2^b(T(p))| ) (4)
Wherein I1 denotes the centroid distance transform map and I2 denotes the second image; n1 denotes the number of randomly selected pixels; a denotes the red-green axis, with value range [−128, 127]; b denotes the yellow-blue axis, with value range [−128, 127]; I1^a(p) denotes the a channel value of the template pixel p in I1, I1^b(p) denotes its b channel value, I2^a(T(p)) denotes the a channel value of the pixel T(p) in the second image I2, and I2^b(T(p)) denotes its b channel value.
The invention has the following advantages:
The present invention provides an image matching method for texture-free regions, solving the problem that texture-free regions otherwise cannot be matched. Using invariants under affine transformation, a DTTC map (Distance Transform Towards Centroid) is constructed, on the basis of which texture-free regions can be matched with a template matching method. It is worth mentioning that the method is applicable to color images: when matching natural images, the generated color texture is all the more advantageous for distinguishing and matching texture-free regions of different colors.
Detailed description of the invention
Fig. 1 is a flow chart of the matching method of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the theoretical basis of step 4 of an embodiment of the present invention, wherein panel (a) is a texture-free image, panel (b) is the image obtained from panel (a) by the affine transformation T, panel (c) is the image obtained from panel (a) by the DTTC method, panel (d) is the image obtained from panel (b) by the DTTC method, and panel (e) is the image obtained from panel (c) by the transformation T;
Fig. 3 is a comparison of DTTC transform experimental results of an embodiment of the present invention, wherein panel (a) is the image obtained from Fig. 2(a) by first applying the DTTC transform and then the affine transformation; panel (b) is the result obtained from Fig. 2(a) by first applying the affine transformation and then the DTTC transform; panel (c) is the result of the difference operation between panels (a) and (b);
Fig. 4 shows matching comparison results of an embodiment of the present invention for the case of small-scale affine transformations, wherein panels (a) through (f) are the first through sixth comparison result figures;
Fig. 5 is a comparison of large-scale affine transformation experimental results of an embodiment of the present invention, wherein panels (a) through (c) are the first through third comparison result figures;
Fig. 6 shows the experimental images of an embodiment of the present invention, wherein panel (s1) is the first experimental image, panel (s2) the second, and panel (s3) the third.
Specific embodiment
An embodiment of the present invention is described further below with reference to the accompanying drawings.
In the embodiment of the present invention, the image matching method for texture-free regions, as shown in Fig. 1, comprises the following steps:
Step 1: binarize the original image I; the texture-free region is denoted D, pixels inside it are marked 1 in the binary image, and pixels in all other regions are marked 0;
Step 2: compute the centroid (cx, cy) of the texture-free region D;
Step 3: traverse each pixel in the texture-free region D and construct the vector matrix V(d, θ, xi, yi);
Each vector unit of the matrix consists of the Euclidean distance d(i, j) between the pixel and the centroid, the azimuth θ(i, j) between the pixel and the centroid, and the abscissa xi and ordinate yi of the pixel;
d(i, j) = sqrt((i − cy)^2 + (j − cx)^2) (5)
θ(i, j) = atan2(i − cy, j − cx) (6)
Wherein d(i, j) is the Euclidean distance between the pixel at position (i, j) and the centroid (cx, cy), and θ(i, j) is the azimuth between the pixel at position (i, j) and the centroid (cx, cy);
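As an illustration of steps 1 to 3, here is a minimal sketch in Python with NumPy, assuming the texture-free region has already been segmented into a binary mask; all function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def build_vector_matrix(mask):
    """Steps 1-3: centroid of the texture-free region and the per-pixel
    vector units (distance, azimuth, x, y).

    mask -- 2D array, 1 inside the texture-free region, 0 elsewhere (step 1).
    """
    ys, xs = np.nonzero(mask)                # positions (i, j) of region pixels
    cy, cx = ys.mean(), xs.mean()            # step 2: centroid (cx, cy)

    d = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)             # formula (5)
    theta = np.degrees(np.arctan2(ys - cy, xs - cx)) % 360   # azimuth, formula (6)

    # step 3: one vector unit (d, theta, x, y) per pixel of the region
    return np.stack([d, theta, xs, ys], axis=1), (cx, cy)
```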
Step 4: set an angle k with value range [0, 359], traverse k with a step size of 1, and construct the centroid distance transform map;
In the embodiment of the present invention, Fig. 2 intuitively illustrates the technical route of this method. Panel (a) is a texture-free image and panel (b) is the image obtained from panel (a) by an affine transformation T. When matching these two images with Fast-Match, no matter which part of the region in panel (a) is selected as the matching template, no correct matching result can be obtained. Panel (c) is the image obtained from panel (a) by the DTTC method and panel (d) is the image obtained from panel (b) by the DTTC method; in these two panels, different regions inside the image take different color values, i.e., matching panel (c) against panel (d) is far more amenable to template matching methods. To verify the validity of the method, panel (e) is obtained by applying T to panel (c); comparing the color difference between panels (d) and (e), the closer it is to 0, the less the generated texture is affected by the affine transformation. Since generating the texture relies on the ratio-preserving property of affine transformations, the relevant theoretical proof is given below:
In the embodiment of the present invention, suppose v is a vector in the first image I1 and v′ is the corresponding vector in the second image I2 obtained from I1 by the affine transformation τ, so that v′ = τ(v).
In the embodiment of the present invention, let OA = k·v (k being the multiple of the unit vector); it is to be proved that OA′ = k·v′.
Proof: since an affine transformation keeps linear relations of vectors invariant, τ(OA) = τ(k·v) = k·τ(v) = k·v′, i.e., OA′ = k·v′.
In the embodiment of the present invention, the above formula shows that after the affine transformation, the position A′ corresponding to A in the second image I2 is determined by k: A′ lies on the level set of I2 whose ratio between the contour and the centroid O′ is k. According to this property, determine in the first image I1 and in the second image I2 the vectors from the edge to the respective centroid, take 1 as the total vector length, and divide each vector into equal parts so that the unit-vector moduli in the same direction correspond; construct the vectors from the centroid and fill their values in at A and A′, thereby establishing a centroid distance transform map with affine invariance. When template matching is then carried out, since the gray values of the selected template region are composed of the ratios of centroid-to-edge distances, these centroid/edge distance ratios remain identical to those of the target region after the affine transformation, and the corresponding matching result can be obtained by an SAD computation over the sampled region pixels. It can be seen that the centroid distance transform map constructed by the above method not only reflects the shape information of the edges of the texture-free region, but also maintains the vector correspondence at every position;
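The ratio-preserving property underlying this proof can be checked numerically: an affine map preserves the position ratio k of a point along the segment from the centroid to an edge point. A small NumPy check under these assumptions (the map and points are arbitrary examples):

```python
import numpy as np

M = np.array([[1.3, 0.4],     # an arbitrary non-degenerate affine map
              [-0.2, 0.9]])   # tau(p) = M p + t
t = np.array([5.0, -2.0])

O = np.array([10.0, 20.0])    # centroid in the first image
E = np.array([40.0, 50.0])    # an edge point of the region
k = 0.37                      # position of A along the segment O -> E
A = O + k * (E - O)

O2, E2, A2 = M @ O + t, M @ E + t, M @ A + t   # images under tau

# the ratio along the transformed segment is unchanged
k2 = np.linalg.norm(A2 - O2) / np.linalg.norm(E2 - O2)
print(abs(k - k2) < 1e-12)    # True: the level-set value k is affine invariant
```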
In the embodiment of the present invention, the specific steps are as follows:
Step 4-1:
(1) initialize k = 0;
(2) traverse each vector unit in the vector matrix and collect all units whose azimuth equals k into the unit construction set Vk;
(3) increment k by 1; if k is not greater than 359, re-execute (2); otherwise execute step 4-2;
Step 4-2: obtain the Euclidean distance of each pixel in each unit construction set Vk and divide it by the maximum Euclidean distance in that set, obtaining the ratio value of each pixel in the centroid distance transform map;
The specific formula is as follows:
DTTC(x, y) = Vk(x, y)[d] / max(Vk[d]) (1)
Wherein DTTC(x, y) denotes the ratio value of pixel (x, y) in the centroid distance transform map, Vk(x, y)[d] denotes the Euclidean distance of pixel (x, y) in the unit construction set Vk, and max(Vk[d]) denotes the maximum Euclidean distance in the unit construction set Vk;
Step 4-3: judge whether the original image is a grayscale image; if so, execute step 4-4, otherwise execute step 4-5;
Step 4-4: multiply the gray value of each pixel of the original grayscale image by the ratio value at the corresponding position of the centroid distance transform map, and take the product as the final value at that position of the map, completing the construction of the centroid distance transform map; execute step 5;
Step 4-5: multiply the R, G and B channel values of each pixel of the original color image by the ratio value at the corresponding position of the centroid distance transform map, and take the products as the final R, G and B channel values at that position of the map, completing the construction of the centroid distance transform map; execute step 5;
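Putting steps 4-1 to 4-5 together, a sketch that groups the vector units by integer azimuth k, normalizes by the per-direction maximum distance as in formula (1), and modulates the original image; it builds on the build_vector_matrix sketch above, and every name is again illustrative:

```python
import numpy as np

def dttc_map(image, mask):
    """Steps 4-1..4-5: construct the centroid distance transform (DTTC) map."""
    units, _ = build_vector_matrix(mask)
    d, theta = units[:, 0], units[:, 1]
    xs, ys = units[:, 2].astype(int), units[:, 3].astype(int)

    ratio = np.zeros(mask.shape, dtype=np.float64)
    bins = theta.astype(int)                 # quantize azimuth to k in [0, 359]
    for k in range(360):                     # step 4-1, substeps (1)-(3)
        sel = bins == k                      # the unit construction set V_k
        if sel.any() and d[sel].max() > 0:
            # step 4-2: DTTC(x, y) = V_k(x, y)[d] / max(V_k[d]), formula (1)
            ratio[ys[sel], xs[sel]] = d[sel] / d[sel].max()

    out = image.astype(np.float64)
    if out.ndim == 2:                        # step 4-4: grayscale image
        return out * ratio
    return out * ratio[..., None]            # step 4-5: R, G and B channels
```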
Step 5: select a template and apply an affine transformation to it within the centroid distance transform map, transforming the template's side lengths, position and angle; according to the transformed position, obtain the transformed region in the second image;
The affine transformation formula is as follows:
[x y 1] = pT (2)
Wherein [x y 1] denotes the coordinates after the affine transformation, and p = [X Y 1] denotes the coordinates of a template pixel in the centroid distance transform map;
T denotes the transformation matrix; in the row-vector convention of formula (2), a form consistent with the parameter definitions below is:
T = [λx·cosθ  λx·sinθ  0; −λy·sinθ  λy·cosθ  0; xo  yo  1] (3)
Wherein λx denotes the lateral scaling coefficient with value range [0.1, 10], λy denotes the longitudinal scaling coefficient with value range [0.1, 10], θ denotes the rotation angle with value range 0° to 359°, and xo and yo denote the template center pixel coordinates;
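A sketch of the template transformation of step 5 in the row-vector convention [x y 1] = pT of formula (2); the matrix layout follows the reconstruction of formula (3) given above and is therefore an assumption:

```python
import numpy as np

def template_transform(lx, ly, theta_deg, xo, yo):
    """Matrix T of formula (3), for row vectors p = [X Y 1]."""
    th = np.radians(theta_deg)
    return np.array([
        [lx * np.cos(th),  lx * np.sin(th), 0.0],   # lateral scaling + rotation
        [-ly * np.sin(th), ly * np.cos(th), 0.0],   # longitudinal scaling + rotation
        [xo,               yo,              1.0],   # template center coordinates
    ])

def transform_points(points, T):
    """Apply [x y 1] = pT to an (n, 2) array of template pixel coordinates."""
    p = np.hstack([points, np.ones((len(points), 1))])
    return (p @ T)[:, :2]
```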
Step 6: randomly select pixels of the template in the centroid distance transform map and pixels of the transformed region in the second image, and obtain the similarity difference value between the template pixel coordinates in the centroid distance transform map and the pixel coordinates of the transformed region in the second image;
The similarity difference value ΔT(I1, I2) is calculated as follows:
ΔT(I1, I2) = (1/n1) Σp ( |I1^a(p) − I2^a(T(p))| + |I1^b(p) − I2^b(T(p))| ) (4)
Wherein I1 denotes the centroid distance transform map and I2 denotes the second image; n1 denotes the number of randomly selected pixels. The color images I1 and I2 are first converted from RGB space to Lab space: L denotes the L component of the Lab color space, representing pixel brightness, with value range [0, 100] (0 is black, 100 is pure white); a denotes the red-green axis, with value range [−128, 127]; b denotes the yellow-blue axis, with value range [−128, 127]; I1^a(p) denotes the a channel value of the template pixel p in I1, I1^b(p) denotes its b channel value, I2^a(T(p)) denotes the a channel value of the pixel T(p) in the second image I2, and I2^b(T(p)) denotes its b channel value;
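A sketch of the similarity difference of step 6 in the form reconstructed in formula (4): both images are converted to Lab, n1 template pixels are sampled at random, and the absolute a/b-channel differences are accumulated as an SAD. OpenCV's cvtColor is used for the conversion (its 8-bit Lab encoding offsets a and b by 128, which cancels in the differences); transform_points is the step 5 sketch above, and all names are illustrative:

```python
import cv2
import numpy as np

def similarity_difference(I1, I2, template_pts, T, n1=256, rng=None):
    """Delta_T(I1, I2): SAD of a/b Lab channels over n1 sampled template pixels."""
    rng = rng or np.random.default_rng()
    lab1 = cv2.cvtColor(I1, cv2.COLOR_RGB2LAB).astype(np.float64)  # I1, I2: uint8 RGB
    lab2 = cv2.cvtColor(I2, cv2.COLOR_RGB2LAB).astype(np.float64)

    idx = rng.choice(len(template_pts), size=min(n1, len(template_pts)), replace=False)
    p = template_pts[idx].astype(int)                 # sampled template pixels in I1
    q = np.rint(transform_points(p, T)).astype(int)   # their images T(p) in I2

    # keep T(p) inside I2 (pixels near the border may map outside)
    q[:, 0] = np.clip(q[:, 0], 0, I2.shape[1] - 1)
    q[:, 1] = np.clip(q[:, 1], 0, I2.shape[0] - 1)

    a1, b1 = lab1[p[:, 1], p[:, 0], 1], lab1[p[:, 1], p[:, 0], 2]
    a2, b2 = lab2[q[:, 1], q[:, 0], 1], lab2[q[:, 1], q[:, 0], 2]
    return np.mean(np.abs(a1 - a2) + np.abs(b1 - b2))
```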
Step 7: repeatedly execute steps 5 to 6, applying multiple affine transformations to the template and obtaining the similarity difference value after each transformation; select the affine transformation matrix corresponding to the minimum similarity difference value, apply it to the template, and obtain the corresponding region in the second image, completing the matching.
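Step 7 can then be driven by a plain random search over the transformation parameters, keeping the T with the smallest difference value; the parameter ranges follow step 5, while the sampling strategy and trial count are assumptions:

```python
import numpy as np

def match_template(I1, I2, template_pts, n_trials=2000, rng=None):
    """Step 7: repeat steps 5-6 and keep the affine transform minimizing Delta_T."""
    rng = rng or np.random.default_rng()
    best_T, best_delta = None, np.inf
    for _ in range(n_trials):
        T = template_transform(
            lx=rng.uniform(0.1, 10.0),            # lateral scaling coefficient
            ly=rng.uniform(0.1, 10.0),            # longitudinal scaling coefficient
            theta_deg=rng.uniform(0.0, 360.0),    # rotation angle
            xo=rng.uniform(0, I2.shape[1]),       # candidate template center
            yo=rng.uniform(0, I2.shape[0]),
        )
        delta = similarity_difference(I1, I2, template_pts, T)
        if delta < best_delta:
            best_T, best_delta = T, delta
    return best_T, best_delta
```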
Experiment test:
The experiments use self-constructed figures as data, under the condition that there are no symmetric parts in the images, i.e., it is guaranteed that the template selection region appears only once in the image;
In the embodiment of the present invention, the images are affinely transformed according to the following formula:
[x y 1]=[w z 1] T (7)
Wherein w and z denote the coordinates before the affine transformation, x and y denote the coordinates obtained after the transformation, and T denotes the affine transformation matrix; a form consistent with the parameter definitions below writes T as the product of scaling, shear, rotation and translation factors:
T = diag(sx, sy, 1) · [1 b 0; c 1 0; 0 0 1] · [cosθ sinθ 0; −sinθ cosθ 0; 0 0 1] · [1 0 0; 0 1 0; δx δy 1] (8)
Wherein sx and sy control the scaling transformation, θ controls the rotation transformation, b and c control the shear transformation, and δx and δy control the translation transformation; different affine transformation matrices are obtained by changing these parameter values;
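For generating the test transformations, a sketch composing scaling, shear, rotation and translation in the row-vector convention of formula (7); the factor order follows the reconstruction of formula (8) above and is an assumption:

```python
import numpy as np

def experiment_affine(sx, sy, theta_deg, b, c, dx, dy):
    """General affine matrix T of formula (8), for [x y 1] = [w z 1] T."""
    th = np.radians(theta_deg)
    S = np.diag([sx, sy, 1.0])                        # scaling
    H = np.array([[1.0, b,   0.0],
                  [c,   1.0, 0.0],
                  [0.0, 0.0, 1.0]])                   # shear
    R = np.array([[np.cos(th),  np.sin(th), 0.0],
                  [-np.sin(th), np.cos(th), 0.0],
                  [0.0,         0.0,        1.0]])    # rotation
    Tr = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [dx,  dy,  1.0]])                  # translation
    return S @ H @ R @ Tr
```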
In the embodiment of the present invention, panels (a) to (c) of Fig. 3 are as follows: Fig. 3(a) is the image obtained from Fig. 2(a) by first applying the DTTC transform and then the affine transformation; Fig. 3(b) is the result obtained from Fig. 2(a) by first applying the affine transformation and then the DTTC transform; Fig. 3(c) is the result of the difference operation between Fig. 3(a) and Fig. 3(b), with the region contrast enhanced by a histogram equalization algorithm to aid observation. Ideally the difference should be 0; from Fig. 3(c) it can be seen that the difference near the edges is more obvious, while the gray values of the other parts do not differ much;
In the embodiment of the present invention, panels (a) to (f) of Fig. 4 show experimental comparison results under different affine transformations, where the lower-brightness region is the region obtained from the marked matching region of Fig. 2(a) by first applying the DTTC transform and then the affine transformation, and the higher-brightness region is the matching result obtained by the CFAST algorithm after Fig. 2(a) is first affinely transformed and then DTTC-transformed into a DTTC image. Comparing the positions of the two regions, they coincide completely; since the lower-brightness region is the standard labeled result, this demonstrates the validity and accuracy of the present method. Panels (a) to (c) of Fig. 5 show the experimental results under a group of large-scale transformations, wherein panels (a) and (c) are erroneous matching results and panel (b) is the correct matching result. From these results, only Fig. 5(b) is correct, i.e., a large-scale transformation may cause regions of similarity to appear in the transformed image, so the accuracy of the present method is not high in this case;
In the embodiment of the present invention, the images (s1) to (s3) in Fig. 6 are used as experimental data. The first column of Table 1 lists four different value ranges, the affine transformation range growing with the row number; for each row, data within the random sampling range and template positions are used to test s1 to s3, with the results shown in columns 2 to 4 of Table 1. The experimental data show that when the affine transformation scale is not very large, the accuracy of the present method can reach 75% or more; therefore, when matching natural images, mismatched regions can be rejected by multi-region matching consistency detection with the RANSAC random sample consensus method;
Table 1: random sampling test result

Claims (4)

1. An image matching method for texture-free regions, characterized by comprising the following steps:
Step 1: binarize the original image; pixels inside the texture-free region are marked 1 in the binary image, and pixels in all other regions are marked 0;
Step 2: compute the centroid of the texture-free region;
Step 3: traverse each pixel in the texture-free region and construct a vector matrix;
Each vector unit of the matrix consists of the Euclidean distance between the pixel and the centroid, the azimuth between the pixel and the centroid, and the abscissa and ordinate of the pixel;
Step 4: set an angle k with value range [0, 359], traverse k with a step size of 1, and construct the centroid distance transform map, specifically as follows:
Step 4-1:
(1) initialize k = 0;
(2) traverse each vector unit in the vector matrix and collect all units whose azimuth equals k into a unit construction set;
(3) increment k by 1; if k is not greater than 359, re-execute (2); otherwise execute step 4-2;
Step 4-2: obtain the Euclidean distance of each pixel in each unit construction set and divide it by the maximum Euclidean distance in that set, obtaining the ratio value of each pixel in the centroid distance transform map;
Step 4-3: judge whether the original image is a grayscale image; if so, execute step 4-4, otherwise execute step 4-5;
Step 4-4: multiply the gray value of each pixel of the original grayscale image by the ratio value at the corresponding position of the centroid distance transform map, and take the product as the final value at that position of the map, completing the construction of the centroid distance transform map; execute step 5;
Step 4-5: multiply the R, G and B channel values of each pixel of the original color image by the ratio value at the corresponding position of the centroid distance transform map, and take the products as the final R, G and B channel values at that position of the map, completing the construction of the centroid distance transform map; execute step 5;
Step 5: select a template and apply an affine transformation to it within the centroid distance transform map, transforming the template's side lengths, position and angle; according to the transformed position, obtain the transformed region in the second image;
Step 6: randomly select pixels of the template in the centroid distance transform map and pixels of the transformed region in the second image, and obtain the similarity difference value between the template pixel coordinates in the centroid distance transform map and the pixel coordinates of the transformed region in the second image;
Step 7: repeatedly execute steps 5 to 6, applying multiple affine transformations to the template and obtaining the similarity difference value after each transformation; select the affine transformation matrix corresponding to the minimum similarity difference value, apply it to the template, and obtain the corresponding region in the second image, completing the matching.
2. The image matching method for texture-free regions according to claim 1, characterized in that in step 4-2, the Euclidean distance of each pixel in each unit construction set is obtained and divided by the maximum Euclidean distance in that set, giving the ratio value of each pixel in the centroid distance transform map; the specific formula is as follows:
DTTC(x, y) = Vk(x, y)[d] / max(Vk[d]) (1)
Wherein DTTC(x, y) denotes the ratio value of pixel (x, y) in the centroid distance transform map, Vk(x, y)[d] denotes the Euclidean distance of pixel (x, y) in the unit construction set Vk, and max(Vk[d]) denotes the maximum Euclidean distance in the unit construction set Vk.
3. The image matching method for texture-free regions according to claim 1, characterized in that in step 5, a template is selected and affinely transformed within the centroid distance transform map, transforming the template's side lengths, position and angle; according to the transformed position, the transformed region is obtained in the second image;
The affine transformation formula is as follows:
[x y 1] = pT (2)
Wherein [x y 1] denotes the coordinates after the affine transformation, and p = [X Y 1] denotes the coordinates of a template pixel in the centroid distance transform map;
T denotes the transformation matrix; in the row-vector convention of formula (2), a form consistent with the parameter definitions below is:
T = [λx·cosθ  λx·sinθ  0; −λy·sinθ  λy·cosθ  0; xo  yo  1] (3)
Wherein λx denotes the lateral scaling coefficient with value range [0.1, 10], λy denotes the longitudinal scaling coefficient with value range [0.1, 10], θ denotes the rotation angle with value range 0° to 359°, and xo and yo denote the template center pixel coordinates.
4. The image matching method for texture-free regions according to claim 1, characterized in that in step 6, pixels of the template in the centroid distance transform map and pixels of the transformed region in the second image are randomly selected, and the similarity difference value between the template pixel coordinates in the centroid distance transform map and the pixel coordinates of the transformed region in the second image is obtained;
The similarity difference value ΔT(I1, I2) is calculated as follows:
ΔT(I1, I2) = (1/n1) Σp ( |I1^a(p) − I2^a(T(p))| + |I1^b(p) − I2^b(T(p))| ) (4)
Wherein I1 denotes the centroid distance transform map and I2 denotes the second image; n1 denotes the number of randomly selected pixels; a denotes the red-green axis, with value range [−128, 127]; b denotes the yellow-blue axis, with value range [−128, 127]; I1^a(p) denotes the a channel value of the template pixel p in I1, I1^b(p) denotes its b channel value, I2^a(T(p)) denotes the a channel value of the pixel T(p) in the second image I2, and I2^b(T(p)) denotes its b channel value.
CN201611033115.6A 2016-11-18 2016-11-18 Image matching method for texture-free regions Expired - Fee Related CN106780574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611033115.6A CN106780574B (en) 2016-11-18 2016-11-18 Image matching method for texture-free regions


Publications (2)

Publication Number Publication Date
CN106780574A CN106780574A (en) 2017-05-31
CN106780574B true CN106780574B (en) 2019-06-25

Family

ID=58970895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611033115.6A Expired - Fee Related CN106780574B (en) 2016-11-18 2016-11-18 Image matching method for texture-free regions

Country Status (1)

Country Link
CN (1) CN106780574B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064488B (en) * 2018-07-05 2022-08-09 北方工业大学 Method for matching and tracking specific building in unmanned aerial vehicle video
CN109829502B (en) * 2019-02-01 2023-02-07 辽宁工程技术大学 Image pair efficient dense matching method facing repeated textures and non-rigid deformation


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184418A (en) * 2011-06-10 2011-09-14 上海应用技术学院 Triangle-area-representation-histogram-based image registration method
CN104599277A (en) * 2015-01-27 2015-05-06 中国科学院空间科学与应用研究中心 Image registration method for area-preserving affine transformation
CN105741297A (en) * 2016-02-02 2016-07-06 南京航空航天大学 Repetitive pattern image matching method with affine invariance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Colour FAST (CFAST) match: fast affine template matching for colour images; Di Jia et al.; Electronics Letters; 2016-07-31; vol. 52, no. 14; pp. 1220-1221 *

Also Published As

Publication number Publication date
CN106780574A (en) 2017-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20190625
Termination date: 20201118