WO2017113692A1 - Image matching method and apparatus (一种图像匹配方法及装置) - Google Patents


Info

Publication number
WO2017113692A1
WO2017113692A1 · PCT/CN2016/088691 · CN2016088691W
Authority
WO
WIPO (PCT)
Prior art keywords
sample
column
row
template
matched
Prior art date
Application number
PCT/CN2016/088691
Other languages
English (en)
French (fr)
Inventor
杨帆
刘阳
蔡砚刚
白茂生
魏伟
Original Assignee
乐视控股(北京)有限公司
乐视云计算有限公司
Priority date
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视云计算有限公司
Priority to US 15/247,000 (published as US20170185865A1)
Publication of WO2017113692A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image matching method and apparatus.
  • The prior art uses a line-scan camera to acquire images of each column on the curved surface, and then splices the collected column images into a complete template sample; but because line-scan cameras are costly and bulky, they are a heavy burden for compact, low-cost equipment.
  • Alternatively, images of each part of the pattern are captured, and an image mosaic (stitching) algorithm is used to form a complete template sample directly.
  • However, this scheme requires high image-capture precision, and the closer the surface is to its sides, the greater the distortion, so the capture accuracy is difficult to bring up to the algorithm's requirements and the stitching effect in the distorted parts is poor; moreover, for patterns with few features, the image stitching algorithm's error is too large for it to be usable, the template sample may even be inaccurate, and accurate matching of the surface image cannot be achieved.
  • the present invention provides an image matching method and apparatus for solving the problem that the surface image cannot be accurately matched in the prior art.
  • the invention provides an image matching method, comprising:
  • S101 Determine, according to the search box, an area to be matched in the image, where a size of the area to be matched is larger than a size of the template sample;
  • the area determined by the row/column pixel points of the to-be-matched area corresponding to the maximum similarity is used as the target area.
  • the invention also provides an image matching device comprising:
  • a selection module configured to determine, in the image, a region to be matched according to the search box, wherein a size of the to-be-matched region is larger than a size of the template sample;
  • a matching template sample obtaining module configured to calculate a row/column gray mean value of the gray value of each row/column pixel of the to-be-matched region, and compare each row/column of the to-be-matched region with a plurality of pre-created template samples a gradation mean value, and determining a template sample having the highest similarity to a partial adjacent row/column gray mean value in the to-be-matched region as a matching template sample;
  • a positioning module configured to determine, as the target area, an area in the to-be-matched area that matches each row/column gray mean value of the matched template sample.
  • the present invention also provides an image matching device, including: a memory and a processor, wherein
  • the memory is configured to store one or more instructions, wherein the one or more instructions are executed by the processor;
  • the processor is configured to determine, in the image, an area to be matched according to the search box, where a size of the area to be matched is larger than a size of the template sample;
  • the processor is further configured to use, as the target region, the region determined by the row/column pixels of the to-be-matched region corresponding to the maximum similarity.
  • the present invention can obtain the following technical effects:
  • The image matching method and device provided by the invention pre-create a plurality of template samples to provide a matching basis for the image, so the capture-then-stitch process of the prior art is omitted, thereby avoiding the problem that the template sample is inaccurate due to stitching.
  • The image can then be analyzed and matched against each independent template sample to obtain an accurate matching result.
  • Row/column gray mean values of the region to be matched are compared with those of several template samples to obtain the similarity of each row/column gray mean value, and the matching template sample and the target area in the area to be matched are determined according to the maximum similarity.
  • Since feature analysis of the image is performed on the gray values of each row/column of pixels in the region to be matched and used as the matching basis, the accuracy of the matching can be further ensured.
  • FIG. 1 is a flow chart of an image matching method according to the present invention.
  • FIG. 2 is a flow chart of an embodiment of an image matching method according to the present invention.
  • FIG. 3 is a flow chart of an embodiment of an image matching method according to the present invention.
  • FIG. 4 is a flow chart of an embodiment of an image matching method according to the present invention.
  • FIG. 5 is a flowchart of an embodiment of an image matching method according to the present invention.
  • Figure 6 is a schematic view of the process of capturing images of a standard sample according to the present invention.
  • FIG. 7 is a schematic diagram of a binarized sample in a template sample creation process of the present invention.
  • Figure 8 is a schematic view of a region to be matched according to the present invention.
  • FIG. 9 is a schematic diagram of a column area of a to-be-matched area determined by column matching according to the present invention.
  • FIG. 10 is a schematic structural diagram of an image matching device according to the present invention.
  • FIG. 11 is a schematic structural diagram of an embodiment of an image matching apparatus according to the present invention.
  • FIG. 12 is a schematic structural diagram of an image matching device according to the present invention.
  • Embodiments of the present invention provide an image matching method and apparatus, which can be applied to an image detection scenario.
  • The prior art usually uses a line-scan camera to perform column scans and splice them into a complete image as the template sample, but the cost is too high; or an image mosaic algorithm splices the captured pictures directly into a complete image as the template sample, but the splicing effect is poor, resulting in inaccurate image matching results.
  • The matching method and device provided by the embodiments of the present invention overcome these defects of the prior art by creating a plurality of independent template samples in advance, matching the to-be-matched region in the image against the several template samples, and determining various quality parameters of the sample under inspection according to the matching result, such as whether the pattern has defects.
  • image matching method and device provided by the embodiments of the present invention are also applicable to other image matching scenarios, which are not limited herein.
  • an embodiment of the present invention provides an image matching method, including:
  • S100 Determine, according to the search box, an area to be matched in the image, where a size of the area to be matched is larger than a size of the template sample;
  • S300 The area determined by the row/column pixel of the to-be-matched area corresponding to the maximum similarity is used as the target area.
  • S100 determines, according to the search box, the area to be matched in the image as the area for comparison with the template samples in the subsequent process, and the size of the area to be matched is set larger than the size of the template sample so that matching does not fail when the position of the pattern in the image is not fixed.
  • the finally matched template samples will correspond to a part of the area to be matched.
  • In Figure 8, the range of the black background corresponds to the size of the area to be matched, and the frame line within the black background represents the size of the template sample.
  • S200 first calculates the row/column gray mean values of the to-be-matched region, and compares each row gray mean value of the to-be-matched region with each row gray mean value of the several template samples, or each column gray mean value of the to-be-matched region with each column gray mean value of the several template samples.
  • Because the size of the to-be-matched region is larger than that of the template sample, the number of row/column gray mean values of the to-be-matched region is necessarily greater than the number of row/column gray mean values of the template sample.
  • The region within the frame line range is determined as the target area by the matching comparison, and the determined target area has the same size as the template sample.
  • the feature of each row/column pixel in the to-be-matched region can be determined according to the row/column gray average of the to-be-matched region.
  • The image may be converted into a binary image before the row/column gray mean values are calculated, and when calculating the row/column gray mean values, the gray values of pixels in the region to be matched that lie beyond the extent of the template sample are all treated as 0; therefore, the number of pixels per row/column is treated as the same as the number of pixels per row/column of the template sample when calculating the row/column gray mean.
  • If the row/column gray mean values of the template sample match the same number of adjacent row/column gray mean values in the to-be-matched region, this indicates that the pattern of the template sample is the same as, or very similar to, the pattern of the target region in the to-be-matched region.
  • matching template samples for industrial processing or other application scenarios can be determined and the target area to be processed can be located in the image. For example, pattern defect analysis, print quality analysis, and the like may be performed on the determined target area in an industrial process or other application scenarios.
  • S200 calculates, for any consecutive rows/columns of pixels in the to-be-matched region whose count equals the template sample's row/column count, the row/column gray mean values and their similarity with the row/column gray mean values of the template sample.
  • the calculation scheme is exemplified in the following various embodiments.
  • S200 includes:
  • the template sample corresponding to the determined sample row vector is used as the matching template sample.
  • S201 may calculate the column gray mean value of each column of pixels of the to-be-matched region (i.e., the average of the gray values Q(i, j) over each column), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K w represents the number of columns of the template sample.
  • The L w column gray mean values of the to-be-matched region can be obtained by calculation, and the gray mean values of any K w adjacent columns are defined as one search row vector; for example, the gray mean values of columns 0 to K w -1 may be defined as one search row vector, those of columns 1 to K w as another, those of columns 2 to K w +1 as another, and so on. In total, L w -K w +1 search row vectors can be defined.
  • the length of the search row vector is the same as the length of the sample row vector in the template sample.
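As a sketch under stated assumptions (NumPy, a toy region; the name `search_row_vectors` is illustrative and not from the patent), the construction of the L w -K w +1 search row vectors from the column gray means might look like:

```python
import numpy as np

def search_row_vectors(region, k_w):
    """Build the L_w - k_w + 1 search row vectors of a region.

    region : 2-D array of gray values Q(i, j) (rows i, columns j).
    k_w    : number of columns of the template sample.
    """
    col_means = region.mean(axis=0)  # one gray mean per column -> L_w values
    l_w = col_means.size
    # Each search row vector is k_w adjacent column gray means.
    return np.stack([col_means[s:s + k_w] for s in range(l_w - k_w + 1)])

region = np.arange(20, dtype=float).reshape(4, 5)  # toy 4x5 "region to be matched"
vecs = search_row_vectors(region, k_w=3)           # yields 5 - 3 + 1 = 3 vectors
```

Each returned vector has the same length K w as a template's sample row vector, so the two can be compared element-wise.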
  • S202 compares each search row vector with the sample row vectors of the several pre-created template samples, and calculates the similarity between each search row vector and each sample row vector.
  • If the number of template samples is N, the number of similarities obtained by the calculation will be N*(L w -K w +1). The maximum similarity is then selected from these N*(L w -K w +1) similarities, the template sample corresponding to the sample row vector with the maximum similarity is determined as the matching template sample, and S300 determines the region formed by the pixels of the to-be-matched region corresponding to the search row vector with the maximum similarity as the target area.
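A minimal sketch of this comparison and localization step, assuming cosine similarity as the (otherwise unspecified) similarity measure; names and data are hypothetical:

```python
import numpy as np

def best_row_match(search_vecs, sample_row_vecs):
    """Return (template index n, window offset, similarity) of the best match.

    search_vecs     : (L_w - K_w + 1, K_w) search row vectors of the region.
    sample_row_vecs : (N, K_w) sample row vectors, one per template sample.
    """
    best = (-1, -1, -np.inf)
    for n, sample in enumerate(sample_row_vecs):
        for offset, search in enumerate(search_vecs):
            denom = np.linalg.norm(search) * np.linalg.norm(sample)
            sim = float(search @ sample) / denom if denom else 0.0
            if sim > best[2]:
                best = (n, offset, sim)  # offset locates the target columns
    return best

search_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
samples = np.array([[2.0, 2.0], [3.0, 1.0]])   # N = 2 template samples
n, offset, sim = best_row_match(search_vecs, samples)
```

The winning `offset` identifies which K w adjacent columns of the region form the target area, and `n` identifies the matching template sample.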
  • S200 includes:
  • the template sample corresponding to the determined sample column vector is used as the matching template sample.
  • S211 may calculate the row gray mean value of each row of pixels of the to-be-matched region (i.e., the average of the gray values Q(i, j) over each row), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K h represents the number of rows of the template sample.
  • The L h row gray mean values of the to-be-matched region can be obtained by calculation, and the gray mean values of any K h adjacent rows are defined as one search column vector; for example, the gray mean values of rows 0 to K h -1 may be defined as one search column vector, those of rows 1 to K h as another, those of rows 2 to K h +1 as another, and so on. In total, L h -K h +1 search column vectors can be defined.
  • the length of the search column vector is the same as the length of the sample column vector in the template sample.
  • S212 compares each search column vector with the sample column vectors of the several pre-created template samples, and calculates the similarity between each search column vector and each sample column vector. If the number of template samples is N, the number of similarities obtained will be N*(L h -K h +1). The maximum similarity is then selected from these N*(L h -K h +1) similarities, the template sample corresponding to the sample column vector with the maximum similarity is determined as the matching template sample, and S300 determines the region formed by the pixels of the to-be-matched region corresponding to the search column vector with the maximum similarity as the target area.
  • In another embodiment, S200 includes:
  • the template samples corresponding to the determined plurality of sample row vectors are used as intermediate template samples.
  • S224 Calculate a row gray mean value according to a gray value of each row of pixel points of the to-be-matched region, and define L h -K h +1 search column vectors according to any adjacent K h row gray mean values, where L h represents the number of rows of the area to be matched, and K h represents the number of rows of the template sample, L h >K h ;
  • the intermediate template sample corresponding to the determined sample column vector is used as the matching template sample.
  • S221 may calculate the column gray mean value of each column of pixels of the to-be-matched region (i.e., the average of the gray values Q(i, j) over each column), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K w represents the number of columns of the template sample.
  • The L w column gray mean values of the to-be-matched region can be obtained by calculation, the gray mean values of any K w adjacent columns are defined as one search row vector, and L w -K w +1 search row vectors are defined.
  • the length of the search row vector is the same as the length of the sample row vector in the template sample.
  • S222 compares each search row vector with the sample row vectors of the several pre-created template samples, and calculates the similarity between each search row vector and each sample row vector. If the number of template samples is N, the number of similarities obtained will be N*(L w -K w +1). From these N*(L w -K w +1) similarities, those greater than a predetermined threshold are selected, or the similarities are sorted from largest to smallest and a predetermined number at the front end is selected; the template samples corresponding to the selected similarities are determined as the intermediate template samples, thus completing the column matching process of S221 to S223.
  • the number of intermediate template samples determined will be much smaller than the total number of template samples.
  • the maximum similarity is selected from the similarity between each search column vector of the to-be-matched region and the sample column vector of the plurality of intermediate template samples.
  • the intermediate template samples corresponding to the maximum similarity are determined as matching template samples.
  • In this embodiment, the column matching process is first performed between the region to be matched and all template samples, several template samples with higher column matching degree are selected as intermediate template samples, and the row matching process is then performed between the region to be matched and the intermediate template samples; the intermediate template sample with the highest similarity is selected as the matching template sample. Through these two matching processes, the matching relationship between the to-be-matched region and the template samples can be determined more accurately, thereby locating the target area in the to-be-matched region more accurately.
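The two-pass (coarse-to-fine) scheme above can be condensed into a sketch; cosine similarity and the `top_k` pre-selection size are assumptions, and all names are hypothetical:

```python
import numpy as np

def cosine(a, b):
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / d if d else 0.0

def two_stage_match(search_rows, search_cols, row_samples, col_samples, top_k=2):
    """Stage 1 (column matching): keep the top_k templates by best row-vector
    similarity as intermediate template samples.  Stage 2 (row matching): rank
    the intermediates by best column-vector similarity; return the winner."""
    n_templates = len(row_samples)
    stage1 = [max(cosine(s, row_samples[n]) for s in search_rows)
              for n in range(n_templates)]
    intermediates = np.argsort(stage1)[::-1][:top_k]   # best few templates
    best_n, best_sim = -1, -np.inf
    for n in intermediates:
        sim = max(cosine(s, col_samples[n]) for s in search_cols)
        if sim > best_sim:
            best_n, best_sim = int(n), sim
    return best_n

search_rows = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
search_cols = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]
row_samples = [np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0])]
col_samples = [np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0])]
winner = two_stage_match(search_rows, search_cols, row_samples, col_samples)
```

Only `top_k` templates survive the coarse pass, so the finer row-matching pass touches far fewer candidates than the total number of templates.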
  • S200 includes:
  • S236 Determine the maximum of the similarities between the sample column vectors of the intermediate template samples and the search column vectors of the corresponding column regions in the to-be-matched region, and use the intermediate template sample corresponding to that maximum as the matching template sample.
  • S231 may calculate the column gray mean value of each column of pixels of the to-be-matched region (i.e., the average of the gray values Q(i, j) over each column), where Q(i, j) represents the gray value of a pixel in the area to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K w represents the number of columns of the template sample.
  • As before, the gray values of pixels in the region to be matched that lie beyond the extent of the template sample are treated as 0 when comparing with the template sample.
  • The L w column gray mean values of the to-be-matched region can be obtained by calculation, the gray mean values of any K w adjacent columns are defined as one search row vector, and L w -K w +1 search row vectors can be defined.
  • For each template sample, its sample row vector is compared with the L w -K w +1 search row vectors of the region to be matched, so each template sample yields L w -K w +1 similarities, from which the maximum similarity is selected. If the total number of template samples is N, S232 determines N maximum similarities. S233 then selects, from these N maximum similarities, those greater than a predetermined threshold, or sorts them from largest to smallest and selects a predetermined number from the large-value end. The template samples corresponding to the selected maximum similarities are used as the intermediate template samples.
  • the column region corresponding to the intermediate template sample shown in FIG. 9 is determined in the to-be-matched region according to the correspondence between the sample row vector and the search row vector of the to-be-matched region.
  • S234 performs inverse matching on the to-be-matched area from the perspective of the intermediate template sample: within the determined column area of the to-be-matched area, the row gray mean value of each row of pixels in the column region is calculated, yielding L h row gray mean values, from which L h -K h +1 search column vectors of length K h are defined; the length of each search column vector is the same as that of the sample column vector of the intermediate template sample.
  • S235 calculates the similarity between the sample column vector of each intermediate template sample and the search column vectors of the corresponding column region in the to-be-matched region, and determines the maximum similarity corresponding to each intermediate template sample. S236 compares the maximum similarities determined in S235 to obtain the overall maximum; the intermediate template sample corresponding to this maximum is used as the matching template sample, and the search row vector and search column vector corresponding to this maximum determine the target region in the to-be-matched region.
  • In this embodiment, the column matching process is performed between the region to be matched and all template samples, the maximum similarity between each template sample and the to-be-matched region is calculated, and the best few template samples are selected as intermediates according to each template sample's maximum similarity. The row matching process then calculates row gray mean values only within the determined column regions: S234 defines the search column vectors of the region to be matched from the row gray mean values calculated in each column region, and S235 compares these search column vectors with the sample column vectors of the intermediate template samples to calculate similarities.
  • If the number of intermediate template samples is n, n*(L h -K h +1) similarities will be obtained; the maximum is selected from them, and the intermediate template sample corresponding to the maximum is taken as the matching template sample.
  • When selecting the maximum from the n*(L h -K h +1) similarities, the maximum similarity corresponding to each intermediate template sample may be calculated first and the overall maximum then selected from the n maximum similarities; or the maximum may be selected directly from all n*(L h -K h +1) similarities; other methods may also be used, which are not limited herein.
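Both selection orders pick the same matching template sample; a quick NumPy check (the similarity matrix here is random and purely illustrative):

```python
import numpy as np

# sims[n, k]: similarity of intermediate template n's sample column vector
# against the k-th search column vector (n = 4, L_h - K_h + 1 = 6 here).
rng = np.random.default_rng(0)
sims = rng.random((4, 6))

# Order 1: per-template maxima first, then the overall maximum.
winner_a = int(sims.max(axis=1).argmax())
# Order 2: global maximum over all n * (L_h - K_h + 1) similarities directly.
winner_b = int(np.unravel_index(sims.argmax(), sims.shape)[0])
```

The per-template order is often preferable in practice because the intermediate maxima can be reused when reporting each template's best score.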
  • In this way, the best-matching row/column regions of the to-be-matched region and the template samples can be located first, and column/row matching is then performed only within those regions to determine a more accurate target region.
  • the matching result can be accurate to the position of the pixel in the image, so accurate image matching can be performed and a more accurate matching result than the prior art can be obtained.
  • For example, the angle between the search row vector defined by the gray mean values of columns 0 to K w -1 and the sample row vector of the n-th template sample may be calculated, and the cosine of this angle (the inner product of the two vectors divided by the product of their norms) can be used as the similarity d n,0 of the two vectors.
  • Similarly, the similarity between a search column vector of the to-be-matched region and the sample column vector of a template sample may be calculated in the same manner.
  • Different numbers of similarities are obtained depending on the calculation parameters and comparison objects; finally, the pair of sample row/column vector and search row/column vector with the smallest difference and highest matching degree can be selected according to the similarities, and the matching template sample and the target area in the area to be matched can be accurately determined accordingly.
  • the pre-creation process of the template samples is described in detail below in an embodiment.
  • The pre-creation process of the template samples provides the basis for the similarity calculations in the above embodiments, and is completed at least before the similarity calculation steps are performed.
  • the pre-creation process of the template sample includes:
  • S402. Calculate the row average value of the gray values of each row of pixels of the binarized sample and the column average value of the gray values of each column of pixels; define a sample column vector of length K h from all row average values of the binarized sample, and a sample row vector of length K w from all column average values of the binarized sample;
  • The image-capture period may be set according to the actual situation, generally taking a picture every 3 to 5 degrees of rotation (roughly 72 to 120 captures per full revolution); this avoids the excessive time cost of capturing too many images while ensuring that the sampling rate meets the requirements.
  • Standard sample images require the use of standard positive samples to avoid inaccurate template samples generated after sampling based on non-standard samples.
  • The capture frame should face the standard sample squarely during capture, and the frame should not be too large, so as to avoid distortion in the captured sample caused by an oversized frame, which would make the template samples inaccurate.
  • The binarization processing described herein refers to converting the gray values of the image to 0 and 255; there are various conversion methods.
  • For example, threshold binarization may be used to extract the pattern from the grayscale image.
  • Other methods, such as grayscale top-hat and black-hat transforms or edge extraction, may also be used, which are not limited by the present invention.
  • In step S402, the binarized sample obtained by the binarization processing is traversed row by row and column by column to calculate the row gray mean values and the column gray mean values: the row gray mean value of each row and the column gray mean value of each column of the binarized sample are calculated from the gray values I(i, j), where I(i, j) represents the gray value of each pixel in the binarized sample, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K w and K h represent the number of columns and the number of rows of the binarized sample, respectively. The K w column gray mean values obtained in step S402 are defined as the sample row vector of the binarized sample, and the K h row gray mean values are defined as the sample column vector.
  • Step S403 attaches the sample column vector and sample row vector calculated in step S402 to the binarized sample to generate a template sample, and additionally numbers the template samples so that the matching template sample can be conveniently found in the subsequent matching process on the image.
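Steps S401 to S403 can be sketched as follows; the threshold value, dict layout, and data are assumptions, with `mean(axis=0)` giving the K w column gray means that form the sample row vector and `mean(axis=1)` the K h row gray means that form the sample column vector:

```python
import numpy as np

def create_template(capture, threshold=128, number=0):
    """Binarize one captured sample and attach its labeled sample vectors."""
    binary = np.where(capture >= threshold, 255, 0).astype(float)   # S401
    return {                                                        # S402/S403
        "number": number,                          # label for later lookup
        "image": binary,
        "sample_row_vector": binary.mean(axis=0),  # K_w column gray means
        "sample_col_vector": binary.mean(axis=1),  # K_h row gray means
    }

capture = np.array([[10, 200], [250, 90], [130, 40]], dtype=float)  # toy 3x2 capture
tpl = create_template(capture, number=7)
```

Running this over every retained capture yields the N numbered template samples used during matching.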
  • Before the captured sample is binarized to obtain the binarized sample, the method further includes:
  • deleting useless captured samples, where a useless captured sample is one whose capture angle differs from the previous captured sample by less than a predetermined threshold, or one containing no pattern.
  • Deleting useless captured samples effectively reduces analysis time; the number of remaining captured samples can be between 80 and 180, which meets the sampling analysis requirements.
  • In this embodiment, the traditional single complete image used as a template sample is replaced by multiple independent template samples as the basis for image matching; the image stitching process is therefore eliminated, avoiding the problem of inaccurate template samples caused by stitching. Moreover, the sample column vector and sample row vector are marked in each template sample, so high-precision image matching can be realized.
  • For example, the cylindrical standard positive sample in Figure 6 is rotated while the capture frame is aligned with the middle of the sample's front side, images of the sample surface are taken to obtain several captured samples, and the useless samples are deleted.
  • The remaining N captured samples are binarized to obtain binarized samples as shown in Figure 7; row analysis and column analysis are then performed on the N binarized samples, the sample column vector and sample row vector of each binarized sample are defined, and each binarized sample is labeled n, yielding N template samples. The sample column vector of each template sample can be recorded as P h (n), and the sample row vector as P v (n); at this point, creation of the template samples is complete.
  • The image is first binarized, and the area to be matched as shown in Figure 8 is then selected in the image according to the search box; the size of the area to be matched (corresponding to the black background in Figure 8) is larger than the size of the template sample.
  • Suppose the template samples corresponding to the five largest similarities are used as the intermediate template samples; the search row vectors of the region to be matched corresponding to these five largest similarities determine five corresponding column regions, completing the column matching process for the region to be matched.
  • FIG. 9 shows a column region in a region to be matched corresponding to a single intermediate template sample.
  • Similarly to the above column matching process, the gray mean values of any K h adjacent rows can be taken from top to bottom to define the search column vectors.
  • an embodiment of the present invention provides an image matching apparatus, including:
  • the selecting module 11 is configured to determine, in the image, the area to be matched according to the search box, wherein the size of the area to be matched is larger than the size of the template sample;
  • the matching template sample obtaining module 12 is configured to calculate the row/column gray mean values of each row/column of pixels of the to-be-matched region and, for each pre-created template sample, the similarity between the corresponding row/column gray mean values of the to-be-matched region and those of the template sample.
  • the positioning module 13 is configured to use, as the target area, an area determined by the row/column pixel points of the to-be-matched area corresponding to the maximum similarity.
  • The selection module 11 uses the area to be matched determined in the image according to the search box as the area for comparison with the template samples in the subsequent process, and the size of the area to be matched is set larger than the size of the template sample so that matching does not fail when the position of the pattern in the image is not fixed.
  • The finally matched template sample will correspond to a part of the area to be matched; for example, in Figure 8, the range of the black background corresponds to the size of the area to be matched, and the frame line within the black background represents the size of the template sample.
  • the matching template sample obtaining module 12 first calculates a row/column gray mean value of the to-be-matched region, and compares each row of grayscale mean values of the to-be-matched regions with each row of grayscale mean values of the plurality of template samples, or The gray mean values of the columns of the to-be-matched regions are compared with the grayscale mean values of the columns of the plurality of template samples, respectively.
  • since the size of the region to be matched is larger than the size of the template sample, the number of row/column gray mean values of the region to be matched is necessarily greater than the number of row/column gray mean values of the template sample.
  • during the comparison, for any template sample, the similarity between the row/column gray mean values of any consecutive rows/columns of pixels in the region to be matched, equal in number to the template sample's rows/columns, and the row/column gray mean values of the template sample is calculated; through this matching comparison, the matching template sample corresponding to the maximum similarity and the target region in the region to be matched can be found.
  • for example, in Figure 8, the region within the frame lines is determined to be the target region through the matching comparison, and the size of the determined target region is the same as the size of the template sample.
  • since different patterns correspond to different pixel gray values, the features of each row/column of pixels in the region to be matched can be determined from the row/column gray mean values of the region to be matched.
  • specifically, the image may be converted into a binary image before the row/column gray mean values are calculated; moreover, when calculating the row/column gray mean values, the gray values of the pixels in the part of the region to be matched that exceeds the template sample are all considered to be 0, so the number of pixels per row/column used in the calculation is taken to be the same as the number of pixels per row/column of the template sample.
  • if the row/column gray mean values of a template sample match a corresponding number of adjacent row/column gray mean values in the region to be matched, this proves that the pattern of the template sample is the same as, or very similar to, the pattern of the target region in the region to be matched.
  • matching template samples for industrial processing or other application scenarios can be determined and the target area to be processed can be located in the image. For example, pattern defect analysis, print quality analysis, and the like may be performed on the determined target area in an industrial process or other application scenarios.
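  • As an illustrative sketch of the gray-mean computation just described (assuming NumPy; the function name is hypothetical, and the division by the template's row/column counts reflects the zero-padding convention above):

```python
import numpy as np

def gray_means(region, tpl_rows, tpl_cols):
    """Row and column gray means of a region to be matched.

    Pixels of the region beyond the template size count as gray value 0,
    so sums are divided by the template's column/row counts (K_w, K_h)
    rather than by the region's own dimensions.
    """
    region = np.asarray(region, dtype=float)
    row_means = region.sum(axis=1) / tpl_cols   # one mean per row (L_h values)
    col_means = region.sum(axis=0) / tpl_rows   # one mean per column (L_w values)
    return row_means, col_means

# A 3x4 region matched against a 2x3 template (K_h = 2, K_w = 3):
region = [[0, 255, 0, 0],
          [255, 255, 0, 0],
          [0, 0, 0, 0]]
row_means, col_means = gray_means(region, tpl_rows=2, tpl_cols=3)
```

  • Dividing by the template dimensions rather than the region's is what makes these means directly comparable to the template's own row/column gray means.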
  • there are various ways in which the matching template sample obtaining module 12 can calculate the similarity between the row/column gray mean values of consecutive rows/columns in the region to be matched, equal in number to the template sample's rows/columns, and the row/column gray mean values of the template sample; the calculation schemes are illustrated by the following embodiments.
  • the matching template sample obtaining module 12 is configured to:
  • the template sample corresponding to the determined sample row vector is taken as the matching template sample.
  • In this embodiment, the matching template sample obtaining module 12 can calculate the column gray mean value of each column of pixels of the region to be matched according to Q̄_c(j) = (1/K_h) · Σ_{i=0}^{K_h-1} Q(i, j), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K_w and K_h represent the number of columns and rows of the template sample, respectively.
  • Through this calculation, the L_w column gray mean values of the region to be matched are obtained, and any adjacent K_w of them are defined as one search row vector: for example, the 0th to (K_w-1)th column gray mean values can be defined as one search row vector, or the 1st to K_w-th, the 2nd to (K_w+1)th, and the 3rd to (K_w+2)th column gray mean values can each be defined as search row vectors, and so on; L_w - K_w + 1 search row vectors can be defined in total.
  • the length of the search row vector is the same as the length of the sample row vector in the template sample.
  • For the L_w - K_w + 1 search row vectors defined above, each search row vector is compared with the sample row vectors of the several pre-created template samples, and the similarity between search row vector and sample row vector is calculated. If the number of template samples is N, then N*(L_w - K_w + 1) similarities are obtained; the maximum similarity is selected from these N*(L_w - K_w + 1) similarities, the template sample corresponding to the sample row vector with the maximum similarity is determined as the matching template sample, and the positioning module 13 determines the region of the pixels in the region to be matched corresponding to the search row vector with the maximum similarity as the target region.
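  • The search-row-vector construction and maximum-similarity selection described above can be sketched as follows (hypothetical helper names; cosine similarity is used here as a stand-in similarity measure, in the spirit of the vector-angle embodiment described later):

```python
import numpy as np

def search_row_vectors(col_means, K_w):
    """All L_w - K_w + 1 windows of K_w adjacent column gray means."""
    return [np.asarray(col_means[s:s + K_w], dtype=float)
            for s in range(len(col_means) - K_w + 1)]

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def best_match(col_means, sample_row_vectors):
    """Return (similarity, template index, window offset) of the maximum
    similarity over all N * (L_w - K_w + 1) comparisons."""
    best = (-2.0, -1, -1)
    for n, sample in enumerate(sample_row_vectors):
        for s, search in enumerate(search_row_vectors(col_means, len(sample))):
            best = max(best, (cosine(search, sample), n, s))
    return best

col_means = [10.0, 200.0, 10.0, 5.0, 5.0, 5.0]   # L_w = 6 column gray means
templates = [np.array([100.0, 5.0, 80.0]),        # pattern not present in the region
             np.array([10.0, 200.0, 10.0])]       # pattern present at offset 0
sim, n, offset = best_match(col_means, templates)
```

  • The row-matching embodiment below is the exact mirror of this sketch, with row gray means, search column vectors, and sample column vectors.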
  • the matching template sample obtaining module 12 is configured to:
  • a template sample corresponding to the determined sample column vector is used as a matching template sample.
  • In this embodiment, the matching template sample obtaining module 12 can calculate the row gray mean value of each row of pixels of the region to be matched according to Q̄_r(i) = (1/K_w) · Σ_{j=0}^{K_w-1} Q(i, j), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K_w and K_h represent the number of columns and rows of the template sample, respectively.
  • Through this calculation, the L_h row gray mean values of the region to be matched are obtained, and any adjacent K_h of them are defined as one search column vector: for example, the 0th to (K_h-1)th row gray mean values can be defined as one search column vector, or the 1st to K_h-th, the 2nd to (K_h+1)th, and the 3rd to (K_h+2)th row gray mean values can each be defined as search column vectors, and so on; L_h - K_h + 1 search column vectors can be defined in total.
  • the length of the search column vector is the same as the length of the sample column vector in the template sample.
  • For the L_h - K_h + 1 search column vectors defined above, each search column vector is compared with the sample column vectors of the several pre-created template samples, and the similarity between search column vector and sample column vector is calculated. If the number of template samples is N, then N*(L_h - K_h + 1) similarities are obtained; the maximum similarity is selected from these N*(L_h - K_h + 1) similarities, the template sample corresponding to the sample column vector with the maximum similarity is determined as the matching template sample, and the positioning module 13 determines the region of the pixels in the region to be matched corresponding to the search column vector with the maximum similarity as the target region.
  • the matching template sample obtaining module 12 is configured to:
  • the intermediate template sample corresponding to the determined sample column vector is taken as the matching template sample.
  • In this embodiment, the matching template sample obtaining module 12 can calculate the column gray mean value of each column of pixels of the region to be matched according to Q̄_c(j) = (1/K_h) · Σ_{i=0}^{K_h-1} Q(i, j), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K_w and K_h represent the number of columns and rows of the template sample, respectively.
  • Through this calculation, the L_w column gray mean values of the region to be matched are obtained, and any adjacent K_w of them are defined as one search row vector, defining L_w - K_w + 1 search row vectors in total, each with the same length as the sample row vectors of the template samples.
  • The matching template sample obtaining module 12 compares each search row vector with the sample row vectors of the several pre-created template samples and calculates the similarity between search row vector and sample row vector. If the number of template samples is N, then N*(L_w - K_w + 1) similarities are obtained; from these, the similarities greater than a predetermined threshold, or a predetermined number of the largest similarities after sorting from largest to smallest, are selected, and the matching template sample obtaining module 12 determines the template samples corresponding to the selected similarities as intermediate template samples. The number of intermediate template samples determined by this column matching process will be far smaller than the total number of template samples.
  • After the row matching process is then performed on the region to be matched and the intermediate template samples, the maximum similarity is selected from the calculated similarities between the search column vectors of the region to be matched and the sample column vectors of the intermediate template samples, and the intermediate template sample corresponding to the maximum similarity is determined as the matching template sample.
  • In this embodiment, the column matching process is first performed on the region to be matched and all template samples, and several template samples with higher column matching degrees are selected as intermediate template samples; the row matching process is then performed on the region to be matched and the intermediate template samples, and the intermediate template sample with the highest row matching degree is selected as the matching template sample. Through the two matching processes, the matching relationship between the region to be matched and the template samples can be determined more accurately, and thus the target region in the region to be matched can be determined more accurately.
  • It should be noted that this embodiment only illustrates the scheme in which the matching template sample obtaining module 12 first performs the column matching process on the region to be matched and all template samples, and then performs the row matching process on the region to be matched and the intermediate template samples; in the same way, a scheme in which the module first performs the row matching process on the region to be matched and all template samples, and then performs the column matching process on the region to be matched and the intermediate template samples, can obtain equally accurate matching results.
  • the matching template sample obtaining module 12 is configured to:
  • In this embodiment, the matching template sample obtaining module 12 can calculate the column mean value of the gray values of each column of pixels of the region to be matched according to Q̄_c(j) = (1/K_h) · Σ_{i=0}^{K_h-1} Q(i, j), where Q(i, j) represents the gray value of a pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K_w and K_h represent the number of columns and rows of the template sample, respectively.
  • Here, compared with the template sample, the extra pixels in each column of the region to be matched are all considered to have gray value 0; the number of effective column pixels used as the basis of the column gray mean calculation is defined to be the same as the number of column pixels of the template sample, so that the calculated column gray mean values yield accurate similarity results when compared with those of the template samples.
  • For each template sample, its sample row vector is compared with the L_w - K_w + 1 search row vectors of the region to be matched, so each template sample obtains L_w - K_w + 1 similarities, from which its maximum similarity is selected. If the total number of template samples is N, the matching template sample obtaining module 12 determines N maximum similarities. From these N maximum similarities, a predetermined number of them, or those greater than a predetermined threshold, are selected; when selecting by a predetermined number, the N maximum similarities are sorted from largest to smallest and the predetermined number of largest ones are taken. The template samples corresponding to the selected maximum similarities serve as the intermediate template samples.
  • For each intermediate template sample, the column region corresponding to it, as shown in FIG. 9, can be determined in the region to be matched according to the correspondence between its sample row vector and the search row vectors of the region to be matched.
  • The matching template sample obtaining module 12 then performs reverse matching on the region to be matched from the perspective of each intermediate template sample: within the determined column region of the region to be matched, it continues to calculate the row gray mean value of each row of pixels in the column region according to Q̄_r(i) = (1/K_w) · Σ_{j ∈ column region} Q(i, j), obtaining L_h row gray mean values, from which L_h - K_h + 1 search column vectors of length K_h are defined; each search column vector has the same length as the sample column vectors of the intermediate template samples.
  • The matching template sample obtaining module 12 calculates the similarity between the sample column vectors of the intermediate template samples and the search column vectors of the corresponding column regions in the region to be matched, and determines the maximum similarity corresponding to each intermediate template sample; by comparing the determined maximum similarities it obtains the overall maximum, the intermediate template sample corresponding to this maximum is used as the matching template sample, and the search row vector and search column vector corresponding to this maximum determine the target region in the region to be matched.
  • In this embodiment, the column matching process is first performed on the region to be matched and all template samples, the maximum similarity between each template sample and the region to be matched is calculated, and the best few intermediate template samples, together with their best-matching column regions in the region to be matched, are selected according to the maximum similarity of each template sample.
  • The subsequent row matching process calculates row gray mean values only within the determined column regions: the matching template sample obtaining module 12 defines the search column vectors of the region to be matched from the row gray mean values calculated over the column regions, and compares the search column vectors calculated in this way with the sample column vectors of the intermediate template samples.
  • If the number of intermediate template samples is n, then n*(L_h - K_h + 1) similarities are obtained; the matching template sample obtaining module 12 selects the maximum from these n*(L_h - K_h + 1) similarities and takes the intermediate template sample corresponding to the maximum as the matching template sample. To select the maximum, the module may first calculate the maximum similarity corresponding to each intermediate template sample and then select the maximum among the n maximum similarities, or select the maximum directly from the n*(L_h - K_h + 1) similarities, or use other methods, which are not limited here.
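  • The two-stage column-then-row flow above can be sketched as follows (hypothetical names; cosine similarity stands in for the unspecified similarity measure, and top-k selection stands in for thresholding):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def windows(means, k):
    """All sliding windows of k adjacent gray means."""
    return [np.asarray(means[s:s + k], dtype=float)
            for s in range(len(means) - k + 1)]

def coarse_to_fine(region, templates, K_h, K_w, top_k=2):
    """Stage 1 (column matching): keep the top_k templates, each with its
    best-matching column offset. Stage 2 (row matching): compare sample
    column vectors only inside each locked column region."""
    region = np.asarray(region, dtype=float)
    col_means = region.sum(axis=0) / K_h          # extra pixels count as 0
    stage1 = []
    for n, (row_vec, _col_vec) in enumerate(templates):
        sims = [cosine(w, row_vec) for w in windows(col_means, K_w)]
        stage1.append((max(sims), int(np.argmax(sims)), n))
    intermediates = sorted(stage1, reverse=True)[:top_k]
    best = (-2.0, -1)
    for _sim, off, n in intermediates:
        row_means = region[:, off:off + K_w].sum(axis=1) / K_w
        for w in windows(row_means, K_h):
            best = max(best, (cosine(w, templates[n][1]), n))
    return best[1]   # index of the matching template sample

# Toy 3x4 region containing template 0's 2x2 pattern at rows 1-2, cols 1-2.
region = [[0,   0,   0, 0],
          [0, 255, 255, 0],
          [0, 255,   0, 0]]
templates = [(np.array([255.0, 127.5]), np.array([255.0, 127.5])),  # embedded pattern
             (np.array([127.5, 127.5]), np.array([127.5, 127.5]))]  # distractor
match = coarse_to_fine(region, templates, K_h=2, K_w=2)
```

  • Restricting stage 2 to the locked column regions is what keeps the second pass cheap: only n intermediate templates and L_h - K_h + 1 windows each, instead of all N templates over the whole region.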
  • It should be noted that this embodiment only illustrates the scheme in which the matching template sample obtaining module 12 first performs the column matching process on the region to be matched and all template samples, and then performs the row matching process on the column regions of the region to be matched determined by the column matching; in the same way, a scheme in which the module first performs the row matching process on the region to be matched and all template samples, and then performs the column matching process on the row regions of the region to be matched determined by the row matching together with the intermediate template samples, can obtain equally accurate matching results. Both schemes fall within the scope of protection of the present invention.
  • the most matching row/column regions of the to-be-matched region and the template sample can be locked first, and then the column/row matching processing is performed on the matched row/column regions to determine a more accurate target region.
  • the matching result can be accurate to the position of the pixel in the image, so accurate image matching can be performed and a more accurate matching result than the prior art can be obtained.
  • The matching template sample obtaining module 12 may calculate the similarity in a variety of manners; the following illustrates a scheme in which the similarity is calculated from the vector angle.
  • the matching template sample obtaining module 12 is configured to:
  • For example, the angle between the search row vector defined by the 0th to (K_w-1)th column gray mean values and a sample row vector of a template sample can be calculated from the normalized inner product of the two vectors, cos θ = (u · v) / (|u| · |v|), and the obtained value can be used as the similarity d_{n,0} of the two vectors.
  • Similarly, the similarity between a search column vector of the region to be matched and a sample column vector of a template sample can be calculated in the same way.
  • Different numbers of similarities are obtained for different calculation parameters and comparison objects; finally, the pair of sample row/column vector and search row/column vector with the smallest difference and the highest matching degree can be selected according to the similarities, and the matching template sample and the target region in the region to be matched can be accurately determined accordingly.
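  • A minimal sketch of this vector-angle similarity, assuming the normalized-inner-product form reconstructed above (names are illustrative):

```python
import math

def vector_angle_similarity(u, v):
    """cos(theta) between two gray-mean vectors; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Search row vector from the 0th..(K_w-1)th column means vs. a sample row vector:
search = [12.0, 200.0, 12.0]
sample = [24.0, 400.0, 24.0]   # same pattern, doubled contrast
d_n0 = vector_angle_similarity(search, sample)
```

  • Because the angle ignores overall magnitude, two vectors describing the same pattern at different brightness levels still score close to 1.0, as in this example.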
  • the pre-creation process of the template samples is described in detail below with an embodiment.
  • The pre-creation process of the template samples provides the basis for the similarity calculations in the above embodiments.
  • a template sample pre-creation module 14 for:
  • images of the standard sample are captured according to the capture frame, obtaining a plurality of captured samples, and each captured sample is binarized to obtain a binarized sample, wherein the size of the capture frame is set to K_w * K_h;
  • each binarized sample is numbered, and the binarized samples, after being numbered and having a sample column vector and a sample row vector defined, are used as the template samples.
  • The capture cycle can be set according to the actual situation, generally capturing an image every 3 to 5 degrees; this avoids the excessive time cost of capturing too many images while ensuring that the sampling rate of the captured samples meets the requirements.
  • The standard sample image requires the use of a standard positive sample, to avoid the inaccurate template samples that would be generated by capturing a non-standard sample.
  • The capture frame should directly face the standard sample image during capture, and the size of the capture frame should not be too large, so as to avoid distortion of the captured sample caused by an excessive capture size, which would result in inaccurate template samples.
  • The binarization processing described here refers to converting the gray values of the image to 0 and 255, and there are various conversion methods: a grayscale binarization threshold may be used to extract the pattern, or other methods such as an image grayscale top-hat, black-hat, or edge extraction may be used, which are not limited in the present invention.
  • Row-by-row and column-by-column calculations of the row gray mean value and the column gray mean value are performed on the binarized samples obtained by the binarization processing. Specifically, the row gray mean values of a binarized sample are calculated according to r(i) = (1/K_w) · Σ_{j=0}^{K_w-1} I(i, j), and its column gray mean values according to c(j) = (1/K_h) · Σ_{i=0}^{K_h-1} I(i, j), where I(i, j) represents the gray value of each pixel in the binarized sample, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K_w and K_h represent the number of columns and rows of the binarized sample, respectively.
  • According to the obtained K_w column gray mean values and K_h row gray mean values, the template sample pre-creation module 14 defines the K_w column gray mean values of a binarized sample as its sample row vector, and its K_h row gray mean values as its sample column vector.
  • The template sample pre-creation module 14 marks the calculated sample column vector and sample row vector on the binarized sample to generate a template sample, and further numbers the template samples so that the matching template sample can easily be found in subsequent matching of images.
  • the template sample pre-creation module 14 is configured to:
  • Useless captured samples are deleted; useless captured samples include captured samples whose capture angle differs from that of the previous captured sample by less than a predetermined threshold, and captured samples without a pattern. Deleting useless captured samples can effectively reduce the sample analysis time; the number of remaining captured samples can be kept between 80 and 180 to meet the sampling analysis requirements.
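  • The per-sample part of this pre-creation pipeline can be sketched as follows (hypothetical names; a fixed gray threshold stands in for whichever binarization method is chosen):

```python
import numpy as np

def make_template_sample(capture, threshold=128):
    """Binarize one captured sample and attach its sample vectors.

    Returns the binarized K_h x K_w image plus the sample row vector
    (K_w column gray means) and the sample column vector (K_h row gray means).
    """
    capture = np.asarray(capture)
    binary = np.where(capture >= threshold, 255, 0).astype(float)
    K_h, K_w = binary.shape
    sample_row_vector = binary.sum(axis=0) / K_h   # one mean per column
    sample_col_vector = binary.sum(axis=1) / K_w   # one mean per row
    return binary, sample_row_vector, sample_col_vector

capture = [[10, 200],
           [250, 40]]
binary, row_vec, col_vec = make_template_sample(capture)
```

  • Each returned pair of vectors is what later matching compares against the search row/column vectors of the region to be matched; numbering the returned samples is a trivial enumeration left out of the sketch.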
  • This embodiment replaces the traditional single complete image used as a template sample with multiple independent template samples as the basis for image matching, eliminating the image stitching process and thereby avoiding the inaccurate template samples that stitching brings; moreover, since a sample column vector and a sample row vector are marked on each template sample in this embodiment, high-precision image matching can be realized.
  • The image matching method and apparatus provided by the embodiments of the present invention can accurately match various patterns in an image, including dense patterns; by using multiple independent template samples the stitching process is omitted, the matching results are accurate, and the calculation speed is faster.
  • an embodiment of the present invention provides an image matching device, including: a memory 401 and a processor 402, where
  • the memory 401 is configured to store one or more instructions, where the one or more instructions are used by the processor 402 to invoke execution;
  • the processor 402 is configured to determine, in the image, an area to be matched according to the search box, where a size of the area to be matched is larger than a size of the template sample;
  • and to take the region determined by the rows/columns of pixels of the region to be matched corresponding to the maximum similarity as the target region.
  • the processor 402 is further configured to calculate column gray mean values according to the gray values of each column of pixels of the region to be matched, and to define L_w - K_w + 1 search row vectors from any adjacent K_w column gray mean values, where L_w represents the number of columns of the region to be matched and K_w represents the number of columns of the template sample, L_w > K_w;
  • the template sample corresponding to the determined sample row vector is taken as the matching template sample.
  • the processor 402 is further configured to calculate row gray mean values according to the gray values of each row of pixels of the region to be matched, and to define L_h - K_h + 1 search column vectors from any adjacent K_h row gray mean values, where L_h represents the number of rows of the region to be matched and K_h represents the number of rows of the template sample, L_h > K_h;
  • a template sample corresponding to the determined sample column vector is used as a matching template sample.
  • the processor 402 is further configured to calculate column gray mean values according to the gray values of each column of pixels of the region to be matched, and to define L_w - K_w + 1 search row vectors from any adjacent K_w column gray mean values, where L_w represents the number of columns of the region to be matched and K_w represents the number of columns of the template sample, L_w > K_w;
  • the intermediate template sample corresponding to the determined sample column vector is taken as the matching template sample.
  • the processor 402 is further configured to: calculate column gray mean values according to the gray values of each column of pixels of the region to be matched, and define L_w - K_w + 1 search row vectors from any adjacent K_w column gray mean values, where L_w represents the number of columns of the region to be matched and K_w represents the number of columns of the template sample, L_w > K_w;
  • the processor 402 is further configured to: capture images of the standard sample image according to the capture frame, obtain a plurality of captured samples, and perform binarization processing on the captured samples to obtain binarized samples, wherein the size of the capture frame is set to K_w * K_h;
  • each binarized sample is numbered, and the binarized samples, after being numbered and having a sample column vector and a sample row vector defined, are used as the template samples.
  • the processor 402 is further configured to delete useless captured samples, the useless captured samples including: captured samples whose capture angle differs from that of the previous captured sample by less than a predetermined threshold, and captured samples without a pattern.
  • the processor 402 is further configured to:
  • The device embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without creative effort.


Abstract

An image matching method and apparatus, including: determining a region to be matched in an image according to a search box, wherein the size of the region to be matched is larger than the size of the template samples (S100); calculating the row/column gray mean values of the gray values of each row/column of pixels of the region to be matched, and, for any pre-created template sample, calculating the similarity between the row/column gray mean values of any consecutive rows/columns of pixels in the region to be matched, equal in number to the rows/columns of the template sample, and the row/column gray mean values of the template sample, taking the template sample corresponding to the maximum similarity as the matching template sample (S200); and taking the region determined by the rows/columns of pixels of the region to be matched corresponding to the maximum similarity as the target region (S300). The above method and apparatus eliminate the prior-art process of stitching after image capture, avoiding the inaccurate template samples caused by stitching; matching the image against multiple independent template samples yields accurate matching results.

Description

An image matching method and apparatus
Cross-Reference
This application claims the benefit of Chinese Patent Application No. 201511017712.5, entitled "An image matching method and apparatus" and filed on December 29, 2015, which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image matching method and apparatus.
Background
When a template sample needs to be generated for a pattern on a curved surface such as a cylinder, a conventional capture can only acquire part of the front of the pattern in a single shot, and the pattern on all faces cannot be acquired in one image.
To address this, the prior art uses a line-scan camera to acquire the image of each column on the curved surface and then stitches the acquired columns into a complete template sample; however, line-scan cameras are costly and bulky, which places a heavy burden on compact, low-cost equipment.
In another prior-art solution, every part of the pattern is captured and an image stitching algorithm assembles the captures directly into a complete template sample. However, this solution demands high capture precision, and since the parts of the curved surface nearer its two sides are more severely distorted, the capture precision can hardly meet the algorithm's requirements and the stitching effect in the distorted parts is poor. Moreover, for patterns with few features the stitching algorithm performs badly and may even produce inaccurate template samples, making accurate matching of curved-surface images impossible.
Summary
The present invention provides an image matching method and apparatus to solve the prior-art problem that curved-surface images cannot be conveniently and accurately matched.
The present invention provides an image matching method, including:
S101: determining a region to be matched in an image according to a search box, wherein the size of the region to be matched is larger than the size of the template samples;
S102: calculating the row/column gray mean values of the gray values of each row/column of pixels of the region to be matched; for any pre-created template sample, calculating the similarity between the row/column gray mean values of any consecutive rows/columns of pixels in the region to be matched, equal in number to the rows/columns of the template sample, and the row/column gray mean values of the template sample; and taking the template sample corresponding to the maximum similarity as the matching template sample;
S103: taking the region determined by the rows/columns of pixels of the region to be matched corresponding to the maximum similarity as the target region.
The present invention further provides an image matching apparatus, including:
a selecting module, configured to determine a region to be matched in an image according to a search box, wherein the size of the region to be matched is larger than the size of the template samples;
a matching template sample obtaining module, configured to calculate the row/column gray mean values of the gray values of each row/column of pixels of the region to be matched, compare the row/column gray mean values of the region to be matched with those of several pre-created template samples, and determine the template sample with the highest similarity to a set of adjacent row/column gray mean values within the region to be matched as the matching template sample;
a positioning module, configured to determine the region in the region to be matched that matches the row/column gray mean values of the matching template sample as the target region.
The present invention further provides an image matching device, including a memory and a processor, wherein
the memory is configured to store one or more instructions for the processor to invoke and execute;
the processor is configured to determine a region to be matched in an image according to a search box, wherein the size of the region to be matched is larger than the size of the template samples;
to calculate the row/column gray mean values of the gray values of each row/column of pixels of the region to be matched, and, for any pre-created template sample, calculate the similarity between the row/column gray mean values of any consecutive rows/columns of pixels in the region to be matched, equal in number to the rows/columns of the template sample, and the row/column gray mean values of the template sample, taking the template sample corresponding to the maximum similarity as the matching template sample;
and to take the region determined by the rows/columns of pixels of the region to be matched corresponding to the maximum similarity as the target region.
Compared with the prior art, the present invention can achieve the following technical effects:
The image matching method and apparatus provided by the present invention pre-create multiple template samples as the basis for matching an image, eliminating the prior-art process of stitching after capture and avoiding the inaccurate template samples that stitching causes; analyzing and matching the image against multiple independent template samples yields accurate matching results. By comparing the row/column gray mean values of the region to be matched with those of several template samples, the similarities of the row/column gray mean values are obtained, and the matching template sample and the target region in the region to be matched are further determined according to the maximum similarity. Analyzing the image features through the gray values of each row/column of pixels in the region to be matched, and using them as the basis for matching, further ensures the accuracy of the matching.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their descriptions serve to explain the present invention and do not unduly limit it. In the drawings:
FIG. 1 is a flowchart of an image matching method of the present invention;
FIG. 2 is a flowchart of an embodiment of the image matching method of the present invention;
FIG. 3 is a flowchart of an embodiment of the image matching method of the present invention;
FIG. 4 is a flowchart of an embodiment of the image matching method of the present invention;
FIG. 5 is a flowchart of an embodiment of the image matching method of the present invention;
FIG. 6 is a schematic diagram of the capture process for a standard sample image of the present invention;
FIG. 7 is a schematic diagram of a binarized sample in the template sample creation process of the present invention;
FIG. 8 is a schematic diagram of a region to be matched of the present invention;
FIG. 9 is a schematic diagram of the column regions of the region to be matched determined by column matching in the present invention;
FIG. 10 is a schematic structural diagram of an image matching apparatus of the present invention;
FIG. 11 is a schematic structural diagram of an embodiment of the image matching apparatus of the present invention;
FIG. 12 is a schematic structural diagram of an image matching device of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The embodiments of the present invention provide an image matching method and apparatus applicable to image inspection scenarios. When matching and inspecting images on a curved surface, the prior art usually uses a line-scan camera to scan columns and stitch them into one complete image as the template sample, which is too costly; or it uses an image stitching algorithm to stitch partial captures of the image directly into one complete image as the template sample, whose poor stitching quality makes the matching result inaccurate. The matching method and apparatus provided by the embodiments of the present invention overcome these defects of the prior art: several independent template samples are pre-created, the region to be matched in an image is matched against the several template samples respectively, and various quality parameters of the sample to be matched, such as whether the pattern is defective, can be judged from the matching result.
In addition, the image matching method and apparatus provided by the embodiments of the present invention can also be applied to other image matching scenarios, which is not limited here.
Referring to FIG. 1, an embodiment of the present invention provides an image matching method, including:
S100: determining a region to be matched in an image according to a search box, wherein the size of the region to be matched is larger than the size of the template samples;
S200: calculating the row/column gray mean values of the gray values of each row/column of pixels of the region to be matched; for any pre-created template sample, calculating the similarity between the row/column gray mean values of any consecutive rows/columns of pixels in the region to be matched, equal in number to the rows/columns of the template sample, and the row/column gray mean values of the template sample; and taking the template sample corresponding to the maximum similarity as the matching template sample;
S300: taking the region determined by the rows/columns of pixels of the region to be matched corresponding to the maximum similarity as the target region.
In S100, the region to be matched determined in the image according to the search box serves as the region compared against the template samples in the subsequent process; its size is set larger than the size of the template samples to avoid inaccurate matching caused by the position of the pattern in the image not being fixed. The finally matched template sample corresponds to a part of the region to be matched: for example, in FIG. 8, the black background corresponds to the size of the region to be matched, while the frame within the black background represents the size of the template sample.
In S200, the row/column gray mean values of the region to be matched are first calculated, and the row gray mean values of the region to be matched are compared with the row gray mean values of the several template samples, or the column gray mean values of the region to be matched are compared with the column gray mean values of the several template samples. Since the size of the region to be matched is larger than that of the template samples, the number of row/column gray mean values of the region to be matched is necessarily greater than that of the template samples; during the comparison, for any template sample, the similarity between the row/column gray mean values of any consecutive rows/columns in the region to be matched, equal in number to the template sample's rows/columns, and the row/column gray mean values of the template sample is calculated, and the matching comparison identifies the matching template sample corresponding to the maximum similarity and the target region in the region to be matched. For example, in FIG. 8, the region within the frame is determined as the target region by the matching comparison, and its size is the same as the size of the template sample.
Since different patterns correspond to different pixel gray values, this embodiment determines the features of each row/column of pixels in the region to be matched from its row/column gray mean values. Specifically, the image can be converted into a binary image before computing the row/column gray mean values; moreover, when computing them, the gray values of pixels in the part of the region to be matched exceeding the template sample are all considered to be 0, so the number of pixels per row/column used in the computation is taken to be the same as the number of pixels per row/column of the template sample. If the row/column gray mean values of a template sample match a corresponding number of adjacent row/column gray mean values in the region to be matched, the pattern of that template sample is the same as or extremely similar to the pattern of the target region in the region to be matched; in this way, the matching template sample for industrial processing or other application scenarios can be determined and the target region to be processed can be located in the image, for example for subsequent pattern defect analysis, print quality analysis, and so on.
There are various ways of calculating, in S200, the similarity between the row/column gray mean values of consecutive rows/columns in the region to be matched, equal in number to the template sample's rows/columns, and the row/column gray mean values of the template sample; the calculation schemes are illustrated by the following embodiments.
Referring to FIG. 2, in the image matching method provided by this embodiment of the present invention, S200 includes:
S201: calculating the column gray mean value of each column of pixels of the region to be matched, and defining L_w - K_w + 1 search row vectors from any adjacent K_w column gray mean values, where L_w represents the number of columns of the region to be matched and K_w represents the number of columns of the template samples, L_w > K_w;
S202: calculating the similarity between each search row vector of the region to be matched and the sample row vectors of the several template samples, and determining the pair of search row vector and sample row vector corresponding to the maximum similarity;
S203: taking the template sample corresponding to the determined sample row vector as the matching template sample.
In this embodiment, S201 can calculate the column gray mean value of each column of pixels of the region to be matched according to Q̄_c(j) = (1/K_h) · Σ_{i=0}^{K_h-1} Q(i, j), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K_w and K_h represent the number of columns and rows of the template samples, respectively. Through this calculation, the L_w column gray mean values of the region to be matched are obtained, and any adjacent K_w of them are defined as one search row vector: for example, the 0th to (K_w-1)th column gray mean values may be defined as one search row vector, or the 1st to K_w-th, the 2nd to (K_w+1)th, and the 3rd to (K_w+2)th column gray mean values may each be defined as search row vectors, and so on, defining L_w - K_w + 1 search row vectors in total, each with the same length as the sample row vectors of the template samples. For the L_w - K_w + 1 search row vectors defined in S201, S202 compares each search row vector with the sample row vectors of the several pre-created template samples and calculates the similarity between search row vector and sample row vector. If the number of template samples is N, then N*(L_w - K_w + 1) similarities are obtained; the maximum similarity is selected from them, the template sample corresponding to the sample row vector with the maximum similarity is determined as the matching template sample, and S300 takes the region determined by the pixels of the region to be matched corresponding to the search row vector with the maximum similarity as the target region.
Referring to FIG. 3, in the image matching method provided by this embodiment of the present invention, S200 includes:
S211: calculating the row gray mean value of each row of pixels of the region to be matched, and defining L_h - K_h + 1 search column vectors from any adjacent K_h row gray mean values, where L_h represents the number of rows of the region to be matched and K_h represents the number of rows of the template samples, L_h > K_h;
S212: calculating the similarity between each search column vector of the region to be matched and the sample column vectors of the several template samples, and determining the pair of search column vector and sample column vector corresponding to the maximum similarity;
S213: taking the template sample corresponding to the determined sample column vector as the matching template sample.
In this embodiment, S211 can calculate the row gray mean value of each row of pixels of the region to be matched according to Q̄_r(i) = (1/K_w) · Σ_{j=0}^{K_w-1} Q(i, j), where Q(i, j) represents the gray value of each pixel in the region to be matched, i represents the row coordinate of the pixel, j represents the column coordinate of the pixel, and K_w and K_h represent the number of columns and rows of the template samples, respectively. Through this calculation, the L_h row gray mean values of the region to be matched are obtained, and any adjacent K_h of them are defined as one search column vector: for example, the 0th to (K_h-1)th row gray mean values may be defined as one search column vector, or the 1st to K_h-th, the 2nd to (K_h+1)th, and the 3rd to (K_h+2)th row gray mean values may each be defined as search column vectors, and so on, defining L_h - K_h + 1 search column vectors in total, each with the same length as the sample column vectors of the template samples. For the L_h - K_h + 1 search column vectors defined in S211, S212 compares each search column vector with the sample column vectors of the several pre-created template samples and calculates the similarity between search column vector and sample column vector. If the number of template samples is N, then N*(L_h - K_h + 1) similarities are obtained; the maximum similarity is selected from them, the template sample corresponding to the sample column vector with the maximum similarity is determined as the matching template sample, and S300 takes the region determined by the pixels of the region to be matched corresponding to the search column vector with the maximum similarity as the target region.
参考图4,本发明实施例提供的图像匹配方法中,S102包括:
S221,根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
S222,计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出相似度大于预定阈值的多对搜索行向量和样本行向 量;
S223,将确定出的多个样本行向量对应的模板样本作为中间模板样本;
S224,根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
S225,计算所述待匹配区域的各搜索列向量与多个中间模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
S226,将确定出的样本列向量对应的中间模板样本作为匹配模板样本。
本实施例中,S221可根据
Figure PCTCN2016088691-appb-000003
计算所述待匹配区域的每列像素点的列灰度均值,其中Q(i,j)表示待匹配区域内的各像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kw表示模板样本的列数量。通过计算可获得所述待匹配区域的Lw个列灰度均值,将其中任意相邻的Kw个列灰度均值定义为一个搜索行向量,定义出Lh-Kh+1个搜索行向量。搜索行向量的长度均与模板样本中的样本行向量的长度相同。对于步骤S221定义的Lw-Kw+1个搜索行向量,S222将每一个搜索行向量都与预先创建的若干模板样本的样本行向量进行比较,并计算出搜索行向量和样本行向量的相似度。如果模板样本的数量为N,那么计算获得的相似度的个数将为N*(Lw-Kw+1),之后从N*(Lw-Kw+1)个相似度中选出大于预定阈值或者根据相似度由大到小排序后选出预定数量的排列在前端的相似度,S223将选出的相似度对应的模板样本确定为中间模板样本,经过S221~223的列匹配过程确定出的中间模板样本的数量将远远小于模板样本的总数量。再通过S224~226对待匹配区域和中间模板样本执行行匹配过程后,从计算获得的待匹配区域的各搜索列向量与多个中间模板样本的样本列向量的相似度 中选出最大相似度,并将最大相似度对应的中间模板样本确定为匹配模板样本。
本实施例中,先对待匹配区域与所有模板样本执行列匹配过程,从中选出列匹配度较高的多个模板样本作为中间模板样本,之后再对待匹配区域和中间模板样本执行行匹配过程,并从中选出行匹配度最高的中间模板样本作为匹配模板样本,通过两次匹配过程,可更准确地确定出待匹配区域与模板样本的匹配关系,从而更准确地确定出待匹配区域中的目标区域,以及用于对目标区域进行进一步分析处理的匹配模板样本。
另外,需要说明的是,本实施例中仅实例性的说明了先执行了对待匹配区域与所有模板样本的列匹配过程,后执行了对待匹配区域和中间模板样本的行匹配过程的方案,同理地,先执行待匹配区域与所有模板样本的行匹配过程,后执行待匹配区域和中间模板样本的列匹配过程的方案,也可获得同样准确的匹配结果。
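上述"先列匹配筛选中间模板样本、后行匹配定选匹配模板样本"的两阶段流程,可用如下Python代码示意(其中以向量夹角余弦作为相似度,函数名与数据组织方式均为说明性假设):

```python
import math

def cosine(a, b):
    """向量夹角余弦:值越大,两向量夹角越小,差别越小。"""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def two_stage_match(Q, templates, Kw, Kh, col_thresh):
    """先列匹配筛选、后行匹配定选的两阶段匹配示意。
    templates: 列表,每项为 (样本行向量 长Kw, 样本列向量 长Kh)。"""
    Lh, Lw = len(Q), len(Q[0])
    pv = [sum(Q[i][j] for i in range(Lh)) / Kh for j in range(Lw)]   # 列灰度均值
    ph = [sum(Q[i][j] for j in range(Lw)) / Kw for i in range(Lh)]   # 行灰度均值
    # 第一阶段:列匹配,保留最大相似度超过阈值的模板作为中间模板样本
    inter = [n for n, (rv, _) in enumerate(templates)
             if max(cosine(pv[m:m + Kw], rv) for m in range(Lw - Kw + 1)) > col_thresh]
    # 第二阶段:仅对中间模板样本做行匹配,最大相似度者即匹配模板样本
    best_n, best_d = None, -1.0
    for n in inter:
        cv = templates[n][1]
        d = max(cosine(ph[m:m + Kh], cv) for m in range(Lh - Kh + 1))
        if d > best_d:
            best_n, best_d = n, d
    return best_n, best_d
```

第一阶段大幅压缩候选模板数量,第二阶段只需在少量中间模板样本上计算行相似度,这正是两次匹配能兼顾准确性与运算量的原因。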
参考图8,本发明实施例提供的图像匹配方法中,S200包括:
S231,根据所述待匹配区域每列像素点的灰度值计算列灰度均值,根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
S232,计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出每个模板样本对应的最大相似度;
S233,按照确定出的每个模板样本对应的最大相似度,选取最大相似度高于预定阈值的模板样本作为中间模板样本;
S234,根据所述中间模板样本对应的搜索行向量确定所述待匹配区域中对应的列区域,计算确定出的所述待匹配区域中对应的列区域的每行像素点的行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
S235,计算所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度;
S236,确定出所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度中的最大值,并将所述最大值对应的中间模板样本作为匹配模板样本。
S231可根据
$$p_v(j)=\frac{1}{K_h}\sum_{i=0}^{L_h-1}Q(i,j)$$
计算所述待匹配区域的每列像素点的灰度值的列平均值,其中Q(i,j)表示待匹配区域内的像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kw表示模板样本的列数量。此处,认为待匹配区域与模板样本相比,每列多出的像素点的灰度值均为0,根据上述方式计算时,以待匹配区域中有效列像素点的数量作为列灰度均值的计算基准,且将有效列像素点的数量定义为与模板样本的列像素点数量相同,这样计算出的列灰度均值才能在计算与模板样本的列灰度均值的相似度时获得准确的相似度结果。通过计算可获得所述待匹配区域的Lw个列灰度均值,将其中任意相邻的Kw个列灰度均值定义为一个搜索行向量,可定义出Lw-Kw+1个搜索行向量。对于每个模板样本来说,其样本行向量将与待匹配区域的Lw-Kw+1个搜索行向量进行相似度的计算,每个模板样本将对应获得Lw-Kw+1个相似度,从Lw-Kw+1个相似度中选出最大相似度。如果模板样本的总数量为N,那么S232将确定出N个最大相似度。S233从N个最大相似度中选择预定数量或者大于预定阈值的最大相似度,根据预定数量选择时可将N个最大相似度进行由大到小的排序并从大数值的一端选取预定数量的最大相似度。选取出的最大相似度对应的模板样本将作为中间模板样 本。
对于每个中间模板样本,可根据其样本行向量与所述待匹配区域的搜索行向量的对应关系,在所述待匹配区域中确定出如图9所示的中间模板样本对应的列区域,S234从中间模板样本的角度,对所述待匹配区域进行反向匹配,在确定出的所述待匹配区域的列区域中,根据
$$p_h(i)=\frac{1}{K_w}\sum_{j=m}^{m+K_w-1}Q(i,j)$$
继续计算所述列区域内的每行像素点的行灰度均值,获得Lh个行灰度均值,根据获得的Lh个行灰度均值可定义出Lh-Kh+1个长度为Kh的搜索列向量,所述搜索列向量与所述中间模板样本的样本列向量的长度相同。之后S235计算所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度,确定出每个中间模板样本对应的最大相似度,S236通过比较S235确定出的多个最大相似度获得其中的最大值,该最大值对应的中间模板样本将作为匹配模板样本,该最大值对应的搜索行向量及搜索列向量可在待匹配区域中确定出目标区域。
本实施例中,先对待匹配区域与所有模板样本执行列匹配过程,计算出每个模板样本与待匹配区域的最大相似度,并根据每个模板样板的最大相似度选出最优的几个中间模板样本,并确定出这几个中间模板样本对应在待匹配区域中的最佳匹配的列区域;之后再对确定出的待匹配区域的列区域和中间模板样本执行行匹配过程,此时的行匹配过程仅在确定出的列区域内计算行灰度均值,S234将依据计算出的列区域的各行灰度均值定义待匹配区域的搜索列向量,以及S235以根据此种方式计算出的搜索列向量与中间模板样本的样本列向量进行相似度的计算。如果中间模板样本的数量为n时,那么将获得n*(Lh-Kh+1)个相似度,S235从n*(Lh-Kh+1)个相似度选出最大值,并将最大值对应的中间模板样本作为匹配模板样本。其中,S235从n*(Lh-Kh+1)个相似度选出最大值,可先计算出每个中间模板样本对应的 最大相似度,之后再从n个最大相似度中选择最大值,或者直接从n*(Lh-Kh+1)个相似度选出最大值,以及采用其它方式均可,在此不作限定。
另外,需要说明的是,本实施例中仅实例性的说明了先执行了对待匹配区域与所有模板样本的列匹配过程,后执行了列匹配过程确定出的待匹配区域的列区域和中间模板样本的行匹配过程的方案,同理地,先执行待匹配区域与所有模板样本的行匹配过程,后执行行匹配过程确定出的待匹配区域的行区域和中间模板样本的列匹配过程的方案,也可获得同样准确的匹配结果。两种方案都属于本发明的保护范围。通过本实施例可先锁定待匹配区域与模板样本的最匹配的几个行/列区域,之后再对分别匹配出的行/列区域进行列/行匹配处理,可确定出更加准确的目标区域。且所述匹配结果可精确到图像中的像素点的位置,因此,可进行精准的图像匹配并获得比现有技术更准确的匹配结果。
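上述反向匹配中"仅在锁定的列区域内计算行灰度均值"的步骤,可用如下Python代码示意(函数名与参数均为说明性假设):

```python
def row_means_in_column_region(Q, m, Kw):
    """在锁定的列区域(自第 m 列起连续 Kw 列)内逐行计算行灰度均值,
    返回 Lh 个行灰度均值,可据此定义 Lh-Kh+1 个搜索列向量。"""
    return [sum(row[m:m + Kw]) / Kw for row in Q]
```

由于行均值只在宽度恰为 Kw 的列区域内计算,分母即为模板样本的列数 Kw,无需再对多出的像素做补零处理。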
上述各实施例中,计算相似度时,可采用多种方式,此处以根据向量夹角计算相似度的方案进行举例说明。
上述各实施例中,可根据

$$d_{n,m}=\frac{\vec{p}_m\cdot\vec{P}(n)}{\left|\vec{p}_m\right|\cdot\left|\vec{P}(n)\right|}$$

计算所述待匹配区域的搜索行/列向量与所述模板样本的样本行/列向量的相似度,其中,m表示搜索行/列向量以所述待匹配区域的第m列/行的列/行灰度均值为起始,$\vec{P}(n)$表示第n个所述模板样本的样本行/列向量,$\vec{p}_m$表示所述待匹配区域的搜索行/列向量。
例如,根据第0~Kw-1个列平均值定义的搜索行向量$\vec{p}_0$与第n个模板样本的样本行向量$\vec{P}(n)$的夹角可根据

$$\theta_{n,0}=\arccos\frac{\vec{p}_0\cdot\vec{P}(n)}{\left|\vec{p}_0\right|\cdot\left|\vec{P}(n)\right|}$$

进行计算获得,而其中

$$\cos\theta_{n,0}=\frac{\vec{p}_0\cdot\vec{P}(n)}{\left|\vec{p}_0\right|\cdot\left|\vec{P}(n)\right|}$$

即可作为两向量的相似度$d_{n,0}$,相似度越大,则两向量的夹角越小,差别越小。通用地,所述待匹配区域的搜索行/列向量与所述模板样本的样本行/列向量的相似度可根据

$$d_{n,m}=\frac{\vec{p}_m\cdot\vec{P}(n)}{\left|\vec{p}_m\right|\cdot\left|\vec{P}(n)\right|}$$

计算获得。上述各实施例中可根据不同的计算参数、比较对象,获得不同数量的相似度,最终都可根据相似度选出差别最小、匹配度最高的一对样本行/列向量和搜索行/列向量,并据此准确地确定出匹配模板样本和待匹配区域中的目标区域。
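按上述向量夹角方式计算相似度,可用如下Python代码示意(函数名为说明性假设):

```python
import math

def similarity(p, P):
    """以向量夹角的余弦作为相似度 d:值越大,夹角越小,两向量差别越小。"""
    dot = sum(a * b for a, b in zip(p, P))
    return dot / (math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in P)))
```

同向向量的相似度为1,正交向量为0;成比例的两向量(如同一图案在不同亮度下的均值向量)相似度同样接近1,这是夹角度量对整体亮度变化不敏感的原因。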
以下再以一实施例对所述模板样本的预创建过程进行详细说明。模板样本的预创建过程为上述各实施例中的相似度计算过程提供了计算基础,模板样本的预创建过程至少在上述实施例中执行计算相似度步骤之前完成。
本实施例中,所述模板样本的预创建过程包括:
S401,根据采图框对标准样品图像进行采图,获得若干采图样本,将所述采图样本进行二值化处理获得二值化样本,其中,所述采图框的尺寸设定为Kw*Kh
S402,计算所述二值化样本每行像素点的灰度值的行平均值和每列像素点的灰度值的列平均值,并根据所述二值化样本的所有的行平均值定义长度为Kh的样本列向量,根据所述二值化样本的所有的列平均值定义长度为Kw的样本行向量;
S403,对每个所述二值化样本进行编号,将编号后且定义样本列向量及样本行向量后的若干二值化样本作为模板样本。
其中,步骤S401中,采图的周期可根据实际情况进行设定,一般可每隔3~5度进行一次采图,既避免了过多的采图造成过多的时间损耗,又可保证采图样本的采样率符合要求。标准样品图像需使用标准的正样品,以避免依据不标准的样品进行采图后生成不准确的模板样本。另外,采图框在采图时应正对所述标准样品图像,且采图框的尺寸不应过大,以避免过大的采图尺寸造成采图样本扭曲而导致模板样本不准确。另外,本文中所述的二值化处理是指对图像进行0和255的灰度值转换,转换方式有多种,例如如果图像的亮度稳定,可采用二值化阈值方法提取图案进行灰度值转换,其它的,还可采用图像灰度礼帽、黑帽、边缘提取等方法,本发明对此不作限定。
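文中提到的二值化阈值方法可用如下Python代码示意(阈值法仅为所述多种转换方式之一,函数名与阈值参数均为说明性假设):

```python
def binarize(img, thresh):
    """阈值法二值化:灰度值不低于阈值的像素置 255,否则置 0。"""
    return [[255 if v >= thresh else 0 for v in row] for row in img]
```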
步骤S402中,对二值化处理获得的二值化样本逐行和逐列执行行灰度均值和列灰度均值的计算,具体地可根据
$$P_h(i)=\frac{1}{K_w}\sum_{j=0}^{K_w-1}I(i,j)$$

计算所述二值化样本的行灰度均值,根据

$$P_v(j)=\frac{1}{K_h}\sum_{i=0}^{K_h-1}I(i,j)$$

计算所述二值化样本的列灰度均值,其中I(i,j)表示所述二值化样本中各像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kw和Kh分别表示二值化样本的列数量和行数量。根据步骤S402可获得Kw个列灰度均值、Kh个行灰度均值,分别将二值化样本的Kw个列灰度均值定义为样本行向量,将Kh个行灰度均值定义为样本列向量。
步骤S403,将步骤S402计算的样本列向量和样本行向量标记到所述二值化样本生成模板样本,另外还对模板样本进行了编号,以便于后续在对图像进行匹配过程中可方便地查找到匹配模板样本。
优选地,本实施例中,在将所述采图样本进行二值化处理获得二值化样本之前,还包括:
删除无用的采图样本,所述无用的采图样本包括:与前一采图样本的采图角度差小于预定阈值或无图案的采图样本。
删除无用的采图样本可有效减少样本的分析时间,最终剩余的采图样本数量在80~180之间即可满足采样分析要求。
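上述样本行/列向量的计算与编号过程,可用如下Python代码示意(数据结构为说明性假设;其中Kh个行灰度均值构成样本列向量,Kw个列灰度均值构成样本行向量):

```python
def build_templates(samples):
    """对每个 Kh×Kw 的二值化样本计算行/列灰度均值并编号,生成模板样本。"""
    templates = []
    for n, I in enumerate(samples):
        Kh, Kw = len(I), len(I[0])
        # Kh 个行灰度均值 → 样本列向量;Kw 个列灰度均值 → 样本行向量
        col_vec = [sum(I[i][j] for j in range(Kw)) / Kw for i in range(Kh)]
        row_vec = [sum(I[i][j] for i in range(Kh)) / Kh for j in range(Kw)]
        templates.append({"n": n, "col_vec": col_vec, "row_vec": row_vec})
    return templates
```

编号n与两个样本向量一并存储,后续匹配过程即可按编号直接查找到匹配模板样本。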
本实施例中,改变了传统的以完整的一幅图像作为模板样本的方案,以多个独立的模板样本的形式作为图像匹配的依据,因此,免去了图像拼接的过程,进而可避免由于图像拼接带来的模板样本不准确的问题;而且,本实施例中的各模板样本中标记了样本列向量和样本行向量,可实现高精度的图像匹配。
以下以图6中的曲面产品上的图像为例进行实施例的详细解释。
首先,将图6中的圆柱形标准正样品进行旋转,并将采图框对准样品正面正中间的位置对样品曲面上的图像进行采图,获得若干采图样本,再删除无用的采图样本后,对N个采图样本进行二值化处理获得如图4所示的二值化样本,之后对N个二值化样本分别进行行分析和列分析,并定义出二值化样本的样本列向量和样本行向量,以及对每个二值化样本进行编号n,获得N个模板样本,每个模板样本的样本列向量可记为Ph(n),样本行向量可记为Pv(n),至此,完成了模板样本的创建过程。
当需要对图像进行匹配分析时,先对图像进行二值化处理,然后根据搜索框在图像中选取如图8所示的待匹配区域,待匹配区域的尺寸(图8中黑色背景对应的尺寸)大于模板样本的尺寸(图8中框线对应的尺寸);之后根据
$$p_v(j)=\frac{1}{K_h}\sum_{i=0}^{L_h-1}Q(i,j)$$

计算待匹配区域的每列像素点的灰度值的列灰度均值pv(j),并从左至右取任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,例如从左起第一个搜索行向量记为

$$\vec{p}_0=\left(p_v(0),p_v(1),\cdots,p_v(K_w-1)\right)$$
之后分别计算待匹配区域的Lw-Kw+1个搜索行向量与N个模板样本的样本行向量的相似度;然后根据计算出的N*(Lw-Kw+1)个相似度,选出每个模板样本对应的最大相似度,并进一步从N个最大相似度中选出前5个,这5个最大相似度对应的模板样本将作为中间模板样本,并根据5个最大相似度对应的待匹配区域的搜索行向量确定出对应的5个列区域,至此完成了对待匹配区域的列匹配过程。
之后,根据5个中间模板样本分别对待匹配区域中对应的列区域进行行匹配。图9示出了单个中间模板样本对应锁定的待匹配区域中的列区域,对于锁定的列区域,可与上述列匹配过程相似地,从上至下取任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,之后计算每个中间模板样本的样本列向量与在其对应的列区域中定义的Lh-Kh+1个搜索列向量的相似度,并获得5个中间模板样本的最大相似度,比较获得的5个最大相似度,从中选出最大值,并将最大值对应的中间模板样本确定为匹配模板样本,以及根据所述最大值对应的搜索行向量和搜索列向量确定出所述待匹配区域中的目标区域。
参考图10,本发明实施例提供一种图像匹配装置,包括:
选取模块11,用于根据搜索框在图像中确定出待匹配区域,其中,待匹配区域的尺寸大于模板样本的尺寸;
匹配模板样本获取模块12,用于计算所述待匹配区域的每行/列像素点的灰度值的行/列灰度均值,针对预创建的任一个模板样本,计算待匹配区域中对应所述模板样本行/列数量的任意连续行/列像素点的行/列灰度均值,与所述模板样本的行/列灰度均值的相似度,将最大相似度对应的模板样本作为匹配模板样本;
定位模块13,用于将所述最大相似度对应的所述待匹配区域的行/列像素点确定的区域作为目标区域。
其中,选取模块11根据搜索框在图像中确定出的待匹配区域作为后续过程中用于与模板样本进行比较的区域,待匹配区域的尺寸设定的大于所述模板样本的尺寸,以避免因图像中图案的位置不固定导致的匹配不准确。最终匹配出的模板样本将对应待匹配区域中的一部分区域,例如图8中,黑色 背景的范围对应待匹配区域的尺寸,而黑色背景中的框线范围则代表模板样本的尺寸。
匹配模板样本获取模块12中,首先计算所述待匹配区域的行/列灰度均值,并将所述待匹配区域的各行灰度均值分别与若干模板样本的各行灰度均值进行比较,或者将所述待匹配区域的各列灰度均值分别与若干模板样本的各列灰度均值进行比较。由于所述待匹配区域的尺寸大于所述模板样本的尺寸,因此所述待匹配区域的行/列灰度均值的个数必然大于所述模板样本的行/列灰度均值的个数,在比较的过程中,针对任一个模板样本,计算待匹配区域中对应所述模板样本行/列数量的任意连续行/列像素点的行/列灰度均值,与所述模板样本的行/列灰度均值的相似度,通过匹配比较可找出最大相似度对应的匹配模板样本以及待匹配区域中的目标区域,例如图8中,通过匹配比较确定出框线范围的区域为目标区域,确定出的目标区域的尺寸与模板样本的尺寸相同。
由于不同的图案所对应的像素点的灰度值不同,本实施例根据待匹配区域的行/列灰度均值可确定出待匹配区域中各行/列像素点的特征。具体地,可将图像转化为二值图像后进行行/列灰度均值的计算,而且在计算行/列灰度均值时,将待匹配区域大于模板样本的区域内的像素点的灰度值均认为是0,因此,在计算行/列灰度均值时将每行/列的像素点个数认为与模板样本的每行/列的像素点个数相同。模板样本的各行/列灰度均值与待匹配区域中对应数量且相邻的各行/列灰度均值相匹配,则证明该模板样本的图案与待匹配区域中目标区域的图案相同或极其相似,通过这种方式可确定出用于工业处理或其它应用场景的匹配模板样本以及在图像中定位出待处理的目标区域。例如可在工业处理或其它应用场景中继续对确定出的目标区域进行图案缺陷分析、印刷质量分析等等处理。
匹配模板样本获取模块12计算待匹配区域中对应所述模板样本行/列数量的任意连续行/列像素点的行/列灰度均值,与所述模板样本的行/列灰度均值的相似度的方式多种多样,以下以多个实施例的方式对计算方案进行举例说明。
一个实施例中,所述匹配模板样本获取模块12,用于:
根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出最大相似度对应的一对搜索行向量和样本行向量;
将确定出的样本行向量对应的模板样本作为匹配模板样本。
本实施例中,匹配模板样本获取模块12可根据
$$p_v(j)=\frac{1}{K_h}\sum_{i=0}^{L_h-1}Q(i,j)$$
计算所述待匹配区域的每列像素点的列灰度均值,其中Q(i,j)表示待匹配区域内的各像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kw表示模板样本的列数量。通过计算可获得所述待匹配区域的Lw个列灰度均值,将其中任意相邻的Kw个列灰度均值定义为一个搜索行向量,例如可将第0~Kw-1个列灰度均值定义为一搜索行向量,或将第1~Kw个、第2~Kw+1个、第3~Kw+2个列灰度均值分别定义为搜索行向量,以此类推,可定义出Lw-Kw+1个搜索行向量。搜索行向量的长度均与模板样本中的样本行向量的长度相同。对于上述定义的Lw-Kw+1个搜索行向量,匹配模板样本获取模块12将每一个搜索行向量都与预先创建的若干模板样本的样本行向量进行比较,并计算出搜索行向量和样本行向量的相似度。如果模板样本的数量为N,那么计算获得的相似度的个数将为N*(Lw-Kw+1),之后从N*(Lw-Kw+1)个相似度中选出最大相似度,并将最大相似度对应的样本行向量所对应的模板样本确定为匹配模板样本,上述定位模块13会将最大相似度对应的搜索行向量对应的待匹配区域中的像素点确定的区域作为目标区域。
另一个实施例中,所述匹配模板样本获取模块12,用于:
根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
计算所述待匹配区域的各搜索列向量与所述若干模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
将确定出的样本列向量对应的模板样本作为匹配模板样本。
本实施例中,匹配模板样本获取模块12可根据
$$p_h(i)=\frac{1}{K_w}\sum_{j=0}^{L_w-1}Q(i,j)$$
计算所述待匹配区域的每行像素点的行灰度均值,其中Q(i,j)表示待匹配区域内的各像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kh表示模板样本的行数量。通过计算可获得所述待匹配区域的Lh个行灰度均值,将其中任意相邻的Kh个行灰度均值定义为一个搜索列向量,例如可将第0~Kh-1个行灰度均值定义为一搜索列向量,或将第1~Kh个、第2~Kh+1个、第3~Kh+2个行灰度均值分别定义为搜索列向量,以此类推,可定义出Lh-Kh+1个搜索列向量。搜索列向量的长度均与模板样本中的样本列向量的长度相同。对于上述定义的Lh-Kh+1个搜索列向量,匹配模板样本获取模块12将每一个搜索列向量都与预先创建的若干模板样本的样本列向量进行比较,并计算出搜索列向量和样本列向量的相似度。如果模板样本的数量为N,那么计算获得的相似度的个数将为N*(Lh-Kh+1),之后从N*(Lh-Kh+1)个相似度中选出最大相似度,并将最大相似度对应的样本列向量所对应的模板样本确定为匹配模板样本,上述定位模块13会将最大相似度对应的搜索列向量对应的待匹配区域中的像素点确定的区域作为目标区域。
另一个实施例中,所述匹配模板样本获取模块12,用于:
根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出相似度大于预定阈值的多对搜索行向量和样本行向量;
将确定出的多个样本行向量对应的模板样本作为中间模板样本;
根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
计算所述待匹配区域的各搜索列向量与多个中间模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
将确定出的样本列向量对应的中间模板样本作为匹配模板样本。
本实施例中,匹配模板样本获取模块12可根据
$$p_v(j)=\frac{1}{K_h}\sum_{i=0}^{L_h-1}Q(i,j)$$
计算所述待匹配区域的每列像素点的列灰度均值,其中Q(i,j)表示待匹配区域内的各像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kw表示模板样本的列数量。通过计算可获得所述待匹配区域的Lw个列灰度均值,将其中任意相邻的Kw个列灰度均值定义为一个搜索行向量,共可定义出Lw-Kw+1个搜索行向量。搜索行向量的长度均与模板样本中的样本行向量的长度相同。对于上述定义的Lw-Kw+1个搜索行向量,匹配模板样本获取模块12将每一个搜索行向量都与预先创建的若干模板样本的样本行向量进行比较,并计算出搜索行向量和样本行向量的相似度。如果模板样本的数量为N,那么计算获得的相似度的个数将为N*(Lw-Kw+1),之后从N*(Lw-Kw+1)个相似度中选出大于预定阈值的相似度,或按相似度由大到小排序后选出排列在前端的预定数量的相似度,匹配模板样本获取模块12将选出的相似度对应的模板样本确定为中间模板样本,经过匹配模板样本获取模块12的列匹配过程确定出的中间模板样本的数量将远远小于模板样本的总数量。再通过匹配模板样本获取模块12对待匹配区域和中间模板样本执行行匹配过程后,从计算获得的待匹配区域的各搜索列向量与多个中间模板样本的样本列向量的相似度中选出最大相似度,并将最大相似度对应的中间模板样本确定为匹配模板样本。
本实施例中,先对待匹配区域与所有模板样本执行列匹配过程,从中选出列匹配度较高的多个模板样本作为中间模板样本,之后再对待匹配区域和中间模板样本执行行匹配过程,并从中选出行匹配度最高的中间模板样本作为匹配模板样本,通过两次匹配过程,可更准确地确定出待匹配区域与模板样本的匹配关系,从而更准确地确定出待匹配区域中的目标区域,以及用于对目标区域进行进一步分析处理的匹配模板样本。
另外,需要说明的是,本实施例中仅实例性的说明了匹配模板样本获取模块12先执行了对待匹配区域与所有模板样本的列匹配过程,后执行了对待匹配区域和中间模板样本的行匹配过程的方案,同理地,匹配模板样本获取模块12先执行待匹配区域与所有模板样本的行匹配过程,后执行待匹配区域和中间模板样本的列匹配过程的方案,也可获得同样准确的匹配结果。
另一个实施例中,所述匹配模板样本获取模块12,用于:
根据所述待匹配区域每列像素点的灰度值计算列灰度均值,根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出每个模板样本对应的最大相似度;
按照确定出的每个模板样本对应的最大相似度,选取最大相似度高于预定阈值的模板样本作为中间模板样本;
根据所述中间模板样本对应的搜索行向量确定所述待匹配区域中对应的列区域,计算确定出的所述待匹配区域中对应的列区域的每行像素点的行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
计算所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度,确定出每个中间模板样本对应的最大相似度;
确定出所述中间模板样本对应的最大相似度中的最大值,并将所述最大值对应的中间模板样本作为匹配模板样本。
匹配模板样本获取模块12可根据
$$p_v(j)=\frac{1}{K_h}\sum_{i=0}^{L_h-1}Q(i,j)$$
计算所述待匹配区域的每列像素点的灰度值的列平均值,其中Q(i,j)表示待匹配区域内的像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kw表示模板样本的列数量。此处,认为待匹配区域与模板样本相比,每列多出的像素点的灰度值均为0,根据上述方式计算时,以待匹配区域中有效列像素点的数量作为列灰度均值的计算基准,且将有效列像素点的数量定义为与模板样本的列像素点数量相同,这样计算出的列灰度均值才能在计算与模板样本的列灰度均值的相似度时获得准确的相似度结果。通过计算可获得所述待匹配区域的Lw个列灰度均值,将其中任意相邻的Kw个列灰度均值定义为一个搜索行向量,可定义出Lw-Kw+1个搜索行向量。对于每个模板样本来说,其样本行向量将与待匹配区域的Lw-Kw+1个搜索行向量进行相似度的计算,每个模板样本将对应获得Lw-Kw+1个相似度,从Lw-Kw+1个相似度中选出最大相似度。如果模板样本的总数量为N,那么匹配模板样本获取模块12将确定出N个最大相似度。匹配模板样本获取模块12从N个最大相似度中选择预定数量或者大于预定阈值的最大相似度,根据预定数量选择时可将N个最大相似度进行由大到小的排序并从大数值的一端选取预定数量的最大相似度。选取出的最大相似度对应的模板样本将作为中间模板样本。
对于每个中间模板样本,可根据其样本行向量与所述待匹配区域的搜索行向量的对应关系,在所述待匹配区域中确定出如图9所示的中间模板样本对应的列区域,匹配模板样本获取模块12从中间模板样本的角度,对所述待匹配区域进行反向匹配,在确定出的所述待匹配区域的列区域中,根据
$$p_h(i)=\frac{1}{K_w}\sum_{j=m}^{m+K_w-1}Q(i,j)$$
继续计算所述列区域内的每行像素点的行灰度均值,获得Lh个行灰度均值,根据获得的Lh个行灰度均值可定义出Lh-Kh+1个长度为Kh的搜索列向量,所述搜索列向量与所述中间模板样本的样本列向量的长度相同。之后匹配模板样本获取模块12计算所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度,确定出每个中间模板样本对应的最大相似度,匹配模板样本获取模块12通过比较确定出的多个最大相似度获得其中的最大值,该最大值对应的中间模板样本将作为匹配模板样本,该最大值对应的搜索行向量及搜索列向量可在待匹配区域中确定出目标区域。
本实施例中,先对待匹配区域与所有模板样本执行列匹配过程,计算出每个模板样本与待匹配区域的最大相似度,并根据每个模板样板的最大相似度选出最优的几个中间模板样本,并确定出这几个中间模板样本对应在待匹 配区域中的最佳匹配的列区域;之后再对确定出的待匹配区域的列区域和中间模板样本执行行匹配过程,此时的行匹配过程仅在确定出的列区域内计算行灰度均值,匹配模板样本获取模块12将依据计算出的列区域的各行灰度均值定义待匹配区域的搜索列向量,以及根据此种方式计算出的搜索列向量与中间模板样本的样本列向量进行相似度的计算。如果中间模板样本的数量为n时,那么将获得n*(Lh-Kh+1)个相似度,匹配模板样本获取模块12从n*(Lh-Kh+1)个相似度选出最大值,并将最大值对应的中间模板样本作为匹配模板样本。其中,匹配模板样本获取模块12从n*(Lh-Kh+1)个相似度选出最大值,可先计算出每个中间模板样本对应的最大相似度,之后再从n个最大相似度中选择最大值,或者直接从n*(Lh-Kh+1)个相似度选出最大值,以及采用其它方式均可,在此不作限定。
另外,需要说明的是,本实施例中仅实例性的说明了匹配模板样本获取模块12先执行了对待匹配区域与所有模板样本的列匹配过程,后执行了列匹配过程确定出的待匹配区域的列区域和中间模板样本的行匹配过程的方案,同理地,匹配模板样本获取模块12先执行待匹配区域与所有模板样本的行匹配过程,后执行行匹配过程确定出的待匹配区域的行区域和中间模板样本的列匹配过程的方案,也可获得同样准确的匹配结果。两种方案都属于本发明的保护范围。通过本实施例可先锁定待匹配区域与模板样本的最匹配的几个行/列区域,之后再对分别匹配出的行/列区域进行列/行匹配处理,可确定出更加准确的目标区域。且所述匹配结果可精确到图像中的像素点的位置,因此,可进行精准的图像匹配并获得比现有技术更准确的匹配结果。
上述各实施例中,所述匹配模板样本获取模块12计算相似度时,可采用多种方式,此处以根据向量夹角计算相似度的方案进行举例说明。
所述匹配模板样本获取模块12,用于:
根据

$$d_{n,m}=\frac{\vec{p}_m\cdot\vec{P}(n)}{\left|\vec{p}_m\right|\cdot\left|\vec{P}(n)\right|}$$

计算所述待匹配区域的搜索行/列向量与所述模板样本的样本行/列向量的相似度,其中,m表示搜索行/列向量以所述待匹配区域的第m列/行的列/行灰度均值为起始,$\vec{P}(n)$表示第n个所述模板样本的样本行/列向量,$\vec{p}_m$表示所述待匹配区域的搜索行/列向量。
例如,根据第0~Kw-1个列平均值定义的搜索行向量$\vec{p}_0$与第n个模板样本的样本行向量$\vec{P}(n)$的夹角可根据

$$\theta_{n,0}=\arccos\frac{\vec{p}_0\cdot\vec{P}(n)}{\left|\vec{p}_0\right|\cdot\left|\vec{P}(n)\right|}$$

进行计算获得,而其中

$$\cos\theta_{n,0}=\frac{\vec{p}_0\cdot\vec{P}(n)}{\left|\vec{p}_0\right|\cdot\left|\vec{P}(n)\right|}$$

即可作为两向量的相似度$d_{n,0}$,相似度越大,则两向量的夹角越小,差别越小。通用地,所述待匹配区域的搜索行/列向量与所述模板样本的样本行/列向量的相似度可根据

$$d_{n,m}=\frac{\vec{p}_m\cdot\vec{P}(n)}{\left|\vec{p}_m\right|\cdot\left|\vec{P}(n)\right|}$$

计算获得。上述各实施例中可根据不同的计算参数、比较对象,获得不同数量的相似度,最终都可根据相似度选出差别最小、匹配度最高的一对样本行/列向量和搜索行/列向量,并据此准确地确定出匹配模板样本和待匹配区域中的目标区域。
参考图11,以下再以一实施例对所述模板样本的预创建过程进行详细说明。模板样本的预创建过程为上述各实施例中的相似度计算过程提供了计算基础。
本实施例中提供的图像匹配装置,还包括:
模板样本预创建模块14,用于:
根据采图框对标准样品图像进行采图,获得若干采图样本,将所述采图样本进行二值化处理获得二值化样本,其中,所述采图框的尺寸设定为Kw*Kh
计算所述二值化样本每行像素点的灰度值的行平均值和每列像素点的灰度值的列平均值,并根据所述二值化样本的所有的行平均值定义长度为Kh的样本列向量,根据所述二值化样本的所有的列平均值定义长度为Kw的样本行向量;
对每个所述二值化样本进行编号,将编号后且定义样本列向量及样本行向量后的若干二值化样本作为模板样本。
模板样本预创建模块14中,采图的周期可根据实际情况进行设定,一般可每隔3~5度进行一次采图,既避免了过多的采图造成过多的时间损耗,又可保证采图样本的采样率符合要求。标准样品图像需使用标准的正样品,以避免依据不标准的样品进行采图后生成不准确的模板样本。另外,采图框在采图时应正对所述标准样品图像,且采图框的尺寸不应过大,以避免过大的采图尺寸造成采图样本扭曲而导致模板样本不准确。另外,本文中所述的二值化处理是指对图像进行0和255的灰度值转换,转换方式有多种,例如如果图像的亮度稳定,可采用二值化阈值方法提取图案进行灰度值转换,其它的,还可采用图像灰度礼帽、黑帽、边缘提取等方法,本发明对此不作限定。
模板样本预创建模块14中,对二值化处理获得的二值化样本逐行和逐列执行行灰度均值和列灰度均值的计算,具体地可根据
$$P_h(i)=\frac{1}{K_w}\sum_{j=0}^{K_w-1}I(i,j)$$

计算所述二值化样本的行灰度均值,根据

$$P_v(j)=\frac{1}{K_h}\sum_{i=0}^{K_h-1}I(i,j)$$

计算所述二值化样本的列灰度均值,其中I(i,j)表示所述二值化样本中各像素点的灰度值,i表示像素点的行坐标,j表示像素点的列坐标,Kw和Kh分别表示二值化样本的列数量和行数量。模板样本预创建模块14根据获得的Kw个列灰度均值、Kh个行灰度均值,分别将二值化样本的Kw个列灰度均值定义为样本行向量,将Kh个行灰度均值定义为样本列向量。
模板样本预创建模块14,将计算获得的样本列向量和样本行向量标记到所述二值化样本生成模板样本,另外还对模板样本进行了编号,以便于后续在对图像进行匹配过程中可方便地查找到匹配模板样本。
优选地,本实施例中,所述模板样本预创建模块14,用于:
删除无用的采图样本,所述无用的采图样本包括:与前一采图样本的采图角度差小于预定阈值或无图案的采图样本。
删除无用的采图样本可有效减少样本的分析时间,最终剩余的采图样本数量在80~180之间即可满足采样分析要求。
本实施例中,改变了传统的以完整的一幅图像作为模板样本的方案,以多个独立的模板样本的形式作为图像匹配的依据,因此,免去了图像拼接的过程,进而可避免由于图像拼接带来的模板样本不准确的问题;而且,本实施例中的各模板样本中标记了样本列向量和样本行向量,可实现高精度的图像匹配。
通过本发明实施例提供的图像匹配方法及装置,可准确地匹配出图像中的各种图案,对于密集图案也可准确地实现匹配,同时通过多个独立的模板样本省去了拼接过程,使得匹配结果准确,且运算速度更快。
参考图12,本发明实施例提供一种图像匹配设备,包括:存储器401和处理器402,其中,
所述存储器401,用于存储一条或多条指令,其中,所述一条或多条指令以供所述处理器402调用执行;
所述处理器402,用于根据搜索框在图像中确定出待匹配区域,其中,待匹配区域的尺寸大于模板样本的尺寸;
用于计算所述待匹配区域的每行/列像素点的灰度值的行/列灰度均值,针对预创建的任一个模板样本,计算待匹配区域中对应所述模板样本行/列数量的任意连续行/列像素点的行/列灰度均值,与所述模板样本的行/列灰度均值的相似度,将最大相似度对应的模板样本作为匹配模板样本;
用于将所述最大相似度对应的所述待匹配区域的行/列像素点确定的区域作为目标区域。
所述处理器402,进一步用于,根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出最大相似度对应的一对搜索行向量和样本行向量;
将确定出的样本行向量对应的模板样本作为匹配模板样本。
所述处理器402,进一步用于,根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
计算所述待匹配区域的各搜索列向量与所述若干模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
将确定出的样本列向量对应的模板样本作为匹配模板样本。
所述处理器402,进一步用于,根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1 个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出相似度大于预定阈值的多对搜索行向量和样本行向量;
将确定出的多个样本行向量对应的模板样本作为中间模板样本;
根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
计算所述待匹配区域的各搜索列向量与多个中间模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
将确定出的样本列向量对应的中间模板样本作为匹配模板样本。
所述处理器402,进一步用于,根据所述待匹配区域每列像素点的灰度值计算列灰度均值,根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出每个模板样本对应的最大相似度;
按照确定出的每个模板样本对应的最大相似度,选取最大相似度高于预定阈值的模板样本作为中间模板样本;
根据所述中间模板样本对应的搜索行向量确定所述待匹配区域中对应的列区域,计算确定出的所述待匹配区域中对应的列区域的每行像素点的行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
计算所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度,确定出每个中间模板样本对应的最大相似度;
确定出所述中间模板样本对应的最大相似度中的最大值,并将所述最大值对应的中间模板样本作为匹配模板样本。
所述处理器402,进一步用于,根据采图框对标准样品图像进行采图,获得若干采图样本,将所述采图样本进行二值化处理获得二值化样本,其中,所述采图框的尺寸设定为Kw*Kh
计算所述二值化样本每行像素点的灰度值的行平均值和每列像素点的灰度值的列平均值,并根据所述二值化样本的所有的行平均值定义长度为Kh的样本列向量,根据所述二值化样本的所有的列平均值定义长度为Kw的样本行向量;
对每个所述二值化样本进行编号,将编号后且定义样本列向量及样本行向量后的若干二值化样本作为模板样本。
所述处理器402,进一步用于,删除无用的采图样本,所述无用的采图样本包括:与前一采图样本的采图角度差小于预定阈值或无图案的采图样本。
所述处理器402,进一步用于,
根据

$$d_{n,m}=\frac{\vec{p}_m\cdot\vec{P}(n)}{\left|\vec{p}_m\right|\cdot\left|\vec{P}(n)\right|}$$

计算所述待匹配区域的搜索行/列向量与所述模板样本的样本行/列向量的相似度,其中,m表示搜索行/列向量以所述待匹配区域的第m列/行的列/行灰度均值为起始,$\vec{P}(n)$表示第n个所述模板样本的样本行/列向量,$\vec{p}_m$表示所述待匹配区域的搜索行/列向量。
本实施例中设备的技术方案和各模块的功能特征、连接方式,与图1~图12对应实施例所描述的特征和技术方案相对应,未尽之处请参见前述图1~图12对应的实施例。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到各实施方式可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件。基于这样的理解,上述技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行各个实施例或者实施例的某些部分所述的方法。
最后应说明的是:以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (16)

  1. 一种图像匹配方法,其特征在于,包括:
    S101,根据搜索框在图像中确定出待匹配区域,其中,待匹配区域的尺寸大于模板样本的尺寸;
    S102,计算所述待匹配区域的每行/列像素点的灰度值的行/列灰度均值,针对预创建的任一个模板样本,计算待匹配区域中对应所述模板样本行/列数量的任意连续行/列像素点的行/列灰度均值,与所述模板样本的行/列灰度均值的相似度,将最大相似度对应的模板样本作为匹配模板样本;
    S103,将所述最大相似度对应的所述待匹配区域的行/列像素点确定的区域作为目标区域。
  2. 根据权利要求1所述的方法,其特征在于,S102包括:
    根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意连续的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
    计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出最大相似度对应的一对搜索行向量和样本行向量;
    将确定出的样本行向量对应的模板样本作为匹配模板样本。
  3. 根据权利要求1所述的方法,其特征在于,S102包括:
    根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
    计算所述待匹配区域的各搜索列向量与所述若干模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
    将确定出的样本列向量对应的模板样本作为匹配模板样本。
  4. 根据权利要求1所述的方法,其特征在于,S102包括:
    根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
    计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出相似度大于预定阈值的多对搜索行向量和样本行向量;
    将确定出的多个样本行向量对应的模板样本作为中间模板样本;
    根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
    计算所述待匹配区域的各搜索列向量与多个中间模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
    将确定出的样本列向量对应的中间模板样本作为匹配模板样本。
  5. 根据权利要求1所述的方法,其特征在于,S102包括:
    根据所述待匹配区域每列像素点的灰度值计算列灰度均值,根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
    计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出每个模板样本对应的最大相似度;
    按照确定出的每个模板样本对应的最大相似度,选取最大相似度高于预定阈值的模板样本作为中间模板样本;
    根据所述中间模板样本对应的搜索行向量确定所述待匹配区域中对应的列区域,计算确定出的所述待匹配区域中对应的列区域的每行像素点的行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量, 其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
    计算所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度,确定出每个中间模板样本对应的最大相似度;
    确定出所述中间模板样本对应的最大相似度中的最大值,并将所述最大值对应的中间模板样本作为匹配模板样本。
  6. 根据权利要求1所述的方法,其特征在于,所述模板样本的预创建,包括:
    根据采图框对标准样品图像进行采图,获得若干采图样本,将所述采图样本进行二值化处理获得二值化样本,其中,所述采图框的尺寸设定为Kw*Kh
    计算所述二值化样本每行像素点的灰度值的行平均值和每列像素点的灰度值的列平均值,并根据所述二值化样本的所有的行平均值定义长度为Kh的样本列向量,根据所述二值化样本的所有的列平均值定义长度为Kw的样本行向量;
    对每个所述二值化样本进行编号,将编号后且定义样本列向量及样本行向量后的若干二值化样本作为模板样本。
  7. 根据权利要求6所述的方法,其特征在于,所述将所述采图样本进行二值化处理获得二值化样本,之前还包括:
    删除无用的采图样本,所述无用的采图样本包括:与前一采图样本的采图角度差小于预定阈值或无图案的采图样本。
  8. 根据权利要求1~5任一项所述的方法,其特征在于,所述计算相似度,包括:
    根据

    $$d_{n,m}=\frac{\vec{p}_m\cdot\vec{P}(n)}{\left|\vec{p}_m\right|\cdot\left|\vec{P}(n)\right|}$$

    计算所述待匹配区域的搜索行/列向量与所述模板样本的样本行/列向量的相似度,其中,m表示搜索行/列向量以所述待匹配区域的第m列/行的列/行灰度均值为起始,$\vec{P}(n)$表示第n个所述模板样本的样本行/列向量,$\vec{p}_m$表示所述待匹配区域的搜索行/列向量。
  9. 一种图像匹配装置,其特征在于,包括:
    选取模块,用于根据搜索框在图像中确定出待匹配区域,其中,待匹配区域的尺寸大于模板样本的尺寸;
    匹配模板样本获取模块,用于计算所述待匹配区域的每行/列像素点的灰度值的行/列灰度均值,针对预创建的任一个模板样本,计算待匹配区域中对应所述模板样本行/列数量的任意连续行/列像素点的行/列灰度均值,与所述模板样本的行/列灰度均值的相似度,将最大相似度对应的模板样本作为匹配模板样本;
    定位模块,用于将所述最大相似度对应的所述待匹配区域的行/列像素点确定的区域作为目标区域。
  10. 根据权利要求9所述的装置,其特征在于,所述匹配模板样本获取模块,用于:
    根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
    计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出最大相似度对应的一对搜索行向量和样本行向量;
    将确定出的样本行向量对应的模板样本作为匹配模板样本。
  11. 根据权利要求9所述的装置,其特征在于,所述匹配模板样本获取模块,用于:
    根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
    计算所述待匹配区域的各搜索列向量与所述若干模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
    将确定出的样本列向量对应的模板样本作为匹配模板样本。
  12. 根据权利要求9所述的装置,其特征在于,所述匹配模板样本获取模块,用于:
    根据所述待匹配区域的每列像素点的灰度值计算列灰度均值,并根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
    计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出相似度大于预定阈值的多对搜索行向量和样本行向量;
    将确定出的多个样本行向量对应的模板样本作为中间模板样本;
    根据所述待匹配区域的每行像素点的灰度值计算行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
    计算所述待匹配区域的各搜索列向量与多个中间模板样本的样本列向量的相似度,确定出最大相似度对应的一对搜索列向量和样本列向量;
    将确定出的样本列向量对应的中间模板样本作为匹配模板样本。
  13. 根据权利要求9所述的装置,其特征在于,所述匹配模板样本获取模块,用于:
    根据所述待匹配区域每列像素点的灰度值计算列灰度均值,根据任意相邻的Kw个列灰度均值定义Lw-Kw+1个搜索行向量,其中Lw表示所述待匹配区域的列数量,Kw表示所述模板样本的列数量,Lw>Kw
    计算所述待匹配区域的各搜索行向量与所述若干模板样本的样本行向量的相似度,确定出每个模板样本对应的最大相似度;
    按照确定出的每个模板样本对应的最大相似度,选取最大相似度高于预定阈值的模板样本作为中间模板样本;
    根据所述中间模板样本对应的搜索行向量确定所述待匹配区域中对应的列区域,计算确定出的所述待匹配区域中对应的列区域的每行像素点的行灰度均值,并根据任意相邻的Kh个行灰度均值定义Lh-Kh+1个搜索列向量,其中Lh表示所述待匹配区域的行数量,Kh表示所述模板样本的行数量,Lh>Kh
    计算所述中间模板样本的样本列向量与所述待匹配区域中对应的列区域的搜索列向量的相似度,确定出每个中间模板样本对应的最大相似度;
    确定出所述中间模板样本对应的最大相似度中的最大值,并将所述最大值对应的中间模板样本作为匹配模板样本。
  14. 根据权利要求9所述的装置,其特征在于,还包括:
    模板样本预创建模块,用于:
    根据采图框对标准样品图像进行采图,获得若干采图样本,将所述采图样本进行二值化处理获得二值化样本,其中,所述采图框的尺寸设定为Kw*Kh
    计算所述二值化样本每行像素点的灰度值的行平均值和每列像素点的灰度值的列平均值,并根据所述二值化样本的所有的行平均值定义长度为Kh的样本列向量,根据所述二值化样本的所有的列平均值定义长度为Kw的样本行向量;
    对每个所述二值化样本进行编号,将编号后且定义样本列向量及样本行向量后的若干二值化样本作为模板样本。
  15. 根据权利要求14所述的装置,其特征在于,所述模板样本预创建模块,用于:
    删除无用的采图样本,所述无用的采图样本包括:与前一采图样本的采图角度差小于预定阈值或无图案的采图样本。
  16. 根据权利要求9~13任一项所述的装置,其特征在于,所述匹配模板样本获取模块,用于:
    根据

    $$d_{n,m}=\frac{\vec{p}_m\cdot\vec{P}(n)}{\left|\vec{p}_m\right|\cdot\left|\vec{P}(n)\right|}$$

    计算所述待匹配区域的搜索行/列向量与所述模板样本的样本行/列向量的相似度,其中,m表示搜索行/列向量以所述待匹配区域的第m列/行的列/行灰度均值为起始,$\vec{P}(n)$表示第n个所述模板样本的样本行/列向量,$\vec{p}_m$表示所述待匹配区域的搜索行/列向量。
PCT/CN2016/088691 2015-12-29 2016-07-05 一种图像匹配方法及装置 WO2017113692A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/247,000 US20170185865A1 (en) 2015-12-29 2016-08-25 Method and electronic apparatus of image matching

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511017712.5 2015-12-29
CN201511017712.5A CN105894441A (zh) 2015-12-29 2015-12-29 一种图像匹配方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/247,000 Continuation US20170185865A1 (en) 2015-12-29 2016-08-25 Method and electronic apparatus of image matching

Publications (1)

Publication Number Publication Date
WO2017113692A1 true WO2017113692A1 (zh) 2017-07-06

Family

ID=57002184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/088691 WO2017113692A1 (zh) 2015-12-29 2016-07-05 一种图像匹配方法及装置

Country Status (2)

Country Link
CN (1) CN105894441A (zh)
WO (1) WO2017113692A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814690A (zh) * 2020-07-09 2020-10-23 浙江大华技术股份有限公司 一种目标重识别方法、装置和计算机可读存储介质
CN112559314A (zh) * 2019-09-26 2021-03-26 上海汽车集团股份有限公司 人机交互界面的测试方法和测试装置
CN112559314B (zh) * 2019-09-26 2024-05-31 上海汽车集团股份有限公司 人机交互界面的测试方法和测试装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960437B (zh) * 2017-03-24 2020-04-14 重庆邮电大学 一种电力巡检机器人的油浸电力变压器液位检测方法
CN109448219A (zh) * 2018-10-25 2019-03-08 深圳怡化电脑股份有限公司 图像匹配方法、装置、票据鉴别仪及计算机可读存储介质
CN110456308B (zh) * 2019-07-08 2021-05-04 广西工业职业技术学院 一种三维空间定位快速搜索方法
CN112818983A (zh) * 2021-01-22 2021-05-18 常州友志自动化科技有限公司 一种利用图片相识度判断字符倒置的方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101556695A (zh) * 2009-05-15 2009-10-14 广东工业大学 一种图像匹配方法
CN101576961A (zh) * 2009-06-16 2009-11-11 天津大学 高速的图像匹配方法及装置
CN104134213A (zh) * 2014-09-02 2014-11-05 武汉华目信息技术有限责任公司 一种数字图像中的目标定位方法以及装置
US20140376807A1 (en) * 2011-08-29 2014-12-25 Adobe Systems Incorporated Patch-Based Synthesis Techniques Using Color and Color Gradient Voting



Also Published As

Publication number Publication date
CN105894441A (zh) 2016-08-24

Similar Documents

Publication Publication Date Title
WO2017113692A1 (zh) 一种图像匹配方法及装置
CN110738207B (zh) 一种融合文字图像中文字区域边缘信息的文字检测方法
WO2022170706A1 (zh) 用于模具监视的缺陷检测方法、装置、设备及介质
CN108121991B (zh) 一种基于边缘候选区域提取的深度学习舰船目标检测方法
CN113109368B (zh) 玻璃裂纹检测方法、装置、设备及介质
WO2021143233A1 (zh) 图像清晰度检测方法、系统、设备及存储介质
CN106920245B (zh) 一种边界检测的方法及装置
CN111369495B (zh) 一种基于视频的全景图像的变化检测方法
CN109271848B (zh) 一种人脸检测方法及人脸检测装置、存储介质
JP6317725B2 (ja) 取得された画像内のクラッタを決定するためのシステム及び方法
WO2019019250A1 (zh) 倾斜图像的倾斜值获取方法、装置、终端及存储介质
CN111178193A (zh) 一种车道线的检测方法、检测装置及计算机可读存储介质
CN112712518A (zh) 鱼类计数方法、装置、电子设备及存储介质
CN116188544A (zh) 一种结合边缘特征的点云配准方法
CN110035281B (zh) 一种坏点检测方法、装置及电子设备
JP5772675B2 (ja) 濃淡画像のエッジ抽出方法、エッジ抽出装置並びに濃淡画像のエッジ抽出プログラム
CN107369179B (zh) 一种高精度图像定位方法
JP2005293334A (ja) テンプレートマッチング装置
WO2024016632A1 (zh) 亮点定位方法、亮点定位装置、电子设备及存储介质
WO2023134251A1 (zh) 一种基于聚类的光条提取方法及装置
CN112036232A (zh) 一种图像表格结构识别方法、系统、终端以及存储介质
CN111626236A (zh) 一种快速的椭圆目标检测方法
CN116188826A (zh) 一种复杂光照条件下的模板匹配方法及装置
CN116007504A (zh) 基于图像技术的裂缝检测模块、装置和计算机设备
CN111583341B (zh) 云台像机移位检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16880493

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16880493

Country of ref document: EP

Kind code of ref document: A1