CN109886325B - Template selection and accelerated matching method for nonlinear color space classification - Google Patents
- Publication number: CN109886325B
- Application number: CN201910105261.2A
- Authority
- CN
- China
- Prior art keywords
- image
- template
- matched
- region
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 54
- 238000003062 neural network model Methods 0.000 claims abstract description 20
- 238000010586 diagram Methods 0.000 claims abstract description 17
- 238000005070 sampling Methods 0.000 claims abstract description 17
- 239000011159 matrix material Substances 0.000 claims description 17
- 230000009466 transformation Effects 0.000 claims description 16
- 238000004364 calculation method Methods 0.000 claims description 11
- 210000002569 neuron Anatomy 0.000 claims description 11
- 238000000844 transformation Methods 0.000 claims description 6
- 230000000007 visual effect Effects 0.000 abstract description 4
- 239000003086 colorant Substances 0.000 description 18
- 238000013528 artificial neural network Methods 0.000 description 9
- 238000003709 image segmentation Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000004456 color vision Effects 0.000 description 1
- 238000000354 decomposition reaction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 210000004205 output neuron Anatomy 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a template selection and accelerated matching method for nonlinear color space classification, comprising a model training process and an image matching process. The model training process comprises: collecting training image samples, extracting their CIE chromaticity diagram, and manually marking the color class numbers of the training image samples; obtaining a five-layer feedforward neural network model. The image matching process comprises: inputting a pair of color images and setting a sampling rate; performing alternate-point down-sampling; obtaining classification result sets; calculating a metric value of the similarity probability; selecting the i corresponding to the top k highest scores as the preferred color class numbers, and using them to determine the template region in the template image and the region to be matched in the image to be matched; obtaining the matching relation between the template region in the template image and the region to be matched in the image to be matched. Experimental results show that the method has a higher registration rate and execution speed, and solves the problem in existing matching methods that the color distance in color space measured by a linear model is inconsistent with human visual judgment.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a template selection and accelerated matching method for nonlinear color space classification.
Background
Template matching algorithms typically consider all possible transformations, including rotation, scale, and affine transformations. Alexe et al. provide an efficient way to process high-dimensional vectors in two image-matching windows by extracting the boundary of the overlapping portion of the two windows and using it to constrain and match multiple windows. Tsai et al. propose wavelet decomposition with circular projection to improve matching accuracy under repeated rotation transforms. Kim et al. present a gray-scale template matching method that is more robust to rotation and scale changes. Yao et al. propose a color texture search method that also accounts for rotation and scaling. Under wide-baseline conditions, however, the latter three methods suffer from low matching quality. Another related study is the work of Tian et al., which performs parameter estimation on a dense deformation field and obtains the minimum transformation distance in the target transformation parameter space. FAST-Match, proposed by Korman et al. in 2013, determines the matching result by sampling to calculate the minimum SAD between pixels of the matching region and uses global template matching to achieve an accelerated search, but a color image must first be converted to grayscale before matching. Building on this method, a later work realizes coarse-to-fine region selection and matching. CFAST-Match, proposed by Jia et al., improves the accuracy of color image template matching by calculating the proportions of different colors in the template region, but some of its parameters must be set empirically; moreover, it uses DBSCAN density clustering, whose long execution time on large images reduces the method's practicality.
Disclosure of Invention
In view of these technical shortcomings, the invention aims to use the nonlinear computing capability of a neural network to express the complex color-similarity relation: a training set for the neural network is constructed on the basis of the CIE chromaticity diagram to classify image colors, and the trained network is used to realize image segmentation. The invention provides a template selection and accelerated matching method for nonlinear color space classification, comprising a model training process and an image matching process, with the following specific steps:
Model training process:
Step 1: collecting training image samples, extracting their CIE chromaticity diagram, acquiring each MacAdam ellipse and recording it as a color region, extracting the RGB value corresponding to each color region, and manually marking the color class number to which it belongs;
Step 2: based on the collected training image samples, training a five-layer feedforward neural network model comprising: an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer;
The input layer has 3 neurons, respectively representing the R, G and B values corresponding to each color region extracted from the training image samples; the output layer represents the color class number;
Image matching process:
Step 4: inputting a pair of color images I_1 and I_2, where I_1 is the template image and I_2 is the image to be matched, and setting a sampling rate α;
Step 5: using the set sampling rate α, performing alternate-point down-sampling on the image pair I_1 and I_2 to obtain I_1^D and I_2^D;
Step 6: using the obtained five-layer feedforward neural network model, processing I_1^D and I_2^D to obtain the classification result sets C_1^D and C_2^D and the numbers of classes n_1^D and n_2^D, where n_1^D and n_2^D are respectively the numbers of elements in the sets C_1^D and C_2^D;
where Cluster denotes classification with the five-layer feedforward neural network model and Desend denotes alternate-point down-sampling, i.e. I_i^D = Desend(I_i, α) and C_i^D = Cluster(I_iR^D, I_iG^D, I_iB^D) for i = 1, 2;
Step 7: establishing an index of similar clusters through the index matrix IM[i][j], and calculating the metric value Score(i) of the similarity probability:
Score(i) = 1 / (count_j(IM[i][j] ≠ 0) + ε)
where IM[i][j] is the index matrix, ε is a real number ensuring that the denominator is not 0, and count() counts the number of nonzero entries;
the index matrix IM[i][j] is calculated by setting IM[i][j] ≠ 0 when the cluster center of the i-th class of C_1^D is similar to the cluster center of the j-th class of C_2^D, and IM[i][j] = 0 otherwise;
and 8: selecting i corresponding to the top k values with the highest Score as a preferred color class number according to Score (i); has already obtained five-layer feedforward neural network modelAndperforming color classification to obtain a classification result, performing region growing on the result to establish a corresponding relation between a color class number and a region, and corresponding a template region in a template image and a region to be matched in an image to be matched according to the preferred color class number;
and step 9: calculating the similarity of the template region in the template image and the region to be matched in the image to be matched to obtain the matching relation between the template region in the template image and the region to be matched in the image to be matched: order imageAndthe regional similarity therebetween is Δ T (I D 1 ,I D 2 ) T isPixel p toAnd (3) affine transformation matrix between the middle pixels, the similarity calculation method between the regions is as follows:the calculated values of the corresponding similarity of all affine transformations T are calculated by the above formula T (I D 1 ,I D 2 ) Calculate the value Δ at all similarities T (I D 1 ,I D 2 ) Taking the maximum value as the final result, wherein the result shows that the template region in the template image and the region to be matched in the image to be matched are the most matched.
Beneficial technical effects:
The invention classifies the color space using a neural network and provides a method for determining the matching template and the region to be matched based on the classification result. Experimental results show that, compared with existing methods, the method achieves a higher registration rate and execution speed and addresses several problems of existing matching methods: 1) the color distance in color space measured by a linear model is inconsistent with human visual judgment; 2) the template region otherwise has to be selected manually, whereas here it is selected automatically; 3) existing template matching methods are inefficient to execute.
Drawings
FIG. 1 is a flowchart of a template selection and accelerated matching method for nonlinear color space classification according to an embodiment of the present invention;
FIG. 2 illustrates the problem with linear color distance calculation; FIG. 2(a) is an image of RGB color values (249, 255, 121); FIG. 2(b) is an image of RGB color values (221, 255, 121); and FIG. 2(c) is an image of RGB color values (212, 255, 81);
FIG. 3 shows the MacAdam ellipses on the CIE chromaticity diagram according to an embodiment of the present invention;
FIG. 4 is a CIE chromaticity diagram according to an embodiment of the present invention;
FIG. 5 is a five-layer neural network model according to an embodiment of the present invention;
FIG. 6 is a comparison of experimental results for examples of the present invention; wherein, fig. 6 (a) and fig. 6 (c) are the same template region; FIG. 6 (b) is an experimental result obtained by the method of the present invention; FIG. 6 (d) is the experimental results obtained with CFAST; FIG. 6 (e) is an enlarged view of experimental results obtained by the method of the present invention; FIG. 6 (f) is an enlarged view of experimental results obtained using CFAST;
FIG. 7 illustrates template matching location selection according to an embodiment of the present invention; among them, fig. 7 (a) is an original image; fig. 7 (b) is an image obtained by affine transformation;
FIG. 8 shows the experimental results 1 of the examples of the present invention; wherein, fig. 8 (a) is the down-sampled matching image; FIG. 8 (b) is a down-sampled target image; FIG. 8 (c) is a score map; FIG. 8 (d) is a high score region in a matching image; FIG. 8 (e) is a possible matching region in the target image corresponding to a high score region; FIG. 8 (f) is a possible matching region in the target image corresponding to a high score region; FIG. 8 (g) is a possible matching region in the target image corresponding to a high score region; FIG. 8 (h) is a possible matching region in the target image corresponding to a high score region; FIG. 8 (i) is a template region automatically selected; FIG. 8 (j) is the matching result; FIG. 8 (k) is the corresponding enlarged region in (i) and (j);
FIG. 9 shows the experimental result 2 of the example of the present invention; wherein fig. 9 (a) and 9 (b) are down-sampled image pairs; FIG. 9 (c) is a score plot; FIG. 9 (d) is a diagram of 4 cluster regions selected by the score map; fig. 9 (e) is a clustering region where the cluster centers of the selected clustering region 1 in the image to be matched are similar; fig. 9 (f) is a cluster region where the cluster centers of the selected cluster region 2 in the image to be matched are similar; fig. 9 (g) is a cluster region where the cluster centers of the selected cluster region 3 in the image to be matched are similar; fig. 9 (h) is a clustering region where the cluster centers of the selected clustering region 4 in the image to be matched are similar; FIG. 9 (i) is a template selection position determined by the clustering region of FIG. 9 (d); FIG. 9 (j) is the matching result obtained by the template of FIG. 9 (i); fig. 9 (k) is a corresponding enlarged region of (i) and (j).
Detailed Description
The invention is further described with reference to the accompanying drawings and specific embodiments. The invention provides a template selection and accelerated matching method for nonlinear color space classification, comprising a model training process and an image matching process, as shown in FIG. 1, with the following specific steps:
Model training process:
Step 1: collecting training image samples, extracting their CIE chromaticity diagram, acquiring each MacAdam ellipse and recording it as a color region, extracting the RGB value corresponding to each color region, and manually marking the color class number of the color region;
the color image has a richer information space than the gray image, and most of the existing color image matching methods usually use a linear formula to calculate the similarity between colors, for example, CFAST uses the euclidean distance between RGB to calculate the similarity F (I) between two pixels 1 (p),I 2 (T(p))):
Dist (, in the formula) is used for calculating the similarity between two input parameters, Δ s (p) is a score coefficient of a region where p is located, and r is a distance threshold radius.Are respectively I 1 The channel values of R, G and B at the position of middle p,are respectively I 2 The R, G and B channel values at the position of middle T (p), and C (I1 (p)) is I 1 The cluster center RGB value at the position of p in the middle, I2 (T (p)) is I 2 I 2 Cluster center value at the middle T (p) position. The method adopts the Euclidean distance of the RGB space to calculate the similarity between colors, no matter what color space: RGB, lab and HSV all relate to calculating similarity between colors, and Euclidean distance and Manhattan distance are common linear methods. TheseThe method has the problem that the calculation result is inconsistent with the observation result when the color distance is calculated, taking RGB and LAB color space as an example: 1) The colors represented by the RGB color space differ from the colors recognized by the human visual system HVS such that the colors with the smallest distance are not necessarily similar; 2) Colors having the same distance in the LAB color space are not necessarily similar. As shown in fig. 2, the RGB color values in fig. 2 (a), 2 (B), and 2 (C) are (249, 255, 121), (221, 255, 121), (212, 255, 81), respectively, which correspond to the three points a, B, and C in fig. 3. Wherein the euclidean distance between fig. 2 (a) and fig. 2 (b) is 28, the manhattan distance is 28, the euclidean distance between fig. 2 (b) and fig. 2 (c) is 41, and the manhattan distance is 49, and the calculation results in greater similarity between fig. 2 (a) and fig. 2 (b) in terms of both the euclidean distance and the manhattan distance, but from the human visual point of view, fig. 2 (b) and fig. 2 (c) are more similar, while fig. 2 (a) and fig. 2 (b) are not.
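These distances can be verified directly; a minimal Python check of the numbers above:

```python
import math

# RGB values of FIGS. 2(a), 2(b), 2(c), i.e. points A, B, C in FIG. 3.
a, b, c = (249, 255, 121), (221, 255, 121), (212, 255, 81)

def euclidean(p, q):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def manhattan(p, q):
    return sum(abs(x - y) for x, y in zip(p, q))

print(euclidean(a, b), manhattan(a, b))  # 28.0 28
print(euclidean(b, c), manhattan(b, c))  # 41.0 49: (a, b) is "closer", yet (b, c) look more alike
```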
The similarity between pixel colors is not a simple linear relation: the human eye perceives differences in spectral colors nonuniformly. The MacAdam ellipse determines its boundary according to color tolerance, providing a guide to how accurately ordinary human vision distinguishes similar colors. The MacAdam ellipses on the CIE chromaticity diagram are shown in FIG. 3, drawn at ten times their actual size. The center point of each ellipse represents the standard color under test; the elliptical area represents the range of colors that the human eye cannot distinguish from the color at the ellipse center, while the periphery consists of colors the human eye can distinguish from the center color. The sizes of the MacAdam ellipses differ across regions because the human eye's color discrimination differs across colors. Since the size and direction of the ellipses vary with the center position, color difference cannot be measured by Euclidean or Manhattan distance in this space. The invention therefore uses the nonlinear computing capability of a neural network to express this complex relation: a training set for the neural network is constructed on the basis of the CIE chromaticity diagram to classify image colors, and the trained network is used to realize image segmentation.
Step 2: based on the collected training image samples, training a five-layer feedforward neural network model comprising: an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer, as shown in FIG. 5;
The input layer has 3 neurons, respectively representing the R, G and B values corresponding to each color region extracted from the training image samples; the output layer represents the color class number;
The neural network training samples come from the CIE chromaticity diagram: when collecting the training samples, the RGB value corresponding to each color region is extracted and the color class number to which the region belongs is manually labeled, as shown in FIG. 4. Since the diagram does not cover black and gray areas, the label set is expanded to 25 different color classes, where class number 24 denotes gray and class number 25 denotes black. A five-layer feedforward neural network model is used for training, as shown in FIG. 5: the input layer has 3 neurons representing the input R, G and B values; the first hidden layer contains 51 neurons; the second hidden layer contains 60 neurons; the third hidden layer contains 42 neurons; and the number of output neurons is 25, representing the color class number.
For a color image I, the R, G and B data are input into the neural network for color classification, yielding the classification result set C over all pixels:
C_I = Cluster(I_R, I_G, I_B)
where Cluster() performs the classification — here by the neural network — and I_R, I_G and I_B are respectively the R, G and B channel values of image I. Template matching is based on calculating distances between pixels, and the overall similarity is obtained by accumulating the similarity of all pixels over the whole template region; because the similarity between colors is nonlinear, this can cause mismatches. The present method instead classifies the whole image pair to obtain a segmentation result: each position in the result matrix holds one of the class numbers 1-25, and whether the colors of two pixels are consistent is determined by comparing whether their class numbers are equal. This avoids the distance-based calculation and improves the registration rate. Let the region similarity between images I_1 and I_2 be Δ_T(I_1, I_2), where T is the affine transformation matrix from pixels p in I_1 to pixels in I_2; the similarity between the regions is calculated by counting, over all pixels p, the agreement between the class numbers C_1(p) and C_2(T(p)).
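Applying the classifier to every pixel gives the label map C_I; a sketch reusing the `model` above (flattening the image into a pixel batch is an implementation choice):

```python
import numpy as np
import torch

def cluster(image_rgb: np.ndarray, model: torch.nn.Module) -> np.ndarray:
    """image_rgb: (H, W, 3) uint8 image. Returns an (H, W) map of class numbers 1..25."""
    h, w, _ = image_rgb.shape
    pixels = torch.as_tensor(image_rgb.reshape(-1, 3), dtype=torch.float32) / 255.0
    with torch.no_grad():
        labels = model(pixels).argmax(dim=1) + 1
    return labels.reshape(h, w).numpy()
```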
and selecting i corresponding to the top k values with the highest Score as the preferred classification number according to the Score (i). Since the neural network already willAndcarrying out color classification to obtain a classification result, carrying out region growing on the result to establish a corresponding relation between the classification number and the region, and obtaining a template region and a region to be matched according to the preferred classification number; order imageAndthe regional similarity therebetween is Δ T (I D 1 ,I D 2 ) T isPixel p toAnd (3) affine transformation matrix between the middle pixels, the similarity calculation method between the regions is as follows:the degree of matching is determined by comparing the similarity between the regions.
FIGS. 6(a) and 6(c) show the same template region, used as the input of the present method and of the CFAST method respectively; the region contains rich color information. FIG. 6(b) is the experimental result obtained by the present method, and FIG. 6(e) is the corresponding partial enlargement. FIG. 6(d) is the experimental result obtained with CFAST, and FIG. 6(f) is the corresponding partial enlargement. The experimental results show that, compared with the CFAST method, the method of the invention achieves higher matching accuracy.
CFAST applies density clustering to the entire image and takes a long time to execute on high-resolution images. Define two color images I_1 and I_2 of sizes n_1 × n_1 and n_2 × n_2 respectively, and let Ω be the set of affine transformations from I_1 to I_2. When processing a large image, selecting a suitable template and locating the position to be matched improves the matching accuracy and shortens the matching time. For example, the shape in FIG. 7(a) is composed mostly of the RGB value (255, 0) together with the rarely occurring RGB value (119, 117, 162). FIG. 7(b) is the image obtained from FIG. 7(a) by affine transformation; selecting as the template an area whose colors or color combinations appear as rarely as possible in the target image makes it easier to improve the matching accuracy. If the search area to be matched is a local rectangular area n_2′ × n_2′ of the image, then since n_2 × n_2 >> n_2′ × n_2′, reducing Ω effectively increases the search speed. The purpose is therefore: 1) to provide the location of the best template selection; 2) to provide the location of the template selection and determine the matching region of the template in the target image.
The input color image I is down-sampled to obtain the image I^D, which is then clustered to obtain the set C of classification results over all pixels;
where Cluster denotes classification with the five-layer feedforward neural network model and Desend denotes alternate-point down-sampling, i.e. I^D = Desend(I, α) and C = Cluster(I^D_R, I^D_G, I^D_B).
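A sketch of Desend under the assumption that the sampling rate α maps to a decimation step k = round(1/α); the patent states only that sampling is taken at alternate points:

```python
import numpy as np

def desend(image: np.ndarray, alpha: float) -> np.ndarray:
    """Alternate-point down-sampling: keep every k-th pixel along each axis."""
    k = max(1, round(1.0 / alpha))  # step derived from the sampling rate (assumed mapping)
    return image[::k, ::k]

img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
img_d = desend(img, 0.5)  # shape (240, 320, 3): every other row and column kept
```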
For the two images I_1 and I_2, the above formula is applied to obtain the sets C_1^D and C_2^D, and the center of every cluster in each set is then calculated. For each cluster center of C_1^D, the number of similar cluster centers in C_2^D is counted, and the inverse of this count is taken as the measure Score of the probability of similarity in the target image. The Top-k clusters selected by Score are used as templates to improve the accuracy of template matching:
Score(i) = 1 / (count_j(IM[i][j] ≠ 0) + ε)
where IM[i][j] is the index matrix recording which cluster centers of C_1^D are similar to which clusters in C_2^D; ε is a small value used to ensure that the denominator is not 0; and count(IM[i][j] ≠ 0) computes the number of clusters in C_2^D similar to cluster i of C_1^D. Score(i) is the score table: the higher the score, the smaller the probability that the color or color combination appears in the target.
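A sketch of the score computation; the similarity test between two cluster centers is an assumption here (same class number from the trained network), and ε = 0.001 follows the embodiment:

```python
import numpy as np

def score_table(centers1, centers2, classify, eps=1e-3):
    """centers1: (n1, 3) cluster-center RGB values of I1_D; centers2: (n2, 3) of I2_D.
    classify: maps an RGB triple to a color class number (e.g. the trained network).
    Returns Score(i) = 1 / (count_j(IM[i][j] != 0) + eps)."""
    l1 = np.array([classify(c) for c in centers1])
    l2 = np.array([classify(c) for c in centers2])
    im = (l1[:, None] == l2[None, :]).astype(int)  # index matrix IM[i][j]
    return 1.0 / (im.sum(axis=1) + eps)            # few matches in target -> high score

# Top-k preferred classes: indices of the k highest scores.
# top_k = np.argsort(score_table(c1, c2, classify))[::-1][:k]
```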
g × h → Σ_i g_i′ × h_i′ (i = 1..n)
The Top-k clusters are selected by Score(i). In the above formula, the search area is reduced from the g × h of the whole image to several g′ × h′ search areas, and the affine transform set Ω is reduced accordingly. The specific steps are as follows:
Image matching process:
Step 4: inputting a pair of color images I_1 and I_2 and setting a sampling rate α;
Step 5: using the set sampling rate α, performing alternate-point down-sampling on the image pair I_1 and I_2 to obtain I_1^D and I_2^D;
Step 6: using the trained neural network model, processing I_1^D and I_2^D to obtain the classification result sets C_1^D and C_2^D and the numbers of classes n_1^D and n_2^D, where n_1^D and n_2^D are respectively the numbers of elements in the sets C_1^D and C_2^D;
where Cluster denotes classification with the five-layer feedforward neural network model and Desend denotes alternate-point down-sampling, i.e. I_i^D = Desend(I_i, α) and C_i^D = Cluster(I_iR^D, I_iG^D, I_iB^D) for i = 1, 2;
Step 7: establishing an index of similar clusters through the index matrix IM[i][j], and calculating the metric value Score(i) of the similarity probability:
Score(i) = 1 / (count_j(IM[i][j] ≠ 0) + ε)
where IM[i][j] is the index matrix; ε is a small value used to ensure that the denominator is not 0, taken as 0.001 in this embodiment; and count() counts the number of nonzero entries;
the index matrix IM[i][j] is calculated by setting IM[i][j] ≠ 0 when the cluster center of the i-th class of C_1^D is similar to the cluster center of the j-th class of C_2^D, and IM[i][j] = 0 otherwise;
Step 8: according to Score(i), selecting the i corresponding to the top k highest scores as the preferred color class numbers; since the obtained five-layer feedforward neural network model has already color-classified I_1^D and I_2^D, performing region growing on the classification result to establish the correspondence between color class numbers and regions, and determining the template region in the template image and the region to be matched in the image to be matched according to the preferred color class numbers;
Step 9: calculating the similarity between the template region in the template image and the region to be matched in the image to be matched to obtain their matching relation: let the region similarity between images I_1^D and I_2^D be Δ_T(I_1^D, I_2^D), where T is the affine transformation matrix from pixels p in I_1^D to pixels in I_2^D; the similarity between regions is obtained by counting, over all pixels p of the template region, the agreement between the class numbers C_1^D(p) and C_2^D(T(p)); the similarity values Δ_T(I_1^D, I_2^D) corresponding to all affine transformations T are calculated by this formula, and the maximum over all Δ_T(I_1^D, I_2^D) is taken as the final result, which indicates the template region in the template image and the region to be matched in the image to be matched that match best.
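A sketch of step 9 under the interpretation above (per-pixel similarity is agreement of class numbers); the rounding of transformed coordinates and the enumeration of candidate transforms are assumptions:

```python
import numpy as np

def delta_t(labels1, labels2, T, region):
    """Region similarity under affine T: fraction of template pixels whose
    class number in labels1 equals the class number at T(p) in labels2.

    labels1, labels2: (H, W) class-number maps of I1_D and I2_D.
    T: 2x3 affine matrix mapping homogeneous (x, y, 1) in I1_D into I2_D.
    region: iterable of (x, y) pixels of the template region in I1_D."""
    h, w = labels2.shape
    hits, total = 0, 0
    for x, y in region:
        xp, yp = np.rint(T @ np.array([x, y, 1.0])).astype(int)
        if 0 <= xp < w and 0 <= yp < h:
            hits += int(labels1[y, x] == labels2[yp, xp])
        total += 1
    return hits / max(total, 1)

# The matching relation is given by the T maximizing the similarity:
# best_T = max(candidate_transforms, key=lambda T: delta_t(L1, L2, T, region))
```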
FIGS. 8(a) and 8(b) are the down-sampled image pair, and FIG. 8(c) is the score map, obtained with the calculation method of step 7. FIG. 8(d) shows the 4 cluster regions selected by the score map. FIGS. 8(e), 8(f), 8(g) and 8(h) are, respectively, the cluster regions in the image to be matched whose cluster centers are similar to those of the selected cluster regions. As can be seen from the figure, the template regions selected in the image to be matched all lie within the cluster regions of the target image, so during matching the final matching result only needs to be searched within those cluster regions. FIG. 8(i) is the template selection position determined by the cluster regions of FIG. 8(d), and FIG. 8(j) is the matching result obtained with the template of FIG. 8(i). FIG. 8(k) shows the corresponding enlarged regions of (i) and (j); I, II, III and IV are the corresponding region numbers.
FIGS. 9(a) and 9(b) are the down-sampled image pair, and FIG. 9(c) is the score map, obtained with the calculation method of step 7. FIG. 9(d) shows the 4 cluster regions selected by the score map. FIGS. 9(e), 9(f), 9(g) and 9(h) are, respectively, the cluster regions in the image to be matched whose cluster centers are similar to those of the selected cluster regions. As can be seen from the figure, the template regions selected in the image to be matched all lie within the cluster regions of the target image, so during matching the final matching result only needs to be searched within those cluster regions. FIG. 9(i) is the template selection position determined by the cluster regions of FIG. 9(d), and FIG. 9(j) is the matching result obtained with the template of FIG. 9(i). FIG. 9(k) shows the corresponding enlarged regions of (i) and (j); I, II, III and IV are the corresponding region numbers.
Claims (2)
1. A template selection and accelerated matching method for nonlinear color space classification is characterized by comprising a model training and image matching process, and specifically comprising the following steps:
A model training process:
Step 1: collecting training image samples, extracting their CIE chromaticity diagram, acquiring each MacAdam ellipse and recording it as a color region, extracting the RGB value corresponding to each color region, and manually marking the color class number to which it belongs;
Step 2: based on the collected training image samples, training a five-layer feedforward neural network model comprising: an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer;
The input layer has 3 neurons, respectively representing the R, G and B values corresponding to each color region extracted from the training image samples; the output layer represents the color class number;
An image matching process:
Step 4: inputting a pair of color images I_1 and I_2, where I_1 is the template image and I_2 is the image to be matched, and setting a sampling rate α;
Step 5: using the set sampling rate α, performing alternate-point down-sampling on the image pair I_1 and I_2 to obtain I_1^D and I_2^D;
Step 6: using the obtained five-layer feedforward neural network model, processing I_1^D and I_2^D to obtain the classification result sets C_1^D and C_2^D and the numbers of classes n_1^D and n_2^D, where n_1^D and n_2^D are respectively the numbers of elements in the sets C_1^D and C_2^D;
where Cluster denotes classification with the five-layer feedforward neural network model and Desend denotes alternate-point down-sampling, i.e. I_i^D = Desend(I_i, α) and C_i^D = Cluster(I_iR^D, I_iG^D, I_iB^D) for i = 1, 2;
Step 7: establishing an index of similar clusters through the index matrix IM[i][j], and calculating the metric value Score(i) of the similarity probability:
Score(i) = 1 / (count_j(IM[i][j] ≠ 0) + ε)
where IM[i][j] is the index matrix, ε is a real number ensuring that the denominator is not 0, and count() counts the number of nonzero entries;
the index matrix IM[i][j] is calculated by setting IM[i][j] ≠ 0 when the cluster center of the i-th class of C_1^D is similar to the cluster center of the j-th class of C_2^D, and IM[i][j] = 0 otherwise;
Step 8: according to Score(i), selecting the i corresponding to the top k highest scores as the preferred color class numbers, and determining the template region in the template image and the region to be matched in the image to be matched according to the preferred color class numbers;
Step 9: calculating the similarity between the template region in the template image and the region to be matched in the image to be matched to obtain the matching relation between the template region in the template image and the region to be matched in the image to be matched.
2. The template selection and accelerated matching method for nonlinear color space classification according to claim 1, wherein the specific process of step 9 is as follows: let the region similarity between images I_1^D and I_2^D be Δ_T(I_1^D, I_2^D), where T is the affine transformation matrix from pixels p in I_1^D to pixels in I_2^D; the similarity between regions is obtained by counting, over all pixels p of the template region, the agreement between the class numbers C_1^D(p) and C_2^D(T(p)); the similarity values Δ_T(I_1^D, I_2^D) corresponding to all affine transformations T are calculated by this formula, and the maximum over all Δ_T(I_1^D, I_2^D) is taken as the final result, which indicates the template region in the template image and the region to be matched in the image to be matched that match best.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910105261.2A CN109886325B (en) | 2019-02-01 | 2019-02-01 | Template selection and accelerated matching method for nonlinear color space classification |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910105261.2A CN109886325B (en) | 2019-02-01 | 2019-02-01 | Template selection and accelerated matching method for nonlinear color space classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109886325A CN109886325A (en) | 2019-06-14 |
CN109886325B true CN109886325B (en) | 2022-11-29 |
Family
ID=66927930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910105261.2A Active CN109886325B (en) | 2019-02-01 | 2019-02-01 | Template selection and accelerated matching method for nonlinear color space classification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109886325B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583193B (en) * | 2020-04-21 | 2021-04-23 | 广州番禺职业技术学院 | Pistachio nut framework extraction device based on geometric contour template matching and algorithm thereof |
CN113408365B (en) * | 2021-05-26 | 2023-09-08 | 广东能源集团科学技术研究院有限公司 | Safety helmet identification method and device under complex scene |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204618A (en) * | 2016-07-20 | 2016-12-07 | 南京文采科技有限责任公司 | Product surface of package defects detection based on machine vision and sorting technique |
CN106355607A (en) * | 2016-08-12 | 2017-01-25 | 辽宁工程技术大学 | Wide-baseline color image template matching method |
CN107122776A (en) * | 2017-04-14 | 2017-09-01 | 重庆邮电大学 | A kind of road traffic sign detection and recognition methods based on convolutional neural networks |
CN107133943A (en) * | 2017-04-26 | 2017-09-05 | 贵州电网有限责任公司输电运行检修分公司 | A kind of visible detection method of stockbridge damper defects detection |
CN108229561A (en) * | 2018-01-03 | 2018-06-29 | 北京先见科技有限公司 | Particle product defect detection method based on deep learning |
CN108830912A (en) * | 2018-05-04 | 2018-11-16 | 北京航空航天大学 | A kind of interactive grayscale image color method of depth characteristic confrontation type study |
CN108876797A (en) * | 2018-06-08 | 2018-11-23 | 长安大学 | A kind of image segmentation system and method based on Spiking-SOM neural network clustering |
Non-Patent Citations (4)
Title |
---|
Baykan N A, et al. Mineral identification using color spaces and artificial neural networks. Computers & Geosciences, 2010, 36(1): 91-97. *
N. Singh, et al. Template matching for detection & recognition of frontal view of human face through Matlab. 2017 International Conference on Information Communication and Embedded Systems (ICICES), 2017: 1-7. *
Jia Di, et al. Template selection and matching for image-pair matching. Journal of Image and Graphics, Nov. 2017, 22(11): 1512-1520. *
Hao Bowen, et al. Traffic sign detection method based on color space and template matching. Intelligent Computer and Applications, 2016, 6(4): 20-22. *
Also Published As
Publication number | Publication date |
---|---|
CN109886325A (en) | 2019-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109800824B (en) | Pipeline defect identification method based on computer vision and machine learning | |
CN111340824B (en) | Image feature segmentation method based on data mining | |
CN107610114B (en) | optical satellite remote sensing image cloud and snow fog detection method based on support vector machine | |
CN110443128B (en) | Finger vein identification method based on SURF feature point accurate matching | |
CN112150493B (en) | Semantic guidance-based screen area detection method in natural scene | |
CN111126240B (en) | Three-channel feature fusion face recognition method | |
CN110348319A (en) | A kind of face method for anti-counterfeit merged based on face depth information and edge image | |
CN106023151B (en) | Tongue object detection method under a kind of open environment | |
CN102103690A (en) | Method for automatically portioning hair area | |
CN106529532A (en) | License plate identification system based on integral feature channels and gray projection | |
CN109740572A (en) | A kind of human face in-vivo detection method based on partial color textural characteristics | |
CN106485253B (en) | A kind of pedestrian of maximum particle size structured descriptor discrimination method again | |
CN111310768B (en) | Saliency target detection method based on robustness background prior and global information | |
CN107045621A (en) | Facial expression recognizing method based on LBP and LDA | |
CN113450369B (en) | Classroom analysis system and method based on face recognition technology | |
CN109815923B (en) | Needle mushroom head sorting and identifying method based on LBP (local binary pattern) features and deep learning | |
CN112270317A (en) | Traditional digital water meter reading identification method based on deep learning and frame difference method | |
CN111009005A (en) | Scene classification point cloud rough registration method combining geometric information and photometric information | |
CN109886325B (en) | Template selection and accelerated matching method for nonlinear color space classification | |
CN111210447B (en) | Hematoxylin-eosin staining pathological image hierarchical segmentation method and terminal | |
Paul et al. | Rotation invariant multiview face detection using skin color regressive model and support vector regression | |
CN111339932A (en) | Palm print image preprocessing method and system | |
CN106295478A (en) | A kind of image characteristic extracting method and device | |
CN107679467A (en) | A kind of pedestrian's weight recognizer implementation method based on HSV and SDALF | |
CN109886320B (en) | Human femoral X-ray intelligent recognition method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |