WO2017121018A1 - Method and apparatus for two-dimensional code image processing, terminal, and storage medium - Google Patents
Method and apparatus for two-dimensional code image processing, terminal, and storage medium
- Publication number
- WO2017121018A1 (PCT/CN2016/075259)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional code
- image
- code image
- baseband layer
- pixel
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
Definitions
- the present invention relates to the field of computer image processing, and in particular, to a method and apparatus for processing a two-dimensional code image, a terminal, and a storage medium.
- With the development of computer technology, two-dimensional codes are used in ever wider areas of daily life. More and more client software installed on terminals such as computers, mobile phones, and tablet computers integrates two-dimensional code scanning functionality that identifies the two-dimensional code and obtains the information it carries.
- A two-dimensional code usually records its data symbols as a black-and-white binary pattern. In some specific situations, however, the display method used degrades the grayscale contrast, sharpness, and pixel resolution of the code to some extent. For example, when a low-pixel light-emitting diode (LED) dot-matrix screen outputs a two-dimensional code, the pattern is affected by the brightness of the LEDs, so the black-and-white contrast is not distinct; and because the size of each LED lamp bead is fixed, outputting a high-pixel two-dimensional code pattern would require a very large number of LED lamp beads. To control display cost, a small number of LEDs is often used, so the output two-dimensional code has a low pixel resolution. Existing two-dimensional code recognition usually performs only simple preprocessing, so scanning such codes becomes very slow or the codes cannot be decoded correctly.
- An embodiment of the present invention provides a method for two-dimensional code image processing, where the method includes: converting an acquired original two-dimensional code image into a grayscale two-dimensional code image; filtering the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; and performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- In one embodiment, the filtering is nonlinear bilateral filtering, and the step of filtering the grayscale two-dimensional code image to obtain the baseband layer image of the grayscale two-dimensional code image includes: taking a template corresponding to the size of the grayscale two-dimensional code image as a neighborhood, acquiring the neighboring pixels within the neighborhood of the current pixel to be filtered; calculating, according to the neighboring pixels, the spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain and the intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain; calculating, according to the neighboring pixels, the spatial standard deviation parameter, and the intensity standard deviation parameter, the normalization coefficient of the current pixel to be filtered using the Gaussian kernel function of the spatial domain and the Gaussian kernel function of the intensity domain; and calculating the baseband layer pixel value of the current pixel to be filtered according to the neighboring pixels, the normalization coefficient, the spatial standard deviation parameter, the intensity standard deviation parameter, the Gaussian kernel function of the spatial domain, and the Gaussian kernel function of the intensity domain.
- In one embodiment, the pixels used when calculating the spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain are the pixels on a diagonal of the template, and the pixels used when calculating the intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain are all the pixels in the template.
- In one embodiment, before the step of performing histogram statistics on the baseband layer image, the method further includes performing adaptive Gaussian filtering on the baseband layer image to obtain a filtered baseband layer image, including: calculating the weighted interpolation E(i,j) of the baseband layer image and the grayscale two-dimensional code image according to E(i,j) = k(i,j)(Iin(i,j) - Ibf(i,j)), where i,j are the position coordinates of a pixel, Iin(i,j) is the pixel value of the grayscale two-dimensional code image, Ibf(i,j) is the pixel value of the baseband layer image, and the coefficient k(i,j) of the weighted interpolation is the normalization factor.
- In one embodiment, the step of selecting the local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics includes: acquiring, within a preset effective gray-level range, the highest high-gray statistical peak and the highest low-gray statistical peak in the histogram statistics; determining the histogram statistics between the highest high-gray statistical peak and the highest low-gray statistical peak as an effective dynamic range; and taking the gray level with the fewest counted pixels within the effective dynamic range as the local dynamic threshold.
- an embodiment of the present invention provides an apparatus for processing a two-dimensional code image, where the apparatus includes:
- a grayscale conversion module configured to convert the acquired original two-dimensional code image into a grayscale two-dimensional code image
- the baseband layer separation module is configured to filter the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image;
- a binarization module configured to perform histogram statistics on the baseband layer image, select a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and binarize the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- In one embodiment, the filtering is nonlinear bilateral filtering and the baseband layer separation module includes:
- a neighboring pixel acquisition unit configured to take a template corresponding to the size of the grayscale two-dimensional code image as a neighborhood and acquire the neighboring pixels within the neighborhood of the current pixel to be filtered;
- a standard deviation parameter calculation unit configured to calculate, according to the neighboring pixels, a spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain and an intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain;
- a normalization coefficient calculation unit configured to calculate, according to the neighboring pixels, the spatial standard deviation parameter, and the intensity standard deviation parameter, the normalization coefficient of the pixel to be filtered using the Gaussian kernel function of the spatial domain and the Gaussian kernel function of the intensity domain;
- a baseband layer pixel calculation unit configured to calculate the baseband layer pixel value of the current pixel to be filtered according to the neighboring pixels, the normalization coefficient, the spatial standard deviation parameter, the intensity standard deviation parameter, the Gaussian kernel function of the spatial domain, and the Gaussian kernel function of the intensity domain.
- In one embodiment, when the standard deviation parameter calculation unit calculates the spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain, the pixels used are the pixels on a diagonal of the template; when it calculates the intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain, the pixels used are all the pixels in the template.
- the apparatus further includes:
- the Gaussian filtering module is configured to perform adaptive Gaussian filtering on the baseband layer image to obtain a filtered baseband layer image, and includes:
- a weighted interpolation calculation unit configured to calculate the weighted interpolation E(i,j) of the baseband layer image and the grayscale two-dimensional code image according to E(i,j) = k(i,j)(Iin(i,j) - Ibf(i,j)), where i,j are the position coordinates of a pixel, Iin(i,j) is the pixel value of the grayscale two-dimensional code image, Ibf(i,j) is the pixel value of the baseband layer image, and the coefficient k(i,j) of the weighted interpolation is the normalization factor;
- a variance calculation unit configured to approximate the Laplacian ∇²Ibf(i,j) of the baseband layer image using the Laplacian operator, and to calculate the variance σ²(i,j) of the adaptive Gaussian filter from the weighted interpolation E(i,j) and ∇²Ibf(i,j) by the formula σ²(i,j) = 2E(i,j)/∇²Ibf(i,j);
- a filtering unit configured to obtain the filtered baseband layer image pixel value Ig(i,j) from ∇²Ibf(i,j) and the variance σ²(i,j) of the adaptive Gaussian filter by the formula Ig(i,j) = Ibf(i,j) + (σ²(i,j)/2)·∇²Ibf(i,j).
- the binarization module includes:
- a statistical peak acquisition unit configured to acquire the highest high-gray statistical peak and the highest low-gray statistical peak in the histogram statistics within a preset effective gray-level range;
- a local dynamic threshold determining unit configured to determine the histogram statistics between the highest high-gray statistical peak and the highest low-gray statistical peak as the effective dynamic range, and to take the gray level with the fewest counted pixels within the effective dynamic range as the local dynamic threshold.
- An embodiment of the present invention provides a computer storage medium, where the computer storage medium stores computer executable instructions for performing the method of two-dimensional code image processing provided by the first aspect of the present invention.
- an embodiment of the present invention provides a terminal, where the terminal includes:
- a storage medium configured to store computer executable instructions
- a processor configured to execute the computer executable instructions stored on the storage medium, the computer executable instructions comprising: converting an acquired original two-dimensional code image into a grayscale two-dimensional code image; filtering the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- an embodiment of the present invention provides a terminal, where the terminal includes:
- a processor configured to convert an acquired original two-dimensional code image into a grayscale two-dimensional code image; filter the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; perform histogram statistics on the baseband layer image; select a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics; and binarize the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image;
- a display device configured to display the two-dimensional code image.
- With the above method and apparatus for two-dimensional code image processing, terminal, and storage medium, the acquired original two-dimensional code image is converted into a grayscale two-dimensional code image, the grayscale two-dimensional code image is filtered to obtain the baseband layer image of the grayscale two-dimensional code image, histogram statistics are performed on the baseband layer image, a local dynamic threshold is selected according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and the baseband layer image is binarized according to the local dynamic threshold to obtain a binary two-dimensional code image.
- Noise and minute image detail information are separated into the detail layer, and subsequent recognition of the two-dimensional code image uses only the baseband layer image, which avoids interference from noise and the like with the recognition of the two-dimensional code image.
- At the same time, the local dynamic threshold is used for binarization, so that two-dimensional code images affected by different illumination obtain different, suitable thresholds from the histogram statistics, making the binarized image closer to the original two-dimensional code image and improving the accuracy of subsequent recognition of the two-dimensional code image.
- FIG. 1 is a flow chart of a method for processing a two-dimensional code image in an embodiment
- FIG. 2 is a flow chart of obtaining a baseband layer image of a grayscale two-dimensional code image in one embodiment
- FIG. 3 is a flow chart of performing adaptive Gaussian filtering on a baseband layer image to obtain a filtered baseband layer image in an embodiment
- FIG. 4 is a flow chart of selecting a local dynamic threshold in one embodiment
- FIG. 5 is a flow chart of obtaining the highest peak of high gray scale statistics and the highest peak of low gray scale statistics in one embodiment
- FIG. 6 is a schematic diagram of a histogram statistical result in one embodiment
- FIG. 7 is a schematic diagram of an original two-dimensional code image in one embodiment
- FIG. 8 is a schematic diagram of the detail layer of a two-dimensional code image in one embodiment
- FIG. 9 is a schematic diagram of a baseband layer of a two-dimensional code image in one embodiment
- FIG. 10 is a schematic diagram of a two-dimensional code image after binarization in one embodiment
- FIG. 11 is a structural block diagram of an apparatus for processing a two-dimensional code image in an embodiment
- FIG. 12 is a structural block diagram of a baseband layer separation module in an embodiment
- FIG. 13 is a structural block diagram of an apparatus for processing a two-dimensional code image in another embodiment
- FIG. 14 is a structural block diagram of a binarization module in one embodiment.
- In one embodiment, as shown in FIG. 1, a method for two-dimensional code image processing is provided, including:
- Step S110: converting the acquired original two-dimensional code image into a grayscale two-dimensional code image.
- the original two-dimensional code may be a variety of two-dimensional codes, such as a QR code.
- the display mode of the original two-dimensional code image can be divided into various types, such as through paper, network, TV screen, projection display by LED dot matrix screen, and the like.
- The original two-dimensional code image can be captured through the camera of a smart device such as a mobile phone terminal, but the captured original two-dimensional code image is generally a color image; even when a black-and-white pattern is photographed, the resulting picture is still an RGB image. Since the information carried by the two-dimensional code can be characterized by black and white alone, the color image must be converted into a grayscale image.
- The original two-dimensional code color image captured by the camera is generally encoded in RGB space, with each pixel using one byte for each of the three RGB primaries. The Y component obtained by converting the RGB space into YUV space represents the brightness of the pixel and can be used as the gray value, thereby completing the conversion of the original two-dimensional code image into a grayscale two-dimensional code image.
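- The following is a minimal sketch of this conversion step, assuming an 8-bit RGB input and using the common BT.601 luma weights for the Y component; the exact RGB-to-YUV coefficients are not specified above, so they are an assumption.

```python
import numpy as np

def rgb_to_gray(rgb_image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 RGB image to a grayscale (Y-component) image.

    The BT.601 luma weights are an assumption; the text only states that the
    Y component of the RGB-to-YUV conversion is used as the gray value.
    """
    rgb = rgb_image.astype(np.float32)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(y, 0, 255).astype(np.uint8)
```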
- Step S120: the grayscale two-dimensional code image is filtered to obtain a baseband layer image of the grayscale two-dimensional code image.
- Here, a filter is used to decompose the image into a detail layer and a baseband layer of the grayscale two-dimensional code image. The high-frequency components of the image, such as strong edges and other regions where the gray level of adjacent pixels changes sharply, together with minute image detail information and noise, are kept as far as possible in the detail layer. Only the low-frequency component of the image, i.e. its energy information, is preserved in the baseband layer, and the baseband layer image essentially retains the original contrast of the image. Because noise and minute image detail information are separated into the detail layer, subsequent recognition of the two-dimensional code image uses only the baseband layer image, which avoids interference from noise and the like with the recognition and also reduces the computational difficulty of threshold selection in the subsequent binarization.
- The filtering algorithm can be chosen as needed, for example a linear guided filtering algorithm or a nonlinear bilateral filtering algorithm. With the nonlinear bilateral filtering algorithm, the length and width of the template can be set according to the resolution of the image, and the spatial-domain and intensity-domain information of the pixels neighboring the current pixel to be filtered is gathered by moving the template, so the filtering effect is better and a more accurate baseband layer image is obtained.
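- To illustrate the decomposition described above, the following minimal sketch splits the grayscale code image into a baseband layer and a detail layer. The callable edge_preserving_filter stands in for any edge-preserving smoother, such as the nonlinear bilateral filter detailed below, and treating the detail layer as the residual of the smoothed image is an assumption consistent with the description.

```python
import numpy as np

def split_base_and_detail(gray: np.ndarray, edge_preserving_filter) -> tuple[np.ndarray, np.ndarray]:
    """Split a grayscale two-dimensional code image into baseband and detail layers.

    edge_preserving_filter : assumed callable returning the smoothed (baseband)
    image, e.g. a bilateral filter; the detail layer is taken as the residual.
    """
    gray = gray.astype(np.float32)
    base = edge_preserving_filter(gray)    # low-frequency "energy" information
    detail = gray - base                   # strong edges, fine detail, and noise
    return base, detail
```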
- Step S130: performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- Here, the original two-dimensional code image is affected by factors such as aperture, exposure, and ambient light during shooting, so the contrast between the brightest and darkest parts of the two-dimensional code often varies from one captured image to another, and the pixels of the grayscale two-dimensional code image span many different gray levels. If histogram statistics are performed on a standard binary two-dimensional code image, the histogram should be bimodal, i.e. it should have one statistical peak at a low gray level and one at a high gray level. An actual grayscale two-dimensional code image, however, also contains gray levels from other parts of the scene, so the histogram statistics contain several peaks at different gray levels; the basic bimodal feature nevertheless does not disappear, because its strength is much greater than that of the other background peaks.
- When histogram statistics are performed on an actual grayscale two-dimensional code image, the gray-level contribution of the scene is usually concentrated in the low-gray region, so more peaks appear near the low gray levels and the low-gray region shifts laterally to some extent. The same happens in the high-gray region, but since a shot usually does not contain many high-brightness backgrounds, the lateral shift of the histogram in the high-gray region is smaller.
- In this situation, where the low-gray region shifts laterally to some extent, setting the threshold too low would likely binarize to white those low-gray pixels of the original two-dimensional code image that should be binarized to black, making the binarization inaccurate. The binarization threshold is therefore selected as a local dynamic threshold: the effective dynamic range is first determined from the grayscale contrast feature of the baseband layer image in the histogram statistics, and the threshold is then selected within that range. Gray levels outside the effective dynamic range are not used as the threshold, so that two-dimensional code images affected by different illumination obtain different, suitable thresholds from the histogram statistics.
- The statistical peaks of the different gray levels can be obtained first; a statistical peak is formed when the number of pixels counted at a gray level is higher than the number counted at the neighboring gray levels on both sides. The effective dynamic range is then determined according to the gray level corresponding to each statistical peak, the number of pixels counted, and the gray-level distance between different statistical peaks.
- How the effective dynamic range is determined can be customized. For example, when the gray-level distance between two statistical peaks is greater than a preset threshold, the peak with the higher gray level is assigned to the high-gray statistical peak group and the peak with the lower gray level to the low-gray statistical peak group; the highest peak of the high-gray group is then taken as the highest high-gray statistical peak, and the statistical peak with the largest gray level in the low-gray group is taken as the highest low-gray statistical peak. Alternatively, statistical peaks with too small a gray level can first be removed from the low-gray group, and the highest of the remaining peaks taken as the highest low-gray statistical peak; peaks with small gray levels in the low-gray group are usually formed by the gray-level contribution of the scene, so removing them allows the effective dynamic range to be determined accurately. It is also possible to obtain a preset effective gray-level range first and then determine the highest high-gray and low-gray statistical peaks within that range, which speeds up the determination. Once the effective dynamic range is determined, the gray level with the fewest counted pixels within it is taken as the local dynamic threshold.
- In this embodiment, the acquired original two-dimensional code image is converted into a grayscale two-dimensional code image, the grayscale two-dimensional code image is filtered to obtain its baseband layer image, histogram statistics are performed on the baseband layer image, a local dynamic threshold is selected according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and the baseband layer image is binarized according to the local dynamic threshold to obtain a binary two-dimensional code image. Noise and minute image detail information are separated into the detail layer, and subsequent recognition of the two-dimensional code image uses only the baseband layer image, which avoids interference from noise and the like with the recognition. At the same time, a local dynamic threshold is used for binarization, so that two-dimensional code images affected by different illumination obtain different, suitable thresholds from the histogram statistics, making the binarized image closer to the original two-dimensional code image and improving the accuracy of subsequent recognition.
- In one embodiment, the filtering is nonlinear bilateral filtering and, as shown in FIG. 2, step S120 includes:
- Step S121: taking the template corresponding to the size of the grayscale two-dimensional code image as the neighborhood, the neighboring pixels within the neighborhood of the current pixel to be filtered are acquired.
- the length and width of the template can be adjusted according to the image size of the grayscale two-dimensional code. For example, if the resolution of the two-dimensional code image is high, the length and width of the template can be increased.
- In one embodiment, the template is a 7*7-pixel template, and the 7*7 neighboring points around the current pixel to be filtered are acquired.
- Step S122 calculating a spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain and an intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain according to the neighboring pixel points.
- the nonlinear bilateral filtering algorithm needs to utilize the Gaussian kernel function of the spatial domain and the Gaussian kernel function of the intensity domain.
- The Gaussian function is a statistical function whose shape is a normal distribution centered on the expected value, with the standard deviation as its confidence interval; the size of the standard deviation determines the effective range of the function and controls the spread of the Gaussian kernel function, so the selection of the spatial standard deviation parameter and the intensity standard deviation parameter is particularly important.
- σs determines the scale of the neighborhood. In one embodiment, σs is proportional to the size of the image, and 2.5% of the diagonal size of the image can be selected.
- σr represents the amplitude of the image detail. If the range of a signal fluctuation is smaller than σr, the fluctuation is regarded as detail, is smoothed by the bilateral filter, and is separated into the detail layer; conversely, if the fluctuation range is larger than σr, the edge is well preserved in the baseband layer because of the nonlinear behavior of the bilateral filter. In one embodiment, 20% of the gray levels that the human eye can resolve, i.e. 25, is selected as the value of σr.
- In this embodiment, the spatial standard deviation parameter and the intensity standard deviation parameter are calculated dynamically from the neighboring pixels of each pixel to be filtered, so the calculation of the parameters takes the distribution of the image itself into account and is more adaptive. The spatial standard deviation parameter σs is calculated as a standard deviation over the N neighboring pixels used, with expected value u, and the intensity standard deviation parameter σr as a standard deviation over the M neighboring pixels used, with expected value t, where N and M denote the numbers of neighboring pixels used for the calculation and can be chosen as needed.
- In one embodiment, the pixels used when calculating the spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain are the pixels on a diagonal of the template, and the pixels used when calculating the intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain are all the pixels in the template.
- For a 7*7 template, for example, the 7 pixels on a diagonal of the template are used when calculating the spatial standard deviation parameter, and all 7*7 pixels are used when calculating the intensity standard deviation parameter.
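- A minimal sketch of this parameter calculation follows. It assumes that σs and σr are the sample standard deviations of the gray values of the diagonal pixels and of all template pixels, respectively; this matches the pixel selections described above, but the precise formulas of the embodiment may differ.

```python
import numpy as np

def adaptive_std_params(window: np.ndarray) -> tuple[float, float]:
    """Estimate sigma_s and sigma_r from a square template (e.g. 7x7) of gray values.

    Assumption: sigma_s is the sample standard deviation over the N diagonal
    pixels of the template and sigma_r over all M template pixels, each about
    its own mean (the expected values u and t in the text).
    """
    diag = np.diagonal(window).astype(np.float32)     # N pixels on the diagonal
    all_px = window.astype(np.float32).ravel()        # M = all template pixels
    sigma_s = float(np.sqrt(np.mean((diag - diag.mean()) ** 2)))
    sigma_r = float(np.sqrt(np.mean((all_px - all_px.mean()) ** 2)))
    # Guard against flat regions where a zero standard deviation would break the kernels.
    return max(sigma_s, 1e-3), max(sigma_r, 1e-3)
```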
- Step S123: calculating the normalization coefficient of the current pixel to be filtered from the neighboring pixels, the spatial standard deviation parameter, and the intensity standard deviation parameter, using the Gaussian kernel function of the spatial domain and the Gaussian kernel function of the intensity domain.
- Here, the normalization coefficient is k(i,j) = Σ(i',j')∈S(i,j) gs(i-i', j-j')·gr(Iin(i,j) - Iin(i',j')), where gs is the Gaussian kernel function of the spatial domain and is a normalized Gaussian kernel function, i.e. the sum of all coefficients in the filter is 1, and gr is the Gaussian kernel function of the intensity domain, likewise normalized. S(i,j) denotes the neighboring pixels within the neighborhood determined by the template for the current pixel to be filtered; i,j are the position coordinates of the current pixel to be filtered and i',j' are those of a neighboring pixel. k(i,j) is obtained by multiplying the results of the two Gaussian kernel templates of the spatial and intensity domains, and its range is between 0 and 1. The spatial standard deviation parameter and the intensity standard deviation parameter calculated in the previous step are used when calculating gs and gr, respectively. It will be appreciated that the above formula may be modified to some extent when calculating the normalization coefficient.
- Step S124: calculating the baseband layer pixel value of the current pixel to be filtered from the neighboring pixels, the normalization coefficient, the spatial standard deviation parameter, the intensity standard deviation parameter, the Gaussian kernel function of the spatial domain, and the Gaussian kernel function of the intensity domain.
- Here, the baseband layer pixel value of the pixel to be filtered at position (i,j) can be calculated by the formula Ibf(i,j) = (1/k(i,j))·Σ(i',j')∈S(i,j) gs(i-i', j-j')·gr(Iin(i,j) - Iin(i',j'))·Iin(i',j'), where Iin denotes taking the pixel value of the grayscale two-dimensional code image. The spatial standard deviation parameter and the intensity standard deviation parameter calculated in step S122 are used when calculating gs and gr, respectively. It will be appreciated that the above formula may be modified to some extent when calculating the baseband layer pixel value.
- By processing the image with the nonlinear filter, dynamically acquiring the neighboring pixels of the current pixel to be filtered with the template, and computing each parameter adaptively, the noise and the image edge information in the two-dimensional code image can be distinguished better: the information belonging to the basic image is retained in the baseband layer, while the noise information and minute image detail information are left in the detail layer and discarded. This also reduces the computational difficulty of threshold selection in the subsequent binarization and further improves the accuracy of recognition of the two-dimensional code image.
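- As a concrete illustration of steps S121 to S124, the following minimal per-pixel sketch uses the usual exponential forms of the spatial and intensity Gaussian kernels and, for simplicity, leaves them unnormalized, which changes the value of k(i,j) but not the filtered pixel value; it is a sketch of the technique rather than the exact implementation of the embodiment.

```python
import numpy as np

def bilateral_pixel(window: np.ndarray, sigma_s: float, sigma_r: float) -> float:
    """Baseband layer value of the centre pixel of a square template (steps S123-S124).

    window  : (2r+1) x (2r+1) gray values around the current pixel to be filtered
    sigma_s : spatial standard deviation parameter
    sigma_r : intensity standard deviation parameter
    """
    size = window.shape[0]
    r = size // 2
    center = float(window[r, r])

    ii, jj = np.mgrid[-r:r + 1, -r:r + 1]
    g_s = np.exp(-(ii ** 2 + jj ** 2) / (2.0 * sigma_s ** 2))                          # spatial kernel
    g_r = np.exp(-(window.astype(np.float32) - center) ** 2 / (2.0 * sigma_r ** 2))    # intensity kernel

    weights = g_s * g_r
    k = weights.sum()        # plays the role of the normalization coefficient k(i, j)
    return float((weights * window).sum() / k)
```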
- In one embodiment, before step S130 the method further includes step S210: performing adaptive Gaussian filtering on the baseband layer image to obtain a filtered baseband layer image.
- Here, because the mechanism of the bilateral filter is related to mean shift, one pass of the bilateral filter is equivalent to one step of convergence toward the local mode of the image. When a pixel has few similar pixels around it, the Gaussian weighted statistics may be unstable, which may cause part of the basic image to leak into the detail layer image after gradient reversal. To resolve this gradient reversal effect, adaptive Gaussian filtering is used to correct the baseband layer image.
- As shown in FIG. 3, step S210 includes:
- Step S211: calculating the weighted interpolation E(i,j) of the baseband layer image and the grayscale two-dimensional code image according to E(i,j) = k(i,j)(Iin(i,j) - Ibf(i,j)), where i,j are the position coordinates of a pixel, Iin(i,j) is the pixel value of the grayscale two-dimensional code image, Ibf(i,j) is the pixel value of the baseband layer image, and the coefficient k(i,j) of the weighted interpolation is the normalization factor.
- Here, k(i,j) is the value calculated in step S123; it indicates whether the gray value of the image lies in an unstable region near an edge.
- Step S212: approximating the Laplacian ∇²Ibf(i,j) of the baseband layer image using the Laplacian operator.
- Here, the Laplacian can be approximated, for example, with the discrete five-point form ∇²I(i,j) ≈ I(i+1,j) + I(i-1,j) + I(i,j+1) + I(i,j-1) - 4·I(i,j); substituting Ibf(i,j) into the formula gives the Laplacian of the baseband layer image.
- Step S213: calculating the variance σ²(i,j) of the adaptive Gaussian filter from the weighted interpolation E(i,j) and ∇²Ibf(i,j) by the formula σ²(i,j) = 2E(i,j)/∇²Ibf(i,j).
- Here, in order to correct the error caused by over-sharpening of the bilateral filter, the variance parameter of the Gaussian filter must adapt to the local region of the image. The Gaussian filter is used to smooth the result of the bilateral filter so that the processed image is closer to the original edges; the difference between the Gaussian-filtered result and the image filtered by the bilateral filter must therefore equal the difference between the original image and the image filtered by the bilateral filter, which yields the variance of the adaptive Gaussian filter given in step S213.
- Step S214: obtaining the filtered baseband layer image pixel value Ig(i,j) from ∇²Ibf(i,j) and the variance σ²(i,j) of the adaptive Gaussian filter by the formula Ig(i,j) = Ibf(i,j) + (σ²(i,j)/2)·∇²Ibf(i,j).
- Here, the Gaussian filter is a linear filter and is isotropic, so the relationship between an original image I and its output Ig after Gaussian filtering can be expressed approximately as Ig ≈ I + (σ²/2)·∇²I; substituting the baseband layer image Ibf(i,j) yields the filtered baseband layer image.
- In this embodiment, the variance of the adaptive Gaussian filter is determined reasonably by analyzing the cause of the gradient reversal effect, and the corrected filtered baseband layer image eliminates the error caused by the gradient reversal effect.
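- The following minimal sketch strings steps S211 to S214 together using the relations given above, σ²(i,j) = 2E(i,j)/∇²Ibf(i,j) and Ig(i,j) = Ibf(i,j) + (σ²(i,j)/2)·∇²Ibf(i,j); the five-point Laplacian stencil, the epsilon guard against division by zero, and the clamping of the variance to non-negative values are assumptions made for the sketch.

```python
import numpy as np

def adaptive_gaussian_correction(i_in: np.ndarray, i_bf: np.ndarray,
                                 k: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Correct the baseband layer image for the gradient reversal effect (S211-S214).

    i_in : grayscale two-dimensional code image
    i_bf : baseband layer image produced by the bilateral filter
    k    : per-pixel normalization coefficients k(i, j) from the bilateral filter
    """
    i_in = i_in.astype(np.float32)
    i_bf = i_bf.astype(np.float32)

    # S211: weighted interpolation between the input and the baseband layer.
    e = k * (i_in - i_bf)

    # S212: five-point discrete Laplacian of the baseband layer
    # (borders are handled by wrap-around purely for brevity).
    lap = (np.roll(i_bf, 1, axis=0) + np.roll(i_bf, -1, axis=0)
           + np.roll(i_bf, 1, axis=1) + np.roll(i_bf, -1, axis=1) - 4.0 * i_bf)

    # S213: per-pixel variance of the adaptive Gaussian filter.
    denom = np.where(np.abs(lap) < eps, eps, lap)
    var = np.clip(2.0 * e / denom, 0.0, None)   # a variance cannot be negative (assumption)

    # S214: filtered baseband layer image.
    return i_bf + 0.5 * var * lap
```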
- In one embodiment, as shown in FIG. 4, step S130 includes:
- Step S131: obtaining the highest high-gray statistical peak and the highest low-gray statistical peak in the histogram statistics within a preset effective gray-level range.
- Here, the effective gray-level range can be adapted and customized according to the overall gray values of the image, for example by first calculating the gray average of the whole image and then determining the effective gray-level range from that average. In one embodiment the effective gray-level range is 120-180. Gray levels that are too low are usually background contributions rather than the original two-dimensional code image, so performing the statistics within a preset effective gray-level range filters out invalid statistical results on the one hand and speeds up the statistics on the other.
- The highest high-gray and low-gray statistical peaks can be obtained either by first determining the high-gray and low-gray ranges and then taking the highest peak within each range, or by first obtaining all the statistical peaks, separating them into low-gray and high-gray according to the gray level of each peak, and then taking the highest peak of each group.
- Step S132: the histogram statistics between the highest high-gray statistical peak and the highest low-gray statistical peak are determined to be the effective dynamic range.
- For example, if the highest high-gray statistical peak corresponds to gray level 230 and the highest low-gray statistical peak corresponds to gray level 90, the histogram statistics between gray levels 90 and 230 form the effective dynamic range.
- Step S133: the gray level with the fewest counted pixels within the effective dynamic range is taken as the local dynamic threshold.
- Here, when the binarization threshold is selected, the black and white parts belonging to the actual two-dimensional code image should be restored as far as possible to avoid loss of gray levels, so the required threshold should be the statistically widest reasonable threshold. Selecting the smallest gray valley within the effective dynamic range and taking the corresponding gray level as the local dynamic threshold satisfies this criterion of the statistically widest reasonable threshold and yields the best binarization threshold.
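- A minimal sketch of steps S132 and S133 follows, assuming the gray levels of the highest low-gray and high-gray statistical peaks are already known (a sketch for finding them is given after step S131c below); the gray level with the fewest counted pixels between the two peaks is taken as the local dynamic threshold.

```python
import numpy as np

def local_dynamic_threshold(base: np.ndarray, low_peak: int, high_peak: int) -> int:
    """Pick the binarization threshold inside the effective dynamic range (S132-S133).

    base      : baseband layer image as uint8 gray values
    low_peak  : gray level of the highest low-gray statistical peak
    high_peak : gray level of the highest high-gray statistical peak
    """
    hist = np.bincount(base.ravel(), minlength=256)
    lo, hi = sorted((low_peak, high_peak))
    # The effective dynamic range is the histogram between the two highest peaks;
    # the gray level with the fewest counted pixels in it is the local dynamic threshold.
    return lo + int(np.argmin(hist[lo:hi + 1]))

def binarize(base: np.ndarray, threshold: int) -> np.ndarray:
    """Binarize the baseband layer image with the local dynamic threshold."""
    return np.where(base > threshold, 255, 0).astype(np.uint8)
```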
- In one embodiment, as shown in FIG. 5, step S131 includes:
- Step S131a: within the preset effective gray-level range, traversing the pixel counts of each gray level to obtain the statistical peaks corresponding to the different gray levels.
- Here, FIG. 6 is a schematic diagram of a histogram statistics result; within the preset effective gray-level range (70-260), several different statistical peaks are obtained, including statistical peak 311, statistical peak 312, statistical peak 313, statistical peak 321, and statistical peak 322.
- Step S131b: dividing the statistical peaks into a low-gray statistical peak set and a high-gray statistical peak set according to the size of the gray level corresponding to each statistical peak.
- Here, a preset gray level can be customized as the dividing line between the statistical peaks. As shown in FIG. 6, statistical peak 311, statistical peak 312, and statistical peak 313 are divided into the low-gray statistical peak set 310, and statistical peak 321 and statistical peak 322 are divided into the high-gray statistical peak set 320.
- Step S131c: obtaining the highest peak in the low-gray statistical peak set as the highest low-gray statistical peak, and the highest peak in the high-gray statistical peak set as the highest high-gray statistical peak.
- Here, as shown in FIG. 6, the highest low-gray statistical peak is 311 and the highest high-gray statistical peak is 321.
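- A minimal sketch of steps S131a to S131c follows. A statistical peak is taken to be a gray level whose pixel count exceeds that of both neighboring gray levels, and the preset effective gray-level range and the dividing gray level between the low-gray and high-gray sets are illustrative values rather than values taken from the embodiment.

```python
import numpy as np

def highest_peaks(base: np.ndarray, gray_min: int = 70, gray_max: int = 255,
                  divide: int = 150) -> tuple[int, int]:
    """Return the gray levels of the highest low-gray and high-gray statistical peaks.

    gray_min, gray_max : preset effective gray-level range (illustrative values)
    divide             : preset gray level dividing low-gray and high-gray peaks
    """
    hist = np.bincount(base.ravel(), minlength=256)
    # S131a: a statistical peak is a gray level whose pixel count exceeds both neighbours.
    peaks = [g for g in range(max(gray_min, 1), min(gray_max, 254) + 1)
             if hist[g] > hist[g - 1] and hist[g] > hist[g + 1]]
    # S131b: split the statistical peaks into the low-gray and high-gray sets.
    low_set = [g for g in peaks if g < divide]
    high_set = [g for g in peaks if g >= divide]
    # S131c: take the peak with the largest pixel count in each set.
    low_peak = max(low_set, key=lambda g: int(hist[g]))
    high_peak = max(high_set, key=lambda g: int(hist[g]))
    return low_peak, high_peak
```

- The returned pair can be passed to the threshold sketch given after step S133 above.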
- In one embodiment, FIG. 7 shows the acquired original two-dimensional code image of an LED dot-matrix screen with spurious scene information and poor brightness contrast; FIG. 8 shows the detail layer information of the two-dimensional code obtained by processing with the bilateral filter; FIG. 9 shows the baseband layer information of the image remaining after the detail layer is removed; and FIG. 6 shows the histogram statistics of the baseband layer image, in which 311 denotes the statistical peak of the black gray levels in the two-dimensional code pattern, 321 denotes the statistical peak of the white gray levels in the two-dimensional code pattern, and 330 denotes the minimum binarization-threshold valley within the dynamic range. FIG. 10 shows the binarized binary two-dimensional code image; as can be seen from the figure, the processed binary two-dimensional code image is clearer than the original two-dimensional code image, and the noise and detail information have been removed, which facilitates recognition of the two-dimensional code image.
- In one embodiment, as shown in FIG. 11, an apparatus for two-dimensional code image processing is provided, including:
- the grayscale conversion module 410 is configured to convert the acquired original two-dimensional code image into a grayscale two-dimensional code image.
- the baseband layer separation module 420 is configured to filter the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image.
- The binarization module 430 is configured to perform histogram statistics on the baseband layer image, select a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and binarize the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- the filtering is nonlinear bilateral filtering.
- the baseband layer separation module 420 includes:
- the neighboring pixel point acquiring unit 421 is configured to acquire a neighboring pixel in the vicinity of the pixel to be filtered by using a template corresponding to the image size of the grayscale two-dimensional code as a neighboring range.
- the standard deviation parameter calculation unit 422 is configured to calculate a spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain and an intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain according to the adjacent pixel points.
- the normalization coefficient calculation unit 423 is configured to calculate, according to the neighboring pixel point, the spatial standard deviation parameter, and the intensity standard deviation parameter, the normalization corresponding to the pixel to be filtered by the Gaussian kernel function in the spatial domain and the Gaussian kernel function in the intensity domain. coefficient.
- The baseband layer pixel calculation unit 424 is configured to calculate the baseband layer pixel value of the current pixel to be filtered according to the neighboring pixels, the normalization coefficient, the spatial standard deviation parameter, the intensity standard deviation parameter, the Gaussian kernel function of the spatial domain, and the Gaussian kernel function of the intensity domain.
- In one embodiment, when the standard deviation parameter calculation unit calculates the spatial standard deviation parameter corresponding to the Gaussian kernel function of the spatial domain, the pixels used are the pixels on a diagonal of the template; when it calculates the intensity standard deviation parameter corresponding to the Gaussian kernel function of the intensity domain, the pixels used are all the pixels in the template.
- the device further includes:
- The Gaussian filtering module 440 is configured to perform adaptive Gaussian filtering on the baseband layer image to obtain a filtered baseband layer image. The Gaussian filtering module 440 includes:
- a weighted interpolation calculation unit 441 configured to calculate the weighted interpolation E(i,j) of the baseband layer image and the grayscale two-dimensional code image according to E(i,j) = k(i,j)(Iin(i,j) - Ibf(i,j)), where i,j are the position coordinates of a pixel, Iin(i,j) is the pixel value of the grayscale two-dimensional code image, Ibf(i,j) is the pixel value of the baseband layer image, and the coefficient k(i,j) of the weighted interpolation is the normalization factor;
- a variance calculation unit 442 configured to approximate the Laplacian ∇²Ibf(i,j) of the baseband layer image using the Laplacian operator, and to calculate the variance σ²(i,j) of the adaptive Gaussian filter from the weighted interpolation E(i,j) and ∇²Ibf(i,j) by the formula σ²(i,j) = 2E(i,j)/∇²Ibf(i,j);
- a filtering unit 443 configured to obtain the filtered baseband layer image pixel value Ig(i,j) from ∇²Ibf(i,j) and the variance σ²(i,j) of the adaptive Gaussian filter by the formula Ig(i,j) = Ibf(i,j) + (σ²(i,j)/2)·∇²Ibf(i,j).
- the binarization module 430 includes:
- The statistical peak acquisition unit 431 is configured to acquire the highest high-gray statistical peak and the highest low-gray statistical peak in the histogram statistics within a preset effective gray-level range.
- The local dynamic threshold determining unit 432 is configured to determine the histogram statistics between the highest high-gray statistical peak and the highest low-gray statistical peak as the effective dynamic range, and to take the gray level with the fewest counted pixels within the effective dynamic range as the local dynamic threshold.
- Each module included in the apparatus for two-dimensional code image processing in the embodiments of the present invention, such as the grayscale conversion module, the baseband layer separation module, and the binarization module, and each unit included in each module, such as the weighted interpolation calculation unit and the variance calculation unit, can be implemented by a processor in the terminal, and can also be implemented by a logic circuit. In an embodiment, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
- It should be noted that, in the embodiments of the present invention, if the above method for two-dimensional code image processing is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a mobile hard disk, a read only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
- Correspondingly, an embodiment of the present invention further provides a computer storage medium in which computer executable instructions are stored, the computer executable instructions being used to perform the method of two-dimensional code image processing in the embodiments of the present invention.
- an embodiment of the present invention provides a terminal, where the terminal includes:
- a processing device, such as a processor, configured to convert an acquired original two-dimensional code image into a grayscale two-dimensional code image; filter the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; perform histogram statistics on the baseband layer image; select a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics; and binarize the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image;
- a display device, such as a display screen, configured to display the two-dimensional code image.
- an embodiment of the present invention provides a terminal, where the terminal includes:
- a storage medium configured to store computer executable instructions
- a processor configured to execute the computer executable instructions stored on the storage medium, the computer executable instructions comprising: converting an acquired original two-dimensional code image into a grayscale two-dimensional code image; filtering the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
- In the embodiments of the present invention, the acquired original two-dimensional code image is converted into a grayscale two-dimensional code image; the grayscale two-dimensional code image is filtered to obtain a baseband layer image of the grayscale two-dimensional code image; histogram statistics are performed on the baseband layer image; a local dynamic threshold is selected according to the grayscale contrast feature of the baseband layer image in the histogram statistics; and the baseband layer image is binarized according to the local dynamic threshold to obtain a binary two-dimensional code image. The processed binary two-dimensional code image can improve the accuracy of subsequent recognition of the two-dimensional code image.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Studio Devices (AREA)
Abstract
A method and apparatus for two-dimensional code image processing, a terminal, and a storage medium, including: converting an acquired original two-dimensional code image into a grayscale two-dimensional code image (S110); filtering the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image (S120); performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to the grayscale contrast feature of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image (S130).
Description
本发明涉及计算机图像处理领域,特别是涉及一种二维码图像处理的方法和装置、终端、存储介质。
随着计算机技术的发展,二维码在日常社会生活中的应用环境越来越广,安装在计算机、手机、平板电脑等终端上的越来越多的客户端软件集成二维码扫码软件以识别二维码从而得到二维码传递的信息。
二维码通常采用黑白相间的二值图形记录数据符号信息,但是在一些特定的场合中,采用特殊的显示方式显示二维码导致其灰度对比度,清晰度以及像素分辨率均有一定程度的下降,如当使用低像素发光二极管(Light Emitting Diode,LED)点阵屏输出二维码时,二维码图案受到LED亮度的影响,导致黑白对比不分明,同时由于LED灯珠体积固定,若输出高像素的二维码图案则需要非常多的LED灯珠,为了控制显示成本,往往采用少量的LED输出二维码,导致输出的二维码像素分辨率较低。现有的二维码识别在通常只进行简单的预处理,使得识别二维码时出现扫描速度严重下降甚至无法正确解析,不能正确识别二维码的问题。
发明内容
基于此,有必要针对上述技术问题,提供一种二维码图像处理的方法和装置、终端、存储介质,能够有效提高二维码识别的准确率。
第一方面,本发明实施例提供一种二维码图像处理的方法,所述方法包括:
将获取的原始二维码图像转化为灰度二维码图像;
将所述灰度二维码图像进行滤波得到所述灰度二维码图像的基频层图像;
将所述基频层图像进行直方图统计,根据所述直方图统计结果中所述基频层图像的灰度对比特征选取局部动态阈值,根据所述局部动态阈值对所述基频层图像进行二值化得到二值二维码图像。
在其中一个实施例中,所述滤波为非线性双边滤波,所述将所述灰度二维码图像进行滤波得到所述灰度二维码图像的基频层图像的步骤包括:
以与所述灰度二维码图像大小对应的模板为邻近范围,获取当前待滤波像素点邻近范围内的邻近像素点;
根据所述邻近像素点,计算空间域的高斯核函数对应的空间标准差参数和强度域的高斯核函数对应的强度标准差参数;
根据所述邻近像素点、空间标准差参数、强度标准差参数,由空间域的高斯核函数、强度域的高斯核函数计算当前待滤波像素点对应的归一化系数;
根据所述邻近像素点、归一化系数、空间标准差参数、强度标准差参数,空间域的高斯核函数、强度域的高斯核函数计算得到当前待滤波像素点的基频层像素值。
在其中一个实施例中,计算空间域的高斯核函数对应的空间标准差参数时采用的像素点为所述模板内对角线上的像素点,计算强度域的高斯核函数对应的强度标准差参数时采用的像素点为所述模板内的所有像素点。
在其中一个实施例中,所述将所述基频层图像进行直方图统计的步骤之前,还包括:对所述基频层图像进行自适应高斯滤波得到滤波基频层图像,包括:
根据E(i,j)=k(i,j)(Iin(i,j)-Ibf(i,j))计算所述基频层图像和灰度二维码图
像的加权插值E(i,j),其中所述i,j表示像素点的位置坐标,Iin(i,j)表示灰度二维码图像的像素值,Ibf(i,j)表示基频层图像的像素值,所述加权插值的系数k(i,j)为所述归一化因子;
在其中一个实施例中,所述根据所述直方图统计结果中所述基频层图像的灰度对比特征选取局部动态阈值的步骤包括:
在预设的有效灰度等级范围内获取所述直方图统计结果中的高灰度统计最高峰和低灰度统计最高峰;
确定所述高灰度统计最高峰和低灰度统计最高峰之间的直方图统计结果为有效动态范围;
获取所述有效动态范围内的像素统计个数最少的灰度级作为所述局部动态阈值。
第二方面,本发明实施例提供一种二维码图像处理的装置,所述装置包括:
灰度转化模块,配置为将获取的原始二维码图像转化为灰度二维码图像;
基频层分离模块,配置为将所述灰度二维码图像进行滤波得到所述灰度二维码图像的基频层图像;
二值化模块,配置为将所述基频层图像进行直方图统计,根据所述直
方图统计结果中所述基频层图像的灰度对比特征选取局部动态阈值,根据所述局部动态阈值对所述基频层图像进行二值化得到二值二维码图像。
在其中一个实施例中,所述滤波为非线性双边滤波,所述基频层分离模块包括:
邻近像素点获取单元,配置为以与所述灰度二维码图像大小对应的模板为邻近范围,获取当前待滤波像素点邻近范围内的邻近像素点;
标准差参数计算单元,配置为根据所述邻近像素点,计算空间域的高斯核函数对应的空间标准差参数和强度域的高斯核函数对应的强度标准差参数;
归一化系数计算单元,配置为根据所述邻近像素点、空间标准差参数、强度标准差参数,由空间域的高斯核函数、强度域的高斯核函数计算当前待滤波像素点对应的归一化系数;
基频层像素计算单元,配置为根据所述邻近像素点、归一化系数、空间标准差参数、强度标准差参数,空间域的高斯核函数、强度域的高斯核函数计算得到当前待滤波像素点的基频层像素值。
在其中一个实施例中,所述标准差参数计算单元配置为计算空间域的高斯核函数对应的空间标准差参数时采用的像素点为所述模板内对角线上的像素点,计算强度域的高斯核函数对应的强度标准差参数时采用的像素点为所述模板内的所有像素点。
在其中一个实施例中,所述装置还包括:
高斯滤波模块,配置为对所述基频层图像进行自适应高斯滤波得到滤波基频层图像,包括:
加权插值计算单元,配置为根据E(i,j)=k(i,j)(Iin(i,j)-Ibf(i,j))计算所述基频层图像和灰度二维码图像的加权插值E(i,j),其中所述i,j表示像素点的位置坐标,Iin(i,j)表示灰度二维码图像的像素值,Ibf(i,j)表示基频层图像的像
素值,所述加权插值的系数k(i,j)为所述归一化因子;
在其中一个实施例中,所述二值化模块包括:
统计最高峰获取单元,配置为在预设的有效灰度等级范围内获取所述直方图统计结果中的高灰度统计最高峰和低灰度统计最高峰;
局部动态阈值确定单元,配置为确定所述高灰度统计最高峰和低灰度统计最高峰之间的直方图统计结果为有效动态范围,获取所述有效动态范围内的像素统计个数最少的灰度级作为所述局部动态阈值。
第三方面,本发明实施例提供一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令用于执行本发明第一方面实施例提供的二维码图像处理的方法。
第四方面,本发明实施例提供一种终端,所述终端包括:
存储介质,配置为存储计算机可执行指令;
处理器,配置为执行存储在所述存储介质上的计算机可执行指令,所述计算机可执行指令包括:将获取的原始二维码图像转化为灰度二维码图像;将所述灰度二维码图像进行滤波得到所述灰度二维码图像的基频层图像;将所述基频层图像进行直方图统计,根据所述直方图统计结果中所述基频层图像的灰度对比特征选取局部动态阈值,根据所述局部动态阈值对所述基频层图像进行二值化得到二值二维码图像。
第五方面,本发明实施例提供一种终端,所述终端包括:
处理器,配置为将获取的原始二维码图像转化为灰度二维码图像;将所述灰度二维码图像进行滤波得到所述灰度二维码图像的基频层图像;将所述基频层图像进行直方图统计,根据所述直方图统计结果中所述基频层图像的灰度对比特征选取局部动态阈值,根据所述局部动态阈值对所述基频层图像进行二值化得到二值二维码图像;
显示设备,配置为显示所述二维码图像。
上述二维码图像处理的方法和装置、终端、存储介质,通过将获取的原始二维码图像转化为灰度二维码图像,将灰度二维码图像进行滤波得到所述灰度二维码图像的基频层图像,将基频层图像进行直方图统计,根据直方图统计结果中基频层图像的灰度对比特征选取局部动态阈值,根据局部动态阈值对基频层图像进行二值化得到二值二维码图像,噪声、微小的图像细节信息等被分离到了细节层,后续对二维码图像的识别只使用基频层图像,避免了噪声等对二维码图像识别的干扰,同时采用局部动态阈值进行二值化,使得受到不同光照影响的二维码图像根据直方图统计结果具有不同的合适的阈值,使得二值化后的图像更接近原始二维码图像,从而提高后续识别二维码图像的准确率。
图1为一个实施例中二维码图像处理的方法的流程图;
图2为一个实施例中得到灰度二维码图像的基频层图像的流程图;
图3为一个实施例中对基频层图像进行自适应高斯滤波得到滤波基频层图像的流程图;
图4为一个实施例中选取局部动态阈值的流程图;
图5为一个实施例中获取高灰度统计最高峰和低灰度统计最高峰的流程图;
图6为一个实施例中直方图统计结果的示意图;
图7为一个实施例中原始二维码图像示意图;
图8为一个实施例中二维码图像的细节层示意图;
图9为一个实施例中二维码图像的基频层示意图;
图10为一个实施例中二值化后的二维码图像示意图;
图11为一个实施例中二维码图像处理的装置的结构框图;
图12为一个实施例中基频层分离模块的结构框图;
图13为另一个实施例中二维码图像处理的装置的结构框图;
图14为一个实施例中二值化模块的结构框图。
在一个实施例中,如图1所示,提供了一种二维码图像处理的方法,包括:
步骤S110,将获取的原始二维码图像转化为灰度二维码图像。
这里,原始二维码可为各种形式的二维码,如QR二维码等。原始二维码图像的显示方式可分为多种,如通过纸面,网络,电视屏幕,由LED点阵屏投射显示等。可通过手机终端等智能设备通过摄像头采集原始二维码图像,但采集到的原始二维码图像一般是彩色图像,即使是拍摄黑白图像,获得的图片仍然是带有RGB三色的图像。由于二维码携带的信息只需要黑白两色即可表征,所以必须将彩色图像转换为灰度图像。摄像头采集到的原始二维码彩色图像一般是在RGB空间编码的,每个像素分别用1个字节表示RGB三原色,将RGB空间转换为YUV空间得到的Y分量表示像素的亮度,可以作为灰度值,从而完成将原始二维码图像转化为灰度二维码图像的过程。
步骤S120,将灰度二维码图像进行滤波得到灰度二维码图像的基频层图像。
这里,利用滤波器对图像进行分割,获取灰度二维码图像的细节层和基频层,将图像中的高频分量,如强边缘等相邻像素点灰度变化较大的区域、微小的图像细节信息和噪声等尽可能的保留在细节层内,基频层中只保留图像的低频分量即能量信息,基频层图像基本保留了图像的原始对比度。由于噪声、微小的图像细节信息等被分离到了细节层,后续对二维码图像的识别只使用基频层图像,避免了噪声等对二维码图像识别的干扰,也可以降低后续二值化过程中阈值选取的计算难度。滤波时可自定义滤波算法,如采用线性引导滤波算法,也可采用非线性双边滤波算法。采用非线性双边滤波算法时,可根据图像的分辨率自定义模版的长度和宽度,以模板为单位,移动式获取当前滤波像素点的相邻像素点空间域及强度域信息,使得滤波的效果更好,得到更精准的基频层图像。
步骤S130,将基频层图像进行直方图统计,根据直方图统计结果中基频层图像的灰度对比特征选取局部动态阈值,根据局部动态阈值对所述基频层图像进行二值化得到二值二维码图像。
这里,原始二维码图像在拍摄过程中由于受到光圈,曝光,环境光等因素的影响,每次拍摄到的原始二维码图像中,最亮的二维码部分和最暗的二维码部分的对比度经常会发生变化,进行灰度化后的灰度二维码图像中的像素灰度包括多个不同的灰度级。对标准的二值二维码图像进行直方图统计,其直方图特征应该是一个双峰形式,即具有一个低灰度的统计峰和一个高灰度的统计峰,但是由于实际的灰度二维码图像中包含了其他场景的灰度,因此直方图统计中会出现多个不同灰度级别的峰值,但最基本的双峰特征不会消失,因为该特征的强度远远大于其他背景峰。在对实际灰度二维码图像进行直方图统计时,场景的灰度加成通常都集中在低灰度的区域,因此在直方图中,低灰度附近的尖峰会变得更多,并且在低灰度区域出现一定程度的横移。同样的情况也发生在高灰度区域,只是通常拍
摄过程中不会有太多高亮度的背景出现,因此高灰度区域的直方图横移现象较少。
针对这种低灰度区域出现一定程度的横移的情况,如果阈值设置过低,则很可能将原始二维码图像中原本应二值化为黑色的低灰度部分的像素二值化为了白色值,使得二值化的结果不准确。所以二值化的阈值采用局部动态阈值选取的方式,先根据直方图统计结果中基频层图像的灰度对比特征确定有效动态范围,再在有效动态范围内选取阈值。有效动态范围外的灰度级则不可作为阈值,使得受到不同光照影响的二维码图像根据直方图统计结果具有不同的合适的阈值。可先得到不同灰度级的统计峰,当一个灰度级的统计像素个数高于左右相邻灰度级的统计像素个数时会形成一个统计峰。根据统计峰对应的灰度级、像素统计个数和不同统计峰之间的灰度距离差值确定有效动态范围。有效动态范围的确定可根据情况自定义,如当2个统计峰对应的灰度距离差值大于预设阈值时,将灰度级高的统计峰归入高灰度统计峰群,将灰度级低的统计峰归入低灰度统计峰群,再从高灰度统计峰群获取最高峰得到高灰度统计最高峰,从低灰度统计峰群中获取灰度级最大的统计峰为低灰度统计最高峰。也可先将低灰度统计峰群中剔除一些灰度级过小的统计峰,再从剩下的统计峰中获取最高峰作为低灰度统计最高峰。因为对于低灰度统计峰群灰度级较小的统计峰通常是由场景的灰度加成形成的,将其剔除才能准确的确定有效动态范围。也可先获取预设的有效灰度等级范围,再在有效灰度等级范围内进行高灰度统计最高峰和低灰度统计最高峰的确定,加快确定速度。有效动态范围确定后,在有效动态范围内获取像素统计个数最少的灰度级作为局部动态阈值。
本实施例中,通过将获取的原始二维码图像转化为灰度二维码图像,将灰度二维码图像进行滤波得到所述灰度二维码图像的基频层图像,将基频层图像进行直方图统计,根据直方图统计结果中基频层图像的灰度对比
特征选取局部动态阈值,根据局部动态阈值对基频层图像进行二值化得到二值二维码图像,噪声、微小的图像细节信息等被分离到了细节层,后续对二维码图像的识别只使用基频层图像,避免了噪声等对二维码图像识别的干扰,同时采用局部动态阈值进行二值化,使得受到不同光照影响的二维码图像根据直方图统计结果具有不同的合适的阈值,使得二值化后的图像更接近原始二维码图像,从而提高后续识别二维码图像的准确率。
在一个实施例中,滤波为非线性双边滤波,如图2所示,步骤S120包括:
步骤S121,以与灰度二维码图像大小对应的模板为邻近范围,获取当前待滤波像素点邻近范围内的邻近像素点。
这里,模板的长度和宽度可根据灰度二维码图像大小相应的调整,如二维码图像分辨率高,则可加大模板的长度和宽度。一个实施例中,模板为7*7像素的模板,则获取当前待滤波像素点相邻的7*7个邻近点。
步骤S122,根据邻近像素点,计算空间域的高斯核函数对应的空间标准差参数和强度域的高斯核函数对应的强度标准差参数。
这里,非线性双边滤波的算法需要利用空间域的高斯核函数和强度域的高斯核函数,高斯函数是一种统计函数形式,其函数形状是一个以期望值为中心,标准差为置信区间的正态分布,标准差的大小决定函数范围的有效性,其控制高斯核函数的扩张范围,因此空间标准差参数和强度标准差参数的选取尤为重要。σs决定了临近区域的尺度,在一个实施例中,σs与图像的大小成比例关系,可选取图像对角线尺寸的2.5%。σr代表了图像细节的幅度,如果信号波动的范围小于σr,那么这个信号波动就会被认为是细节,即会被双边滤波器平滑,被分离到细节层中。反之,如果这个波动的范围大于σr,那么由于双边滤波器的非线性特性,这个边缘将会被很好的保留到基频层。在一个实施例中,选择人眼可以分辨灰度级的20%,即
25作为σr的取值。
本实施例中,对于每个待滤波的像素点都根据其邻近像素点动态的计算得到空间标准差参数和强度标准差参数,使得参数的计算考虑了图像本身的分布,更自适应。空间标准差参数σs的计算公式为:强度标准差参数σr的计算公式为其中u和t分别为期望值,N和M表示用于计算的邻近像素点的个数,N和M的选取可根据需要自定义。
在一个实施例中,计算空间域的高斯核函数对应的空间标准差参数时采用的像素点为模板内对角线上的像素点,计算强度域的高斯核函数对应的强度标准差参数时采用的像素点为模板内的所有像素点。
这里,如一个7*7的模板,计算空间标准差参数时采用模板内对角线上的7个像素点,计算强度标准差参数时采用7*7个像素点。
步骤S123,根据邻近像素点、空间标准差参数、强度标准差参数,由空间域的高斯核函数、强度域的高斯核函数计算当前待滤波像素点对应的归一化系数。
这里,归一化系数 其中gs是空间域的高斯核函数,是一个标准化的高斯核函数,即滤波器中的所有系数之和为1。gr是强度域的高斯核函数,也是一个标准化的高斯核函数。S(i,j)表示当前待滤波像素点通过模板确定的邻近范围内的邻近像素点。i,j为当前待滤波像素点的位置坐标,i',j'为邻近像素点的位置坐标。k(i,j)是通过将空间域与强度域的两个高斯核函数模板的结果相乘得到,其范围在0-1之间。其中在计算gs和gr时分别使用上一步计算得到的空间标准差参数、
强度标准差参数。可以理解的是,在计算归一化系数时可对上述公式进行一定的变形。
步骤S124,根据邻近像素点、归一化系数、空间标准差参数、强度标准差参数,空间域的高斯核函数、强度域的高斯核函数计算得到当前待滤波像素点的基频层像素值。
这里,当前位置坐标为i,j的待滤波像素点的基频层像素值由公式 可计算得到,其中Iin表示取像素值。其中在计算gs和gr时分别使用步骤S122计算得到的空间标准差参数、强度标准差参数。可以理解的是,在计算基频层像素值时可对上述公式进行一定的变形。通过非线性滤波器对图像的处理,采用模板动态的获取当前待滤波像素点的邻近像素点,自适应的计算各个参数,可以更好的区分在二维码图像中的噪声和图像边缘信息,并将属于基本图像的信息保留到基频层中,而将噪声信息和微小的图像细节信息留在细节层中舍去,也可以降低后续二值化过程中阈值选取的计算难度,进一步提高二维码图像的识别的准确率。
在一个实施例中,步骤S130之前还包括:步骤S210,对基频层图像进行自适应高斯滤波得到滤波基频层图像。
这里,由于双边滤波器的机理与均值漂移相关,一次双边滤波器的执行过程就相当于向图像的局部模式收敛了一步。当一个像素周围有很少的与其相似的像素时,高斯加权统计结果可能不稳定,可能导致梯度翻转后的基本图像泄露到细节层图像中。为了解决梯度翻转效应,采用自适应高斯滤波对基频层图像进行修正。
如图3所示,步骤S210包括:
步骤S211,根据E(i,j)=k(i,j)(Iin(i,j)-Ibf(i,j))计算基频层图像和灰度二
维码图像的加权插值E(i,j),其中i,j表示像素点的位置坐标,Iin(i,j)表示灰度二维码图像的像素值,Ibf(i,j)表示基频层图像的像素值,加权插值的系数k(i,j)为归一化因子。
这里,k(i,j)即为步骤S123中计算得到的,它表示是否一个图像的灰度值位于边缘附近的不稳定区域。
这里,为了修正双边滤波器过锐化带来的误差,高斯滤波器的方差参数必须与图像中局部区域相适应。用高斯滤波器平滑双边滤波器来使处理后的图像更接近原始边缘,所以高斯滤波器滤波结果与经过双边滤波器滤波后的图像之差必须与原始图像与经过双边滤波器滤波后的图像之差相等,因此可以获得自适应高斯滤波器的方差为
这里,一个原始信号F(x)经过高斯滤波器的输出结果Fg(x)具有类似于泰勒级数展开的性质:
其中m=1,2,...,是F″(x)的二阶微分.F(2m)(x)是F(x)2m阶微分.σ是高斯滤波器的标准差参数。如果略去高阶项,那么可以近似为
这个结果可以拓展到二维图像,因为高斯滤波器是线性滤波器且是各向同性的。所以一幅原始图像I,与其经过高斯滤波器的输出结果Ig的关系可以表示为将基频层图像Ibf(i,j)代入即可得到对基频层图像滤波后的滤波基频层图像,
本实施例中,通过分析梯度翻转效应产生的原因合理的确定自适应高斯滤波器的方差,修正后的滤波基频层图像解决了梯度翻转效应带来的误差。
在一个实施例中,如图4所示,步骤S130包括:
步骤S131,在预设的有效灰度等级范围内获取直方图统计结果中的高灰度统计最高峰和低灰度统计最高峰。
这里,有效灰度等级范围可根据整体图像灰度值自适应调整并自定义,如先计算完整图像的灰度平均值,再根据平均值确定有效灰度等级范围。在一个实施例中有效灰度等级范围为120-180。因为太低的灰度一般都是背景加成,并不是原始二维码图像,所以在预设的有效灰度等级范围内进行统计,一方面滤除了无效的统计结果,另一方面加快了统计的速度。高灰度统计最高峰和低灰度统计最高峰的获取可以采取先确定高灰度和低灰度范围,再分别在不同的范围内进行统计得到最高峰的方式,也可先得到各个统计峰,再根据各个统计峰的对应的灰度级进行低灰度和高灰度的分离,
再分别得到高灰度统计最高峰和低灰度统计最高峰。
步骤S132,确定高灰度统计最高峰和低灰度统计最高峰之间的直方图统计结果为有效动态范围。
这里,如高灰度统计最高峰对应的灰度级为230,低灰度统计最高峰对应的灰度级为90,则灰度级90至230之间的直方图统计结果为有效动态范围。
步骤S133,获取有效动态范围内的像素统计个数最少的灰度级作为局部动态阈值。
这里,在选取二值化阈值时,要能够最大程度的将属于实际二维码图像的黑白部分还原,避免产生灰度丢失,因此所需要的阈值应该是一个统计意义上最宽的合理门限,在有效动态范围之间选取最小的灰度谷值,获取其对应的灰度级作为局部动态阈值以满足统计意义上最宽的合理门限这一理论依据,从而得到最佳的二值化阈值。
在一个实施例中,如图5所示,步骤S131包括:
步骤S131a,在预设的有效灰度等级范围内,遍历查找各个灰度级的像素统计个数,得到不同灰度级对应的统计峰。
这里,如图6所示,为一个直方图统计结果示意图,在预设的有效灰度等级范围(70-260)之间,可得到多个不同的统计峰,包括统计峰311、统计峰312、统计峰313、统计峰321、统计峰322。
步骤S131b,根据统计峰对应的灰度级的大小将各个统计峰划分到低灰度统计峰集合和高灰度统计峰集合。
这里,可自定义预设灰度级为统计峰的划分界线,如图6所示,统计峰311、统计峰312、统计峰313划分到低灰度统计峰集合310中,统计峰321、统计峰322划分到高灰度统计峰集合320中。
步骤S131c,在低灰度统计峰集合中获取最高峰得到低灰度统计最高
峰,在高灰度统计峰集合中获取最高峰得到高灰度统计最高峰。
这里,如图6所示,低灰度统计最高峰为311,高灰度统计最高峰为321。
在一个实施例中,如图7所示为获取的带有杂散场景信息以及亮度对比度较差的LED点阵屏原始二维码图像,如图8所示为利用双边滤波器处理得到的二维码细节层信息,如图9所示为去除细节层后剩余的图像基频层信息,如图6所示为基频层图像直方图统计信息,其中311表示二维码图案中黑色灰度统计峰;321表示二维码图案中白色灰度统计峰;330表示在该动态范围之内的最小二值化阈值波谷值,如图10所示,为二值化后的二值二维码图像,可从图中看出处理后的二值二维码图像比原始二维码图像更清晰,去掉了噪声和细节信息便于二维码图像的识别。
在一个实施例中,如图11所示,提供了一种二维码图像处理的装置,包括:
灰度转化模块410,配置为将获取的原始二维码图像转化为灰度二维码图像。
基频层分离模块420,配置为将灰度二维码图像进行滤波得到灰度二维码图像的基频层图像。
二值化模块430,配置为将基频层图像进行直方图统计,根据直方图统计结果中所述基频层图像的灰度对比特征选取局部动态阈值,根据局部动态阈值对基频层图像进行二值化得到二值二维码图像。
在一个实施例中,滤波为非线性双边滤波,如图12所示,基频层分离模块420包括:
邻近像素点获取单元421,配置为以与灰度二维码图像大小对应的模板为邻近范围,获取当前待滤波像素点邻近范围内的邻近像素点。
标准差参数计算单元422,配置为根据邻近像素点,计算空间域的高斯核函数对应的空间标准差参数和强度域的高斯核函数对应的强度标准差参数。
归一化系数计算单元423,配置为根据邻近像素点、空间标准差参数、强度标准差参数,由空间域的高斯核函数、强度域的高斯核函数计算当前待滤波像素点对应的归一化系数。
基频层像素计算单元424,配置为根据邻近像素点、归一化系数、空间标准差参数、强度标准差参数,空间域的高斯核函数、强度域的高斯核函数计算得到当前待滤波像素点的基频层像素值。
在一个实施例中,标准差参数计算单元计算空间域的高斯核函数对应的空间标准差参数时采用的像素点为模板内对角线上的像素点,计算强度域的高斯核函数对应的强度标准差参数时采用的像素点为模板内的所有像素点。
在一个实施例中,如图13所示,装置还包括:
高斯滤波模块440,配置为对基频层图像进行自适应高斯滤波得到滤波基频层图像,高斯滤波模块440包括:
加权插值计算单元441,配置为根据E(i,j)=k(i,j)(Iin(i,j)-Ibf(i,j))计算所述基频层图像和灰度二维码图像的加权插值E(i,j),其中所述i,j表示像素点的位置坐标,Iin(i,j)表示灰度二维码图像的像素值,Ibf(i,j)表示基频层图像的像素值,所述加权插值的系数k(i,j)为所述归一化因子;
In one embodiment, as shown in Fig. 14, the binarization module 430 includes:
A statistical highest-peak acquisition unit 431, configured to obtain, within the preset effective gray-level range, the highest high-gray statistical peak and the highest low-gray statistical peak in the histogram statistics.
A local dynamic threshold determination unit 432, configured to determine the histogram statistics between the highest high-gray statistical peak and the highest low-gray statistical peak as the effective dynamic range, and to take the gray level with the smallest pixel count within the effective dynamic range as the local dynamic threshold.
The modules included in the apparatus for two-dimensional code image processing in the embodiments of the present invention, such as the grayscale conversion module, the baseband layer separation module and the binarization module, and the units included in each module, such as the weighted interpolation calculation unit and the variance calculation unit, may all be implemented by a processor in a terminal, or of course by logic circuits. In one embodiment, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
It should be noted that, in the embodiments of the present invention, if the above method for two-dimensional code image processing is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium having computer-executable instructions stored therein, the computer-executable instructions being used to execute the method for two-dimensional code image processing in the embodiments of the present invention.
Based on the foregoing embodiments, an embodiment of the present invention provides a terminal, the terminal including:
A processing device (such as a processor), configured to convert an acquired original two-dimensional code image into a grayscale two-dimensional code image; filter the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; perform histogram statistics on the baseband layer image, select a local dynamic threshold according to the grayscale contrast characteristics of the baseband layer image in the histogram statistics, and binarize the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image;
A display device (such as a display screen), configured to display the two-dimensional code image.
Based on the foregoing embodiments, an embodiment of the present invention provides a terminal, the terminal including:
A storage medium, configured to store computer-executable instructions;
A processor, configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions including: converting an acquired original two-dimensional code image into a grayscale two-dimensional code image; filtering the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to the grayscale contrast characteristics of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
Those of ordinary skill in the art will understand that all or part of the procedures of the methods in the above embodiments can be implemented by instructing the relevant hardware through a computer program. The program may be stored in a computer-readable storage medium; for example, in the embodiments of the present invention, the program may be stored in a storage medium of a computer system and executed by at least one processor in the computer system to implement the procedures of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM) or a random access memory (RAM), or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be defined by the appended claims.
In the embodiments of the present invention, an acquired original two-dimensional code image is converted into a grayscale two-dimensional code image; the grayscale two-dimensional code image is filtered to obtain a baseband layer image of the grayscale two-dimensional code image; histogram statistics are performed on the baseband layer image, a local dynamic threshold is selected according to the grayscale contrast characteristics of the baseband layer image in the histogram statistics, and the baseband layer image is binarized according to the local dynamic threshold to obtain a binary two-dimensional code image. The processed binary two-dimensional code image improves the accuracy of subsequent recognition of the two-dimensional code image.
Claims (13)
- 1. A method for two-dimensional code image processing, the method comprising: converting an acquired original two-dimensional code image into a grayscale two-dimensional code image; filtering the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to grayscale contrast characteristics of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- 2. The method according to claim 1, wherein the filtering is non-linear bilateral filtering, and the step of filtering the grayscale two-dimensional code image to obtain the baseband layer image of the grayscale two-dimensional code image comprises: using a template corresponding to the size of the grayscale two-dimensional code image as a neighborhood, and obtaining neighboring pixels within the neighborhood of a pixel currently to be filtered; calculating, from the neighboring pixels, a spatial standard deviation parameter corresponding to a Gaussian kernel function in the spatial domain and an intensity standard deviation parameter corresponding to a Gaussian kernel function in the intensity domain; calculating, according to the neighboring pixels, the spatial standard deviation parameter and the intensity standard deviation parameter, a normalization coefficient corresponding to the pixel currently to be filtered from the Gaussian kernel function in the spatial domain and the Gaussian kernel function in the intensity domain; and calculating a baseband layer pixel value of the pixel currently to be filtered according to the neighboring pixels, the normalization coefficient, the spatial standard deviation parameter, the intensity standard deviation parameter, and the Gaussian kernel functions in the spatial and intensity domains.
- 3. The method according to claim 2, wherein the pixels used when calculating the spatial standard deviation parameter corresponding to the Gaussian kernel function in the spatial domain are the pixels on the diagonals of the template, and the pixels used when calculating the intensity standard deviation parameter corresponding to the Gaussian kernel function in the intensity domain are all pixels within the template.
- 4. The method according to claim 2, wherein, before the step of performing histogram statistics on the baseband layer image, the method further comprises: performing adaptive Gaussian filtering on the baseband layer image to obtain a filtered baseband layer image, comprising: calculating a weighted interpolation E(i,j) between the baseband layer image and the grayscale two-dimensional code image according to E(i,j) = k(i,j)(Iin(i,j)-Ibf(i,j)), wherein i,j denote position coordinates of a pixel, Iin(i,j) denotes a pixel value of the grayscale two-dimensional code image, Ibf(i,j) denotes a pixel value of the baseband layer image, and the weighting coefficient k(i,j) is the normalization factor;
- 5. The method according to claim 1, wherein the step of selecting the local dynamic threshold according to the grayscale contrast characteristics of the baseband layer image in the histogram statistics comprises: obtaining, within a preset effective gray-level range, a highest high-gray statistical peak and a highest low-gray statistical peak in the histogram statistics; determining the histogram statistics between the highest high-gray statistical peak and the highest low-gray statistical peak as an effective dynamic range; and taking the gray level with the smallest pixel count within the effective dynamic range as the local dynamic threshold.
- 6. An apparatus for two-dimensional code image processing, the apparatus comprising: a grayscale conversion module, configured to convert an acquired original two-dimensional code image into a grayscale two-dimensional code image; a baseband layer separation module, configured to filter the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; and a binarization module, configured to perform histogram statistics on the baseband layer image, select a local dynamic threshold according to grayscale contrast characteristics of the baseband layer image in the histogram statistics, and binarize the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- 7. The apparatus according to claim 6, wherein the filtering is non-linear bilateral filtering, and the baseband layer separation module comprises: a neighboring pixel acquisition unit, configured to use a template corresponding to the size of the grayscale two-dimensional code image as a neighborhood and obtain neighboring pixels within the neighborhood of a pixel currently to be filtered; a standard deviation parameter calculation unit, configured to calculate, from the neighboring pixels, a spatial standard deviation parameter corresponding to a Gaussian kernel function in the spatial domain and an intensity standard deviation parameter corresponding to a Gaussian kernel function in the intensity domain; a normalization coefficient calculation unit, configured to calculate, according to the neighboring pixels, the spatial standard deviation parameter and the intensity standard deviation parameter, a normalization coefficient corresponding to the pixel currently to be filtered from the Gaussian kernel function in the spatial domain and the Gaussian kernel function in the intensity domain; and a baseband layer pixel calculation unit, configured to calculate a baseband layer pixel value of the pixel currently to be filtered according to the neighboring pixels, the normalization coefficient, the spatial standard deviation parameter, the intensity standard deviation parameter, and the Gaussian kernel functions in the spatial and intensity domains.
- 8. The apparatus according to claim 7, wherein the pixels used when the standard deviation parameter calculation unit calculates the spatial standard deviation parameter corresponding to the Gaussian kernel function in the spatial domain are the pixels on the diagonals of the template, and the pixels used when it calculates the intensity standard deviation parameter corresponding to the Gaussian kernel function in the intensity domain are all pixels within the template.
- 9. The apparatus according to claim 7, wherein the apparatus further comprises: a Gaussian filtering module, configured to perform adaptive Gaussian filtering on the baseband layer image to obtain a filtered baseband layer image, comprising: a weighted interpolation calculation unit, configured to calculate a weighted interpolation E(i,j) between the baseband layer image and the grayscale two-dimensional code image according to E(i,j) = k(i,j)(Iin(i,j)-Ibf(i,j)), wherein i,j denote position coordinates of a pixel, Iin(i,j) denotes a pixel value of the grayscale two-dimensional code image, Ibf(i,j) denotes a pixel value of the baseband layer image, and the weighting coefficient k(i,j) is the normalization factor;
- 10. The apparatus according to claim 6, wherein the binarization module comprises: a statistical highest-peak acquisition unit, configured to obtain, within a preset effective gray-level range, a highest high-gray statistical peak and a highest low-gray statistical peak in the histogram statistics; and a local dynamic threshold determination unit, configured to determine the histogram statistics between the highest high-gray statistical peak and the highest low-gray statistical peak as an effective dynamic range and take the gray level with the smallest pixel count within the effective dynamic range as the local dynamic threshold.
- 11. A computer storage medium having computer-executable instructions stored therein, the computer-executable instructions being used to execute the method for two-dimensional code image processing according to any one of claims 1 to 5.
- 12. A terminal, the terminal comprising: a storage medium, configured to store computer-executable instructions; and a processor, configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions comprising: converting an acquired original two-dimensional code image into a grayscale two-dimensional code image; filtering the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; performing histogram statistics on the baseband layer image, selecting a local dynamic threshold according to grayscale contrast characteristics of the baseband layer image in the histogram statistics, and binarizing the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image.
- 13. A terminal, the terminal comprising: a processor, configured to convert an acquired original two-dimensional code image into a grayscale two-dimensional code image; filter the grayscale two-dimensional code image to obtain a baseband layer image of the grayscale two-dimensional code image; perform histogram statistics on the baseband layer image, select a local dynamic threshold according to grayscale contrast characteristics of the baseband layer image in the histogram statistics, and binarize the baseband layer image according to the local dynamic threshold to obtain a binary two-dimensional code image; and a display device, configured to display the two-dimensional code image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610017051.4 | 2016-01-11 | ||
CN201610017051.4A CN106960427A (zh) | 2016-01-11 | 2016-01-11 | 二维码图像处理的方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017121018A1 true WO2017121018A1 (zh) | 2017-07-20 |
Family
ID=59310814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/075259 WO2017121018A1 (zh) | 2016-01-11 | 2016-03-01 | 二维码图像处理的方法和装置、终端、存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106960427A (zh) |
WO (1) | WO2017121018A1 (zh) |
Application Events
- 2016-01-11: CN application CN201610017051.4A filed (published as CN106960427A, zh); status: active, pending
- 2016-03-01: WO application PCT/CN2016/075259 filed (published as WO2017121018A1, zh); status: active, application filing
Also Published As
Publication number | Publication date |
---|---|
CN106960427A (zh) | 2017-07-18 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16884578; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16884578; Country of ref document: EP; Kind code of ref document: A1 |