CN110046529B - Two-dimensional code identification method, device and equipment - Google Patents

Two-dimensional code identification method, device and equipment

Info

Publication number
CN110046529B
CN110046529B (application number CN201811513649.8A)
Authority
CN
China
Prior art keywords
image
dimensional code
character
recognized
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811513649.8A
Other languages
Chinese (zh)
Other versions
CN110046529A (en)
Inventor
陈家大
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811513649.8A priority Critical patent/CN110046529B/en
Publication of CN110046529A publication Critical patent/CN110046529A/en
Priority to TW108133787A priority patent/TWI726422B/en
Priority to PCT/CN2019/114218 priority patent/WO2020119301A1/en
Application granted granted Critical
Publication of CN110046529B publication Critical patent/CN110046529B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1439 Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443 Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image

Abstract

The embodiments of this specification provide a two-dimensional code identification method, apparatus, and device. An image to be recognized is acquired. When the image to be recognized contains a two-dimensional code, a specified number of corner points of the two-dimensional code are detected in the image to be recognized according to a deep learning detection algorithm. A target area in which the two-dimensional code is located is determined in the image to be recognized according to the position coordinates of the specified number of corner points. Image correction, which may comprise at least a perspective transformation, is performed on the target area to obtain a corrected image. Two-dimensional code recognition is then performed on the corrected image.

Description

Two-dimensional code identification method, device and equipment
Technical Field
One or more embodiments of the present disclosure relate to the field of image recognition, and in particular, to a two-dimensional code recognition method, apparatus, and device.
Background
A two-dimensional bar code (2D bar code) is a bar code that records information by means of a pattern distributed, according to a certain rule, in two dimensions of a plane. Among 2D codes, the QR code is the most common. A QR code has three position detection patterns resembling the Chinese character 回 (hereinafter referred to as finder patterns), which are used for positioning and are located at the upper-left, upper-right, and lower-left corners of the code. A typical identification method is as follows: image processing techniques are used to search the image to be recognized for the three finder patterns of the code; a normalized image is recovered according to the number and positions of the finder patterns; the image is then converted into a binary dot matrix by a binarization method; finally, the character content carried by the dot matrix is parsed according to the standard syntax of the two-dimensional code.
However, when the image to be recognized is imperfect, for example when the finder patterns of the code are severely deformed or occluded, or when the image is taken at a large angle, the traditional method usually cannot find three ideal finder patterns, so a normalized image cannot be recovered and the two-dimensional code ultimately cannot be identified. It is therefore desirable to provide a more robust two-dimensional code identification method.
Disclosure of Invention
One or more embodiments of the present specification describe a two-dimensional code recognition method, apparatus, and device that can accurately recognize a two-dimensional code in an imperfect image.
In a first aspect, a two-dimensional code recognition method is provided, including:
acquiring an image to be recognized;
when the image to be recognized contains a two-dimensional code, detecting a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm;
determining a target area in which the two-dimensional code is located in the image to be recognized according to the position coordinates of the specified number of corner points;
performing image correction on the target area to obtain a corrected image, the image correction comprising at least a perspective transformation; and
performing two-dimensional code recognition on the corrected image.
In a second aspect, a two-dimensional code recognition apparatus is provided, including:
an acquisition unit, configured to acquire an image to be recognized;
a detection unit, configured to detect, when the image to be recognized acquired by the acquisition unit contains a two-dimensional code, a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm;
a determining unit, configured to determine a target area in which the two-dimensional code is located in the image to be recognized according to the position coordinates of the specified number of corner points detected by the detection unit;
a correction unit, configured to perform image correction on the target area determined by the determining unit to obtain a corrected image, the image correction comprising at least a perspective transformation; and
a recognition unit, configured to perform two-dimensional code recognition on the image corrected by the correction unit.
In a third aspect, a two-dimensional code recognition device is provided, including:
a memory;
one or more processors; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs, when executed by the processors, implement the following steps:
acquiring an image to be recognized;
when the image to be recognized contains a two-dimensional code, detecting a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm;
determining a target area in which the two-dimensional code is located in the image to be recognized according to the position coordinates of the specified number of corner points;
performing image correction on the target area to obtain a corrected image, the image correction comprising at least a perspective transformation; and
performing two-dimensional code recognition on the corrected image.
According to the two-dimensional code identification method, apparatus, and device provided by one or more embodiments of this specification, an image to be recognized is acquired. When the image to be recognized contains a two-dimensional code, a specified number of corner points of the two-dimensional code are detected in the image to be recognized according to a deep learning detection algorithm. A target area in which the two-dimensional code is located is determined in the image to be recognized according to the position coordinates of the specified number of corner points. Image correction, which may comprise at least a perspective transformation, is performed on the target area to obtain a corrected image, and two-dimensional code recognition is performed on the corrected image. In other words, before the two-dimensional code is recognized, the solution of this specification first determines the two-dimensional code area in the image to be recognized on the basis of a deep learning detection algorithm, and then corrects and recognizes that area. In this way, a two-dimensional code in an imperfect image can be recognized accurately, and the recognition efficiency of the two-dimensional code can be greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of a two-dimensional code recognition system provided in the present specification;
fig. 2 is a flowchart of a two-dimensional code recognition method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a two-dimensional code provided in the present specification;
fig. 4 is a schematic view of the enlargement of a region to be recognized provided in the present specification;
fig. 5 is a schematic view of the corner points of a two-dimensional code provided in this specification;
FIG. 6a is a schematic diagram of a corrected image provided herein;
FIG. 6b is a schematic diagram of a contrast enhanced image provided herein;
fig. 6c is a schematic diagram of a binarized image provided in this specification;
fig. 7 is a schematic view of a two-dimensional code recognition apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic view of a two-dimensional code recognition device according to an embodiment of the present disclosure.
Detailed Description
The solution provided in this specification is described below with reference to the accompanying drawings.
Before the solution provided in the present specification is described, its inventive concept is introduced as follows:
Owing to factors such as the position, distance, and angle of the camera and the ambient illumination, the captured image of the two-dimensional code to be recognized (hereinafter the image to be recognized) is usually not a perfect image. To cope with images to be recognized of varying quality, traditional methods usually design complex multi-feature fusion logic; that is, the complexity of the traditional two-dimensional code recognition method is usually high. The two-dimensional code may be a PDF417 code, a Data Matrix code, a QR code, or the like; in the following description of this specification, the QR code is used as an example.
The applicant of the present application considers that the traditional two-dimensional code recognition method is complicated precisely because the image to be recognized is itself imperfect. If the image to be recognized can be well corrected before recognition, the complexity of the two-dimensional code recognition method is greatly reduced. The present solution is therefore mainly directed at the preprocessing flow (framework) for the image to be recognized.
First, the two-dimensional code area of the image to be recognized is determined based on a deep learning detection algorithm, and the area is then corrected and recognized. A deep learning detection algorithm is relatively computation-intensive, so unnecessary inputs to it should be reduced as much as possible. One implementation idea is: first determine whether the image to be recognized contains a two-dimensional code, and feed the image to the deep learning detection algorithm only when it does.
In one implementation, a finder pattern with relatively high confidence may be searched for in the image to be recognized. If such a finder pattern is detected, it can be determined that the image to be recognized contains a two-dimensional code. The confidence of a finder pattern may be determined as follows: taking the center point of the finder pattern as a starting point, extend a number of pixels outward around it to obtain an upright rectangular area containing the finder pattern; compute grayscale histogram statistics over that rectangular area; if the resulting grayscale histogram is bimodal, the confidence of the finder pattern is high, otherwise it is low.
In another implementation, three ideal finder patterns may be searched for in the image to be recognized; if all three are detected, it may likewise be determined that the image to be recognized contains a two-dimensional code.
Secondly, image correction is usually time-consuming. Since what is ultimately recognized is the two-dimensional code in the image to be recognized, correction can be accelerated by correcting only the two-dimensional code area rather than the whole image. How, then, is the two-dimensional code area determined in the image to be recognized?
One implementation idea is to detect a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm, and to determine the two-dimensional code area according to the position coordinates of those corner points. Note that the deep learning detection algorithm may be obtained by training on a number of images in which the corner points of two-dimensional codes have been annotated in advance.
Another implementation idea is to determine the two-dimensional code area according to the positions of the three ideal finder patterns detected in the image to be recognized.
Finally, because the specified number of corner points of the two-dimensional code can be detected by the deep learning detection algorithm, image correction such as perspective transformation and lens distortion correction can be applied to the two-dimensional code area on the basis of the coordinates of those corner points. When several correction operations are performed simultaneously, repeatedly writing image data to memory is avoided, which greatly improves correction efficiency and, in turn, the recognition efficiency of the two-dimensional code.
It can be understood that, after the image to be recognized has undergone the above series of preprocessing steps, its quality is greatly improved, so that the content contained in the two-dimensional code can be recognized more easily by the subsequent recognition algorithm.
The solution provided in this specification follows from this inventive concept and is described in detail below.
Fig. 1 is a schematic view of the two-dimensional code recognition system provided in this specification. In Fig. 1, the two-dimensional code recognition system 10 may include: a feature detection module 102, a corner detection module 104, an image correction module 106, and a recognition module 108.
The feature detection module 102 is configured to detect a finder pattern with high confidence in the image to be recognized. A finder pattern has the following property: the lengths of the segments formed by its alternating black and white pixels are in the ratio 1:1:3:1:1. Using this property, finder patterns can be detected in the image to be recognized. The confidence of a finder pattern is determined as described above and is not repeated here.
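For illustration only, the following sketch shows one way the 1:1:3:1:1 scan could be implemented; it is not the claimed implementation. It assumes a binarized image row in which dark pixels are 0 and light pixels are 255, and the tolerance parameter is an illustrative choice.

```python
import numpy as np

def row_has_finder_ratio(binary_row, tolerance=0.5):
    """Check one binarized image row for five consecutive runs of pixels
    (dark-light-dark-light-dark) whose widths are roughly in the ratio
    1:1:3:1:1, the signature of a QR finder pattern."""
    values, runs = [], []
    for px in binary_row:                      # run-length encode the row
        if values and values[-1] == px:
            runs[-1] += 1
        else:
            values.append(px)
            runs.append(1)
    for i in range(len(runs) - 4):             # slide a 5-run window
        if values[i] != 0:                     # the window must start on a dark run
            continue
        module = sum(runs[i:i + 5]) / 7.0      # estimated width of one module
        if all(abs(runs[i + k] - e * module) <= tolerance * module
               for k, e in enumerate((1, 1, 3, 1, 1))):
            return True
    return False
```

Scanning rows (and, symmetrically, columns) with such a check yields candidate finder-pattern centers for the confidence test described above.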
The corner detection module 104 is configured to detect a specified number of corner points of the two-dimensional code in an image to be recognized that contains a two-dimensional code. Such an image may be one in which a finder pattern with high confidence has been detected. As described above, the corner detection module 104 may detect the specified number of corner points of the two-dimensional code through a deep learning detection algorithm.
The image correction module 106 is configured to perform image correction on the area determined by the position coordinates of the specified number of corner points (i.e., the two-dimensional code area). The image correction here may include, but is not limited to, perspective transformation and lens distortion correction. Note that, because the deep learning detection algorithm yields the specified number of corner points of the two-dimensional code, perspective transformation and lens distortion correction can be performed on the two-dimensional code area simultaneously, which greatly improves the efficiency of image correction.
The recognition module 108 is configured to recognize the two-dimensional code area after image correction, for example, to identify and output the content contained in the two-dimensional code.
Optionally, the two-dimensional code recognition system may further include a contrast enhancement module 110, configured to perform contrast enhancement on the corrected two-dimensional code area using a local histogram method, so as to obtain better contrast.
Further, a binarization module 112 may also be included, configured to binarize the corrected (or contrast-enhanced) two-dimensional code area so that it is easier to recognize.
Fig. 2 is a flowchart of the two-dimensional code identification method provided in an embodiment of the present disclosure. The execution subject of the method may be a device with processing capability: a server, a system, or an apparatus, for example the two-dimensional code recognition system in Fig. 1. As shown in Fig. 2, the method may specifically include:
step 202, acquiring an image to be identified.
Here, the image to be recognized may be obtained through a camera of a terminal device, where the terminal device may refer to a smart phone, a tablet computer, a digital camera, or other similar terminal devices. After the image to be recognized is acquired, the image to be recognized may be subjected to grayscale processing to obtain a grayscale image. It should be noted that, the value range of the gray value (pixel value for short) of the pixel point in the gray image may be: [0,255].
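As a minimal illustration of this acquisition-and-grayscale step (using OpenCV in Python is an assumption; the specification does not prescribe a library, and "frame.jpg" is a placeholder path):

```python
import cv2

# Read the captured frame and convert it to a single-channel grayscale image
# whose pixel values lie in [0, 255].
image = cv2.imread("frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```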
As described above, in order to reduce unnecessary inputs to the deep learning detection algorithm, a step of determining whether the image contains a two-dimensional code may be executed after the grayscale image is obtained. This step may be executed by the feature detection module 102 and may specifically include:
step a, carrying out feature detection on the gray level image to detect whether the image to be identified contains the character-back feature.
As described above, the character-hui feature in this specification has a feature of 1:1:3:1:1, and thus the character-hui feature can be detected based on the feature. It should be noted that, in the case that the image to be recognized is perfect, 3 circle characters can be detected. When the return character features in the image to be recognized are deformed and shielded or the image to be recognized is a large-angle image, the ideal 3 return character features cannot be detected, but a single return character feature can be detected generally, so that the method for judging whether the two-dimensional code is included in the embodiment of the specification has high robustness. Taking the two-dimensional code shown in fig. 3 as an example, the character hui in the upper left corner can be detected.
Step b: if a finder pattern is detected, take its center point as the starting point and extend a number of pixels outward around the finder pattern to obtain an upright rectangular area containing it.
Here, "around" refers to the four directions surrounding the finder pattern, and the number of pixels extended in each direction is determined by the size of the finder pattern. Specifically, according to the 1:1:3:1:1 property, the finder pattern in this specification spans 7 × 7 module cells; assuming 1 module cell corresponds to 1 pixel, the size of the finder pattern is 1 × 7 = 7 pixels. In one implementation, the number of extended pixels may be 1 × 8, where 1 is the aforementioned 1 pixel per module cell and 8 is chosen (rather than 7) so that the resulting rectangular area fully contains the finder pattern (i.e., covers more than 7 module cells). In practical applications, the 8 in this formula may be replaced by any larger number, which is not limited in this specification.
Step c: compute grayscale histogram statistics over the upright rectangular area.
The abscissa of the grayscale histogram is the distinct pixel values occurring in the rectangular area, which, as noted above, range over [0, 255]; the ordinate is the number of pixels having each value.
Step d: if the resulting grayscale histogram is bimodal, determine that the image to be recognized contains a two-dimensional code.
Note that steps b to d may be executed when the 3 ideal finder patterns are not detected. If 3 ideal finder patterns can be detected in step a, it can be determined directly that the image to be recognized contains a two-dimensional code without executing steps b to d; this is not limited in this specification.
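For illustration only, steps b to d could be sketched as follows; this is not the claimed implementation. The half-width of the crop, the smoothing window, and the peak-prominence threshold are illustrative assumptions, since the specification does not prescribe a particular bimodality test.

```python
import numpy as np

def contains_code_by_finder_confidence(gray, cx, cy, half_size=8):
    """Crop an upright square around the candidate finder-pattern center
    (cx, cy), compute its grayscale histogram, and report whether the
    histogram looks bimodal (two prominent peaks)."""
    h, w = gray.shape
    patch = gray[max(cy - half_size, 0):min(cy + half_size, h),
                 max(cx - half_size, 0):min(cx + half_size, w)]

    hist, _ = np.histogram(patch, bins=256, range=(0, 256))
    smooth = np.convolve(hist, np.ones(9) / 9.0, mode="same")  # suppress noise

    peaks = sum(
        1 for i in range(1, 255)
        if smooth[i] > smooth[i - 1]
        and smooth[i] >= smooth[i + 1]
        and smooth[i] > 0.1 * smooth.max()        # ignore tiny local maxima
    )
    return peaks == 2                             # bimodal -> high confidence
```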
Determining whether the image to be recognized contains a two-dimensional code by detecting a single finder pattern with high confidence can reduce the false-detection rate of this determination.
Step 204: when the image to be recognized contains a two-dimensional code, detect a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm.
Here, the corner detection module 104 may detect the specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm.
Optionally, to ensure that the deep learning detection algorithm can detect the specified number of corner points when the image to be recognized contains a two-dimensional code, a step of determining the size of the two-dimensional code may also be executed before step 204. This step includes:
acquiring the size of the finder pattern; estimating the size of the two-dimensional code according to a preset conversion rule and the size of the finder pattern; if the estimated size of the two-dimensional code does not satisfy a preset condition, extracting from the image to be recognized a region to be recognized centered on the finder pattern; and enlarging the region to be recognized.
The size of the two-dimensional code may be estimated as follows. Assume the size of the detected finder pattern is 3 × 7 = 21 pixels, so that 1 module cell of the finder pattern can be determined to correspond to 3 pixels. Assume further that the preset conversion rule is: determine the size of the two-dimensional code from the number of pixels per module cell and a preset maximum two-dimensional code grid. Then, when the preset maximum grid is 57 × 57 modules, the size of the two-dimensional code may be 3 × 57 = 171 pixels.
In practical applications, the preset conversion rule may also be another algorithm, for example enlarging the size of the finder pattern by a preset factor to determine the size of the two-dimensional code; this is not limited in this specification.
Fig. 4 is a schematic diagram of the enlargement of the region to be recognized. In Fig. 4, suppose the size of the image to be recognized is 1000 × 1000 and the size of the two-dimensional code obtained by the above conversion rule does not satisfy the preset condition. A region to be recognized centered on the finder pattern, for example of size 400 × 400, may then be extracted from the image to be recognized, and this 400 × 400 region is enlarged.
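For illustration only, the size estimate and enlargement could look as follows; the threshold min_code_px, the 400 × 400 crop, and the 2× scale factor are illustrative assumptions, not values fixed by this specification.

```python
import cv2

def crop_and_enlarge_roi(gray, center, finder_size_px, max_modules=57,
                         min_code_px=200, roi_size=400, scale=2.0):
    """Estimate the two-dimensional code size from the finder-pattern size
    and, if the estimate is too small, crop a region centered on the finder
    pattern and enlarge it."""
    module_px = finder_size_px / 7.0          # the finder pattern spans 7 modules
    est_code_px = module_px * max_modules     # e.g. 3 px/module * 57 = 171 px

    if est_code_px >= min_code_px:            # large enough: no enlargement needed
        return gray, 1.0

    cx, cy = center
    h, w = gray.shape
    half = roi_size // 2
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    roi = gray[y0:y1, x0:x1]
    enlarged = cv2.resize(roi, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_CUBIC)
    return enlarged, scale
```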
It can be understood that when the enlargement of the region to be recognized is performed, step 204 may be replaced with: detecting the specified number of corner points of the two-dimensional code in the enlarged region to be recognized according to a deep learning detection algorithm.
In one example, the specified number of corner points in step 204 (or in the replacement step above) may be the 4 corner points of the two-dimensional code. Again taking Fig. 3 as an example, the 4 detected corner points may be as shown in Fig. 5.
In addition, the deep learning detection algorithm in this specification may be obtained by training on a number of images in which a specified number of corner points of two-dimensional codes have been annotated in advance. The trained detection algorithm can imitate the human eye's perception of the code's corner points and thereby achieve higher robustness. The algorithm can also be updated quickly for new scenarios through deep learning fine-tuning (finetune).
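For illustration only, the following sketch shows how the inference side of such a corner detector might be called; it is not the trained model of this specification. The use of cv2.dnn, the 96 × 96 input size, and the assumption that the network regresses 8 normalized coordinates (an x, y pair for each of the 4 corners) are all illustrative assumptions.

```python
import cv2
import numpy as np

def detect_qr_corners(gray_roi, net, input_size=96):
    """Run a (hypothetical) trained corner-regression network on the region
    of interest and return the 4 corner points in ROI pixel coordinates.
    The network is assumed to output 8 numbers: normalized (x, y) pairs for
    the 4 corners; this output layout is an assumption."""
    h, w = gray_roi.shape
    blob = cv2.dnn.blobFromImage(gray_roi, scalefactor=1.0 / 255.0,
                                 size=(input_size, input_size))
    net.setInput(blob)
    out = net.forward().reshape(-1)
    corners = out[:8].reshape(4, 2) * np.array([w, h], dtype=np.float32)
    return corners
```

In practice such a network would be trained on images in which the 4 corner points are annotated, as described above.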
Thus, the single high-confidence finder pattern detected in this specification serves both to determine whether the image to be recognized contains a two-dimensional code and to estimate the size of the two-dimensional code from the finder-pattern size. When the code size does not satisfy the preset condition, the region around the finder pattern can be enlarged, which improves the success rate of detecting the corner points of the two-dimensional code. In addition, the enlarged region can be understood as a coarse localization of the two-dimensional code, and this coarse localization reduces the search space of the deep learning detection algorithm.
Furthermore, compared with the traditional method (i.e., locating the two-dimensional code area from 3 ideal finder patterns), determining the two-dimensional code area through a deep learning detection algorithm, as in the embodiments of this specification, is more robust. Specifically, the deep learning detection algorithm provided by the embodiments of this specification can accurately locate the two-dimensional code area even when the finder patterns of the code are deformed or occluded, or when the image to be recognized is taken at a large angle.
Step 206: determine the target area in which the two-dimensional code is located in the image to be recognized according to the position coordinates of the specified number of corner points.
Taking Fig. 5 as an example, the target area determined in this step may be the rectangular area formed by the 4 corner points in the figure.
Step 208: perform image correction on the target area to obtain a corrected image.
For example, steps 206 and 208 may be performed by the image correction module 106.
Taking Fig. 5 as an example, after image correction is performed on its target area, the corrected image shown in Fig. 6a can be obtained.
The image correction may include at least a perspective transformation, and may further include lens distortion correction and the like. Note that, because the 4 corner points of the target area have already been determined in step 204, this step can perform the perspective transformation directly; there is no need to first apply lens distortion correction to the target area in order to determine its 4 corner points and only then apply the perspective transformation. In one implementation, when lens distortion correction is applied to the target area, it can be performed simultaneously with the perspective transformation, so that image data only needs to be written to memory once, which greatly improves the efficiency of image correction.
Furthermore, because lens distortion correction is a nonlinear mapping, it is very resource-intensive. This step therefore performs image correction only on the target area rather than on the entire image to be recognized, which greatly reduces the amount of computation.
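For illustration only, the perspective-transformation part of this correction could be sketched as follows; the 171-pixel output size (3 pixels per module × 57 modules) and the corner ordering are illustrative assumptions, and folding lens distortion correction into the same remap (which requires known camera intrinsics) is not shown.

```python
import cv2
import numpy as np

def rectify_qr_region(gray, corners, out_size=171):
    """Warp the quadrilateral defined by the 4 detected corner points
    (ordered top-left, top-right, bottom-right, bottom-left) onto an
    upright square of side out_size pixels."""
    src = np.asarray(corners, dtype=np.float32)
    dst = np.array([[0, 0],
                    [out_size - 1, 0],
                    [out_size - 1, out_size - 1],
                    [0, out_size - 1]], dtype=np.float32)
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(gray, m, (out_size, out_size))
```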
Step 210: perform two-dimensional code recognition on the corrected image.
For example, the recognition module 108 may perform two-dimensional code recognition on the corrected image.
To make the corrected image easier to recognize, the embodiments of this specification may further apply image processing steps such as contrast enhancement and binarization to the corrected image. Specifically, a local histogram method is first used to perform contrast enhancement on the corrected image to obtain a contrast-enhanced image; the enhanced image is then binarized to obtain a binarized image; finally, two-dimensional code recognition is performed on the binarized image.
Taking Fig. 6a as an example, after contrast enhancement is performed on it, the contrast-enhanced image shown in Fig. 6b can be obtained; after the image shown in Fig. 6b is binarized, the binarized image shown in Fig. 6c can be obtained.
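For illustration only, this post-processing chain could be sketched as follows. Using CLAHE as the "local histogram method", Otsu thresholding as the binarization rule, and OpenCV's built-in QRCodeDetector as the final decoder are illustrative assumptions; the specification does not fix these choices.

```python
import cv2

def enhance_and_decode(rectified):
    """Apply local-histogram contrast enhancement (CLAHE), binarize the
    result with Otsu's threshold, and decode the QR code with OpenCV's
    built-in detector."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(rectified)                 # contrast-enhanced image

    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    detector = cv2.QRCodeDetector()
    text, _, _ = detector.detectAndDecode(binary)
    return text, binary
```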
In summary, the two-dimensional code recognition method provided in the embodiments of this specification determines whether the image to be recognized contains a two-dimensional code by detecting a single finder pattern with high confidence. Images that do not contain a two-dimensional code are discarded, so that not every image is fed into the subsequent, computationally expensive deep learning detection algorithm. In addition, the single high-confidence finder pattern enables coarse localization of the two-dimensional code, so that when the code size does not satisfy a preset condition, the region centered on the finder pattern is enlarged. Moreover, locating the corner points of the two-dimensional code with a trained deep learning detection algorithm avoids the complex multi-feature fusion logic that traditional algorithms design in order to cope with varied two-dimensional code image quality. Finally, based on the corner points detected by the deep learning detection algorithm, image correction such as perspective transformation and lens distortion correction can be applied to the two-dimensional code area, which greatly improves the efficiency of image correction.
Corresponding to the two-dimensional code identification method, an embodiment of the present specification further provides a two-dimensional code identification device, as shown in fig. 7, where the device may include:
an acquiring unit 702 is configured to acquire an image to be recognized.
A detection unit 704, configured to detect, when the image to be recognized acquired by the acquiring unit 702 contains a two-dimensional code, a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm.
The function of the detection unit 704 may be implemented by the corner detection module 104.
The determining unit 706 is configured to determine, according to the position coordinates of the specified number of corner points detected by the detection unit 704, the target area in which the two-dimensional code is located in the image to be recognized.
A correcting unit 708 is configured to perform image correction on the target area determined by the determining unit 706 to obtain a corrected image, where the image correction may include at least a perspective transformation and may further include lens distortion correction and the like.
The functions of the determination unit 706 and the correction unit 708 described above may be implemented by the image correction module 106.
An identifying unit 710 is configured to perform two-dimensional code recognition on the image corrected by the correcting unit 708.
Wherein the function of the recognition unit 710 may be implemented by the recognition module 108.
Optionally, the apparatus may further include a judging unit 712, configured to: perform feature detection on the image to be recognized to detect whether it contains a finder pattern; if a finder pattern is detected, take the center point of the finder pattern as a starting point and extend a number of pixels outward around it to obtain an upright rectangular area containing the finder pattern; compute grayscale histogram statistics over that rectangular area; and, if the resulting grayscale histogram is bimodal, determine that the image to be recognized contains a two-dimensional code.
The function of the judging unit 712 may be implemented by the feature detection module 102.
Optionally, the apparatus may further include a conversion unit 714, an extraction unit 716, and an enlargement unit 718.
The acquiring unit 702 is further configured to acquire the size of the finder pattern.
The conversion unit 714 is configured to estimate the size of the two-dimensional code according to a preset conversion rule and the size of the finder pattern acquired by the acquiring unit 702.
The extraction unit 716 is configured to extract, from the image to be recognized, a region to be recognized centered on the finder pattern if the size of the two-dimensional code estimated by the conversion unit 714 does not satisfy a preset condition.
The enlargement unit 718 is configured to enlarge the region to be recognized extracted by the extraction unit 716.
In this case, the detection unit 704 is specifically configured to:
detect the specified number of corner points of the two-dimensional code in the enlarged region to be recognized according to a deep learning detection algorithm.
The identifying unit 710 is specifically configured to:
and performing contrast enhancement processing on the corrected image by adopting a local histogram method to obtain a contrast enhanced image.
And carrying out binarization processing on the contrast enhanced image to obtain a binarized image.
And carrying out two-dimensional code identification on the binary image.
The function of the identification unit 710 here can be realized by the above-mentioned identification module 108, contrast enhancement module 110, and binarization module 112.
The functions of each functional module of the device in the above embodiments of the present description may be implemented through each step of the above method embodiments, and therefore, a specific working process of the device provided in one embodiment of the present description is not repeated herein.
In the two-dimensional code recognition apparatus provided in an embodiment of this specification, the acquiring unit 702 acquires an image to be recognized. When the image to be recognized contains a two-dimensional code, the detection unit 704 detects a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm. The determining unit 706 determines the target area in which the two-dimensional code is located in the image to be recognized according to the position coordinates of the specified number of corner points. The correcting unit 708 performs image correction on the target area to obtain a corrected image, where the image correction may include at least a perspective transformation. The identifying unit 710 performs two-dimensional code recognition on the corrected image. In this way, a two-dimensional code in an imperfect image can be recognized accurately, and the recognition efficiency of the two-dimensional code can be greatly improved.
Corresponding to the two-dimensional code identification method, an embodiment of the present specification further provides a two-dimensional code identification device, as shown in fig. 8, the device may include: memory 802, one or more processors 804, and one or more programs. Wherein the one or more programs are stored in the memory 802 and configured to be executed by the one or more processors 804, the programs when executed by the processors 804 implement the steps of:
and acquiring an image to be identified.
And when the image to be recognized contains the two-dimensional code, detecting the angular points of the two-dimensional code in the image to be recognized in the specified number according to a deep learning detection algorithm.
And determining a target area of the two-dimensional code in the image to be identified according to the position coordinates of the specified number of corner points.
And carrying out image correction on the target area to obtain a corrected image, wherein the image correction at least comprises perspective transformation.
And carrying out two-dimensional code recognition on the corrected image.
The two-dimensional code recognition device provided in an embodiment of this specification can accurately recognize a two-dimensional code in an imperfect image.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC; additionally, the ASIC may reside in a server. Of course, the processor and the storage medium may also reside as discrete components in a server.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above-mentioned embodiments, objects, technical solutions and advantages of the present specification are further described in detail, it should be understood that the above-mentioned embodiments are only specific embodiments of the present specification, and are not intended to limit the scope of the present specification, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present specification should be included in the scope of the present specification.

Claims (9)

1. A two-dimensional code identification method, comprising:
acquiring an image to be recognized;
detecting, in the image to be recognized, a finder pattern with high confidence, wherein the confidence of the finder pattern is determined by: taking a center point of the finder pattern as a starting point and extending a number of pixels outward around the finder pattern to obtain an upright rectangular area containing the finder pattern; computing grayscale histogram statistics over the upright rectangular area; and, if the resulting grayscale histogram is a bimodal histogram, determining that the confidence of the finder pattern is high;
if a finder pattern with high confidence is detected, determining that the image to be recognized contains a two-dimensional code;
when the image to be recognized contains the two-dimensional code, detecting a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm;
determining, according to position coordinates of the specified number of corner points, a target area in which the two-dimensional code is located in the image to be recognized;
performing image correction on the target area according to the position coordinates of the specified number of corner points to obtain a corrected image, the image correction comprising at least a perspective transformation; and
performing two-dimensional code recognition on the corrected image.
2. The method according to claim 1, further comprising, before the detecting of the specified number of corner points of the two-dimensional code in the image to be recognized according to the deep learning detection algorithm:
acquiring a size of the finder pattern;
estimating a size of the two-dimensional code according to a preset conversion rule and the size of the finder pattern;
if the size of the two-dimensional code does not satisfy a preset condition, extracting, from the image to be recognized, a region to be recognized centered on the finder pattern; and
enlarging the region to be recognized;
wherein the detecting of the specified number of corner points of the two-dimensional code in the image to be recognized according to the deep learning detection algorithm comprises:
detecting the specified number of corner points of the two-dimensional code in the enlarged region to be recognized according to the deep learning detection algorithm.
3. The method of claim 1, the image correction further comprising lens distortion correction.
4. The method according to any one of claims 1-3, wherein the performing two-dimensional code recognition on the corrected image comprises:
performing contrast enhancement processing on the corrected image by adopting a local histogram method to obtain a contrast enhanced image;
carrying out binarization processing on the contrast enhanced image to obtain a binarized image;
and carrying out two-dimensional code identification on the binary image.
5. A two-dimensional code recognition apparatus, comprising:
an acquisition unit, configured to acquire an image to be recognized;
a detection unit, configured to detect, in the image to be recognized, a finder pattern with high confidence, wherein the confidence of the finder pattern is determined by: taking a center point of the finder pattern as a starting point and extending a number of pixels outward around the finder pattern to obtain an upright rectangular area containing the finder pattern; computing grayscale histogram statistics over the upright rectangular area; and, if the resulting grayscale histogram is a bimodal histogram, determining that the confidence of the finder pattern is high;
wherein, if a finder pattern with high confidence is detected, it is determined that the image to be recognized contains a two-dimensional code;
the detection unit being further configured to detect, when the image to be recognized acquired by the acquisition unit contains the two-dimensional code, a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm;
a determining unit, configured to determine, according to position coordinates of the specified number of corner points detected by the detection unit, a target area in which the two-dimensional code is located in the image to be recognized;
a correction unit, configured to perform image correction on the target area determined by the determining unit according to the position coordinates of the specified number of corner points to obtain a corrected image, the image correction comprising at least a perspective transformation; and
a recognition unit, configured to perform two-dimensional code recognition on the image corrected by the correction unit.
6. The apparatus of claim 5, further comprising: a conversion unit, an extraction unit, and an enlargement unit;
the acquisition unit being further configured to acquire a size of the finder pattern;
the conversion unit being configured to estimate a size of the two-dimensional code according to a preset conversion rule and the size of the finder pattern acquired by the acquisition unit;
the extraction unit being configured to extract, from the image to be recognized, a region to be recognized centered on the finder pattern if the size of the two-dimensional code estimated by the conversion unit does not satisfy a preset condition;
the enlargement unit being configured to enlarge the region to be recognized extracted by the extraction unit;
wherein the detection unit is specifically configured to:
detect the specified number of corner points of the two-dimensional code in the enlarged region to be recognized according to the deep learning detection algorithm.
7. The apparatus of claim 5, the image correction further comprising lens distortion correction.
8. The apparatus according to any of claims 5-7, the identification unit being specifically configured to:
performing contrast enhancement processing on the corrected image by adopting a local histogram method to obtain a contrast enhanced image;
carrying out binarization processing on the contrast enhanced image to obtain a binarized image;
and carrying out two-dimensional code identification on the binary image.
9. A two-dimensional code recognition device comprising:
a memory;
one or more processors; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs, when executed by the processors, implement the following steps:
acquiring an image to be recognized;
detecting, in the image to be recognized, a finder pattern with high confidence, wherein the confidence of the finder pattern is determined by: taking a center point of the finder pattern as a starting point and extending a number of pixels outward around the finder pattern to obtain an upright rectangular area containing the finder pattern; computing grayscale histogram statistics over the upright rectangular area; and, if the resulting grayscale histogram is a bimodal histogram, determining that the confidence of the finder pattern is high;
if a finder pattern with high confidence is detected, determining that the image to be recognized contains a two-dimensional code;
when the image to be recognized contains the two-dimensional code, detecting a specified number of corner points of the two-dimensional code in the image to be recognized according to a deep learning detection algorithm;
determining, according to position coordinates of the specified number of corner points, a target area in which the two-dimensional code is located in the image to be recognized;
performing image correction on the target area according to the position coordinates of the specified number of corner points to obtain a corrected image, the image correction comprising at least a perspective transformation; and
performing two-dimensional code recognition on the corrected image.
CN201811513649.8A 2018-12-11 2018-12-11 Two-dimensional code identification method, device and equipment Active CN110046529B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201811513649.8A CN110046529B (en) 2018-12-11 2018-12-11 Two-dimensional code identification method, device and equipment
TW108133787A TWI726422B (en) 2018-12-11 2019-09-19 Two-dimensional code recognition method, device and equipment
PCT/CN2019/114218 WO2020119301A1 (en) 2018-12-11 2019-10-30 Two-dimensional code identification method, apparatus, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811513649.8A CN110046529B (en) 2018-12-11 2018-12-11 Two-dimensional code identification method, device and equipment

Publications (2)

Publication Number Publication Date
CN110046529A CN110046529A (en) 2019-07-23
CN110046529B true CN110046529B (en) 2020-06-09

Family

ID=67273847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811513649.8A Active CN110046529B (en) 2018-12-11 2018-12-11 Two-dimensional code identification method, device and equipment

Country Status (3)

Country Link
CN (1) CN110046529B (en)
TW (1) TWI726422B (en)
WO (1) WO2020119301A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046529B (en) * 2018-12-11 2020-06-09 阿里巴巴集团控股有限公司 Two-dimensional code identification method, device and equipment
CN110490023A (en) * 2019-08-27 2019-11-22 广东工业大学 A kind of two dimensional code deformation restoration methods, device and equipment
CN110705329B (en) * 2019-09-30 2021-09-14 联想(北京)有限公司 Processing method and device and electronic equipment
CN112686959A (en) * 2019-10-18 2021-04-20 菜鸟智能物流控股有限公司 Method and device for correcting image to be recognized
CN111860489A (en) * 2019-12-09 2020-10-30 北京嘀嘀无限科技发展有限公司 Certificate image correction method, device, equipment and storage medium
CN113378595B (en) * 2020-03-10 2023-09-22 顺丰科技有限公司 Two-dimensional code positioning method, device, equipment and storage medium
CN111222510B (en) * 2020-03-13 2024-03-15 中冶长天国际工程有限责任公司 Trolley grate image pickup method and system of sintering machine
CN111612012A (en) * 2020-05-25 2020-09-01 信雅达系统工程股份有限公司 Health code identification method and device
CN111428707B (en) * 2020-06-08 2020-11-10 北京三快在线科技有限公司 Method and device for identifying pattern identification code, storage medium and electronic equipment
CN117372011A (en) * 2020-06-15 2024-01-09 支付宝(杭州)信息技术有限公司 Counting method and device of traffic card, code scanning equipment and counting card server
CN111723802A (en) * 2020-06-30 2020-09-29 北京来也网络科技有限公司 AI-based two-dimensional code identification method, device, equipment and medium
CN112818979B (en) * 2020-08-26 2024-02-02 腾讯科技(深圳)有限公司 Text recognition method, device, equipment and storage medium
CN112308899A (en) * 2020-11-09 2021-02-02 北京经纬恒润科技股份有限公司 Trailer angle identification method and device
CN112541367A (en) * 2020-12-11 2021-03-23 上海品览数据科技有限公司 Multiple two-dimensional code identification method based on deep learning and image processing
CN114139564A (en) * 2021-12-07 2022-03-04 Oppo广东移动通信有限公司 Two-dimensional code detection method and device, terminal equipment and training method for detection network
US11922269B2 (en) 2021-12-22 2024-03-05 Bayer Aktiengesellschaft Reading out optically readable codes
CN116882433B (en) * 2023-09-07 2023-12-08 无锡维凯科技有限公司 Machine vision-based code scanning identification method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200188A (en) * 2014-08-25 2014-12-10 北京慧眼智行科技有限公司 Method and system for rapidly positioning position detection patterns of QR code
CN105260693A (en) * 2015-12-01 2016-01-20 浙江工业大学 Laser two-dimensional code positioning method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424458B (en) * 2013-08-23 2017-08-04 希姆通信息技术(上海)有限公司 Image-recognizing method and device, the webserver, image recognition apparatus and system
CN104268498B (en) * 2014-09-29 2017-09-19 杭州华为数字技术有限公司 A kind of recognition methods of Quick Response Code and terminal
CN104809422B (en) * 2015-04-27 2017-09-05 江苏中科贯微自动化科技有限公司 QR code recognition methods based on image procossing
CN104881770A (en) * 2015-06-03 2015-09-02 秦志勇 Express bill information identification system and express bill information identification method
CN105046184B (en) * 2015-07-22 2017-07-18 福建新大陆自动识别技术有限公司 Quick Response Code coding/decoding method and system based on distorted image correction
CN105701434A (en) * 2015-12-30 2016-06-22 广州卓德信息科技有限公司 Image correction method for two-dimensional code distorted image
CN106951812B (en) * 2017-03-31 2018-12-07 腾讯科技(深圳)有限公司 Identify the method, apparatus and terminal of two dimensional code
CN108416412B (en) * 2018-01-23 2021-04-06 浙江瀚镪自动化设备股份有限公司 Logistics composite code identification method based on multitask deep learning
CN108629221B (en) * 2018-05-11 2021-08-10 南京邮电大学 Correction method of fold distortion QR two-dimensional code
CN110046529B (en) * 2018-12-11 2020-06-09 阿里巴巴集团控股有限公司 Two-dimensional code identification method, device and equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200188A (en) * 2014-08-25 2014-12-10 北京慧眼智行科技有限公司 Method and system for rapidly positioning position detection patterns of QR code
CN105260693A (en) * 2015-12-01 2016-01-20 浙江工业大学 Laser two-dimensional code positioning method

Also Published As

Publication number Publication date
CN110046529A (en) 2019-07-23
TWI726422B (en) 2021-05-01
TW202024997A (en) 2020-07-01
WO2020119301A1 (en) 2020-06-18

Similar Documents

Publication Publication Date Title
CN110046529B (en) Two-dimensional code identification method, device and equipment
US10817741B2 (en) Word segmentation system, method and device
KR101617681B1 (en) Text detection using multi-layer connected components with histograms
CN107220640B (en) Character recognition method, character recognition device, computer equipment and computer-readable storage medium
US9076056B2 (en) Text detection in natural images
CN109117846B (en) Image processing method and device, electronic equipment and computer readable medium
WO2018059365A1 (en) Graphical code processing method and apparatus, and storage medium
CN110647882A (en) Image correction method, device, equipment and storage medium
US11341739B2 (en) Image processing device, image processing method, and program recording medium
CN112329779A (en) Method and related device for improving certificate identification accuracy based on mask
US20200302135A1 (en) Method and apparatus for localization of one-dimensional barcodes
CN112307786B (en) Batch positioning and identifying method for multiple irregular two-dimensional codes
CN113557520A (en) Character processing and character recognition method, storage medium and terminal device
CN110210467B (en) Formula positioning method of text image, image processing device and storage medium
CN113129298A (en) Definition recognition method of text image
CN112163443A (en) Code scanning method, code scanning device and mobile terminal
CN114998347B (en) Semiconductor panel corner positioning method and device
CN116976372A (en) Picture identification method, device, equipment and medium based on square reference code
CN112308062B (en) Medical image access number identification method in complex background image
CN109389000B (en) Bar code identification method and computer applying same
CN109871910B (en) Handwritten character recognition method and device
JPH07168910A (en) Document layout analysis device and document format identification device
CN113065480B (en) Handwriting style identification method and device, electronic device and storage medium
CN109117844B (en) Password determination method and device
Ahmed Signage recognition based wayfinding system for the visually impaired

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40011374

Country of ref document: HK

TR01 Transfer of patent right

Effective date of registration: 20201013

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Patentee after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Patentee before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20201013

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Patentee after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Patentee before: Alibaba Group Holding Ltd.
