CN111523342A - Two-dimensional code detection and correction method in complex scene - Google Patents
Info
- Publication number
- CN111523342A (application number CN202010340631.3A)
- Authority
- CN
- China
- Prior art keywords
- dimensional code
- image
- dimension code
- network model
- positioning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 28
- 238000012937 correction Methods 0.000 title claims abstract description 26
- 238000000034 method Methods 0.000 title claims abstract description 26
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 11
- 238000012549 training Methods 0.000 claims description 25
- 230000009466 transformation Effects 0.000 claims description 19
- 238000012360 testing method Methods 0.000 claims description 18
- 230000006870 function Effects 0.000 claims description 5
- 238000012216 screening Methods 0.000 claims description 4
- 239000011159 matrix material Substances 0.000 claims description 2
- 238000011176 pooling Methods 0.000 claims description 2
- 230000001131 transforming effect Effects 0.000 claims description 2
- 238000013527 convolutional neural network Methods 0.000 abstract 1
- 230000008569 process Effects 0.000 description 5
- 238000012545 processing Methods 0.000 description 5
- 230000004807 localization Effects 0.000 description 3
- 238000012706 support-vector machine Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000007635 classification algorithm Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/146—Methods for optical code recognition the method including quality enhancement steps
- G06K7/1482—Methods for optical code recognition the method including quality enhancement steps using fuzzy logic or natural solvers, such as neural networks, genetic algorithms and simulated annealing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1439—Methods for optical code recognition including a method step for retrieval of the optical code
- G06K7/1443—Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1439—Methods for optical code recognition including a method step for retrieval of the optical code
- G06K7/1456—Methods for optical code recognition including a method step for retrieval of the optical code determining the orientation of the optical code with respect to the reader and correcting therefore
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Toxicology (AREA)
- Electromagnetism (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Automation & Control Theory (AREA)
- Fuzzy Systems (AREA)
- Quality & Reliability (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a two-dimensional code detection and correction method in a complex scene, which mainly solves the problem that existing two-dimensional code identification algorithms fail in complex scenes. The method detects the two-dimensional code in a complex scene using cascaded local HOG features, and further screens and regresses the positioning points of the two-dimensional code using a convolutional neural network so as to accurately position and correct the two-dimensional code. With this method, the two-dimensional code can be quickly and accurately positioned and corrected in a complex scene, which has high practical and popularization value in fields requiring two-dimensional code identification, such as mobile payment, industrial inspection, and robot navigation by two-dimensional code.
Description
Technical Field
The invention relates to the technical field of image detection, in particular to a two-dimensional code detection and correction method in a complex scene.
Background
Two-dimensional codes are the most common encoding form in daily life and are widely applied to mobile payment, information acquisition, and the like. In the prior art, conventional two-dimensional code identification adopts binarization to obtain the module pattern of the two-dimensional code in an image, and then determines the locator patterns of the two-dimensional code from that pattern to realize detection and positioning. However, binarization-based identification is inefficient and inaccurate: it handles only simple scenes and cannot accurately detect and position a two-dimensional code in a complex environment. After the two-dimensional code is positioned, it must be rectified before identification; the commonly used method is to compute 3 correction points from the black-white interval ratio of the locator patterns in the image, but blurred or stained two-dimensional codes are difficult to correct this way.
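For illustration only (not part of the patent), the binarization step the prior art relies on can be sketched as a minimal Otsu thresholding; the synthetic "module pattern" image below is a made-up stand-in for a real two-dimensional code photograph.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    mean_all = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, 0.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total                       # weight of the dark class
        w1 = 1.0 - w0
        m0 = cum_mean / cum                    # mean of the dark class
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic module pattern: a dark square on a light background
img = np.full((8, 8), 220, dtype=np.uint8)
img[2:6, 2:6] = 30
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8) * 255
```

On a clean image like this, the threshold separates modules cleanly; the patent's point is that this breaks down under the blur, stains, and clutter of complex scenes.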
In addition, the prior art also includes methods for detecting and identifying a two-dimensional code using key points, for example the Chinese invention patent application No. 201911168409.3, titled "Two-dimensional code detection system and detection method based on key point detection". It comprises an image input module for inputting a two-dimensional code image to be detected; an image processing module for processing the image to obtain an image that meets the key-point detection requirements; a two-dimensional code detection module for detecting two-dimensional code regions in the image according to a preset pose estimation algorithm; and a two-dimensional code identification module for identifying each detected two-dimensional code region and outputting the identified content. However, although it detects using key points, it suffers from the problem that the key points cannot always be accurately detected.
Therefore, a two-dimensional code detection and correction method under a complex scene with accurate detection and simple steps is urgently needed.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a two-dimensional code detection and correction method in a complex scene, and the technical scheme adopted by the present invention is as follows:
a two-dimensional code detection and correction method under a complex scene comprises the following steps:
acquiring a plurality of two-dimensional code images in natural and payment scenes, marking the position and three positioning points of the two-dimensional code in the images with an existing two-dimensional code identification algorithm, and manually marking the unidentified images;
constructing a cascade classifier with local HOG features;
dividing the images into a training set and a test set, and acquiring two-dimensional code region positive samples and non-two-dimensional-code region negative samples from the training set; learning local HOG features that distinguish the positive samples from the negative samples;
performing window scanning on a two-dimensional code image to be detected using the learned HOG features to obtain the two-dimensional code region coordinates in the image;
constructing a positioning point regression network model composed of convolutional, pooling, and fully-connected layers; applying random rotation, color transformation, blurring, and deformation to the training-set images to obtain an expanded, enhanced data set;
inputting the training-set data into the positioning point regression network model and adjusting its parameters according to an L1 loss function to obtain a trained network model; testing its precision with the test set;
loading the trained network model and feeding any test-set image containing a two-dimensional code into it to obtain the three positioning point coordinates of the two-dimensional code in the image and the confidence of the two-dimensional code;
calculating affine transformation parameters from the coordinates of the three positioning points and performing affine transformation correction on the two-dimensional code image.
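The final correction step above, solving an affine transform from the three positioning points and warping, can be sketched as follows. This is an illustrative sketch only: the point values are made up, and a real pipeline would use the coordinates predicted by the network.

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the 2x3 affine matrix A with dst = A @ [x, y, 1]^T
    from exactly three point correspondences."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((3, 1))])   # 3x3 system matrix
    # Solve X @ A^T = dst; each output coordinate is one column
    A_T = np.linalg.solve(X, dst)           # 3x2
    return A_T.T                            # 2x3 affine matrix

# Hypothetical detected locator points (top-left, top-right,
# bottom-left) and their canonical targets in a 128x128 upright code
detected = [(40.0, 35.0), (100.0, 42.0), (33.0, 95.0)]
canonical = [(10.0, 10.0), (118.0, 10.0), (10.0, 118.0)]
A = affine_from_points(detected, canonical)
```

The resulting 2x3 matrix is what a warp routine would apply to every pixel to obtain the corrected, axis-aligned two-dimensional code image.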
Preferably, the cascade classifier comprises 5 layers of weak classifiers: the first layer contains 4 local HOG features, the second layer 8, the third layer 16, and each remaining layer 32.
Further, constructing the positioning point regression network model composed of convolutional, pooling, and fully-connected layers comprises the following steps:
presetting a network comprising five 3×3 convolutional layers and three 1×1 convolutional layers according to the characteristics of the positioning points of the two-dimensional code;
applying random rotation, color transformation, blurring, and deformation to the training-set images to obtain an expanded, enhanced data set;
inputting the training-set data into the positioning point regression network model and adjusting its parameters according to the loss function to obtain a trained network model; testing its precision with the test set;
loading the trained network model and feeding any image region containing a two-dimensional code into it to obtain the three positioning point coordinates of the two-dimensional code in the image and the confidence of the two-dimensional code;
calculating affine transformation parameters from the coordinates of the three positioning points and performing affine transformation correction on the two-dimensional code image.
Furthermore, positioning the two-dimensional code using the local HOG features comprises the following steps:
dividing the images into a training set and a test set, and acquiring two-dimensional code region positive samples and non-two-dimensional-code region negative samples from the training set;
clustering the positive and negative samples with a clustering algorithm, and sorting and screening them by their post-clustering occurrence frequency;
loading each layer of the classifier in turn and learning by frequency from the sorted positive and negative samples to obtain local HOG features at different scales and positions;
performing window scanning on a two-dimensional code image to be detected using the learned HOG features to obtain the two-dimensional code region coordinates in the image.
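The window-scanning step above can be sketched as follows. Both the descriptor (a deliberately coarse single-cell HOG) and the scoring function (a placeholder passed in by the caller) are illustrative assumptions, not the patent's learned cascade.

```python
import numpy as np

def hog_descriptor(patch, n_bins=9):
    """Coarse HOG: one L2-normalized orientation histogram
    over the whole patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(),
                       minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-6)

def scan(image, win=16, stride=8, score_fn=None, thresh=0.5):
    """Slide a window over the image; keep windows whose
    descriptor score passes the threshold."""
    boxes = []
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            desc = hog_descriptor(image[y:y + win, x:x + win])
            if score_fn(desc) > thresh:
                boxes.append((x, y, win, win))
    return boxes
```

A peaked-histogram score (e.g. `lambda d: float(d.max())`) fires on windows containing strong edges, loosely mimicking how locator-pattern edges dominate a code region's gradients.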
Further, correcting the two-dimensional code image using the coordinates of the three positioning points comprises the following steps:
calculating an affine transformation matrix from the relative positions of the 3 obtained positioning points, and transforming the image through affine transformation to obtain a corrected two-dimensional code image.
A two-dimensional code detection and correction system under a complex scene comprises:
a two-dimensional code positioning module, which positions a two-dimensional code image in a complex scene using local HOG features; and
a two-dimensional code correction module, which corrects any image region containing a two-dimensional code using a positioning point regression network model.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method adopts cascaded local HOG features to position the two-dimensional code image in a complex scene. Its advantage is that non-two-dimensional-code regions can be rapidly eliminated layer by layer through the cascade, yielding the candidate two-dimensional code regions in the final image; in particular, two-dimensional code regions can be obtained quickly and accurately in a complex scene.
(2) The invention adopts a series of convolutional, pooling, and fully-connected layers to perform correction-point regression and further screening on the two-dimensional code regions obtained in the first step. Its advantages are that calibration points in two-dimensional code images of varying quality can be obtained quickly and accurately, and that interference within the two-dimensional code regions can be effectively eliminated.
(3) The invention applies expansion and enhancement processing to the training-set two-dimensional code images. Its advantage is that a large number of samples can be obtained before training the correction network, preventing overfitting during model training.
(4) The invention clusters the negative samples during positioning with a clustering algorithm, and sorts and screens the samples by their post-clustering occurrence frequency. Its advantage is that most non-two-dimensional-code regions can be eliminated at the first and second layers of the cascade classifier, so that the two-dimensional code can then be positioned quickly and accurately.
The invention remedies the defects of current complex-scene two-dimensional code identification, such as outright failure, low accuracy, and low speed, and has very high practical and popularization value in fields such as mobile payment and logistics.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of protection, and it is obvious for those skilled in the art that other related drawings can be obtained according to these drawings without inventive efforts.
FIG. 1 is a flow chart of the positioning training process and the detection process of the present invention.
FIG. 2 is a schematic diagram of the localization point regression network model of the present invention.
FIG. 3 is a general flowchart of the detection and calibration process of the present invention.
Detailed Description
To further clarify the objects, technical solutions and advantages of the present application, the present invention will be further described with reference to the accompanying drawings and examples, and embodiments of the present invention include, but are not limited to, the following examples. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Examples
As shown in fig. 1 to fig. 3, this embodiment provides a two-dimensional code detection and correction system in a complex scene, comprising a two-dimensional code positioning module that positions a two-dimensional code image in the complex scene using local HOG features, and a two-dimensional code correction module that corrects any image region containing a two-dimensional code using a positioning point regression network model.
The detection and correction method of the present embodiment is briefly described below:
the first step is as follows: the method comprises the steps of obtaining a large number of two-dimension code images in a natural scene and a payment scene, marking the positions and three positioning points of the images by using the existing two-dimension code identification algorithm, and manually marking the unidentified images.
The second step: construct a cascade classifier with local HOG features, comprising 5 layers of weak classifiers; the first layer contains 4 local HOG features, the second layer 8, the third layer 16, and the fourth and fifth layers 32 each.
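The cascade's early-rejection behavior can be sketched schematically. Only the per-layer feature counts (4, 8, 16, 32, 32) come from the description; the `Stage` scorers below are placeholders for the learned local HOG features.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Stage:
    features: List[Callable[[object], float]]  # local HOG scorers (placeholders)
    threshold: float

def cascade_pass(window, stages: Sequence[Stage]) -> bool:
    """Return True only if the window survives every stage."""
    for stage in stages:
        score = sum(f(window) for f in stage.features)
        if score < stage.threshold:
            return False        # early rejection: most windows exit here
    return True

# Stage sizes from the description: 4, 8, 16, 32, 32 features
sizes = [4, 8, 16, 32, 32]
stages = [Stage([lambda w: 1.0] * n, threshold=n * 0.5) for n in sizes]
```

Because cheap early stages reject most windows, the expensive later stages with 32 features only ever run on a small fraction of candidates, which is what makes the scan fast.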
The third step: divide the images into a training set and a test set, and acquire two-dimensional code region positive samples and non-two-dimensional-code region negative samples from the training set. Design HOG features of different widths, positions, and scales for the two-dimensional code image. Cluster the negative samples with a clustering algorithm, and sort and screen the positive and negative samples by the occurrence frequency of the clustered negative-sample regions. Then classify the positive and negative samples with an SVM (support vector machine) or another traditional classification algorithm, learn with different samples at each layer, and select local HOG features of suitable scale and position for each layer.
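A schematic stand-in for this step: cluster negative descriptors with a minimal k-means, then fit a separator between positive and negative descriptors. A nearest-centroid rule replaces the SVM here purely for illustration, and the toy 2-dimensional "descriptors" are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=10):
    """Minimal k-means for grouping negative-sample descriptors."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def fit_centroids(pos, neg):
    """Class centroids as a stand-in for an SVM decision rule."""
    return pos.mean(axis=0), neg.mean(axis=0)

def predict(x, c_pos, c_neg):
    """Classify as positive if closer to the positive centroid."""
    return np.linalg.norm(x - c_pos) < np.linalg.norm(x - c_neg)

# Toy descriptors: positives concentrate in one direction, negatives in another
pos = rng.normal([1, 0], 0.1, size=(50, 2))
neg = rng.normal([0, 1], 0.1, size=(200, 2))
centers, labels = kmeans(neg, k=3)
c_pos, c_neg = fit_centroids(pos, neg)
```

In the patent's pipeline the cluster occurrence frequencies would then order which samples each cascade layer learns from; here the clustering output is shown only to make that step concrete.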
The fourth step: traverse the image to be detected with the trained cascade classifier to obtain the coordinates of the final candidate two-dimensional code regions.
The fifth step: construct a positioning point regression network model composed of a series of convolutional, pooling, and fully-connected layers; the network has two groups of outputs, one group being the coordinates of the positioning points and the other indicating whether a two-dimensional code is present. The network, comprising five 3×3 convolutional layers and three 1×1 convolutional layers, is designed according to the characteristics of the positioning points of the two-dimensional code; a 128×128 image is fed into the network, which outputs two groups of data, one group being the 3 positioning point coordinates, 6 dimensions in total.
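The text fixes the layer counts (five 3×3 convolutions, three 1×1 convolutions, a 128×128 input, and 6 coordinate outputs plus a confidence) but not strides, padding, or channel widths; the sketch below assumes stride-2, pad-1 for the 3×3 layers purely to show the spatial-size arithmetic.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 128
# Five 3x3 convolutions; stride 2 each is an assumed choice that
# halves the 128x128 input toward a small grid
for _ in range(5):
    size = conv_out(size, kernel=3, stride=2, pad=1)
# Three 1x1 convolutions keep the spatial size, changing channels only
for _ in range(3):
    size = conv_out(size, kernel=1, stride=1, pad=0)

# Two output groups: 3 positioning points -> 6 coordinates, plus 1 confidence
coords_dim, conf_dim = 6, 1
```

Under these assumptions the five stride-2 layers reduce 128 to 4, and the 1×1 layers act as per-location regressors producing the 6 coordinate values and the code/no-code confidence.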
The sixth step: divide the two-dimensional code images obtained in the first step, together with the manually marked data, into a training set and a test set, and apply random rotation, color transformation, blurring, deformation, and other processing to the training-set images to obtain an expanded, enhanced data set.
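The augmentation step can be sketched as below: each image is expanded with a random rotation, a color (brightness) shift, and occasional blur. This is a hedged, simplified sketch; a real pipeline would also apply deformation and warp the positioning-point labels to match each transform, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

def box_blur(img):
    """3x3 mean filter with edge padding."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return (out / 9).astype(np.uint8)

def augment(img):
    """Return one randomly augmented copy of a grayscale image."""
    k = rng.integers(0, 4)                 # random 90-degree rotation
    out = np.rot90(img, k)
    shift = rng.integers(-30, 31)          # brightness shift
    out = np.clip(out.astype(int) + shift, 0, 255).astype(np.uint8)
    if rng.random() < 0.5:                 # blur half the time
        out = box_blur(out)
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
augmented = [augment(img) for _ in range(10)]
```

Running `augment` many times per labeled image is what yields the "expanded enhanced data set" used to keep the regression network from overfitting.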
The seventh step: input the training-set data into the positioning point regression network model and adjust its parameters according to the loss function to obtain a trained network model; test the accuracy with the test set.
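The parameter-adjustment step, minimizing an L1 loss between predicted and labeled positioning coordinates by gradient descent, can be shown on a toy problem. A linear model stands in for the convolutional network; the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def l1_loss(pred, target):
    """Mean absolute error, the L1 loss named in the claims."""
    return np.abs(pred - target).mean()

# Toy regression: map 4-dim "features" to 6 positioning coordinates
X = rng.normal(size=(64, 4))
true_W = rng.normal(size=(4, 6))
Y = X @ true_W

W = np.zeros((4, 6))
lr = 0.05
for _ in range(500):
    pred = X @ W
    # Subgradient of mean |pred - Y| with respect to W
    grad = X.T @ np.sign(pred - Y) / len(X)
    W -= lr * grad

final = l1_loss(X @ W, Y)
```

The same loop structure, forward pass, L1 subgradient, parameter update, is what a deep-learning framework performs on the actual network.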
The eighth step: load the trained network model and feed each image region possibly containing a two-dimensional code into the network to obtain the three positioning point coordinates of the two-dimensional code in the image and the confidence of the two-dimensional code.
The ninth step: calculate affine transformation parameters from the coordinates of the three positioning points and perform affine transformation correction on the two-dimensional code image.
The above-mentioned embodiments are only preferred embodiments of the present invention and do not limit its scope of protection; all modifications made according to the principles of the present invention, and all non-inventive changes based on the above embodiments, shall fall within the scope of protection of the present invention.
Claims (6)
1. A two-dimensional code detection and correction method under a complex scene is characterized by comprising the following steps:
acquiring a plurality of two-dimensional code images in natural and payment scenes, marking the position and three positioning points of the two-dimensional code in the images with an existing two-dimensional code identification algorithm, and manually marking the unidentified images;
constructing a cascade classifier with local HOG features;
dividing the images into a training set and a test set, and acquiring two-dimensional code region positive samples and non-two-dimensional-code region negative samples from the training set; learning local HOG features that distinguish the positive samples from the negative samples;
performing window scanning on a two-dimensional code image to be detected using the learned HOG features to obtain the two-dimensional code region coordinates in the image;
constructing a positioning point regression network model composed of convolutional, pooling, and fully-connected layers; applying random rotation, color transformation, blurring, and deformation to the training-set images to obtain an expanded, enhanced data set;
inputting the training-set data into the positioning point regression network model and adjusting its parameters according to an L1 loss function to obtain a trained network model; testing its precision with the test set;
loading the trained network model and feeding any test-set image containing a two-dimensional code into it to obtain the three positioning point coordinates of the two-dimensional code in the image and the confidence of the two-dimensional code;
calculating affine transformation parameters from the coordinates of the three positioning points and performing affine transformation correction on the two-dimensional code image.
2. The two-dimensional code detection and correction method under a complex scene according to claim 1, wherein the cascade classifier comprises 5 layers of weak classifiers: the first layer contains 4 local HOG features, the second layer 8, the third layer 16, and each remaining layer 32.
3. The two-dimensional code detection and correction method under a complex scene according to claim 1, wherein constructing the positioning point regression network model composed of convolutional, pooling, and fully-connected layers comprises the following steps:
presetting a network comprising five 3×3 convolutional layers and three 1×1 convolutional layers according to the characteristics of the positioning points of the two-dimensional code;
applying random rotation, color transformation, blurring, and deformation to the training-set images to obtain an expanded, enhanced data set;
inputting the training-set data into the positioning point regression network model and adjusting its parameters according to the loss function to obtain a trained network model; testing its precision with the test set;
loading the trained network model and feeding any image region containing a two-dimensional code into it to obtain the three positioning point coordinates of the two-dimensional code in the image and the confidence of the two-dimensional code;
calculating affine transformation parameters from the coordinates of the three positioning points and performing affine transformation correction on the two-dimensional code image.
4. The two-dimensional code detection and correction method under a complex scene according to claim 1, wherein positioning the two-dimensional code using the local HOG features comprises the following steps:
dividing the images into a training set and a test set, and acquiring two-dimensional code region positive samples and non-two-dimensional-code region negative samples from the training set;
clustering the positive and negative samples with a clustering algorithm, and sorting and screening them by their post-clustering occurrence frequency;
loading each layer of the classifier in turn and learning by frequency from the sorted positive and negative samples to obtain local HOG features at different scales and positions;
performing window scanning on a two-dimensional code image to be detected using the learned HOG features to obtain the two-dimensional code region coordinates in the image.
5. The two-dimensional code detection and correction method under a complex scene according to claim 1, wherein correcting the two-dimensional code image using the coordinates of the three positioning points comprises the following steps:
calculating an affine transformation matrix from the relative positions of the 3 obtained positioning points, and transforming the image through affine transformation to obtain a corrected two-dimensional code image.
6. A two-dimensional code detection and correction system under a complex scene, characterized in that it adopts the above two-dimensional code detection and correction method under a complex scene and comprises:
the two-dimensional code positioning module is used for positioning a two-dimensional code image of a complex scene by adopting local HOG characteristics;
and the two-dimension code correction module corrects any image area containing the two-dimension code by adopting a positioning point regression network model.
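The two-module system of claim 6 can be composed as in the following schematic, with the localisation and correction functions injected as stand-ins; all names here are hypothetical, and the real modules would wrap the HOG scan and the regression-network-plus-affine correction respectively.

```python
class QRDetectorCorrector:
    """Pipeline mirroring the claimed system: a positioning module
    feeding each located region to a correction module."""

    def __init__(self, locate_fn, rectify_fn):
        self.locate = locate_fn      # image -> list of (x, y, w, h) boxes
        self.rectify = rectify_fn    # (region, finder points) -> corrected region

    def __call__(self, img, finder_pts_fn):
        corrected = []
        for x, y, w, h in self.locate(img):
            roi = [row[x:x + w] for row in img[y:y + h]]   # crop the region
            corrected.append(self.rectify(roi, finder_pts_fn(roi)))
        return corrected
```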
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010340631.3A CN111523342A (en) | 2020-04-26 | 2020-04-26 | Two-dimensional code detection and correction method in complex scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111523342A true CN111523342A (en) | 2020-08-11 |
Family
ID=71903601
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010340631.3A Pending CN111523342A (en) | 2020-04-26 | 2020-04-26 | Two-dimensional code detection and correction method in complex scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111523342A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112417918A (en) * | 2020-11-13 | 2021-02-26 | 珠海格力电器股份有限公司 | Two-dimensional code identification method and device, storage medium and electronic equipment |
CN113112175A (en) * | 2021-04-25 | 2021-07-13 | 山东新一代信息产业技术研究院有限公司 | Efficient warehouse shelf counting method and management system |
CN113239712A (en) * | 2021-04-27 | 2021-08-10 | 上海深豹智能科技有限公司 | Two-dimensional code high-speed decoding method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104268498A (en) * | 2014-09-29 | 2015-01-07 | 杭州华为数字技术有限公司 | Two-dimension code recognition method and terminal |
CN108961262A (en) * | 2018-05-17 | 2018-12-07 | 南京汇川工业视觉技术开发有限公司 | A kind of Bar code positioning method under complex scene |
CN109543673A (en) * | 2018-10-18 | 2019-03-29 | 浙江理工大学 | A kind of low contrast punching press character recognition algorithm based on Interactive Segmentation |
CN109800616A (en) * | 2019-01-17 | 2019-05-24 | 柳州康云互联科技有限公司 | A kind of two dimensional code positioning identification system based on characteristics of image |
CN109815770A (en) * | 2019-01-31 | 2019-05-28 | 北京旷视科技有限公司 | Two-dimentional code detection method, apparatus and system |
CN109934249A (en) * | 2018-12-14 | 2019-06-25 | 网易(杭州)网络有限公司 | Data processing method, device, medium and calculating equipment |
US20190384954A1 (en) * | 2018-06-18 | 2019-12-19 | Abbyy Production Llc | Detecting barcodes on images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200811 |