CN113688649A - Quick QR code positioning method - Google Patents

Quick QR code positioning method

Info

Publication number
CN113688649A
Authority
CN
China
Prior art keywords
image
code
dimensional code
positioning
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110936157.5A
Other languages
Chinese (zh)
Inventor
荣超
鹿伟民
王守立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Bosaifu Medical Technology Co ltd
Original Assignee
Jiangsu Bosaifu Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Bosaifu Medical Technology Co ltd filed Critical Jiangsu Bosaifu Medical Technology Co ltd
Priority to CN202110936157.5A priority Critical patent/CN113688649A/en
Publication of CN113688649A publication Critical patent/CN113688649A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition
    • G06K 7/1439 - Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K 7/1443 - Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition
    • G06K 7/146 - Methods for optical code recognition the method including quality enhancement steps

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a quick QR code positioning method. An image containing a two-dimensional code is input; the two-dimensional code is coarsely positioned in the image by extracting the decoding area from the complex working background, preliminarily separating the background from the decoding area, and accurately isolating each decoding area when the input image contains several areas to be decoded; the positioning pattern is then searched for precisely within each decoding area to finely position the two-dimensional code; the finely positioned two-dimensional code is rectified by affine transformation and decoded; and the information in the two-dimensional code is output. By splitting the original positioning method into a coarse positioning step and a fine positioning step, the invention effectively reduces the influence of environmental noise, improves the stability of two-dimensional code positioning, shrinks the area processed by the fine positioning algorithm, and greatly increases the speed of the algorithm, while also handling images that contain several two-dimensional codes and reading them simultaneously.

Description

Quick QR code positioning method
Technical Field
The invention belongs to the technical field of computer software, and particularly relates to a quick QR code positioning method.
Background
Two-dimensional codes (QR codes) originated in the 1940s and were designed to overcome the limited information capacity of one-dimensional bar codes. With the introduction of error-correcting codes into two-dimensional code encoding, the decoding success rate has increased greatly, and information can be recovered even from stained, damaged, or blurred codes. Today's two-dimensional code is a mature, visually based information-interchange protocol with a mature coding scheme: it makes full use of the two-dimensional space and is encoded with error-correcting codes, giving it a large information capacity and high stability, which is why it is widely applied across many industries.
At present, large numbers of two-dimensional codes are used as section IDs in the preparation of pathological sections, particularly hard-tissue sections. These codes must be identified quickly and stably, with support for continuous real-time recognition. Most conventional two-dimensional code recognition libraries, however, perform single recognition: the whole image is searched for the positioning marks of the two-dimensional code, the code is positioned, and only then is it recognized (as shown in fig. 1). This makes code reading inefficient and unable to meet real-time requirements.
Disclosure of Invention
In order to overcome the low positioning speed of existing two-dimensional code methods and their inability to meet real-time requirements, the invention provides a quick QR code positioning method. By adding a coarse positioning step, it removes environmental noise, improves the stability of the algorithm, reduces the positioning-pattern search area, increases the speed of the algorithm, and effectively enables several two-dimensional codes to be positioned simultaneously.
In order to solve the technical problems and achieve the technical effects, the invention is realized by the following technical scheme:
a quick QR code positioning method comprises the following steps:
s1, inputting an image with a two-dimensional code;
s2, roughly positioning the two-dimensional code in the image, wherein the work content is,
extracting the decoding area from the complex working background of the image, preliminarily separating the working background from the decoding area, and accurately isolating each decoding area when the input image contains several areas to be decoded, in preparation for the subsequent fine positioning;
s3, accurately searching a positioning pattern in the decoding area, and finely positioning the two-dimensional code in the image;
s4, performing affine transformation on the two-dimensional code after fine positioning;
s5, decoding the two-dimensional code after affine transformation;
and S6, outputting the information in the two-dimensional code in the image.
Further, in step S2, the two-dimensional code coarse positioning algorithm uses HOG features and an SVM classifier to perform a fast full-image search on the incoming real-time image, preliminarily locating each region to be decoded and providing a reliable basis for the subsequent fine positioning and correction of the two-dimensional code.
Further, the method for realizing the two-dimensional code coarse positioning specifically comprises the following steps:
s2.1, inputting an image;
s2.2, preprocessing the input image;
s2.3, performing edge extraction on the preprocessed image;
s2.4, respectively segmenting the preprocessed image and the image subjected to edge extraction into small area images;
s2.5, pre-filtering each small area image of the image subjected to edge extraction to compress detection time;
s2.6, extracting the HOG characteristics of the edge points of all the small area images;
s2.7, classifying the HOG characteristics of the edge points of each small block area image by using an SVM classifier to obtain a classification result, wherein the HOG characteristics of the edge points of each small block area image comprise code areas and non-code areas;
s2.8, determining a two-dimensional code area by using connected domain analysis according to the classification result;
and S2.9, outputting a coarse positioning result of the two-dimensional code.
Further, in step S2.2, the image preprocessing converts the incoming RGB color image into a gray-scale image with 256 gray levels, in preparation for the next step, image edge extraction.
Further, in step S2.3, the image edge extraction method is to perform Canny edge detection on the gray-scale image of the preprocessed image to obtain an edge image with edge point information of the whole image.
Further, in step S2.4, the small block area images are obtained by dividing the gray-scale image and the edge image of the image into small block area images of the same size that do not overlap.
Further, in step S2.5, the small block region images are pre-filtered by setting a threshold on the edge point ratio of each small block region image, based on the characteristics of the black-and-white decoding region of the two-dimensional code, and discriminating accordingly;
the calculation formula of the edge point ratio is as follows:
BE = all_edges / patch_pixels
wherein BE is the edge point ratio, patch_pixels is the total number of pixels in each small block area image, and all_edges is the number of edge points in each small block area image.
Further, in step S2.6, the edge point HOG feature of each small block region image is extracted by performing gradient direction statistics on the edge point pixels, using the small block region image, obtained by Canny edge detection, that contains only edge point information.
Further, in step S2.7, the code region and the non-code region are classified by using the edge point HOG of the small block region image as the feature: an SVM classifier is trained in advance on a training data set, and once the trained model is obtained, it is used directly at code-reading time to classify each sample as code region or non-code region, yielding the classification result.
Further, in step S2.8, the two-dimensional code region is determined by grouping the blocks marked as code regions on the classification result map into connected domains, finding the bounding rectangle of each connected domain, and extracting the resulting rectangular frames as candidate regions of the two-dimensional code.
The invention has the beneficial effects that:
the invention improves the traditional two-dimension code positioning method, divides the original positioning method into two steps of coarse positioning and fine positioning, can effectively reduce the influence of environmental noise, improves the stability of two-dimension code positioning, reduces the processing area of a fine positioning algorithm, greatly improves the speed of the algorithm, simultaneously enables the algorithm to be well suitable for the condition that a plurality of two-dimension codes exist on a picture, and realizes the simultaneous reading of the plurality of two-dimension codes, thereby overcoming the defects that the positioning speed of the traditional two-dimension code is low and the real-time requirement cannot be met.
The foregoing is an overview of the technical scheme of the invention. To make the technical means of the invention clearer and implementable according to this specification, preferred embodiments of the invention are described in detail below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a two-dimensional code positioning method in the prior art;
FIG. 2 is a flowchart of a two-dimensional code positioning method according to the present invention;
FIG. 3 is a flowchart of a two-dimension code coarse positioning method according to the present invention;
fig. 4 shows a region sample image (a) and an edge point HOG feature map image (b) when the edge point HOG feature is extracted according to the present invention.
Detailed Description
The present invention is described in detail below with reference to examples. The description is intended to provide a further understanding of the invention, forms a part of this application, and exemplifies the invention without limiting it to the particular embodiments illustrated.
Referring to fig. 2, a quick QR code positioning method includes the following steps:
and S1, inputting the image with the two-dimensional code.
S2, coarsely positioning the two-dimensional code in the image, which comprises:
extracting the decoding area from the complex working background of the image, preliminarily separating the working background from the decoding area, and accurately isolating each decoding area when the input image contains several areas to be decoded, in preparation for the subsequent fine positioning.
The two-dimensional code coarse positioning algorithm uses HOG features and an SVM classifier to perform a fast full-image search on the incoming real-time image, preliminarily locating each region to be decoded and providing a reliable basis for the subsequent fine positioning and correction of the two-dimensional code. The HOG (Histogram of Oriented Gradients) is a descriptor based on shape and edge characteristics; it has good rotation invariance and illumination invariance, describes the local texture of an image, and discriminates well between the code and non-code regions of a two-dimensional code. It is obtained by computing the gradient direction and magnitude at each pixel and accumulating a histogram of gradient directions, and this histogram is used directly as the feature vector for classification by the SVM classifier.
Referring to fig. 3, the method for implementing coarse positioning of the two-dimensional code is specifically as follows:
s2.1, inputting an image;
s2.2, preprocessing the input image;
converting the incoming RGB color image into a gray-scale image with 256 gray levels, in preparation for the next step, edge extraction; this greatly reduces the memory used by the algorithm, removes redundant information that the detection does not need, raises the information density, and lowers the computational cost;
S2.3, performing edge extraction on the preprocessed image;
Canny edge detection is performed on the gray-scale image of the preprocessed image to obtain an edge image carrying the edge point information of the whole image. The Canny algorithm is a well-developed edge detection algorithm: it smooths the image with a Gaussian filter, which gives strong noise resistance; computes the gradient magnitude and direction using finite differences of first-order partial derivatives; applies non-maximum suppression to the gradient magnitude; and uses double-threshold detection with edge linking, detecting strong and weak edges with two different thresholds and keeping a weak edge in the output only when it is connected to a strong edge, which improves edge detection performance;
s2.4, respectively segmenting the preprocessed image and the image subjected to edge extraction into small area images;
dividing the gray-scale image and the edge image of the image into small block area images of the same size (32 x 32) that do not overlap;
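By way of illustration, steps S2.2 to S2.4 could be sketched as follows in Python with OpenCV and NumPy (assumed toolkits; the patent does not name any library). The Canny thresholds and the silent dropping of border tiles that do not fill a whole 32 x 32 patch are assumptions of this sketch.

```python
import cv2
import numpy as np

def preprocess_and_extract_edges(bgr_image, low_thresh=50, high_thresh=150):
    """Steps S2.2/S2.3 sketch: 256-level grayscale conversion followed by Canny
    edge detection. The two thresholds stand in for the weak/strong edge thresholds
    described in the text; their values here are illustrative."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)   # 256-level gray-scale image
    edges = cv2.Canny(gray, low_thresh, high_thresh)     # binary edge map (0 or 255)
    return gray, edges

def split_into_patches(image, patch_size=32):
    """Step S2.4 sketch: tile an image into non-overlapping patch_size x patch_size
    blocks, returning ((row, col) origin, patch) pairs."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(((y, x), image[y:y + patch_size, x:x + patch_size]))
    return patches
```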
s2.5, pre-filtering each small area image of the edge map to compress detection time;
according to the characteristics of the black-and-white decoding areas of the two-dimensional code, a threshold is set on the edge point ratio of each small block area image and used to discriminate; the edge point ratio is calculated as:
BE = all_edges / patch_pixels
wherein BE is the edge point ratio, patch_pixels is the total number of pixels in each small block area image, and all_edges is the number of edge points in each small block area image;
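A sketch of the pre-filter of step S2.5 under the same assumptions; the lower and upper bounds on BE below are illustrative, since the patent only states that a threshold is set from the black-and-white structure of the decoding region.

```python
def prefilter_patch(edge_patch, min_ratio=0.05, max_ratio=0.6):
    """Step S2.5 sketch: keep a patch only if its edge point ratio BE is plausible
    for a code region. edge_patch is a Canny output; non-zero pixels are edge points."""
    patch_pixels = edge_patch.size                 # total number of pixels in the patch
    all_edges = int((edge_patch > 0).sum())        # number of edge points in the patch
    be = all_edges / patch_pixels                  # edge point ratio BE
    return min_ratio <= be <= max_ratio            # threshold bounds are assumptions
```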
s2.6, extracting the HOG characteristics of the edge points of each small area image of the edge image and the gray image;
the classical HOG features have great redundancy for feature extraction of the two-dimensional code region, so the method further improves the algorithm. According to the invention, the classical HOG characteristic is abandoned, and HOG characteristic statistics is directly carried out on the edge points of each small region image, as shown in FIG. 4, a graph represents a region sample image, and b graph is an edge point HOG characteristic image; the edge information features with high discriminability are extracted, so that a one-dimensional code region in a complex background can be positioned more robustly; the implementation method comprises the steps of carrying out gradient direction statistics on pixel points containing edge points by utilizing a small block area image which only contains edge point information and is obtained by Canny edge detection; if the number of the gradient direction intervals is set to be 18, dividing (0 degrees and 180 degrees) into a small area every 10 degrees for addition statistics;
s2.7, classifying the HOG characteristics of the edge points of each small block area image by using an SVM classifier to obtain a classification result, wherein the HOG characteristics of the edge points of each small block area image comprise code areas and non-code areas;
using the edge point HOG of each small block area image as the feature, an SVM classifier is trained in advance on a training data set; once the trained model is obtained, it is used directly at code-reading time to classify each sample as code region or non-code region and produce the classification result. The SVM classifier handles both linear and nonlinear features and can also be applied to regression problems; it has a low generalization error rate, learns well, and the learned model generalizes well. It copes with machine learning problems with small samples as well as high-dimensional problems, and avoids the structure-selection and local-minimum issues of neural networks. It is among the best available classifiers, can be used directly without modification, achieves a low error rate, and makes good classification decisions on data points outside the training set;
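Step S2.7 could be sketched with scikit-learn's SVC (an assumed toolkit; the kernel and parameters below are illustrative): the SVM is trained offline on labelled edge point HOG vectors, and the saved model is loaded and applied directly at code-reading time.

```python
from sklearn.svm import SVC
import joblib

def train_code_region_svm(features, labels, model_path="code_region_svm.joblib"):
    """Offline training for step S2.7: features is an (n_samples, 18) array of
    edge point HOG vectors; labels is 1 for code-region patches and 0 otherwise."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # parameters are illustrative assumptions
    clf.fit(features, labels)
    joblib.dump(clf, model_path)
    return clf

def classify_patches(features, model_path="code_region_svm.joblib"):
    """At code-reading time, load the trained model and label each patch."""
    clf = joblib.load(model_path)
    return clf.predict(features)                    # 1 = code region, 0 = non-code region
```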
s2.8, determining a two-dimensional code area by using connected domain analysis according to the classification result;
the blocks marked as code regions on the classification result map are grouped into connected domains, the bounding rectangle of each connected domain is found, and the resulting rectangular frames are extracted as candidate regions of the two-dimensional code; afterwards, codes only need to be read inside the candidate regions, which solves the problem of recognizing several two-dimensional codes in one image and greatly reduces code-reading time.
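Step S2.8 could be sketched as follows: the per-patch predictions form a low-resolution binary map, connected-component analysis groups neighbouring code-region patches, and each component's bounding rectangle is scaled back to pixel coordinates as one candidate region. Names are illustrative.

```python
import cv2
import numpy as np

def candidate_code_regions(patch_labels, patch_size=32):
    """Step S2.8 sketch: patch_labels is a 2-D array (rows x cols of patches),
    non-zero where the SVM predicted a code region; returns pixel-space boxes."""
    label_map = (patch_labels > 0).astype(np.uint8)
    n, comp = cv2.connectedComponents(label_map, connectivity=8)
    boxes = []
    for comp_id in range(1, n):                     # component 0 is the background
        ys, xs = np.where(comp == comp_id)
        x0, y0 = int(xs.min()) * patch_size, int(ys.min()) * patch_size
        x1, y1 = (int(xs.max()) + 1) * patch_size, (int(ys.max()) + 1) * patch_size
        boxes.append((x0, y0, x1, y1))              # bounding rectangle of the connected domain
    return boxes
```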
S2.9, outputting the coarse positioning result of the two-dimensional code.
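Tying the step sketches above together, one coarse-positioning pass (steps S2.1 to S2.9) might look like the following, still only as an illustration under the same assumed toolkits:

```python
import numpy as np

def coarse_locate(bgr_image, model_path="code_region_svm.joblib", patch_size=32):
    """End-to-end sketch of the coarse positioning of steps S2.1-S2.9,
    built from the helper sketches above."""
    gray, edges = preprocess_and_extract_edges(bgr_image)            # S2.2 / S2.3
    rows, cols = gray.shape[0] // patch_size, gray.shape[1] // patch_size
    labels = np.zeros((rows, cols), dtype=np.uint8)
    gray_patches = dict(split_into_patches(gray, patch_size))        # S2.4
    feats, coords = [], []
    for (y, x), epatch in split_into_patches(edges, patch_size):
        if not prefilter_patch(epatch):                              # S2.5: cheap pre-filter
            continue
        feats.append(edge_point_hog(gray_patches[(y, x)], epatch))   # S2.6
        coords.append((y // patch_size, x // patch_size))
    if feats:
        preds = classify_patches(np.asarray(feats), model_path)      # S2.7
        for (r, c), p in zip(coords, preds):
            labels[r, c] = p
    return candidate_code_regions(labels, patch_size)                # S2.8 / S2.9
```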
S3, accurately searching for the positioning pattern within the decoding area and finely positioning the two-dimensional code in the image.
S4, performing affine transformation on the finely positioned two-dimensional code.
S5, decoding the two-dimensional code after affine transformation.
S6, outputting the information contained in the two-dimensional code in the image.
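Steps S3 to S6 are not elaborated further in this description. Purely as an illustration, the sketch below hands each coarse candidate rectangle to OpenCV's QRCodeDetector, which internally searches for the finder (positioning) patterns, rectifies the code, and decodes it; it is a stand-in for the patent's own fine positioning, affine transformation, and decoding steps, not a description of them.

```python
import cv2

def decode_candidates(gray_image, boxes):
    """Illustrative stand-in for steps S3-S6: fine positioning, rectification and
    decoding restricted to the coarse candidate rectangles."""
    detector = cv2.QRCodeDetector()
    results = []
    for (x0, y0, x1, y1) in boxes:
        roi = gray_image[y0:y1, x0:x1]
        text, points, _ = detector.detectAndDecode(roi)  # returns decoded text ('' on failure)
        if text:
            results.append((text, (x0, y0, x1, y1)))
    return results
```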
The above description presents only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art can make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (10)

1. A quick QR code positioning method is characterized by comprising the following steps:
s1, inputting an image with a two-dimensional code;
s2, roughly positioning the two-dimensional code in the image, wherein the work content is,
extracting a decoding area from a complex work background of an image, preliminarily separating the work background from the decoding area, and accurately isolating each decoding area when a plurality of areas to be decoded exist in the input image to prepare for the next fine positioning;
s3, accurately searching a positioning pattern in the decoding area, and finely positioning the two-dimensional code in the image;
s4, performing affine transformation on the two-dimensional code after fine positioning;
s5, decoding the two-dimensional code after affine transformation;
and S6, outputting the information in the two-dimensional code in the image.
2. The QR code positioning method according to claim 1, wherein: in step S2, the algorithm for two-dimensional code coarse positioning is to use the HOG as a feature and use the SVM classifier to perform full-image fast search on the incoming real-time image, initially position each region to be decoded in the image, and provide reliable algorithm dependence for the next two-dimensional code fine positioning correction.
3. The QR code positioning method according to claim 2, wherein the two-dimensional code coarse positioning is realized by the following method:
s2.1, inputting an image;
s2.2, preprocessing the input image;
s2.3, performing edge extraction on the preprocessed image;
s2.4, respectively segmenting the preprocessed image and the image subjected to edge extraction into small area images;
s2.5, pre-filtering each small area image of the image subjected to edge extraction;
s2.6, extracting the HOG characteristics of the edge points of all the small area images;
s2.7, classifying the HOG characteristics of the edge points of each small block area image by using an SVM classifier to obtain a classification result, wherein the HOG characteristics of the edge points of each small block area image comprise code areas and non-code areas;
s2.8, determining a two-dimensional code area by using connected domain analysis according to the classification result;
and S2.9, outputting a coarse positioning result of the two-dimensional code.
4. The QR code positioning method according to claim 3, wherein: in step S2.2, the image preprocessing method is to convert the incoming RGB color image into a gray-scale image with 256 levels of gray scale, and prepare for edge extraction of the next image.
5. The QR code positioning method according to claim 4, wherein: in step S2.3, the image edge extraction method is to perform Canny edge detection on the gray scale image of the preprocessed image to obtain an edge image with edge point information of the whole image.
6. The QR code positioning method according to claim 5, wherein: in step S2.4, the image of the small block area is divided by dividing the gray scale image of the image and the edge image of the image into small block area images with the same size and without overlapping.
7. The QR code positioning method according to claim 6, wherein: in step S2.5, the method for pre-filtering the small block region image is to obtain a conclusion of distinguishing according to the edge point ratio setting threshold in each small block region image according to the characteristics of the black and white decoding region in the two-dimensional code;
the calculation formula of the edge point ratio is as follows:
BE = all_edges / patch_pixels
wherein BE is the edge point ratio, patch_pixels is the total number of pixels in each small block area image, and all_edges is the number of edge points in each small block area image.
8. The QR code positioning method according to claim 7, wherein: in step S2.6, the method for extracting the edge point HOG feature of the small block region image is to perform gradient direction statistics on the pixel points containing the edge points by using the small block region image containing only the edge point information obtained by Canny edge detection.
9. The QR code positioning method according to claim 8, wherein: in step S2.7, the method for classifying the code region and the non-code region includes using the edge point HOG of the small region image as a feature, training the training data set in advance by using the SVM classifier, and after obtaining the training model, directly using the training model to classify the code region and the non-code region of the sample during code reading, so as to obtain a classification result.
10. The QR code positioning method according to claim 9, wherein: in step S2.8, the method for determining the two-dimensional code region is to divide the two-dimensional code region marked on the classification result map into blocks according to the way of dividing connected domains, find out the maximum circumscribed rectangle of each connected domain, and thus extract a plurality of rectangular frames as candidate regions of the two-dimensional code.
CN202110936157.5A 2021-08-16 2021-08-16 Quick QR code positioning method Pending CN113688649A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110936157.5A CN113688649A (en) 2021-08-16 2021-08-16 Quick QR code positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110936157.5A CN113688649A (en) 2021-08-16 2021-08-16 Quick QR code positioning method

Publications (1)

Publication Number Publication Date
CN113688649A 2021-11-23

Family

ID=78579949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110936157.5A Pending CN113688649A (en) 2021-08-16 2021-08-16 Quick QR code positioning method

Country Status (1)

Country Link
CN (1) CN113688649A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040011872A1 (en) * 2002-07-18 2004-01-22 Hajime Shimizu Two-dimensional code reading method, two-dimensional code reading program, recording medium with two-dimensional code reading program, two-dimentional code reading device, digital camera and portable terminal with digital camera
CN102096795A (en) * 2010-11-25 2011-06-15 西北工业大学 Method for recognizing worn two-dimensional barcode image
WO2016197816A2 (en) * 2016-03-15 2016-12-15 中兴通讯股份有限公司 Terminal positioning method and device, and electronic device
CN106485183A (en) * 2016-07-14 2017-03-08 深圳市华汉伟业科技有限公司 A kind of Quick Response Code localization method and system
CN108920992A (en) * 2018-08-08 2018-11-30 长沙理工大学 A kind of positioning and recognition methods of the medical label bar code based on deep learning
CN108961262A (en) * 2018-05-17 2018-12-07 南京汇川工业视觉技术开发有限公司 A kind of Bar code positioning method under complex scene
WO2019169532A1 (en) * 2018-03-05 2019-09-12 深圳前海达闼云端智能科技有限公司 License plate recognition method and cloud system
CN111931538A (en) * 2020-07-07 2020-11-13 广东奥普特科技股份有限公司 Positioning method of Micro QR two-dimensional code
CN112651259A (en) * 2020-12-29 2021-04-13 芜湖哈特机器人产业技术研究院有限公司 Two-dimensional code positioning method and mobile robot positioning method based on two-dimensional code
CN112949338A (en) * 2021-03-16 2021-06-11 太原科技大学 Two-dimensional bar code accurate positioning method combining deep learning and Hough transformation

Similar Documents

Publication Publication Date Title
Kumar et al. A detailed review of feature extraction in image processing systems
CN104751142B (en) A kind of natural scene Method for text detection based on stroke feature
CN106203539B (en) Method and device for identifying container number
CN107563380A (en) A kind of vehicle license plate detection recognition method being combined based on MSER and SWT
CN112364862B (en) Histogram similarity-based disturbance deformation Chinese character picture matching method
CN108256518B (en) Character area detection method and device
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN102163336B (en) Method for coding and decoding image identification codes
Pirgazi et al. An efficient robust method for accurate and real-time vehicle plate recognition
Rastegar et al. An intelligent control system using an efficient License Plate Location and Recognition Approach
CN113688649A (en) Quick QR code positioning method
Devi et al. A comparative Study of Classification Algorithm for Printed Telugu Character Recognition
CN111753842B (en) Method and device for detecting text region of bill
CN113283299A (en) Method for enhancing partial discharge signal PRPD atlas data based on CGAN network
Sathya et al. Vehicle license plate recognition (vlpr)
Chen et al. License Plate Recognition from Low-Quality Videos.
Visilter et al. Development of OCR system for portable passport and visa reader
Shivani Techniques of Text Detection and Recognition: A Survey
Lin et al. Text extraction from name cards using neural network
Amarapur et al. Video text extraction from images for character recognition
Likesh et al. Binarization of Malayalam Palm Leaf Images using VGG Net and Integrated Otsu
Johansson Network threat modeling
Zhao et al. Multivariable Recognition Method for Visual Symbols of Environmental Sign Based on Sequential Similarity.
Kumar Scene text segmentation and recognition by applying trimmed median filter using energetic edge detection schemes and ocr
JP2979089B2 (en) Character recognition method for scene images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination