CN111310508B - Two-dimensional code identification method - Google Patents

Two-dimensional code identification method

Info

Publication number
CN111310508B
CN111310508B CN202010092621.2A
Authority
CN
China
Prior art keywords
image
dimensional code
blur
motion blur
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010092621.2A
Other languages
Chinese (zh)
Other versions
CN111310508A (en
Inventor
曹政才
李俊年
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Chemical Technology filed Critical Beijing University of Chemical Technology
Priority to CN202010092621.2A priority Critical patent/CN111310508B/en
Publication of CN111310508A publication Critical patent/CN111310508A/en
Application granted granted Critical
Publication of CN111310508B publication Critical patent/CN111310508B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1452Methods for optical code recognition including a method step for retrieval of the optical code detecting bar code edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/146Methods for optical code recognition the method including quality enhancement steps
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a two-dimensional code identification method. QR two-dimensional codes against different backgrounds are selected according to actual identification requirements, motion blur with different blur lengths and blur angles is added, and a motion-blur-based two-dimensional code dataset is designed and produced. Next, an image motion-deblurring algorithm based on a generative adversarial network is adopted: an image feature extraction network based on a feature pyramid extracts the feature maps of the image, and a new GAN loss function is designed so that the deblurring algorithm trains faster and is more robust, providing a highly efficient image deblurring effect. An image binarization algorithm based on an adaptive threshold and a series of image-processing algorithms are designed to better extract the edge region of the image and correct the image. The method can be applied to two-dimensional code identification environments with different backgrounds.

Description

Two-dimensional code identification method
Technical Field
The invention relates to the field of robot image processing, in particular to a two-dimensional code identification method.
Background
In recent years, with the rapid development of science and technology, two-dimensional codes are used as one of emerging automatic identification technologies, and by virtue of the advantages of the two-dimensional codes in the aspects of identification speed, storage capacity, error correction capability and the like, the two-dimensional codes are increasingly applied to many industries and fields, such as shared bicycle, mobile payment, warehousing scheduling, robot positioning navigation, social software and the like, so that great convenience is brought to the daily life of people.
In recent years, thanks to the rapid development of deep learning, researchers have proposed many image processing algorithms based on convolutional neural networks, which perform a given task through iterative training on large-scale training samples. Compared with traditional algorithms, convolutional neural network-based algorithms offer better performance and higher robustness, and have quickly become a new research hotspot in the field of artificial intelligence.
Motion blur is very likely to occur during two-dimensional code recognition. Image deblurring is a typical ill-posed (inverse) problem and is therefore difficult to solve, especially when it comes to recovering detail such as edges and textures. Traditional methods first estimate the blur kernel by other means and then reconstruct a sharp image from it; since the blur function is usually unknown, these methods are severely limited and insufficient for the image motion blur caused by the many complex factors of real life. Kupyn et al. proposed deblurring images with a conditional generative adversarial network at the IEEE Conference on Computer Vision and Pattern Recognition, achieving faster running speed with a more streamlined network architecture. Experiments prove that this image deblurring algorithm obtains an excellent deblurring effect and demonstrates that deep learning performs well on the image deblurring problem. However, a search of the related art shows that no satisfactory recognition method currently exists for two-dimensional codes under strong motion blur.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not intended to detail all aspects, the sole purpose of which is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
Aiming at the situation in which a two-dimensional code image cannot be accurately identified due to motion blur, the invention adopts a new two-dimensional code identification method that combines a motion-deblurring algorithm based on a generative adversarial network with traditional image processing algorithms, so that a two-dimensional code image can still be accurately decoded and identified even when strongly affected by motion blur. This improves the robustness of two-dimensional code identification and effectively solves the problems of existing two-dimensional code identification methods.
The invention provides a two-dimensional code identification method, which comprises the following steps:
step 1: the QR two-dimensional code dataset based on motion blur is designed and manufactured and is used for training and testing an image de-motion blur algorithm, and the method comprises the following steps:
step 1.1: the method comprises the steps of designing a QR two-dimensional code dataset based on motion blur, and selecting a background where a two-dimensional code is located, information contained in the two-dimensional code, and the distance between the two-dimensional code and a camera according to an actual human-computer interaction scene.
Step 1.2: and acquiring the image information of the two-dimensional code by using a kinect camera under the selected background, and adding motion blurs with different blur lengths and blur angles to finish the manufacturing of the motion-blurred QR two-dimensional code dataset.
Step 2: constructing an image de-motion blur algorithm based on a generated countermeasure network, training by using a QR two-dimensional code data set based on motion blur to obtain parameter weight, and still obtaining a de-motion blur effect under the condition that an image blur kernel is unknown;
step 2.1: in the image motion blur removing algorithm based on the generated countermeasure network, an image feature extraction network based on a feature pyramid is adopted, so that the extracted feature mapping contains more semantic information and the quality is ensured;
step 2.2: in the image de-motion blur algorithm based on the generation countermeasure network, the current latest recovery model is adopted to meet the requirement on image enhancement, and the high-quality image de-blur effect is achieved;
step 2.3: by designing the GAN loss function, the training speed of the image deblurring algorithm based on the generated countermeasure network is higher, and the robustness is higher.
Step 3: Aiming at the possibly uneven local intensity of a two-dimensional code image, the method includes an image binarization algorithm based on adaptive thresholding, which divides the image into high-intensity and low-intensity regions and binarizes it. For two-dimensional code edge detection and geometric correction, the edge of the target region is first obtained with a morphological dilation method, the boundary of the target region is extracted with the Canny operator, the straight line on which the boundary lies is then obtained through the Hough transform, and finally the two-dimensional code image is corrected through bilinear interpolation. The method comprises the following steps:
step 3.1: determine the pixel sum Rs over a rectangle R within a moving window of size S x S, where the window size depends on the width W of the image and the maximum possible window width is W/64; in the image binarization step, each pixel is set to black if its value satisfies the given threshold expression, and to white otherwise.
Step 3.2: in order to improve the performance of an image binarization algorithm, an image is divided into a high-intensity area and a low-intensity area; calculating the sum of pixels in the rectangle R to obtain a local threshold, and then comparing with the average value of the whole image; in the regions of different intensities, the threshold value is calculated differently.
Step 3.3: after the image is binarized, performing expansion operation based on computer morphology to obtain an edge area of the two-dimensional code image; detecting the edge area based on a canny operator, then solving the slope and the rotation angle of a straight line where the edge is located based on improved Hough transformation, and finally performing rotation correction on the image based on a bilinear interpolation method and finishing the identification of the two-dimensional code.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention.
Fig. 1 is a flowchart of a two-dimensional code recognition method in the present invention.
FIG. 2 is a schematic diagram of the image motion-deblurring algorithm based on a generative adversarial network in the present invention.
FIGS. 3 and 4 are graphs showing the change of the value of the loss function in the training process of the image de-motion blur algorithm of the present invention.
Fig. 5 is a diagram of the effect of the inventive process on the set of motion blurred two-dimensional data.
FIG. 6 is a graph of the processing results of the two-dimensional code image edge detection and geometric correction algorithm provided by the present invention.
Detailed Description
For a better understanding of the technical solutions of the present invention, the following further describes embodiments of the present invention with reference to the accompanying drawings and specific examples. It is noted that the aspects described below in connection with the figures and the specific embodiments are only exemplary and should not be construed as imposing any limitation on the scope of the present invention.
A new method for identifying two-dimensional codes is shown in fig. 1:
step 1: and constructing a QR two-dimensional code image based on the Zxing data packet by using MATLAB software.
Step 2: and selecting the background of the two-dimensional code according to the actual human-computer interaction scene, wherein the two-dimensional code comprises information, the distance between the two-dimensional code and the camera, and acquiring image information of the two-dimensional code by using a kinect camera.
And step 3: the motion blur is added to the two-dimensional code image using MATLAB software. Wherein the blurring length L is 6-10 (one length is selected for each pixel unit), and the blurring angle theta is 10-180 degrees (one angle is selected for every 10 degrees), so that the motion blurring two-dimensional code dataset is manufactured. And the two-dimensional code decoding App is used for verification, and the constructed motion blur two-dimensional code image can not be directly decoded.
Step 4: Construct an image motion-deblurring algorithm based on a generative adversarial network; the flow of the algorithm is shown in FIG. 2. First, an image feature extraction network based on a feature pyramid is designed, comprising a bottom-up path and a top-down path. The bottom-up path is a convolutional network for feature extraction, with the spatial resolution downsampled along the way, so that more semantic feature information can be extracted and compressed. The top-down path, on the other hand, supplements the high-resolution details of the image.
Step 4.1: The image feature extraction network is built on a feature pyramid backbone; five feature maps of different scales are obtained from the two-dimensional code image through this network, and the feature maps are then convolved and pooled to obtain different features.
Step 4.2: The extracted features are upsampled to 1/4 of the input size and concatenated into one tensor containing different levels of semantic information.
Step 4.3: An upsampling layer and a convolution layer are added at the end of the image feature extraction network to restore the sharp image and remove artifacts, and a skip connection from input to output is introduced to learn the residual between the input and the ground truth.
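The bottom-up/top-down data flow of steps 4.1-4.3 can be illustrated without a deep-learning framework. A toy numpy sketch, assuming 2x2 average pooling for the bottom-up path and nearest-neighbour upsampling for the top-down path (the real network uses learned convolutions, and the level count and pooling here are illustrative):

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling: one bottom-up downsampling step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_to(x, shape):
    """Nearest-neighbour upsampling to an integer-multiple target shape."""
    ry, rx = shape[0] // x.shape[0], shape[1] // x.shape[1]
    return np.repeat(np.repeat(x, ry, axis=0), rx, axis=1)

def pyramid_features(img, levels=5):
    feats, cur = [], img
    for _ in range(levels):                      # bottom-up path
        cur = avg_pool2(cur)
        feats.append(cur)
    target = (img.shape[0] // 4, img.shape[1] // 4)   # 1/4 of the input size
    # upsample every level at or below the target resolution and stack them,
    # mimicking the concatenation of step 4.2
    aligned = [upsample_to(f, target) for f in feats if f.shape[0] <= target[0]]
    return np.stack(aligned, axis=0)             # axis 0 holds the levels
```

For a 64x64 input this yields a (4, 16, 16) tensor: four pyramid levels aligned at one quarter of the input resolution, ready for the fusion convolution of step 4.3.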
Step 5: Because the feature-pyramid-based image feature extraction network has the desirable plug-and-play property, the MobileNet-V2 network is used here as the backbone for image enhancement. This backbone reduces the complexity of the network structure and the parameter scale of the model, speeds up network training, and yields an efficient image deblurring effect.
Step 6: For the discriminator of the generative adversarial network, a discriminator that judges both globally and locally is used. The Wasserstein distance
W(Pγ, Pg) = inf over γ ∈ Π(Pγ, Pg) of E(x,y)~γ[||x − y||]
is used to design a new discriminator loss function, where Pγ represents the generated distribution, Pg represents the true distribution, Π(Pγ, Pg) represents the set of all joint distributions combining Pγ and Pg, and E(x,y)~γ[||x − y||] represents the expected distance between sample pairs under the joint distribution γ. The new GAN loss function expressions are as follows:
D_Loss function: LD = Ez[D(G(z))] − Ex~Pg[D(x)]
G_Loss function: LG = 0.5 × Lp + 0.005 × LX + 0.011 × Ladv
where D(x) denotes the discriminator, G(z) denotes the generator, Lp denotes the mean-square error, LX denotes the perceptual loss, and Ladv comprises the global and local discriminator losses. Through training, non-uniformly distributed motion blur can be removed quickly and with high quality.
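The loss terms of step 6 can be checked numerically. A minimal numpy sketch, assuming the perceptual term LX and the adversarial term Ladv arrive as precomputed scalars (in the method they come from a feature network and the global/local discriminators); the sorted-sample formula below is the exact Wasserstein-1 distance for two equal-size one-dimensional samples, which is only a toy stand-in for the distributional distance above:

```python
import numpy as np

def wasserstein1_1d(a, b):
    """Exact W1 between two equal-size 1-D empirical distributions."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def generator_loss(fake, sharp, l_perceptual, l_adv):
    """Weighted generator loss LG = 0.5*Lp + 0.005*LX + 0.011*Ladv."""
    l_p = np.mean((fake - sharp) ** 2)   # mean-square (pixel) error Lp
    return 0.5 * l_p + 0.005 * l_perceptual + 0.011 * l_adv
```

The weights 0.5, 0.005 and 0.011 are taken directly from the G_Loss expression in the text; the pixel term dominates, while the perceptual and adversarial terms act as regularizers.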
Step 7: The proposed motion-deblurring algorithm based on the generative adversarial network is trained in a Python environment on the public GoPro dataset, with the main training parameters: batch size 1, optimizer Adam, learning rate 0.0001. During training, the values of the D_Loss and G_Loss functions change as shown in FIGS. 3 and 4; the loss values stabilize after roughly 200 epochs of iterative training.
And 8: because the two-dimensional code mainly takes modules with dark and light colors as binary codes of information, a decoding program needs to convert the modules into corresponding binary information streams to obtain the coded information, and therefore binaryzation needs to be carried out on the deblurred two-dimensional code image.
Step 8.1: The integral image is a technique for accelerating the computation of sums over image regions; its mathematical model is
I(a, b) = Σx≤a Σy≤b f(x, y)
where I(a, b) denotes the integral value at (a, b) and f(x, y) denotes the pixel intensity at an arbitrary point (x, y).
Step 8.2: The integral image can be computed quickly by the recurrence I(a, b) = f(a, b) + I(a − 1, b) + I(a, b − 1) − I(a − 1, b − 1), where f(a, b) denotes the pixel intensity of the original grayscale image at position (a, b).
Step 8.3: Determine the pixel sum Rs over a rectangle R within a moving window of size S x S, where the window size depends on the width W of the image and the maximum possible window width is W/64. Here
Rs = I(a2, b2) − I(a1, b2) − I(a2, b1) + I(a1, b1)
where (a1, b1) and (a2, b2) are the upper-left and lower-right corners of the rectangle R.
Step 8.4: The number of pixels in the rectangle R is C = (a2 − a1) × (b2 − b1).
Step 8.5: In the image binarization step, each pixel is set to black if its value satisfies the given threshold expression, and to white otherwise. The local threshold test is i(a, b) × C ≤ Rs × (1 − T), where T represents a percentage.
Step 8.6: To improve the performance of the image binarization algorithm, the image can be divided into two regions, high intensity and low intensity, with the threshold computed differently in each. In the high-intensity region, the threshold formulas are
Rs = C × (M + sd)
Rs × (1 − T) < i′(a, b) × C
where M denotes the mean pixel value of the image, sd denotes the standard deviation of the image pixels, and i′(a, b) denotes the pixel value at point (a, b).
Step 8.7: In the low-intensity region, the threshold formula is
Rs ≥ C × i(a, b)
where i(a, b) represents the pixel value at point (a, b).
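Steps 8.1-8.5 amount to an integral-image adaptive-threshold scheme. A compact numpy sketch of that pipeline, assuming a window of roughly one eighth of the image width and T = 0.15 (both illustrative choices), and omitting the high/low-intensity split of steps 8.6-8.7 for brevity:

```python
import numpy as np

def adaptive_binarize(img, t=0.15):
    h, w = img.shape
    s = max(w // 8, 2)                       # window size (illustrative)
    half = s // 2
    # integral image with a zero border so region sums need no edge cases
    I = np.zeros((h + 1, w + 1))
    I[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    out = np.ones_like(img, dtype=np.uint8)  # 1 = white
    for a in range(h):
        for b in range(w):
            a1, a2 = max(a - half, 0), min(a + half + 1, h)
            b1, b2 = max(b - half, 0), min(b + half + 1, w)
            C = (a2 - a1) * (b2 - b1)        # pixel count of rectangle R
            Rs = I[a2, b2] - I[a1, b2] - I[a2, b1] + I[a1, b1]
            if img[a, b] * C <= Rs * (1 - t):   # i(a,b)*C <= Rs*(1-T)
                out[a, b] = 0                # black
    return out
```

Pixels noticeably darker than their local window mean go black, so the dark modules of a code survive uneven illumination that would defeat a single global threshold.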
Fig. 5 shows the operation result of the method on the constructed QR two-dimensional code motion blur data set.
Step 9: After image binarization, the edge region of the two-dimensional code image is first obtained with the morphological dilation operation. The edge region is then detected with the classical Canny operator, the slope and rotation angle of the straight line on which the edge lies are obtained with the improved Hough transform, and finally the image is rotation-corrected with bilinear interpolation and the two-dimensional code is identified; the experimental result is shown in FIG. 6.
Step 9.1: Obtain the edge region of the two-dimensional code image with the morphological dilation operation, whose mathematical formula is
A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }
where A is the binary image, B is the structuring element, and (B̂)z is the reflection of B translated by z.
step 9.2: Perform edge detection on the two-dimensional code image with the classical Canny operator.
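The dilation-based edge extraction of steps 9.1-9.2 can be sketched with scipy's morphology routines; here the edge is approximated as the dilated mask minus the original, a simple stand-in for the full Canny pass the text describes:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilation_edge(mask):
    """One-pixel outer boundary of a binary region via dilation A (+) B."""
    dilated = binary_dilation(mask, structure=np.ones((3, 3), dtype=bool))
    return dilated & ~mask   # pixels gained by dilation = outer edge
```

`binary_dilation` implements the set-theoretic dilation of step 9.1; subtracting the original region leaves exactly the ring of pixels that the structuring element reaches beyond the region.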
Step 9.3: Identify the longest straight line segment extracted from the two-dimensional code image with the improved Hough transform, detect its start and end points, and compute the slope and rotation angle of the two-dimensional code image from the coordinates of the line's endpoints; the corresponding equation is
ρ′ = x cos θ′ + y sin θ′
Given several points in the plane coordinate system, the lines passing through each point form a pencil of lines; each point corresponds to a sinusoidal curve in the polar parameter space, and the common intersection of these sinusoids has coordinates (ρ′, θ′).
Step 9.4: Perform rotation correction on the two-dimensional code image with the bilinear interpolation algorithm, using the slope and rotation angle of the longest straight line, and then identify the two-dimensional code.
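The rotation correction of step 9.4 can be sketched as inverse mapping with bilinear interpolation: each output pixel is rotated back into the source image and sampled from its four neighbours. A pure-numpy sketch (out-of-range samples fall back to zero; production code would use a library routine):

```python
import numpy as np

def rotate_bilinear(img, angle_deg):
    h, w = img.shape
    t = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            # inverse-rotate the target pixel back into the source image
            xs = np.cos(t) * (x - cx) + np.sin(t) * (y - cy) + cx
            ys = -np.sin(t) * (x - cx) + np.cos(t) * (y - cy) + cy
            x0, y0 = int(np.floor(xs)), int(np.floor(ys))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                dx, dy = xs - x0, ys - y0
                # weighted blend of the four surrounding source pixels
                out[y, x] = (img[y0, x0] * (1 - dx) * (1 - dy)
                             + img[y0, x0 + 1] * dx * (1 - dy)
                             + img[y0 + 1, x0] * (1 - dx) * dy
                             + img[y0 + 1, x0 + 1] * dx * dy)
    return out
```

Feeding in the negative of the angle recovered in step 9.3 straightens the code so that its modules align with the pixel grid before decoding.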
The method has the advantage that two-dimensional code images which cannot be decoded because of motion blur can be identified; the proposed algorithm is highly robust and achieves accurate decoding and identification of motion-blurred two-dimensional code images against different backgrounds.
While the foregoing describes a series of steps for simplicity of explanation, it should be understood that the methodology is not limited by the order of the acts, as some acts may occur in different orders in accordance with one or more embodiments, as those skilled in the art will appreciate.
Although illustrative embodiments of the present invention have been described in detail sufficient for those skilled in the art to practice them, the invention is not limited thereto; various changes and modifications may be made within the spirit and scope of the invention as defined by the appended claims.

Claims (3)

1. A two-dimensional code recognition method is characterized in that: the method comprises the following steps:
step 1: designing and manufacturing a QR two-dimensional code dataset based on motion blur, and using the QR two-dimensional code dataset for training and testing an image motion blur removing algorithm;
step 2: constructing an image motion-deblurring algorithm based on a generative adversarial network and training it with the motion-blur-based QR two-dimensional code dataset to obtain the parameter weights, so that a deblurring effect is still obtained when the blur kernel is unknown;
step 3: aiming at the possibly uneven local intensity of a two-dimensional code image, the method includes an image binarization algorithm based on adaptive thresholding, which divides the image into high-intensity and low-intensity regions and binarizes it; for two-dimensional code edge detection and geometric correction, the edge of the target region is first obtained with a morphological dilation method, the boundary of the target region is extracted with the Canny operator, the straight line on which the boundary lies is then obtained through the Hough transform, and finally the two-dimensional code image is corrected through bilinear interpolation;
in step 2, the method comprises the following steps:
step 2.1: in the motion-deblurring algorithm based on the generative adversarial network, an image feature extraction network based on a feature pyramid is adopted, so that the extracted feature maps contain more semantic information while quality is preserved;
step 2.2: in the motion-deblurring algorithm based on the generative adversarial network, the latest restoration model is adopted to meet the requirements of image enhancement and achieve a high-quality deblurring effect;
step 2.3: a GAN loss function is designed so that the deblurring algorithm based on the generative adversarial network trains faster and is more robust.
2. The two-dimensional code recognition method of claim 1, characterized in that:
in step 1, the method comprises the following steps:
step 1.1: designing a QR two-dimensional code dataset based on motion blur, and selecting the background of the two-dimensional code, the information contained in the two-dimensional code, and the distance between the two-dimensional code and the camera according to the actual human-computer interaction scene;
step 1.2: collecting the image information with a Kinect camera against the selected background, and adding motion blur with different blur lengths and blur angles to complete the design and production of the motion-blurred QR two-dimensional code dataset.
3. The two-dimensional code recognition method of claim 1, characterized in that:
the step 3 comprises the following steps:
step 3.1: determining the pixel sum Rs over a rectangle R within a moving window of size S x S, where the window size depends on the width W of the image and the maximum possible window width is W/64; in the image binarization step, each pixel is set to black if its value satisfies the given threshold expression; otherwise, it is set to white;
step 3.2: in order to improve the performance of an image binarization algorithm, an image is divided into a high-intensity area and a low-intensity area; calculating the sum of pixels in the rectangle R to obtain a local threshold, and then comparing with the average value of the whole image; in the areas with different intensities, the calculation methods of the threshold values are different;
step 3.3: after the image is binarized, performing expansion operation based on computer morphology to obtain an edge area of the two-dimensional code image; detecting the edge area based on a canny operator, then solving the slope and the rotation angle of a straight line where the edge is located based on improved Hough transformation, and finally performing rotation correction on the image based on a bilinear interpolation method and finishing the identification of the two-dimensional code.
CN202010092621.2A 2020-02-14 2020-02-14 Two-dimensional code identification method Active CN111310508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010092621.2A CN111310508B (en) 2020-02-14 2020-02-14 Two-dimensional code identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010092621.2A CN111310508B (en) 2020-02-14 2020-02-14 Two-dimensional code identification method

Publications (2)

Publication Number Publication Date
CN111310508A CN111310508A (en) 2020-06-19
CN111310508B true CN111310508B (en) 2021-08-10

Family

ID=71161729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092621.2A Active CN111310508B (en) 2020-02-14 2020-02-14 Two-dimensional code identification method

Country Status (1)

Country Link
CN (1) CN111310508B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215119B (en) * 2020-10-08 2022-04-12 华中科技大学 Small target identification method, device and medium based on super-resolution reconstruction
CN112528701B (en) * 2020-12-15 2022-09-20 平安科技(深圳)有限公司 Two-dimensional code detection method and device, electronic equipment and medium
CN113780492A (en) * 2021-08-02 2021-12-10 南京旭锐软件科技有限公司 Two-dimensional code binarization method, device and equipment and readable storage medium
CN116882433B (en) * 2023-09-07 2023-12-08 无锡维凯科技有限公司 Machine vision-based code scanning identification method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345816A (en) * 2018-01-29 2018-07-31 广州中大微电子有限公司 A kind of Quick Response Code extracting method and system in the case where uneven illumination is even
CN108520504A (en) * 2018-04-16 2018-09-11 湘潭大学 A kind of blurred picture blind restoration method based on generation confrontation network end-to-end
US20190197279A1 (en) * 2017-12-26 2019-06-27 Alibaba Group Holding Limited Method, device, and system for generating, repairing, and identifying an incomplete qr code
CN110309687A (en) * 2019-07-05 2019-10-08 华中科技大学 A kind of bearing calibration of image in 2 D code and means for correcting

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298947B (en) * 2014-08-15 2017-03-22 广东顺德中山大学卡内基梅隆大学国际联合研究院 Method and device for accurately positioning two-dimensional bar code
CN108647550B (en) * 2018-04-11 2021-07-16 中山大学 Machine learning-based two-dimensional code fuzzy clustering identification method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190197279A1 (en) * 2017-12-26 2019-06-27 Alibaba Group Holding Limited Method, device, and system for generating, repairing, and identifying an incomplete qr code
CN108345816A (en) * 2018-01-29 2018-07-31 广州中大微电子有限公司 A kind of Quick Response Code extracting method and system in the case where uneven illumination is even
CN108520504A (en) * 2018-04-16 2018-09-11 湘潭大学 A kind of blurred picture blind restoration method based on generation confrontation network end-to-end
CN110309687A (en) * 2019-07-05 2019-10-08 华中科技大学 A kind of bearing calibration of image in 2 D code and means for correcting

Also Published As

Publication number Publication date
CN111310508A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN111310508B (en) Two-dimensional code identification method
CN110021024B (en) Image segmentation method based on LBP and chain code technology
CN111583097A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107749987B (en) Digital video image stabilization method based on block motion estimation
CN114529459B (en) Method, system and medium for enhancing image edge
CN110852349A (en) Image processing method, detection method, related equipment and storage medium
CN110070548B (en) Deep learning training sample optimization method
CN111507908B (en) Image correction processing method, device, storage medium and computer equipment
CN109242959B (en) Three-dimensional scene reconstruction method and system
CN109447117B (en) Double-layer license plate recognition method and device, computer equipment and storage medium
CN112417955B (en) Method and device for processing tour inspection video stream
CN111681198A (en) Morphological attribute filtering multimode fusion imaging method, system and medium
Jain et al. A systematic literature review on qr code detection and pre-processing
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
JPWO2020220126A5 (en)
CN113505702A (en) Pavement disease identification method and system based on double neural network optimization
CN113139544A (en) Saliency target detection method based on multi-scale feature dynamic fusion
CN117132503A (en) Method, system, equipment and storage medium for repairing local highlight region of image
CN116129417A (en) Digital instrument reading detection method based on low-quality image
CN116363064A (en) Defect identification method and device integrating target detection model and image segmentation model
CN115937839A (en) Large-angle license plate image recognition method, calculation equipment and storage medium
Dai et al. An Improved ORB Feature Extraction Algorithm Based on Enhanced Image and Truncated Adaptive Threshold
CN111815658B (en) Image recognition method and device
Peng et al. Research on qr 2-d code graphics correction algorithms based on morphological expansion closure and edge detection
CN106875369B (en) Real-time dynamic target tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant