CN110991457A - Two-dimensional code processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110991457A
CN110991457A
Authority
CN
China
Prior art keywords
dimensional code
image
processing
deconvolution
code image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911175822.2A
Other languages
Chinese (zh)
Other versions
CN110991457B (en)
Inventor
吴翔宇
杨帆
袁玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Reach Best Technology Co Ltd
Original Assignee
Reach Best Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Reach Best Technology Co Ltd filed Critical Reach Best Technology Co Ltd
Priority to CN201911175822.2A priority Critical patent/CN110991457B/en
Publication of CN110991457A publication Critical patent/CN110991457A/en
Application granted granted Critical
Publication of CN110991457B publication Critical patent/CN110991457B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a two-dimensional code processing method and apparatus, an electronic device, and a storage medium, and belongs to the field of image processing. In the scheme provided by the disclosure, an acquired two-dimensional code image is input into a two-dimensional code processing model. The model performs convolution processing on the image to obtain a first image feature of the two-dimensional code image, performs deconvolution processing on the first image feature to obtain a second image feature, maps the second image feature based on a target threshold, and outputs a binarized image of the two-dimensional code image. Because the image features are extracted by the model, interfering features such as shadows or gradual color changes in the two-dimensional code image can be suppressed and the useful features strengthened, so that the two-dimensional code can still be recognized when the acquired image is not clear enough, for example in weak-light, strong-light, or low-contrast scenes, and the recognition success rate is improved.

Description

Two-dimensional code processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a two-dimensional code processing method and apparatus, an electronic device, and a storage medium.
Background
The two-dimensional code is a common coding form in daily life and has been widely used in scenarios such as mobile payment, information acquisition, and user verification. In the decoding stage of a two-dimensional code, binarization of the image is an indispensable step: binarization maps the two-dimensional code image acquired by the terminal into a [0, 1] binary space, thereby recovering the bit-stream information required by the internal logic of the computer.
In the related art, two methods are mainly used to binarize a two-dimensional code: global binarization and local binarization. Global binarization applies a single threshold to the whole two-dimensional code image: the gray value of each pixel in the image is compared with the threshold, and the pixels are divided into two classes according to the comparison result, so as to distinguish the background from the decoding object. Local binarization divides the whole two-dimensional code image into several windows according to a certain rule and, for each of the windows, divides the pixel gray values of the window into two classes according to a per-window threshold, thereby implementing binarization.
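As a concrete illustration of the two related-art approaches, the following minimal Python sketch contrasts a single image-wide threshold with per-window thresholds. The function names and the use of the per-window mean as the local threshold are illustrative assumptions, not part of the disclosure:

```python
def global_binarize(gray, threshold):
    """Global binarization: compare every pixel against one image-wide threshold."""
    return [[0 if px < threshold else 1 for px in row] for row in gray]

def local_binarize(gray, window=2):
    """Local binarization: split the image into windows and threshold each
    window by its own statistic (here, the window mean). The window size
    directly shapes the per-window thresholds and hence the result."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y0 in range(0, h, window):
        for x0 in range(0, w, window):
            block = [gray[y][x]
                     for y in range(y0, min(y0 + window, h))
                     for x in range(x0, min(x0 + window, w))]
            t = sum(block) / len(block)  # per-window threshold
            for y in range(y0, min(y0 + window, h)):
                for x in range(x0, min(x0 + window, w)):
                    out[y][x] = 0 if gray[y][x] < t else 1
    return out
```

As the Background notes, the global variant depends on one statistic of the whole image, while the local variant depends on the choice of `window`.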
For global binarization, the threshold is derived from the information of the whole image, so the method cannot handle images that contain shadows or gradual color changes. For local binarization, when the two-dimensional code image acquired by the terminal is not clear enough, for example in weak-light, strong-light, or low-contrast scenes, the result is sensitive to the choice of window size: different window sizes directly affect the threshold determined for each window and hence the binarization effect. Both approaches can therefore leave the two-dimensional code unrecognizable, so the success rate of recognizing the two-dimensional code is low.
Disclosure of Invention
The disclosure provides a two-dimensional code processing method, a two-dimensional code processing apparatus, an electronic device, and a storage medium, which are used at least to solve the problem of the low recognition success rate in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a two-dimensional code processing method, including:
acquiring a two-dimensional code image;
inputting the two-dimensional code image into a two-dimensional code processing model;
performing convolution processing on the two-dimensional code image through the two-dimensional code processing model to obtain a first image characteristic of the two-dimensional code image, performing deconvolution processing on the first image characteristic to obtain a second image characteristic, mapping the second image characteristic based on a target threshold value, and outputting a binary image of the two-dimensional code image.
In a possible implementation manner, the performing convolution processing on the two-dimensional code image to obtain the first image feature of the two-dimensional code image includes:
and inputting the two-dimensional code image into a first convolutional layer of a plurality of convolutional layers of the two-dimensional code processing model, performing convolution calculation in the first convolutional layer, and inputting the calculation result into the next convolutional layer, until the calculation of each convolutional layer is completed, and outputting the result of the final convolution as the first image feature of the two-dimensional code image.
In a possible implementation manner, the deconvolving the first image feature to obtain the second image feature includes:
and inputting the first image feature into a first deconvolution layer of a plurality of deconvolution layers of the two-dimensional code processing model, performing deconvolution calculation in the first deconvolution layer, and inputting the calculation result into the next deconvolution layer, until the calculation of each deconvolution layer is completed, and outputting the result of the final deconvolution as the second image feature of the two-dimensional code image.
In a possible implementation manner, before performing convolution processing on the two-dimensional code image, the method further includes:
and preprocessing the two-dimensional code image, and performing image enhancement processing on the preprocessed two-dimensional code image.
In a possible implementation manner, the preprocessing the two-dimensional code image includes:
and determining an image part comprising the two-dimensional code from the two-dimensional code image, and carrying out size adjustment on the image part to obtain an image matched with the two-dimensional code processing model.
In a possible implementation manner, the performing image enhancement processing on the preprocessed two-dimensional code image includes:
keeping the binarization label of each pixel point unchanged, and adjusting at least one of the brightness and the contrast of the preprocessed two-dimensional code image.
In one possible implementation, the mapping the second image feature based on the target threshold includes:
comparing the target threshold value with the pixel value of each pixel point of the second image characteristic;
when the pixel value of any pixel point is smaller than the target threshold, the gray value of the pixel point is mapped into a first gray value, and when the pixel value of any pixel point is larger than the target threshold, the gray value of the pixel point is mapped into a second gray value.
In one possible implementation, the method further includes:
when an initial model is trained, a sample two-dimensional code image and a binarization label are obtained;
inputting the sample two-dimensional code image into an initial model;
performing convolution processing on the sample two-dimensional code image through the initial model to obtain a first sample image characteristic of the sample two-dimensional code image, and performing deconvolution processing on the first sample image characteristic to obtain a second sample image characteristic;
determining a gradient vector of the initial model through back propagation based on a loss function, wherein the loss function is a corresponding cross entropy loss between the second sample image feature and the binarization label;
and adjusting the weights of the initial model according to the gradient vector, until a metric such as the accuracy of the second sample image feature or the loss function satisfies an iteration cutoff condition, or until the number of iterations reaches a preset number.
According to a second aspect of the embodiments of the present disclosure, there is provided a two-dimensional code processing apparatus, the apparatus including:
an acquisition unit configured to perform acquisition of a two-dimensional code image;
an input unit configured to perform input of the two-dimensional code image to a two-dimensional code processing model;
the convolution processing unit is configured to execute convolution processing on the two-dimensional code image through the two-dimensional code processing model to obtain a first image characteristic of the two-dimensional code image;
a deconvolution processing unit configured to perform deconvolution processing on the first image feature to obtain a second image feature;
a mapping unit configured to perform mapping of the second image feature based on a target threshold;
an output unit configured to perform outputting a binarized image of the two-dimensional code image.
In one possible implementation, the input unit is further configured to perform inputting the two-dimensional code image into a first convolutional layer of a plurality of convolutional layers of the two-dimensional code processing model;
the convolution processing unit is also configured to perform convolution calculations on the first convolution layer;
the input unit is further configured to input the calculation result into the next convolutional layer until the calculation of each convolutional layer is completed, and output the result of the final convolution as the first image feature of the two-dimensional code image.
In one possible implementation, the input unit is further configured to perform inputting the first image feature into a first deconvolution layer of a plurality of deconvolution layers of the two-dimensional code processing model;
the deconvolution processing unit further configured to perform a deconvolution calculation on the first deconvolution layer;
the input unit is further configured to input the calculation result into the next deconvolution layer until the calculation of each deconvolution layer is completed, and output the result of the final deconvolution as the second image feature of the two-dimensional code image.
In one possible implementation, the apparatus further includes:
a preprocessing unit configured to perform preprocessing on the two-dimensional code image;
and an image enhancement unit configured to perform image enhancement processing on the preprocessed two-dimensional code image.
In one possible implementation, the apparatus further includes:
a determination unit configured to perform determination of an image portion including a two-dimensional code from the two-dimensional code image;
and the adjusting unit is configured to perform size adjustment on the image part to obtain an image matched with the two-dimensional code processing model.
In one possible implementation, the apparatus further includes:
a holding unit configured to perform holding of the binarization label of each pixel point unchanged;
the adjusting unit is further configured to perform at least one of brightness and contrast adjustment on the preprocessed two-dimensional code image.
In one possible implementation, the apparatus further includes:
a comparison unit configured to perform a comparison of a target threshold with pixel values of respective pixel points of the second image feature;
the mapping unit is further configured to map the gray value of any one pixel point to a first gray value when the pixel value of the pixel point is smaller than a target threshold value, and map the gray value of the pixel point to a second gray value when the pixel value of the pixel point is larger than the target threshold value.
In one possible implementation, the apparatus further includes:
the sample acquisition unit is configured to acquire a sample two-dimensional code image and a binarization label when the initial model is trained;
a sample input unit configured to perform input of the sample two-dimensional code image to an initial model;
the sample convolution processing unit is configured to execute convolution processing on the sample two-dimensional code image through the initial model to obtain a first sample image characteristic of the sample two-dimensional code image;
the sample deconvolution processing unit is configured to perform deconvolution processing on the first sample image characteristic to obtain a second sample image characteristic;
a gradient vector determination unit configured to perform determining a gradient vector of the initial model by back propagation based on a loss function, the loss function being a corresponding cross entropy loss between the second sample image feature and the binarization label;
and a weight adjusting unit configured to adjust the weights of the initial model according to the gradient vector, until a metric such as the accuracy of the second sample image feature or the loss function satisfies an iteration cutoff condition, or until the number of iterations reaches a preset number.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
one or more processors;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the two-dimensional code processing method described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the two-dimensional code processing method described above.
According to a fifth aspect of the embodiments of the present disclosure, an application program product is provided, where the application program product stores one or more instructions, and the one or more instructions are executable by a processor of an electronic device to implement the two-dimensional code processing method.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of inputting an obtained two-dimensional code image into a two-dimensional code processing model, carrying out convolution processing on the two-dimensional code image through the two-dimensional code processing model to obtain a first image characteristic of the two-dimensional code image, carrying out deconvolution processing on the first image characteristic to obtain a second image characteristic, mapping the second image characteristic based on a target threshold value, outputting a binary image of the two-dimensional code image, extracting the image characteristic through the two-dimensional code processing model, processing useless characteristics such as shadow or color gradual change in the two-dimensional code image, strengthening useful characteristics in the two-dimensional code image, realizing identification of the two-dimensional code when the obtained image is not clear enough under the scenes such as weak light, strong light and low contrast, and improving the identification success rate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram illustrating an implementation environment of a two-dimensional code processing method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a two-dimensional code processing method according to an example embodiment;
FIG. 3 is a flow diagram illustrating a two-dimensional code processing method in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a two-dimensional code processing model according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating a method of training an initial model, according to an exemplary embodiment;
fig. 6 is a block diagram illustrating a two-dimensional code processing apparatus according to an exemplary embodiment;
fig. 7 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The user information to which the present disclosure relates may be information authorized by the user or sufficiently authorized by each party.
Fig. 1 is a schematic diagram of an implementation environment of a two-dimensional code processing method according to an exemplary embodiment, and referring to fig. 1, the implementation environment specifically includes: an electronic device 101.
The electronic device 101 may be at least one of a smartphone, a smartwatch, a laptop computer, an MP3 player, an MP4 player, and the like. The electronic device 101 may acquire a two-dimensional code image through a camera assembly and process the two-dimensional code image, thereby recognizing the two-dimensional code.
The electronic device 101 may be generally referred to as one of a plurality of electronic devices, and this embodiment is only illustrated with the electronic device 101. Those skilled in the art will appreciate that the number of such devices may be greater or smaller. For example, there may be only a few electronic devices, or there may be tens, hundreds, or more; the number and type of electronic devices are not limited in the embodiments of the present disclosure.
Fig. 2 is a flowchart illustrating a two-dimensional code processing method according to an exemplary embodiment, and with reference to fig. 2, specific steps include:
in step S201, the electronic device acquires a two-dimensional code image.
In step S202, the electronic device inputs the two-dimensional code image to a two-dimensional code processing model.
In step S203, the electronic device performs convolution processing on the two-dimensional code image through the two-dimensional code processing model to obtain a first image feature of the two-dimensional code image, performs deconvolution processing on the first image feature to obtain a second image feature, maps the second image feature based on a target threshold, and outputs a binary image of the two-dimensional code image.
In a possible implementation manner, performing convolution processing on the two-dimensional code image to obtain a first image feature of the two-dimensional code image includes:
and inputting the two-dimensional code image into a first convolutional layer of a plurality of convolutional layers of the two-dimensional code processing model, performing convolution calculation in the first convolutional layer, and inputting the calculation result into the next convolutional layer, until the calculation of each convolutional layer is completed, and outputting the result of the final convolution as the first image feature of the two-dimensional code image.
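The chained layer-by-layer computation described above can be sketched as follows. The pure-Python `conv2d_valid` and the hand-written kernels are illustrative stand-ins for the model's learned convolutional layers (no padding, stride 1, single channel, and cross-correlation as in most deep-learning frameworks; these are assumptions, not details from the disclosure):

```python
def conv2d_valid(image, kernel):
    """One 'valid' 2-D convolution (cross-correlation) of a single-channel map."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)] for y in range(oh)]

def conv_stack(image, kernels):
    """Feed the image through the first layer, then each result into the
    next layer, and return the final feature map (the 'first image feature')."""
    feat = image
    for k in kernels:
        feat = conv2d_valid(feat, k)
    return feat
```

Each layer's output becomes the next layer's input, exactly as in the step above; the last layer's output plays the role of the first image feature.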
In one possible implementation, deconvoluting the first image feature to obtain the second image feature includes:
and inputting the first image feature into a first deconvolution layer of a plurality of deconvolution layers of the two-dimensional code processing model, performing deconvolution calculation in the first deconvolution layer, and inputting the calculation result into the next deconvolution layer, until the calculation of each deconvolution layer is completed, and outputting the result of the final deconvolution as the second image feature of the two-dimensional code image.
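Deconvolution in this context is commonly implemented as a transposed convolution that upsamples the feature map back toward the original image size. The sketch below is one minimal such implementation; the stride, kernel, and single-channel setup are assumed for illustration and are not specified by the disclosure:

```python
def deconv2d(feature, kernel, stride=2):
    """Transposed convolution: each input value 'stamps' a scaled copy of the
    kernel into a larger output grid, upsampling the feature map."""
    kh, kw = len(kernel), len(kernel[0])
    fh, fw = len(feature), len(feature[0])
    oh = (fh - 1) * stride + kh  # standard transposed-conv output size
    ow = (fw - 1) * stride + kw
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(fh):
        for x in range(fw):
            for i in range(kh):
                for j in range(kw):
                    out[y * stride + i][x * stride + j] += feature[y][x] * kernel[i][j]
    return out
```

Chaining several such layers, as in the step above, progressively restores the spatial resolution that the convolution stage reduced.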
In a possible implementation manner, before performing convolution processing on the two-dimensional code image, the method further includes:
and preprocessing the two-dimensional code image, and performing image enhancement processing on the preprocessed two-dimensional code image.
In one possible implementation manner, the preprocessing the two-dimensional code image includes:
and determining an image part comprising the two-dimensional code from the two-dimensional code image, and carrying out size adjustment on the image part to obtain an image matched with the two-dimensional code processing model.
In a possible implementation manner, the image enhancement processing on the preprocessed two-dimensional code image includes:
keeping the binarization label of each pixel point unchanged, and adjusting at least one of the brightness and the contrast of the preprocessed two-dimensional code image.
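The key point of this augmentation is that only the pixel intensities change while the per-pixel binarization label stays fixed. A minimal sketch, assuming a simple linear brightness/contrast adjustment clamped to the [0, 255] gray range (the exact adjustment is not specified in the disclosure):

```python
def augment(image, label, brightness=0.0, contrast=1.0):
    """Return (augmented_image, label): intensities are scaled and shifted,
    but the binarization label is deliberately returned untouched."""
    aug = [[min(255, max(0, px * contrast + brightness)) for px in row]
           for row in image]
    return aug, label
```

Training on such pairs exposes the model to varied lighting while the supervision target stays consistent.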
In one possible implementation, mapping the second image feature based on a target threshold includes:
comparing the target threshold value with the pixel value of each pixel point of the second image characteristic;
when the pixel value of any pixel point is smaller than the target threshold, the gray value of the pixel point is mapped into a first gray value, and when the pixel value of any pixel point is larger than the target threshold, the gray value of the pixel point is mapped into a second gray value.
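The threshold mapping above amounts to a one-line comparison per pixel. In this sketch the function name and the concrete gray values are illustrative, and a value exactly equal to the target threshold is mapped to the second gray value, which the text leaves unspecified:

```python
def map_features(features, target_threshold, first_gray=0, second_gray=255):
    """Map the second image feature to a binary image: values below the
    target threshold become the first gray value, the rest the second."""
    return [[first_gray if v < target_threshold else second_gray for v in row]
            for row in features]
```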
In one possible implementation, the method further includes:
when an initial model is trained, a sample two-dimensional code image and a binarization label are obtained;
inputting the sample two-dimensional code image into an initial model;
performing convolution processing on the sample two-dimensional code image through the initial model to obtain a first sample image characteristic of the sample two-dimensional code image, and performing deconvolution processing on the first sample image characteristic to obtain a second sample image characteristic;
determining a gradient vector of the initial model through back propagation based on a loss function, wherein the loss function is a corresponding cross entropy loss between the second sample image feature and the binarization label;
and adjusting the weights of the initial model according to the gradient vector, until a metric such as the accuracy of the second sample image feature or the loss function satisfies an iteration cutoff condition, or until the number of iterations reaches a preset number.
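The training loop above (forward pass, cross-entropy loss, backpropagated gradient, weight update, stop on a loss cutoff or an iteration limit) can be sketched on a toy scale. Here a single-parameter sigmoid predictor stands in for the full convolution-deconvolution model, and the learning rate, cutoff, and iteration limit are assumed values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(pred, label):
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(pred + eps) + (1 - label) * math.log(1 - pred + eps))

def train(samples, labels, w=0.0, b=0.0, lr=0.5, max_iters=500, loss_cutoff=0.05):
    """Gradient descent: backpropagate the cross-entropy loss, update the
    weights, and stop when the mean loss meets the cutoff condition or the
    iteration count reaches the preset limit."""
    for _ in range(max_iters):
        total, gw, gb = 0.0, 0.0, 0.0
        for x, y in zip(samples, labels):
            p = sigmoid(w * x + b)
            total += cross_entropy(p, y)
            gw += (p - y) * x  # d(loss)/dw for sigmoid + cross-entropy
            gb += (p - y)      # d(loss)/db
        if total / len(samples) < loss_cutoff:
            break
        w -= lr * gw / len(samples)
        b -= lr * gb / len(samples)
    return w, b
```

In the actual scheme the same loop runs over per-pixel predictions of the initial model against the binarization labels.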
According to the scheme provided by the disclosure, the acquired two-dimensional code image is input into the two-dimensional code processing model, which performs convolution processing on the image to obtain the first image feature, performs deconvolution processing on the first image feature to obtain the second image feature, maps the second image feature based on the target threshold, and outputs the binarized image of the two-dimensional code image. Because the image features are extracted by the model, interfering features such as shadows or gradual color changes in the two-dimensional code image can be suppressed and the useful features strengthened, so that the two-dimensional code can still be recognized when the acquired image is not clear enough, for example in weak-light, strong-light, or low-contrast scenes, and the recognition success rate is improved.
In the solution provided by the present disclosure, the processing of the two-dimensional code image may be based on a deep learning segmentation algorithm, which may be implemented by a neural network, and fig. 3 is a flowchart illustrating a two-dimensional code processing method according to an exemplary embodiment, referring to fig. 3, where the method includes:
in step S301, the electronic device acquires a two-dimensional code image.
The two-dimensional code image may be any image that contains a two-dimensional code. In one possible implementation, the electronic device scans the two-dimensional code pattern through the camera assembly to obtain the two-dimensional code image.
It should be noted that the camera assembly may be configured on an electronic device, or may be an external camera assembly connected to the electronic device, which is not limited in the embodiment of the present disclosure.
In step S302, the electronic device inputs the two-dimensional code image to an input data processing module of a two-dimensional code processing model.
It should be noted that the two-dimensional code processing model may be obtained by the electronic device from a server or in other ways, which is not limited in the embodiments of the present disclosure. The model comprises an input data processing module, a convolution module, a deconvolution module, and an output data processing module; referring to fig. 4, fig. 4 is a schematic diagram of the structure of a two-dimensional code processing model according to an exemplary embodiment. The input data processing module detects and crops the input image to obtain the image portion that contains the two-dimensional code, resizes that portion, generates a corresponding binarization label for the resized image, and, keeping the label unchanged, performs image enhancement processing on the image. The convolution module comprises several convolutional layers and pooling layers and performs convolution processing on the image output by the input data processing module to extract the image features of the two-dimensional code. The deconvolution module comprises several deconvolution layers and connection layers and restores the first image feature output by the convolution module to the size of the original two-dimensional code image, so that the subsequent output data processing module can process it. The output data processing module maps the second image feature output by the deconvolution module according to the target threshold to obtain a binarized image of the two-dimensional code image, thereby visualizing the image features.
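The four-module structure described above is, in effect, a fixed pipeline in which each module consumes the previous module's output. The skeleton below is a hypothetical rendering of that composition; the class name and the callable-stage design are assumptions for illustration only:

```python
class QRCodeBinarizer:
    """Hypothetical skeleton of the four-module pipeline:
    input processing -> convolution -> deconvolution -> output mapping."""

    def __init__(self, preprocess, conv, deconv, postprocess):
        self.stages = [preprocess, conv, deconv, postprocess]

    def __call__(self, image):
        # Each module's output is the next module's input.
        for stage in self.stages:
            image = stage(image)
        return image
```

Any concrete implementations of the four modules (such as the sketches elsewhere in this description) can be slotted into the four stage positions.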
In step S303, the electronic device determines an image portion including a two-dimensional code from the two-dimensional code image through the input data processing module, and performs size adjustment on the image portion to obtain an image matched with the two-dimensional code processing model, so as to implement preprocessing on the two-dimensional code image.
In a possible implementation manner, the electronic device may pre-process the two-dimensional code image through an input data processing module of the two-dimensional code processing model, and determine an image portion including the two-dimensional code, that is, an image whose main body is the two-dimensional code, from the two-dimensional code image through detection and cutting. Further, the electronic device may further detect the size of the image portion including the two-dimensional code, compare the detected size with the size of the image that can be processed by the two-dimensional code processing model, and adjust the size of the image portion including the two-dimensional code according to the size of the image that can be processed by the two-dimensional code processing model when the two-dimensional code processing model cannot process the image portion including the two-dimensional code of the current size, so as to obtain an image matched with the two-dimensional code processing model. For example, if the two-dimensional code processing model used in steps S304 to S309 described below can process a square image with an image size of 256 × 256, the electronic device can simultaneously adjust the length and width of the image portion including the two-dimensional code to adjust the image size to 256 × 256, so as to achieve matching with the two-dimensional code processing model used in steps S304 to S309.
When determining the image portion including the two-dimensional code from the two-dimensional code image, the electronic device may detect the position detection patterns in the two-dimensional code image, determine the square bounding box of the two-dimensional code according to the detected patterns, and then crop away the blank area of the acquired two-dimensional code image according to the square bounding box, to obtain an image portion whose main body is the two-dimensional code.
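The cropping and resizing just described can be sketched as follows. This is an illustrative outline only, not the patented implementation: the bounding-box coordinates are assumed to come from a separate position-detection-pattern search, and nearest-neighbor sampling stands in for whatever resizing method the electronic device actually uses.

```python
import numpy as np

def crop_and_resize(image, box, out_size=256):
    """Crop the square bounding box and resize it to the model input size.

    image: H x W (x C) array; box: (row0, col0, row1, col1), assumed to be
    the bounding box found from the position detection patterns.
    """
    r0, c0, r1, c1 = box
    crop = image[r0:r1, c0:c1]
    h, w = crop.shape[:2]
    # Nearest-neighbor index maps from the out_size grid back into the crop.
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return crop[rows[:, None], cols]

demo = np.zeros((300, 400), dtype=np.uint8)
demo[20:280, 50:310] = 255          # pretend this region is the two-dimensional code
resized = crop_and_resize(demo, (20, 50, 280, 310))
print(resized.shape)                # (256, 256)
```

In a real pipeline an anti-aliasing resize would usually be preferred; nearest-neighbor keeps the sketch dependency-free.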
In step S304, the electronic device generates, through the input data processing module, a corresponding binarization label according to a binarization method based on a threshold value based on the preprocessed two-dimensional code image.
The binarization label refers to the 0 or 1 value corresponding to each pixel point of the two-dimensional code image. The threshold-based binarization method may be global histogram binarization or hybrid binarization, and the embodiments of the present disclosure do not limit which method is specifically adopted.
In a possible implementation manner, the electronic device may perform global histogram binarization, comparing the pixel value of each pixel point of the preprocessed two-dimensional code image with a single global threshold, and recording pixel points whose pixel value is greater than the threshold as 1 and the remaining pixel points as 0, to obtain the binarization label of each pixel point.
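A minimal sketch of one such global approach, using Otsu's method to pick the single threshold from the gray-level histogram (the disclosure does not fix a particular global method, so the choice of Otsu here is an assumption):

```python
import numpy as np

def global_binarize(gray):
    """Label every pixel 1 if above the Otsu threshold, else 0."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mean_all = (hist * np.arange(256)).sum() / total
    best_t, best_var = 0, -1.0
    cum = cum_mean = 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total                          # weight of the dark class
        m0 = cum_mean / cum                       # mean of the dark class
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return (gray > best_t).astype(np.uint8)

img = np.array([[10, 20, 240], [15, 250, 245]], dtype=np.uint8)
labels = global_binarize(img)       # bright pixels -> 1, dark pixels -> 0
```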
In another possible implementation manner, the electronic device may perform hybrid binarization, comparing each pixel point of the preprocessed two-dimensional code image with the other pixel points in its local neighborhood, adaptively adjusting the threshold according to the pixel value distribution of the local neighborhood, and binarizing each pixel point accordingly, to obtain the binarization label of each pixel point.
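The local-neighborhood idea can be sketched as an adaptive-mean threshold computed in O(1) per pixel with an integral image. Note this is a simplification: true hybrid binarization (as in the ZXing family) also blends block-level and global statistics, which are omitted here.

```python
import numpy as np

def local_binarize(gray, r=1):
    """Label each pixel 1 if it exceeds the mean of its (2r+1)^2 neighborhood."""
    g = gray.astype(float)
    padded = np.pad(g, r, mode="edge")
    # Integral image with a leading zero row/column for O(1) window sums.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    win = 2 * r + 1
    sums = (ii[win:, win:] - ii[:-win, win:]
            - ii[win:, :-win] + ii[:-win, :-win])
    means = sums / (win * win)
    return (g > means).astype(np.uint8)

labels = local_binarize(np.array([[0, 255], [255, 0]]))   # checkerboard survives
```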
In step S305, the electronic device keeps the binarization label of each pixel unchanged through the input data processing module, and performs at least one of brightness and contrast adjustment on the preprocessed two-dimensional code image, so as to implement image enhancement processing on the preprocessed two-dimensional code image.
In a possible implementation manner, on the premise that the binarization label corresponding to each pixel point in the two-dimensional code image is not changed, the electronic device may perform image enhancement processing on the two-dimensional code image within an allowable adjustment range through the input data processing module of the two-dimensional code processing model, for example, reducing brightness, increasing brightness, or reducing contrast, so as to implement image enhancement for conditions such as weak light, strong light, and low contrast.
It should be noted that image enhancement processing mainly adjusts the color, brightness, contrast, and the like of an image, and may introduce noise into the image; if the adjustment range is not constrained, the content represented by the enhanced image is likely to deviate. Therefore, the binarization label of each pixel point in the two-dimensional code image needs to be determined before image enhancement, and the color, brightness, contrast, and the like are adjusted only within the range in which the binarization label does not change, so as to ensure the accuracy of the binarization label.
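The accept-or-reject logic implied by this constraint can be sketched as below; the threshold (128) and jitter parameters are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def augment_if_label_safe(gray, labels, threshold, alpha, beta):
    """Apply contrast (alpha) / brightness (beta) jitter only if no
    binarization label would flip under the same threshold."""
    out = np.clip(gray.astype(float) * alpha + beta, 0, 255).astype(np.uint8)
    if ((out > threshold).astype(np.uint8) == labels).all():
        return out      # labels preserved: enhancement accepted
    return gray         # a label would change: reject the jitter

img = np.array([[30, 220], [200, 40]], dtype=np.uint8)
lbl = (img > 128).astype(np.uint8)
safe = augment_if_label_safe(img, lbl, 128, alpha=0.9, beta=10)    # mild: kept
crushed = augment_if_label_safe(img, lbl, 128, alpha=0.2, beta=0)  # rejected
```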
In step S306, the electronic device inputs the two-dimensional code image into the first convolution layer in the convolution module of the two-dimensional code processing model, performs convolution calculation in the first convolution layer, and inputs the calculation result into the next convolution layer until the calculation of each convolution layer is completed, outputting the resulting convolution output as the first image feature of the two-dimensional code image.
It should be noted that, in the embodiment of the present disclosure, structures of the convolution module and the deconvolution module are not limited, and only one convolution module and deconvolution module with a good processing effect are taken as an example for description, and the structure of the convolution module is shown in table 1 below:
TABLE 1

First convolution layer:
  Conv1_1      convolution, window 3, step 1, depth 8, ReLU     output 256 × 256 × 8
  Conv1_2      convolution, window 3, step 1, depth 8, ReLU     output 256 × 256 × 8
  MaxPooling1  max pooling, window 2, step 2                    output 128 × 128 × 8
Second convolution layer:
  Conv2_1      convolution, window 3, step 1, depth 16, ReLU    output 128 × 128 × 16
  Conv2_2      convolution, window 3, step 1, depth 16, ReLU    output 128 × 128 × 16
  MaxPooling2  max pooling, window 2, step 2                    output 64 × 64 × 16
Third convolution layer:
  Conv3_1      convolution, window 3, step 1, depth 32, ReLU    output 64 × 64 × 32
  Conv3_2      convolution, window 3, step 1, depth 32, ReLU    output 64 × 64 × 32
The convolution module comprises three convolution layers, wherein the first convolution layer comprises two cascaded convolution units and a pooling unit, the second convolution layer comprises two cascaded convolution units and a pooling unit, and the third convolution layer comprises two cascaded convolution units.
In a possible implementation manner, the process by which the electronic device processes the two-dimensional code image through the convolution module to obtain the first image feature may be described as follows:
1. the electronic device performs Conv1_1 convolution on the two-dimensional code image with an input size of 256 × 256 × 3 through the first convolution unit of the first convolution layer, wherein the Conv1_1 convolution comprises a convolution operation with a window size of 3, a step size of 1, and a depth of 8, followed by a rectified linear unit (ReLU) activation operation, to obtain a convolution output of 256 × 256 × 8;
2. the electronic device performs Conv1_2 convolution on the 256 × 256 × 8 result output by the previous convolution unit through the second convolution unit of the first convolution layer, wherein the Conv1_2 convolution comprises a convolution operation with a window size of 3, a step size of 1, and a depth of 8, followed by a ReLU activation operation, to obtain a convolution output of 256 × 256 × 8;
3. the electronic device performs MaxPooling1 pooling on the 256 × 256 × 8 result output by the previous convolution unit through the pooling unit of the first convolution layer, wherein the MaxPooling1 pooling comprises a maximum pooling operation with a window size of 2 and a step size of 2, to obtain a pooling output of 128 × 128 × 8;
similarly, the electronic device may continue to process the result output by the first convolution layer through the second convolution layer and the third convolution layer in the convolution module, and may finally obtain a convolution output of 64 × 64 × 32 as the first image feature of the two-dimensional code image.
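The size arithmetic of steps 1 to 3 can be verified with a few lines of shape bookkeeping: each pair of window-3/step-1 convolutions keeps the spatial size and sets the depth, and each window-2/step-2 max pooling halves the height and width. The depths of the second and third convolution layers (16 and 32) are inferred from the module's final 64 × 64 × 32 output.

```python
def conv_module_shapes(h=256, w=256):
    """Trace (height, width, depth) through the described convolution module."""
    shapes = [(h, w, 3)]                              # input image
    for depth, pooled in [(8, True), (16, True), (32, False)]:
        ch, cw, _ = shapes[-1]
        shapes.append((ch, cw, depth))                # Convk_1 + Convk_2
        if pooled:
            shapes.append((ch // 2, cw // 2, depth))  # MaxPooling, window 2, step 2
    return shapes

print(conv_module_shapes()[-1])   # (64, 64, 32): the first image feature
```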
The mathematical expression of the ReLU function is f(x) = max(0, x); it converts all feature points less than 0 in the input feature map to 0, resulting in high sparsity of the feature map.
In step S307, the electronic device inputs the first image feature into a first deconvolution layer in a deconvolution module of the two-dimensional code processing model, performs deconvolution calculation in the first deconvolution layer, inputs the calculation result into a next deconvolution layer until the calculation of each deconvolution layer is completed, and outputs the obtained deconvolution as a second image feature of the two-dimensional code image.
It should be noted that the size and depth of the input image of the deconvolution module are the same as those of the output image of the convolution module in step S306, that is, the output image of the convolution module is the input image of the deconvolution module, and in addition, the size of the output image of the deconvolution module is the same as that of the input image of the convolution module, so as to ensure that the subsequent processing procedure is performed normally.
The structure of the deconvolution module corresponding to the convolution module in step S306 is shown in table 2 below:
TABLE 2

First deconvolution layer:
  Deconv4_1    deconvolution, window 2, step 2, depth 16, ReLU  output 128 × 128 × 16
  Concat4      connection with the Conv2_2 output               output 128 × 128 × 32
  Conv4_2      convolution, window 3, step 1, depth 16, ReLU    output 128 × 128 × 16
Second deconvolution layer:
  Deconv5_1    deconvolution, window 2, step 2, depth 8, ReLU   output 256 × 256 × 8
  Concat5      connection with the Conv1_2 output               output 256 × 256 × 16
  Conv5_2      convolution, window 3, step 1, depth 8, ReLU     output 256 × 256 × 8
Third deconvolution layer:
  Conv6_1      convolution, window 1, step 1, depth 2           output 256 × 256 × 2
The deconvolution module comprises three deconvolution layers, wherein the first deconvolution layer comprises a deconvolution unit, a connection unit, and a convolution unit, the second deconvolution layer comprises a deconvolution unit, a connection unit, and a convolution unit, and the third deconvolution layer comprises a convolution unit.
In a possible implementation manner, the process by which the electronic device processes the first image feature of the two-dimensional code image through the deconvolution module to obtain the second image feature may be described as follows:
1. the electronic device performs Deconv4_1 deconvolution on the first image feature with an input size of 64 × 64 × 32 through the deconvolution unit of the first deconvolution layer, wherein the Deconv4_1 deconvolution comprises a deconvolution operation with a window size of 2, a step size of 2, and a depth of 16, followed by a ReLU activation operation, to obtain a deconvolution output of 128 × 128 × 16;
2. the electronic device performs Concat4 connection on the 128 × 128 × 16 result output by the deconvolution unit and the convolution output of Conv2_2 through the connection unit of the first deconvolution layer, to obtain a connected output of 128 × 128 × 32;
3. the electronic device performs Conv4_2 convolution on the 128 × 128 × 32 result output by the connection unit through the convolution unit of the first deconvolution layer, wherein the Conv4_2 convolution comprises a convolution operation with a window size of 3, a step size of 1, and a depth of 16, followed by a ReLU activation operation, to obtain a convolution output of 128 × 128 × 16;
similarly, the electronic device may continue to process the result output by the first deconvolution layer through the second deconvolution layer in the deconvolution module, and may finally obtain a convolution output of 256 × 256 × 8; the electronic device then performs Conv6_1 convolution on the 256 × 256 × 8 result output by the previous deconvolution layer through the convolution unit of the third deconvolution layer, wherein the Conv6_1 convolution comprises a convolution operation with a window size of 1, a step size of 1, and a depth of 2, to obtain a convolution output of 256 × 256 × 2 as the second image feature of the two-dimensional code image.
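The mirror-image bookkeeping for the deconvolution module confirms these sizes: each window-2/step-2 deconvolution doubles height and width, each Concat with the matching convolution output doubles the depth, and the final window-1 convolution maps to 2 channels.

```python
def deconv_module_shapes(h=64, w=64, c=32):
    """Trace (height, width, depth) through the described deconvolution module."""
    shapes = [(h, w, c)]                    # first image feature
    for depth in (16, 8):                   # first- and second-layer deconv depths
        h, w = h * 2, w * 2
        shapes.append((h, w, depth))        # deconvolution, window 2, step 2
        shapes.append((h, w, depth * 2))    # Concat with the skip connection
        shapes.append((h, w, depth))        # window-3 convolution back down
    shapes.append((h, w, 2))                # Conv6_1, window 1, depth 2
    return shapes

print(deconv_module_shapes()[-1])   # (256, 256, 2): the second image feature
```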
It should be noted that the above convolution and deconvolution processing is essentially a process of extracting features and then restoring the image to its size before convolution, so that each pixel of the original image can be mapped to an output of either 0 or 1, thereby enhancing the image content. Processing the two-dimensional code image through the convolution module and the deconvolution module of the two-dimensional code processing model introduces more flexible receptive-field window sizes through neural network layers of different depths, so that richer information about adjacent pixels and adjacent color blocks is integrated when binarizing the image.
In step S308, the electronic device compares the target threshold with the pixel value of each pixel of the second image feature through the output data processing module.
It should be noted that the output data processing module may compare the pixel value of each pixel point of the second image feature with the preset target threshold, and map the pixel points to different gray values according to the comparison results.
In step S309, when the pixel value of any one of the pixel points is smaller than the target threshold, the electronic device maps the gray value of the pixel point to a first gray value.
For example, the electronic device may set the target threshold to 0.5; when the pixel value of any pixel point is smaller than 0.5, the electronic device maps the gray value of that pixel point to 0.
In step S310, when the pixel value of any one pixel point is greater than the target threshold, the electronic device maps the gray value of the pixel point to a second gray value, so as to obtain a binary image of the two-dimensional code image.
For example, the electronic device may set the target threshold to 0.5; when the pixel value of any pixel point is greater than 0.5, the electronic device maps the gray value of that pixel point to 255, so as to obtain the binarized image of the two-dimensional code image.
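Steps S308 to S310 amount to one thresholded mapping, sketched here with illustrative values (target threshold 0.5, gray values 0 and 255):

```python
import numpy as np

def to_binary_image(features, threshold=0.5):
    """Map each pixel above the threshold to 255 and the rest to 0."""
    return np.where(features > threshold, 255, 0).astype(np.uint8)

probs = np.array([[0.1, 0.9], [0.7, 0.2]])
binary = to_binary_image(probs)     # [[0, 255], [255, 0]]
```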
In step S311, the electronic device outputs a binarized image of the two-dimensional code image.
It should be noted that, through the above steps S301 to S311, the binarization processing of the two-dimensional code image can be realized; the electronic device can then decode the binarized image of the two-dimensional code to realize recognition of the two-dimensional code image.
According to the above scheme, the acquired two-dimensional code image is input into the two-dimensional code processing model; the model performs convolution processing on the two-dimensional code image to obtain the first image feature, performs deconvolution processing on the first image feature to obtain the second image feature, maps the second image feature based on the target threshold, and outputs the binarized image of the two-dimensional code image. The selection of a binarization window size need not be considered. Because the image features are extracted by the two-dimensional code processing model, useless features such as shadows or color gradients in the two-dimensional code image can be suppressed and the useful features strengthened, so that the two-dimensional code can still be recognized when the acquired image is not clear enough in scenes such as weak light, strong light, and low contrast, improving the recognition success rate.
The two-dimensional code processing method based on a deep learning segmentation algorithm is more robust than traditional image algorithms under weak-light, strong-light, and low-contrast conditions. On the test set adopted for this scheme, compared with the hybrid binarization method, the scheme improves the recognition rate by about 3% on the whole sample set and by more than 20% on difficult samples.
Fig. 3 illustrates a process of processing a two-dimensional code image by using a trained two-dimensional code processing model, before which an electronic device needs to train an initial model to obtain the two-dimensional code processing model, and referring to fig. 5, fig. 5 is a flowchart illustrating a method for training the initial model according to an exemplary embodiment, where the method includes:
in step S501, when training the initial model, the electronic device acquires a sample two-dimensional code image and a binarization label.
The initial model comprises an input data processing module, a convolution module, a deconvolution module, and an output data processing module. The input data processing module is used for detecting and cropping the input two-dimensional code image to obtain the image portion comprising the two-dimensional code, adjusting the size of that image portion, generating a corresponding binarization label for the resized image, and performing image enhancement processing on the image while keeping the binarization label unchanged. The convolution module comprises a plurality of convolution layers and pooling layers and is used for performing convolution processing on the image output by the input data processing module to extract the two-dimensional code image features. The deconvolution module comprises a plurality of deconvolution layers and connection layers and is used for restoring the first image features output by the convolution module to the size of the original two-dimensional code image, so that the subsequent output data processing module can process them. The output data processing module is used for mapping the second image features output by the deconvolution module according to a target threshold to obtain a binarized image of the two-dimensional code image, thereby realizing visualization of the image features. In the embodiments of the present disclosure, the specific structures of the convolution module and deconvolution module of the initial model may refer to step S306 and step S307, which are not repeated here.
It should be noted that the number of the sample two-dimensional code images may be multiple, and the electronic device may process the sample two-dimensional code images one by one, and the number of the sample two-dimensional code images is not limited in the embodiment of the present disclosure.
In step S502, the electronic device inputs the sample two-dimensional code image to an initial model.
In a possible implementation manner, the electronic device may perform various processing on the acquired sample two-dimensional code image through an input data processing module of the initial model, and determine the corresponding binarization label through a threshold-based binarization method. For a specific image processing process and a specific determination process of the binarization label, refer to steps S303 to S305, which are not described herein again.
In step S503, the electronic device performs convolution processing on the sample two-dimensional code image through the initial model to obtain a first sample image feature of the sample two-dimensional code image, and performs deconvolution processing on the first sample image feature to obtain a second sample image feature.
In a possible implementation manner, the electronic device performs convolution processing on the acquired sample two-dimensional code image through each convolution layer of the initial model to obtain a first sample image feature of the sample two-dimensional code image, and performs deconvolution processing through each deconvolution layer of the initial model to obtain a second sample image feature. For example, to obtain the two-dimensional code processing model used in the embodiment shown in fig. 4, the convolution module of the initial model may be trained through a convolution calculation process similar to that in step S306, and the deconvolution module of the initial model may be trained through a deconvolution calculation process similar to that in step S307.
In step S504, the electronic device determines a gradient vector of the initial model by back propagation based on a loss function, which is a corresponding cross-entropy loss between the second sample image feature and the binarization label.
In step S505, the electronic device adjusts the weights of the initial model according to the gradient vector until the accuracy of the second sample image feature or the value of the loss function meets an iteration cutoff condition, or the number of iterations reaches a preset number.
In a possible implementation manner, the electronic device adjusts each weight in the initial model according to the gradient vector to obtain a corrected initial model, and continues to process the next two-dimensional code sample with the corrected model. The above process is repeated until the accuracy of the calculation result or the value of the loss function meets the iteration cutoff condition, or the number of iterations reaches the preset number. At this point, the weights of the initial model have been adjusted many times and its accuracy is high, so the initial model after these weight adjustments can be used as the two-dimensional code processing model to process two-dimensional codes.
The optimizer used in the weight adjustment process may be set as an adaptive moment estimation (Adam) optimizer with a learning rate of 0.01, and optionally, other optimizers may also be selected to perform weight adjustment, which is not limited in the embodiment of the present disclosure.
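The loop of steps S502 to S505 can be illustrated in miniature with a hand-written Adam update at the learning rate of 0.01 mentioned above. The single linear layer trained with per-pixel binary cross-entropy below is a didactic stand-in for the real convolution/deconvolution network, which is updated in the same pattern; the data, sizes, and iteration count are all illustrative assumptions.

```python
import numpy as np

# Miniature of the training loop: forward pass, cross-entropy gradient
# against the binarization labels, and an Adam weight update (lr = 0.01).
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 4))                 # 64 sample "pixels", 4 features each
w_true = np.array([1.5, -2.0, 0.5, 1.0])
labels = (x @ w_true > 0).astype(float)      # binarization labels (0 / 1)

w = np.zeros(4)                              # model weights to learn
m = np.zeros(4)                              # Adam first-moment estimate
v = np.zeros(4)                              # Adam second-moment estimate
lr, b1, b2, eps = 0.01, 0.9, 0.999, 1e-8
for t in range(1, 2001):
    p = 1.0 / (1.0 + np.exp(-(x @ w)))       # forward pass (sigmoid)
    grad = x.T @ (p - labels) / len(labels)  # gradient of mean cross-entropy
    m = b1 * m + (1 - b1) * grad             # update biased first moment
    v = b2 * v + (1 - b2) * grad ** 2        # update biased second moment
    w -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)

p = 1.0 / (1.0 + np.exp(-(x @ w)))
accuracy = float(((p > 0.5) == (labels > 0.5)).mean())
```

In practice a framework optimizer replaces the hand-written moment updates; the point here is only the shape of the loop: forward pass, cross-entropy gradient, weight adjustment, repeat until a cutoff is met.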
It should be noted that the above model training process may be completed by means of a deep learning framework such as Caffe (Convolutional Architecture for Fast Feature Embedding) or TensorFlow, and the embodiments of the present disclosure do not limit which framework is specifically used for model training.
Through the training of the initial model, the two-dimensional code processing model can be obtained, and the two-dimensional code image can then be processed through the two-dimensional code processing model without considering the selection of a binarization window size. Even when the image contains shadows or color gradients, or when the acquired image is not clear enough in scenes such as weak light, strong light, and low contrast, recognition of the two-dimensional code can be realized and the recognition success rate improved.
Through optimization methods such as neural network quantization, the two-dimensional code processing model generated by the above method is moderate in scale and fast in inference. It can be deployed on a mobile terminal and can either independently complete two-dimensional code processing tasks, or serve as a supplement to traditional image algorithms to assist the mobile terminal in processing two-dimensional codes in complex environments.
Fig. 6 is a block diagram illustrating a two-dimensional code processing apparatus according to an exemplary embodiment, referring to fig. 6, the apparatus including: an acquisition unit 601, an input unit 602, a convolution processing unit 603, a deconvolution processing unit 604, a mapping unit 605, and an output unit 606.
An acquisition unit 601 configured to perform acquisition of a two-dimensional code image;
an input unit 602 configured to perform input of the two-dimensional code image to a two-dimensional code processing model;
a convolution processing unit 603 configured to perform convolution processing on the two-dimensional code image through the two-dimensional code processing model to obtain a first image feature of the two-dimensional code image;
a deconvolution processing unit 604 configured to perform deconvolution processing on the first image feature to obtain a second image feature;
a mapping unit 605 configured to perform mapping of the second image feature based on a target threshold;
an output unit 606 configured to perform outputting a binarized image of the two-dimensional code image.
In a possible implementation manner, the input unit 602 is further configured to perform inputting the two-dimensional code image into a first convolutional layer of a plurality of convolutional layers of the two-dimensional code processing model;
the convolution processing unit 603, further configured to perform convolution calculations on the first convolutional layer;
the input unit 602 is further configured to perform input of the calculation result into the next convolutional layer until the calculation of each convolutional layer is completed, and output the obtained convolution as the first image feature of the two-dimensional code image.
In a possible implementation, the input unit 602 is further configured to perform inputting the first image feature into a first deconvolution layer of a plurality of deconvolution layers of the two-dimensional code processing model;
the deconvolution processing unit 604, further configured to perform a deconvolution calculation on the first deconvolution layer;
the input unit 602 is further configured to perform input of the calculation result into the next deconvolution layer until the calculation of each deconvolution layer is completed, and output the resulting deconvolution as the second image feature of the two-dimensional code image.
In one possible implementation, the apparatus further includes:
a preprocessing unit configured to perform preprocessing on the two-dimensional code image;
and the image processing enhancement unit is configured to perform image enhancement processing on the preprocessed two-dimensional code image.
In one possible implementation, the apparatus further includes:
a determination unit configured to perform determination of an image portion including a two-dimensional code from the two-dimensional code image;
and the adjusting unit is configured to perform size adjustment on the image part to obtain an image matched with the two-dimensional code processing model.
In one possible implementation, the apparatus further includes:
a holding unit configured to perform holding of the binarization label of each pixel point unchanged;
the adjusting unit is further configured to perform at least one of brightness and contrast adjustment on the preprocessed two-dimensional code image.
In one possible implementation, the apparatus further includes:
a comparison unit configured to perform a comparison of a target threshold with pixel values of respective pixel points of the second image feature;
the mapping unit 605 is further configured to map the gray scale value of any one of the pixels to a first gray scale value when the pixel value of the pixel is smaller than the target threshold, and map the gray scale value of the pixel to a second gray scale value when the pixel value of the pixel is larger than the target threshold.
In one possible implementation, the apparatus further includes:
the sample acquisition unit is configured to acquire a sample two-dimensional code image and a binarization label when the initial model is trained;
a sample input unit configured to perform input of the sample two-dimensional code image to an initial model;
the sample convolution processing unit is configured to execute convolution processing on the sample two-dimensional code image through the initial model to obtain a first sample image characteristic of the sample two-dimensional code image;
the sample deconvolution processing unit is configured to perform deconvolution processing on the first sample image characteristic to obtain a second sample image characteristic;
a gradient vector determination unit configured to perform determining a gradient vector of the initial model by back propagation based on a loss function, the loss function being a corresponding cross entropy loss between the second sample image feature and the binarization label;
and the weight value adjusting unit is configured to adjust the weights of the initial model according to the gradient vector until the accuracy of the second sample image feature or the value of the loss function meets an iteration cutoff condition, or the number of iterations reaches a preset number.
According to the above apparatus, the acquired two-dimensional code image is input into the two-dimensional code processing model; the model performs convolution processing on the two-dimensional code image to obtain the first image feature, performs deconvolution processing on the first image feature to obtain the second image feature, maps the second image feature based on the target threshold, and outputs the binarized image of the two-dimensional code image. Because the image features are extracted by the two-dimensional code processing model, useless features such as shadows or color gradients in the two-dimensional code image can be suppressed and the useful features strengthened, so that the two-dimensional code can still be recognized when the image acquired in scenes such as weak light, strong light, and low contrast is not clear enough, improving the recognition success rate.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an electronic device 700 according to an exemplary embodiment. The electronic device 700 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The electronic device 700 may also be referred to by other names such as user equipment, portable electronic device, laptop electronic device, desktop electronic device, and so on.
In general, the electronic device 700 includes: one or more processors 701 and one or more memories 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 702 is used to store at least one program code for execution by the processor 701 to implement the two-dimensional code processing method provided by the method embodiments of the present disclosure.
In some embodiments, the electronic device 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 704 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which is not limited by this disclosure.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 701 as a control signal for processing. In this case, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, disposed on the front panel of the electronic device 700; in other embodiments, there may be at least two display screens 705, respectively disposed on different surfaces of the electronic device 700 or in a folding design; in still other embodiments, the display screen 705 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 700. The display screen 705 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 705 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the electronic device, and the rear camera is disposed on the rear surface of the electronic device. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the electronic device 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 709 is used to supply power to various components in the electronic device 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
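The landscape/portrait decision described above can be sketched as a small rule over the gravity components; the function below is a hypothetical helper (not from the disclosure), and real devices add hysteresis and a z-axis check so the UI does not flip while the device lies flat:

```python
def choose_orientation(ax: float, ay: float) -> str:
    """Pick a UI orientation from the gravitational acceleration
    components along the device's x axis (short edge) and y axis
    (long edge), as reported by an acceleration sensor.

    Illustrative only: thresholds, axis conventions, and the lack of
    hysteresis are simplifying assumptions.
    """
    # Gravity dominating along the long edge means the device is upright.
    if abs(ay) >= abs(ax):
        return "portrait"
    return "landscape"

print(choose_orientation(0.5, 9.3))  # device held upright
print(choose_orientation(9.3, 0.5))  # device on its side
```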
The gyro sensor 712 may detect a body direction and a rotation angle of the electronic device 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the user with respect to the electronic device 700. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 713 may be disposed on a side frame of the electronic device 700 and/or on a lower layer of the display screen 705. When the pressure sensor 713 is disposed on a side frame of the electronic device 700, it may detect a signal of the user holding the electronic device 700, and the processor 701 may perform left-right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed on a lower layer of the display screen 705, the processor 701 controls operability controls on the UI according to the pressure operations of the user on the display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the electronic device 700. When a physical button or vendor Logo is provided on the electronic device 700, the fingerprint sensor 714 may be integrated with the physical button or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715: when the ambient light intensity is high, the display brightness of the display screen 705 is increased; when the ambient light intensity is low, the display brightness of the display screen 705 is decreased. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
The proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 700. The proximity sensor 716 is used to capture the distance between the user and the front surface of the electronic device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the electronic device 700 gradually decreases, the processor 701 controls the display screen 705 to switch from the bright screen state to the dark screen state; when the proximity sensor 716 detects that the distance gradually increases, the processor 701 controls the display screen 705 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 does not constitute a limitation of the electronic device 700 and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a storage medium comprising instructions, such as a memory 702 comprising instructions, executable by a processor 701 of an electronic device 700 to perform the above-described method is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which includes one or more instructions that can be executed by the processor 701 of the electronic device to perform the method steps of the two-dimensional code processing method provided in the above-mentioned embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A two-dimensional code processing method is characterized by comprising the following steps:
acquiring a two-dimensional code image;
inputting the two-dimensional code image into a two-dimensional code processing model;
performing convolution processing on the two-dimensional code image through the two-dimensional code processing model to obtain a first image feature of the two-dimensional code image; performing deconvolution processing on the first image feature to obtain a second image feature; mapping the second image feature based on a target threshold; and outputting a binarized image of the two-dimensional code image.
2. The method of claim 1, wherein the convolving the two-dimensional code image to obtain the first image feature of the two-dimensional code image comprises:
and inputting the two-dimensional code image into a first convolutional layer among a plurality of convolutional layers of the two-dimensional code processing model, performing convolution calculation in the first convolutional layer, and inputting the calculation result into a next convolutional layer until the calculation of each convolutional layer is completed, and outputting the obtained calculation result as the first image feature of the two-dimensional code image.
3. The method of claim 2, wherein deconvolving the first image feature to obtain a second image feature comprises:
and inputting the first image feature into a first deconvolution layer among a plurality of deconvolution layers of the two-dimensional code processing model, performing deconvolution calculation in the first deconvolution layer, and inputting the calculation result into a next deconvolution layer until the calculation of each deconvolution layer is completed, and outputting the obtained calculation result as the second image feature of the two-dimensional code image.
4. The method according to claim 1, wherein before the convolution processing of the two-dimensional code image, the method further comprises:
and preprocessing the two-dimensional code image, and performing image enhancement processing on the preprocessed two-dimensional code image.
5. The method of claim 4, wherein the pre-processing the two-dimensional code image comprises:
and determining an image part comprising the two-dimensional code from the two-dimensional code image, and carrying out size adjustment on the image part to obtain an image matched with the two-dimensional code processing model.
6. The method according to claim 4, wherein the image enhancement processing on the preprocessed two-dimensional code image comprises:
keeping the binarization label of each pixel point unchanged, and adjusting at least one of the brightness and contrast of the preprocessed two-dimensional code image.
7. The method of claim 1, wherein the mapping the second image feature based on a target threshold comprises:
comparing a target threshold value with the pixel value of each pixel point of the second image characteristic;
when the pixel value of any pixel point is smaller than the target threshold, mapping the gray value of the pixel point to a first gray value; and when the pixel value of any pixel point is greater than the target threshold, mapping the gray value of the pixel point to a second gray value.
8. A two-dimensional code processing apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to perform acquisition of a two-dimensional code image;
an input unit configured to perform input of the two-dimensional code image to a two-dimensional code processing model;
the convolution processing unit is configured to execute convolution processing on the two-dimensional code image through the two-dimensional code processing model to obtain a first image characteristic of the two-dimensional code image;
a deconvolution processing unit configured to perform deconvolution processing on the first image feature to obtain a second image feature;
a mapping unit configured to perform mapping of the second image feature based on a target threshold;
an output unit configured to perform outputting a binarized image of the two-dimensional code image.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the two-dimensional code processing method of any one of claims 1 to 7.
10. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the two-dimensional code processing method according to any one of claims 1 to 7.
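The pipeline of claims 1–3 and 7 (stacked convolution layers, stacked deconvolution layers, then threshold mapping to a binarized image) can be sketched in plain NumPy. Everything model-specific here is an illustrative assumption: the kernel values are untrained stand-ins, the layer count is arbitrary, and the mean-based target threshold substitutes for whatever threshold the trained model uses.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def deconv2d(feat, kernel):
    """Transposed convolution: scatter-add a kernel copy per input
    value, growing the feature map back toward the input size."""
    kh, kw = kernel.shape
    h, w = feat.shape
    out = np.zeros((h + kh - 1, w + kw - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + kh, j:j + kw] += feat[i, j] * kernel
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))                  # stand-in grayscale two-dimensional code image
kernels = [np.full((3, 3), 1 / 9.0)] * 2  # untrained stand-ins for learned layers

# Claims 2-3: feed the result of each layer into the next layer.
feat = img
for k in kernels:
    feat = conv2d(feat, k)    # first image feature after the last conv layer
for k in kernels:
    feat = deconv2d(feat, k)  # second image feature, back at input resolution

# Claim 7: pixels below the target threshold map to a first gray value,
# the rest to a second gray value.
target = feat.mean()          # illustrative threshold choice
binary = np.where(feat < target, 0, 255).astype(np.uint8)
assert binary.shape == img.shape
```

In the actual model the kernels are learned and the layers carry channel dimensions and nonlinearities; this sketch only mirrors the data flow the claims describe.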
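Claims 4–6 describe preprocessing (extracting the image part containing the code and resizing it to match the model) and label-preserving image enhancement. A minimal sketch under assumed parameters: the model input size, the nearest-neighbour resize, and the `alpha`/`beta` jitter values are all illustrative, not taken from the disclosure.

```python
import numpy as np

MODEL_SIZE = 8  # assumed input size expected by the processing model

def preprocess(img, box):
    """Claim 5 sketch: crop the image part containing the code, then
    resize it (nearest neighbour) to the size the model expects."""
    top, left, h, w = box
    part = img[top:top + h, left:left + w]
    rows = np.arange(MODEL_SIZE) * part.shape[0] // MODEL_SIZE
    cols = np.arange(MODEL_SIZE) * part.shape[1] // MODEL_SIZE
    return part[np.ix_(rows, cols)]

def enhance(img, alpha=1.2, beta=10):
    """Claim 6 sketch: jitter contrast (alpha) and brightness (beta) of
    the input only; the per-pixel binarization labels are untouched, so
    each pixel keeps its original 0/1 training target."""
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
small = preprocess(img, (0, 0, 16, 16))  # hypothetical detected code region
out = enhance(small)
assert small.shape == (MODEL_SIZE, MODEL_SIZE)
```

Because only the input image is perturbed, the same enhancement can be applied repeatedly during training to enlarge the data set without relabeling.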
CN201911175822.2A 2019-11-26 2019-11-26 Two-dimensional code processing method and device, electronic equipment and storage medium Active CN110991457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911175822.2A CN110991457B (en) 2019-11-26 2019-11-26 Two-dimensional code processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110991457A true CN110991457A (en) 2020-04-10
CN110991457B CN110991457B (en) 2023-12-08

Family

ID=70087171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911175822.2A Active CN110991457B (en) 2019-11-26 2019-11-26 Two-dimensional code processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110991457B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016163064A (en) * 2015-02-26 2016-09-05 国立大学法人 鹿児島大学 Imaging device, extraction method of digital watermark, digital watermark and optimization method of opening and closing pattern
CN106875357A (en) * 2017-01-26 2017-06-20 上海正雅齿科科技有限公司 Image in 2 D code processing method
CN108737750A (en) * 2018-06-07 2018-11-02 北京旷视科技有限公司 Image processing method, device and electronic equipment
CN108960214A (en) * 2018-08-17 2018-12-07 中控智慧科技股份有限公司 Fingerprint enhancement binarization method, device, equipment, system and storage medium
CN110378854A (en) * 2019-07-17 2019-10-25 上海商汤智能科技有限公司 Robot graphics' Enhancement Method and device
CN110457972A (en) * 2019-08-05 2019-11-15 网易(杭州)网络有限公司 Two-dimensional code identification method and device, storage medium, electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘浩: "二维码数字图像的增强处理与数据挖掘应用研究", 《中国优秀硕士学位论文全文数据库 信息科技辑 (月刊)》, no. 2017 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417918A (en) * 2020-11-13 2021-02-26 珠海格力电器股份有限公司 Two-dimensional code identification method and device, storage medium and electronic equipment
CN112417918B (en) * 2020-11-13 2022-03-18 珠海格力电器股份有限公司 Two-dimensional code identification method and device, storage medium and electronic equipment
CN112347805A (en) * 2020-11-25 2021-02-09 广东开放大学(广东理工职业学院) Multi-target two-dimensional code detection and identification method, system, device and storage medium
CN113221737A (en) * 2021-05-11 2021-08-06 杭州海康威视数字技术股份有限公司 Method, device and equipment for determining material information and storage medium
CN113221737B (en) * 2021-05-11 2023-09-05 杭州海康威视数字技术股份有限公司 Material information determining method, device, equipment and storage medium
CN113780492A (en) * 2021-08-02 2021-12-10 南京旭锐软件科技有限公司 Two-dimensional code binarization method, device and equipment and readable storage medium

Also Published As

Publication number Publication date
CN110991457B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN108594997B (en) Gesture skeleton construction method, device, equipment and storage medium
CN109829456B (en) Image identification method and device and terminal
CN107945163B (en) Image enhancement method and device
CN110059685B (en) Character area detection method, device and storage medium
CN110991457B (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN110490179B (en) License plate recognition method and device and storage medium
CN109522863B (en) Ear key point detection method and device and storage medium
CN109360222B (en) Image segmentation method, device and storage medium
CN109558837B (en) Face key point detection method, device and storage medium
CN111461097A (en) Method, apparatus, electronic device and medium for recognizing image information
CN108776822B (en) Target area detection method, device, terminal and storage medium
CN110827195B (en) Virtual article adding method and device, electronic equipment and storage medium
CN112907725A (en) Image generation method, image processing model training method, image processing device, and image processing program
CN110503159B (en) Character recognition method, device, equipment and medium
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN110807769B (en) Image display control method and device
CN110070143B (en) Method, device and equipment for acquiring training data and storage medium
CN110189348B (en) Head portrait processing method and device, computer equipment and storage medium
CN110991445A (en) Method, device, equipment and medium for identifying vertically arranged characters
CN110675473A (en) Method, device, electronic equipment and medium for generating GIF dynamic graph
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN112135191A (en) Video editing method, device, terminal and storage medium
CN110163192B (en) Character recognition method, device and readable medium
CN111860064A (en) Target detection method, device and equipment based on video and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant