CN110766117B - Two-dimensional code generation method and system - Google Patents

Two-dimensional code generation method and system

Info

Publication number
CN110766117B
CN110766117B (application CN201810845760.0A)
Authority
CN
China
Prior art keywords
image
dimensional code
code image
background
processing
Prior art date
Legal status
Active
Application number
CN201810845760.0A
Other languages
Chinese (zh)
Other versions
CN110766117A (en)
Inventor
徐明亮
吕培
李亚飞
周兵
李翔
Current Assignee
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date
Filing date
Publication date
Application filed by Zhengzhou University filed Critical Zhengzhou University
Priority to CN201810845760.0A priority Critical patent/CN110766117B/en
Publication of CN110766117A publication Critical patent/CN110766117A/en
Application granted granted Critical
Publication of CN110766117B publication Critical patent/CN110766117B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
                • G06K 19/00 - Record carriers for use with machines and with at least a part designed to carry digital markings
                    • G06K 19/06 - characterised by the kind of the digital marking, e.g. shape, nature, code
                        • G06K 19/06009 - with optically detectable marking
                            • G06K 19/06037 - multi-dimensional coding
                            • G06K 19/06046 - Constructional details
                                • G06K 19/06103 - the marking being embedded in a human recognizable image, e.g. a company logo with an embedded two-dimensional code
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 11/00 - 2D [Two Dimensional] image generation
                    • G06T 11/60 - Editing figures and text; Combining figures or text
                • G06T 7/00 - Image analysis
                    • G06T 7/10 - Segmentation; Edge detection
                        • G06T 7/13 - Edge detection
                • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
                    • G06T 2207/20 - Special algorithmic details
                        • G06T 2207/20212 - Image combination
                            • G06T 2207/20221 - Image fusion; Image merging

Abstract

The invention discloses a two-dimensional code generation method and system. The method comprises the following steps: converting a background image into a background grayscale image; processing the background grayscale image with a difference-of-Gaussians operator method based on the edge tangential flow to obtain a contour image; fusing the contour image, the background grayscale image and a first two-dimensional code image to obtain a second two-dimensional code image; performing color quantization on the second two-dimensional code image in the LAB uniform color space to obtain a third two-dimensional code image; and restoring the color of the third two-dimensional code image to obtain a fourth two-dimensional code image. The method unifies the generation process of the two-dimensional code with the cartoon rendering of the background image: the key regions of visual feedback in the picture are identified through feature recognition and salient-region extraction, and the generation of the black and white modules in those regions is adjusted by a threshold-setting method within the module-generation process, thereby improving the attractiveness of the final two-dimensional code.

Description

Two-dimensional code generation method and system
Technical Field
The invention relates to the technical field of two-dimensional codes, in particular to a method and a system for generating a two-dimensional code.
Background
A typical two-dimensional code is a black-and-white pattern formed by specific geometric figures distributed regularly on a plane. As the application of two-dimensional codes becomes ever more widespread and frequent, their featureless and insufficiently attractive appearance has become a shortcoming.
Disclosure of Invention
The embodiments of the invention provide a two-dimensional code generation method and system, which aim to solve the problem that two-dimensional codes in the prior art are featureless in appearance and insufficiently attractive.
In a first aspect, a method for generating a two-dimensional code is provided, including:
converting the background image into a background gray image;
processing the background gray image by adopting a Gaussian difference operator method based on edge tangential flow to obtain a contour image;
fusing the contour image, the background gray image and the first two-dimensional code image to obtain a second two-dimensional code image;
performing color quantization processing on the second two-dimensional code image by adopting an LAB uniform color space to obtain a third two-dimensional code image;
and restoring the color of the third two-dimensional code image to obtain a fourth two-dimensional code image.
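The five steps above can be sketched end to end. The following Python fragment is a minimal illustration only, not part of the patent disclosure: the luminance weights, the box-blur contour stand-in and the overlay fusion rule are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
bg = rng.random((32, 32, 3))                      # stand-in background image (RGB in [0, 1])

# S101: background image -> grayscale (luminance weights stand in for LAB's L layer)
gray = bg @ np.array([0.299, 0.587, 0.114])

def box_blur(img, k):
    """Crude separable box blur, standing in for Gaussian smoothing."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# S102 (sketch): contour image via a difference of two blurs, binarized
contour = (box_blur(gray, 3) - box_blur(gray, 7) < 0).astype(np.uint8)

# S103 (toy fusion rule): overlay the contour lines on the QR modules;
# the patent's actual fusion also weighs the grayscale background and saliency
qr = rng.integers(0, 2, (32, 32)).astype(np.uint8)  # stand-in QR module matrix
fused = np.maximum(qr, contour)
```

The color quantization (S104) and color restoration (S105) steps would follow the same array-in, array-out pattern.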
In a second aspect, a two-dimensional code image generation system is provided, including:
the transformation module is used for transforming the background image into a background gray image;
the Gaussian difference module is used for processing the background gray level image by adopting a Gaussian difference operator method based on edge tangential flow to obtain a contour image;
the fusion module is used for carrying out fusion processing on the contour image, the background gray image and the first two-dimensional code image to obtain a second two-dimensional code image;
the color quantization module is used for performing color quantization processing on the second two-dimensional code image by adopting an LAB uniform color space to obtain a third two-dimensional code image;
and the color restoration module is used for restoring the color of the third two-dimensional code image to obtain a fourth two-dimensional code image.
According to the embodiment of the invention, the generation process of the two-dimensional code and the cartoon rendering process of the background image are combined into a whole, and the key area of visual feedback in the picture is identified through feature identification and saliency area extraction, so that the generation of the black and white module in the area is adjusted through a threshold setting method based on the generation process of the two-dimensional code module, and the attractiveness of the final two-dimensional code generation result is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor.
Fig. 1 is a flowchart of a method for generating a two-dimensional code according to an embodiment of the present invention;
fig. 2 is an effect diagram of each step of a two-dimensional code generation process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a process for processing an edge tangential flow image along an edge tangential flow by using a Gaussian difference operator method according to an embodiment of the present invention;
fig. 4 is a schematic effect diagram corresponding to the number of times of processing the edge tangential flow image along the edge tangential flow by using the gaussian difference operator method according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of a bilateral filtering process according to an embodiment of the present invention;
FIG. 6 is a comparison graph of the color quantization effect of the embodiment of the present invention;
fig. 7 is a block diagram of a two-dimensional code generation system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a method for generating a two-dimensional code. As shown in fig. 1, the method comprises the steps of:
step S101: and transforming the background image into a background gray image.
The background image is a cartoon image, for example, as shown in fig. 2 (a).
Specifically, the steps include the following processes:
(1) The color of the background image is converted from the RGB color space to the LAB color space.
The LAB color space is the color space required for the subsequent grayscale and binarization processing; therefore, the background image is first converted from the RGB color space to the LAB color space. The converted image is shown, for example, in fig. 2 (b).
(2) And taking a gray level layer from the background image converted into the LAB color space to obtain a background gray level image.
For example, the background grayscale image is shown in fig. 2 (c).
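The RGB-to-LAB conversion and extraction of the lightness layer can be sketched as follows (illustrative Python, not part of the patent disclosure; it assumes sRGB input in [0, 1] and the D65 white point, and omits the a*/b* layers):

```python
import numpy as np

def rgb_to_lab_l(rgb):
    """Return the CIE L* (lightness) layer of an sRGB image with values in [0, 1].
    Sketch assuming the D65 white point (Yn = 1)."""
    # linearize the sRGB components
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # relative luminance Y
    y = lin @ np.array([0.2126, 0.7152, 0.0722])
    # CIE f() with the linear toe for small Y
    f = np.where(y > (6 / 29) ** 3, np.cbrt(y), y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0
```

In practice the same result is available from an image library's color-conversion routine; the sketch shows what that conversion computes.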
Step S102: and processing the background gray image by adopting a Gaussian difference operator method based on edge tangential flow to obtain a contour image.
Specifically, in a preferred embodiment of the present invention, the step includes the following steps:
(1) And constructing an edge tangential flow on the background gray level image to obtain an edge tangential flow image.
The edge tangential flow is defined as follows: for a picture I(z), where z = (x, y) denotes any pixel point of the picture, a smooth edge flow field is constructed and the flow field of the feature regions is preserved. The edge tangent t(z) is defined as the vector perpendicular to the picture gradient direction g(z) = ∇I(z). Such a vector field is called the edge tangential flow (ETF).
The edge tangential flow can be constructed on the background grayscale image by filtering, as shown, for example, in fig. 2 (e)-(g). Specifically, within a kernel centered on each pixel, nonlinear vector smoothing is applied so that the dominant edge direction is preserved and weaker edges are redirected toward their dominant neighbors. At the same time, to preserve sharp corner regions, smoothing is performed only along directionally similar edges. Edge tangential flow filtering is defined by the formula:
t′(z₁) = (1/k) Σ_{z₂ ∈ Ω_μ(z₁)} φ(z₁, z₂) · t(z₂) · ω_s(z₁, z₂) · ω_m(z₁, z₂) · ω_d(z₁, z₂)

wherein t(z) represents the tangent vector at pixel position z, whose direction points along the local edge, with 2π taken as the period. Ω_μ(z) denotes the kernel of radius μ centered at picture pixel z. Pixel points z₁ and z₂ lying within the range defined by Ω_μ jointly determine the edge flow field; by selecting these two points, their gradient values can be compared. k represents a vector normalization term, which can be set empirically; in the embodiment of the invention, k = 3.
ω_s(z₁, z₂) = 1 if ‖z₁ − z₂‖ < μ, and 0 otherwise

wherein ω_s represents the spatial weight function: a box filter over the kernel radius μ.
ω_m(z₁, z₂) = (1/2) · (1 + tanh(η · (ĝ(z₂) − ĝ(z₁))))

wherein ω_m represents the magnitude weight function, η controls its fall-off rate, and ĝ(z₁) and ĝ(z₂) denote the normalized gradient magnitudes at pixel points z₁ and z₂, respectively. ω_m takes values in the range [0, 1] and increases with the magnitude difference ĝ(z₂) − ĝ(z₁), indicating that more weight is given to a neighboring pixel z₂ whose gradient magnitude is higher than that of the center pixel z₁, thereby ensuring protection of the dominant edge direction.
ω_d(z₁, z₂) = |t(z₁) · t(z₂)|, wherein ω_d represents the direction weight function and serves to smooth regions with similar directions; t(z₁) and t(z₂) denote the normalized tangent vectors at pixel points z₁ and z₂. As the angle between the two vectors approaches 0, the value of ω_d increases; as the two vectors approach perpendicular, the value of ω_d decreases. If the angle between the two vectors is greater than 90 degrees, the direction of the neighboring tangent vector is reversed by applying the sign function φ, defined as follows:

φ(z₁, z₂) = 1 if t(z₁) · t(z₂) > 0, and −1 otherwise
through the above process, an edge tangential flow field is established.
(2) And processing the edge tangential flow image along the edge tangential flow by adopting a Gaussian difference operator method to obtain a first contour image.
Preferably, the edge tangential flow image may be normalized first, and the resulting image is shown in fig. 2 (g).
To obtain continuous, smooth, noise-free dominant contour curves of the picture, a first contour image is obtained by filtering along the edge tangential flow with the flow-based difference-of-Gaussians (FDoG) operator method. In the edge tangential flow field, t(z) represents the tangent vector at pixel point z; its direction points along the local edge, i.e. the maximum intensity difference lies in the perpendicular direction, which is the gradient direction. Moving along the edge tangential flow, a difference-of-Gaussians filter is applied in the gradient direction.
Let C_z(s) denote the flow curve of the edge tangential flow through pixel point z (i.e. the flow axis at pixel point z); it represents the edge direction along the middle of the contour line. s is a point of the longitudinal arc length at the position of pixel point z in the image I; its value may be positive or negative, and different values of s indicate different points along the longitudinal arc length. The pixel point z is assumed to lie at the center of the curve, i.e. C_z(0) = z. Let l_{z,s} denote the straight line through C_z(s) in the gradient direction (i.e. the gradient axis); it is perpendicular to the local edge direction t(C_z(s)), which represents the edge direction at the two ends of the contour. The line l_{z,s} is expressed by a transverse arc-length parameter t, so that l_{z,s}(t) denotes the point of the straight line l_{z,s} at t; it is further assumed that l_{z,s} is centered at C_z(s), i.e. l_{z,s}(0) = C_z(s). Note that l_{z,s} is parallel to the gradient vector g(C_z(s)). Fig. 3 is a schematic flow chart of filtering with the difference-of-Gaussians operator method. The formula of the filtering is as follows:
H(z) = ∫ G_σ(s) ( ∫ f(t) · I(l_{z,s}(t)) dt ) ds

wherein H(z) represents the response at pixel point z processed by the difference-of-Gaussians operator method, I(l_{z,s}(t)) denotes the value of the input image I at the position l_{z,s}(t) on the gradient axis, and f(t) is a one-dimensional filter weight function on the gradient line. The meaning of this formula is as follows: while moving along the flow axis C_z, a one-dimensional filter f(t) is applied on each gradient line; the responses of the individual filters are then accumulated along the flow axis C_z with the weight G_σ(s), wherein G_σ is a univariate Gaussian function with parameter σ.
Specifically,

G_σ(s) = (1/(√(2π) · σ)) · exp(−s²/(2σ²))

σ determines the length of the flow kernel and hence the degree of line coherence. Generally, σ = 3.
For the filter f(t), an edge model based on the difference of Gaussians is applied, whose formula is as follows:

f(t) = G_{σ_c}(t) − ρ · G_{σ_s}(t)

wherein G_{σ_c} represents a univariate Gaussian function of the center interval and G_{σ_s} a univariate Gaussian function of the surrounding interval. The parameters σ_c and σ_s control the sizes of the center interval and the surrounding interval; both act on the edges of the picture. The difference-of-Gaussians operator sets up a coordinate axis across the image edge: as shown in fig. 3, the position denoted 0 lies at the edge center (i.e. the center of the width of the contour line), the "interval" refers to the distance from position 0, and the center interval and the surrounding interval are both normal-distribution curves on this axis, their distributions differing because their parameters differ. The center interval deepens the darkness of the edge center; the surrounding interval deepens both the edge center and the pixels around it. Together, the center and surrounding intervals delineate the edge. In general, σ_c = 1.6 and σ_s = 1.6 are set. ρ is used to control the degree of noise detection and takes values in the range [0.97, 1.0]; in order to minimize noise in the generated picture without affecting the decoding function after fusion with the two-dimensional code, ρ = 0.97 is used.
(3) And carrying out binarization processing on the first contour image to obtain a second contour image.
After the processed pixel values H(z) are obtained, they jointly form a picture H. The picture H is converted into a black-and-white image by binarization, whose formula is as follows:

H̃(z) = 0, if H(z) < 0 and 1 + tanh(H(z)) < τ; H̃(z) = 1, otherwise

wherein τ takes a value in [0, 1]; generally, τ = 0.5. The binarized second contour image H̃ is the target line drawing of the contour, for example as shown in fig. 2 (h).
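A simplified version of the difference-of-Gaussians filtering and the tanh binarization above can be sketched as follows (illustrative Python, not part of the patent disclosure; isotropic Gaussians replace the filtering along the edge tangential flow, and σ_c = 1.0 is an assumption made so that the two Gaussians differ):

```python
import numpy as np

def gauss_kernel(sigma):
    """1-D Gaussian kernel truncated at 3 sigma."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with edge padding."""
    k = gauss_kernel(sigma)
    pad = len(k) // 2
    p = np.pad(img, pad, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)

def dog_lines(img, sigma_c=1.0, sigma_s=1.6, rho=0.97, tau=0.5):
    """Isotropic stand-in for the FDoG step: H = G_sigma_c * I - rho * G_sigma_s * I,
    then the tanh binarization H~ described above (flow guidance omitted)."""
    H = blur(img, sigma_c) - rho * blur(img, sigma_s)
    return np.where((H < 0) & (1 + np.tanh(H) < tau), 0.0, 1.0)
```

On a step edge the filter marks the dark side of the transition as a black (0) contour line while flat regions remain white (1).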
(4) And superposing the second contour image and the background image to obtain a third contour image.
To further enhance the effect of contouring, gaussian difference operator method filtering iteration operations may be applied to the input picture. The iterative operation is to superimpose the second contour image on the background image to obtain a third contour image.
(5) And processing the third contour image along the edge tangential flow by adopting a Gaussian difference operator method to obtain a fourth contour image.
The third contour image is processed by the method of step (2) to obtain a fourth contour image; details are not repeated here.
(6) And carrying out binarization processing on the fourth contour image to obtain a contour image.
The fourth contour image is processed by the method of step (3) to obtain the final contour image, as shown in fig. 2 (i); details are not repeated here.
It should be understood that the number of iterations in the embodiment of the present invention is two, but the present invention is not limited thereto, and the above steps (4) to (6) may be repeated until a satisfactory contour image is obtained.
Generally, more iterations yield more distinct lines and richer detail; however, with too many iterations the lines become densely packed and the picture appears cluttered, which reduces the visual effect. Fig. 4 shows the background image (a), the image after one pass of the difference-of-Gaussians operator method (b), and the image after two passes (c). It is apparent from the figure that with 2 iterations the line effect is clearer, the details are richer and the texture contours are more distinct.
In addition, before each time of processing by using the gaussian difference operator method, the input image may be subjected to a gaussian filtering operation, so as to obtain a smoother and softer line effect, as shown in fig. 2 (d).
Further, after each pass of the difference-of-Gaussians operator method, the obtained image may be subjected to region smoothing. The goal of region smoothing is to remove unnecessary detail inside regions while preserving important parts; it is therefore a feature-preserving picture smoothing method, usually implemented with bilateral filtering. Conventional bilateral filtering, however, has limitations: its circular kernel ignores the direction of color contrast when smoothing out minor color differences, thereby removing some small but meaningful shape boundaries and making edges appear rough. Bilateral filtering based on the edge tangential flow overcomes these problems. The embodiment of the invention uses two separate linear bilateral smoothing operations, one along the edge direction and the other along the gradient direction.
Specifically, the bilateral filtering process based on the edge tangential flow is described by the following formulas. The linear bilateral filtering along the edge direction is defined as:

C_e(z) = (1/V_e) ∫ G_{σ_e}(s) · h(z, C_z(s), σ) · I(C_z(s)) ds

wherein C_z denotes the flow curve of the edge tangential flow and V_e represents the weight normalization parameter, V_e = ∫ G_{σ_e}(s) · h(z, C_z(s), σ) ds. The spatial weight G_{σ_e} has the same expression as the spatial weight function in ordinary bilateral filtering and is not described again here; along the direction of the flow curve C_z, σ_e determines the kernel size at pixel point z. h represents the similarity weight function, which, while following the direction of the flow curve C_z, compares the color difference between the on-axis point and the pixel at the center of the contour-line width: h(z, C_z(s), σ) = G_σ(‖I(z) − I(C_z(s))‖).
Similarly, the bilateral filtering operation along the gradient direction is defined as:

C_g(z) = (1/V_g) ∫ G_{σ_g}(t) · h(z, l_z(t), σ) · I(l_z(t)) dt

wherein l_z(t) denotes the gradient axis at pixel position z, V_g represents the weight normalization parameter, V_g = ∫ G_{σ_g}(t) · h(z, l_z(t), σ) dt, and G_{σ_g} represents a univariate Gaussian function.
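The classical circular-kernel bilateral filter that the ETF-guided variant refines can be sketched as follows (illustrative Python, not part of the patent disclosure; the ETF-guided version would run this as two one-dimensional passes along the edge and gradient directions):

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.2, radius=3):
    """Brute-force bilateral filter on a single-channel image in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))  # spatial weight
            wr = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))       # similarity weight
            wgt = ws * wr
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

The similarity weight keeps pixels on opposite sides of a strong edge from mixing, which is why smoothing preserves the contour.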
Step S103: and carrying out fusion processing on the contour image, the background gray image and the first two-dimensional code image to obtain a second two-dimensional code image.
Specifically, the steps include the following processes:
(1) And carrying out visual saliency area extraction on the background gray level image to obtain a visual saliency image.
The visually significant image is shown in fig. 2 (j).
Specifically, the process of extracting the visually significant region is as follows:
(1) the background grayscale image is divided into a plurality of sub-images.
If the side length of the background grayscale image is n pixels and the side length of each sub-image is m pixels, the background grayscale image is divided into (n/m)² sub-images, each of size m × m pixels.
(2) And carrying out binarization processing on each sub-image to obtain a visual saliency image.
Specifically, the calculation formula of the binarization processing is as follows:
mod G_r = 1 if Σ_{(x,y)} G_w(x, y) · subG_r(x, y) exceeds the binarization threshold, and 0 otherwise

wherein mod G_r represents the binarization result of the r-th sub-image of the background grayscale image and determines whether that sub-image is filled black or white. Each sub-image is denoted subG_r, and the subscript r ranges from 1 to (n/m)². G_w(x, y) represents the pixel weight at point (x, y) of each sub-image, wherein G_w is a Gaussian function centered on the sub-image.
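The sub-image division and Gaussian-weighted binarization can be sketched as follows (illustrative Python, not part of the patent disclosure; the 0.5 threshold and the Gaussian width m/3 are assumptions, with gray values in [0, 1]):

```python
import numpy as np

def saliency_binarize(gray, m=4):
    """Split an n-by-n gray image into (n/m)^2 blocks of m-by-m pixels and
    fill each block black (0) or white (1) by thresholding its
    Gaussian-weighted mean (threshold 0.5 assumed)."""
    n = gray.shape[0]
    r = m // 2
    yy, xx = np.mgrid[0:m, 0:m]
    gw = np.exp(-((yy - r) ** 2 + (xx - r) ** 2) / (2 * (m / 3) ** 2))  # G_w, centered
    gw /= gw.sum()                                                      # normalize weights
    out = np.zeros_like(gray)
    for by in range(0, n, m):
        for bx in range(0, n, m):
            block = gray[by:by + m, bx:bx + m]
            out[by:by + m, bx:bx + m] = 1.0 if (gw * block).sum() > 0.5 else 0.0
    return out
```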
(2) And carrying out binarization processing on the background gray level image to obtain a background binarization image.
The background binary image is shown in fig. 2 (k).
(3) And superposing the outline image, the visual saliency image and the first two-dimensional code image to obtain a fifth two-dimensional code image.
Wherein, the first two-dimensional code image is an original standard two-dimensional code image composed of black and white squares, as shown in fig. 2 (l); the fifth two-dimensional code image is shown in fig. 2 (m).
(4) And obtaining a sixth two-dimensional code image according to the binarization color value of the fifth two-dimensional code image and the binarization color value of the first two-dimensional code image.
Specifically, the steps include the following processes:
(1) and determining the binarization color value of each module of the fifth two-dimensional code image.
The value of each block of the fifth two-dimensional code image is calculated according to the following formula:
(formula shown as an image in the original publication)

wherein the set M represents the modules of the two-dimensional code function patterns, the format information, the version information and the input data codewords; Q_r^s and Q_r^i denote the binarized color values of the r-th module of the first two-dimensional code image Q_s and the fifth two-dimensional code image Q_i, respectively. From the value calculated by the formula it can be determined whether a module is black or white, where 1 represents black and 0 represents white.
(2) According to the binarized color value of each module of the fifth two-dimensional code image and the binarized color value of each module of the first two-dimensional code image, the information of the bit stream is reset for the padding-code and error-correction-code parts of the modules of the fifth two-dimensional code image, obtaining a sixth two-dimensional code image.

Specifically, it can be seen from the above formula that Q_r^s and Q_r^i may differ, lacking parts of the padding code and error-correction code. It is therefore necessary to reset the information of the bit stream B so that B remains consistent with the beautified module values. Since a change to a module corresponds to a change to one bit in the bit stream B, the values in B are modified for all bits of the padding and error-correction codes according to the following rules, wherein L(r) denotes the index of the bit in bit stream B corresponding to the r-th module:

(1) if Q_r^s and Q_r^i are the same, B_{L(r)} is kept unchanged;

(2) if Q_r^s and Q_r^i are different, B_{L(r)} is flipped: B_{L(r)} ← 1 − B_{L(r)}.

Through this step, the values in the bit stream B are reset in preparation for the following steps.
It should be understood that the sixth two-dimensional code image after the fusion processing through the above steps is a beautified binarized two-dimensional code without RS (Reed-Solomon) coding added.
(5) And coding the sixth two-dimensional code image according to the Reed-Solomon coding rule to obtain a second two-dimensional code image.
Specifically, the steps include the following processes:
and selecting k RS code words from the c information codes of the sixth two-dimensional code image, calculating the values of the remaining (c-k) code words according to an RS coding rule, and determining the word number of the complementary codes of the sixth two-dimensional code image to obtain the second two-dimensional code.
c denotes the total number of data codewords and error-correction codewords, and k denotes the number of data codewords. S_c represents the selected set of k RS codewords. S_c can be obtained by minimizing the visual distortion between the second two-dimensional code image Q_b and the fifth two-dimensional code image Q_i:

S_c = argmin Σ_r η(r) · D(Q_r^b, Q_r^i)

wherein D is a function measuring the consistency of Q_r^b and Q_r^i. The second two-dimensional code image Q_b represents the beautified binarized two-dimensional code to be generated, which can be successfully decoded, as shown in fig. 2 (n).
η(r) represents the visual importance value of the r-th module and is used to assemble the set S_c; it is defined by introducing edge features and saliency features as follows:

η(r) = Σ_{x,y} λ₁ · subE_r(x, y) + λ₂ · subS_r(x, y)

wherein λ₁ and λ₂ represent weighting coefficients whose values can be set empirically; in the embodiment of the invention, λ₁ = 0.65 and λ₂ = 0.35. E_r and S_r denote the edge image and the saliency image of the picture, respectively, with subE_r and subS_r their restrictions to the r-th module. The edge image is obtained with the OpenCV toolkit, using the Sobel operator or the Laplacian operator of the OpenCV package. The saliency image is obtained with reference to the article "Efficient Salient Region Detection with Soft Image Abstraction" by Cheng, M.-M. et al. (Cheng M, Warrell J, Lin W Y, et al. Efficient Salient Region Detection with Soft Image Abstraction [C] // IEEE International Conference on Computer Vision. IEEE Computer Society, 2013: 1529-1536).
The number of padding codewords affects the aesthetics of the resulting two-dimensional code: the more padding codewords there are, the larger the padding region, and the larger the portion of the picture that the beautification operation may change. The number k-d of padding codewords therefore has to be determined in order to fix the region that the subsequent steps can beautify. Since the d input data codewords must be included in the set S_c, in practice only (k-d) codewords need to be selected from the remaining (n-d) codewords. Exhaustively enumerating such selections is computationally prohibitive; for example, when d = 20, for a two-dimensional code of error correction level (9, L), the search space contains C(n-d, k-d) combinations.
Here d is determined by the amount of data written into the two-dimensional code. Because S_c is a set of k codewords, there are k-d codeword positions besides the d data codewords; n-d refers to the codewords at the other positions of the picture, from which k-d codewords are selected and placed into the set S_c as padding codewords. To avoid this excessive time complexity, the embodiment of the invention can apply a greedy algorithm from the prior art, approximating the global optimum through locally optimal choices, to select the k visually most salient codewords. Each codeword contains 8 bits, and each bit corresponds to a black or white module. The visual importance value of a codeword is computed by accumulating the visual importance values of all of its bits. The codewords are then sorted by visual importance value, and the first k codewords with the highest visual importance values are selected into S_c.
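The greedy selection just described can be sketched as follows. This is a sketch under the assumption that codeword positions 0..d-1 hold the input data and must stay in S_c; the function name is illustrative:

```python
def select_codewords(importance, d, k):
    # importance[i]: accumulated visual importance of codeword i;
    # the first d codewords are data codewords and must be kept.
    # Greedily add the (k-d) most visually important remaining codewords.
    rest = sorted(range(d, len(importance)),
                  key=lambda i: importance[i], reverse=True)
    return list(range(d)) + rest[:k - d]
```

Sorting the (n-d) candidate codewords once replaces the C(n-d, k-d) exhaustive search with an O(n log n) pass, at the cost of returning a locally rather than globally optimal set.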
Step S104: and carrying out color quantization processing on the second two-dimensional code image by adopting an LAB uniform color space to obtain a third two-dimensional code image.
Specifically, the steps include the following processes:
(1) Convert the second two-dimensional code image into the LAB color space.
(2) Perform region smoothing processing on the second two-dimensional code image converted into the LAB color space.
The region smoothing method is the one described above and is not repeated here; the resulting image is shown in fig. 2(o).
(3) Perform color quantization processing on the part of the region-smoothed second two-dimensional code image corresponding to the padding codewords, obtaining a third two-dimensional code image.
In order to give the smoothed image a cartoon effect, the image after region smoothing is further processed with a color quantization method. Color quantization merges the many visible colors in an image into a smaller number of colors and regenerates the original image using the merged colors. The merging is governed by the threshold criteria in the quantization formula: if the thresholds are chosen reasonably, the quantization error is minimized, i.e., the contrast between the quantized image and the original image is small. Color quantization is thus a method for reducing the number of colors in an image while keeping its visual appearance largely unchanged. Processing the image in this way gives it a painted look, as shown in fig. 2(p). Fig. 6 shows the difference after color quantization of a picture, where (a) is the original image and (b) is the color-quantized image.
The specific color quantization algorithm may use the method proposed in the paper Real-time video abstraction (Olsen S C, Gooch B. Real-time video abstraction [C]// ACM SIGGRAPH. ACM, 2006). The calculation formula is as follows:
Q(x) = q_nt + (Δq / 2) · tanh(φ_q · (f(x) − q_nt))
where Q denotes the quantized image, x the position of the target pixel (i.e., the pixel to be quantized), f(x) the value of the target pixel, Δq the interval width between adjacent quantization levels, q_nt the quantization level nearest to the target pixel, and φ_q a parameter that controls the sharpness of the transition.
For color-quantized images, the sharpness of the transitions would otherwise be independent of the underlying image and could produce many visible transitions in smoothly shaded areas. To minimize such discordant transitions, the sharpness-control parameter φ_q is defined as a function of the luminance gradient in the cartoonized image. The embodiment of the invention only allows hard interval boundaries to appear where the luminance gradient is high: when the luminance gradient value is above 1, a hard interval boundary appears; when it is below 1, the interval boundary is spread over a larger area. Thus, the embodiment of the invention trades off color flattening against increased quantization sharpness by defining a target sharpness range [φ_min, φ_max] and a gradient range [Λ_δ, Ω_δ]. The computed gradient is clamped to the gradient range [Λ_δ, Ω_δ] and then mapped linearly into [φ_min, φ_max]; the value of φ_q obtained from this linear mapping follows from the gradient range, which ensures good control of the image sharpness. In the present example, φ_q ∈ [8, 10], Λ_δ = 0 and Ω_δ = 2.
A second advantage is consistency in the color quantization of pictures. With standard (hard) quantization, a small luminance change can cause a large change in the output, especially when the input contains noise, which may enlarge the visible extent of the noise and degrade picture quality. With the gradient-based color quantization used here, color changes in the output are softened in relatively low-contrast areas, so the visual effect produced by noise is not objectionable.
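The gradient-controlled soft quantization above can be sketched in numpy. The helper names and the uniformly spaced quantization levels are illustrative assumptions; the parameter values are the example values from this embodiment (φ_q ∈ [8, 10], Λ_δ = 0, Ω_δ = 2):

```python
import numpy as np

def phi_q_from_gradient(grad, lam=0.0, omega=2.0, phi_min=8.0, phi_max=10.0):
    # Clamp the luminance gradient to [lam, omega] and map it linearly
    # into the target sharpness range [phi_min, phi_max].
    g = np.clip(grad, lam, omega)
    return phi_min + (g - lam) / (omega - lam) * (phi_max - phi_min)

def soft_quantize(lum, levels, phi_q):
    # lum: luminance channel with values in [0, 1]; levels: sorted,
    # uniformly spaced quantization levels; phi_q: scalar or per-pixel.
    # Q = q_nt + (dq / 2) * tanh(phi_q * (f - q_nt))
    levels = np.asarray(levels, dtype=float)
    dq = levels[1] - levels[0]                      # interval width
    nearest = levels[np.abs(lum[..., None] - levels).argmin(axis=-1)]
    return nearest + (dq / 2.0) * np.tanh(phi_q * (lum - nearest))
```

With a low per-pixel φ_q (flat regions) the tanh ramp is gentle and the output stays close to the nearest level with soft boundaries; with a high φ_q (strong gradients) the transition approaches a hard step.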
Step S105: and restoring the color of the third two-dimensional code image to obtain a fourth two-dimensional code image.
Specifically, the process is as follows:
(1) Perform color space reconstruction processing on the third two-dimensional code image.
The third two-dimensional code image is converted from black and white back to the colors defined in the LAB color space. The reconstructed image is shown in fig. 2(q).
(2) Convert the color of the reconstructed third two-dimensional code image from the LAB color space to the RGB color space, obtaining a fourth two-dimensional code image.
The fourth two-dimensional code image is shown in fig. 2 (r).
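The LAB-to-RGB conversion in this final step can be sketched in pure numpy under the D65 white point. In practice an opencv color conversion would be used; `lab_to_rgb` and its exact constants are an illustrative stand-in, not the patent's code:

```python
import numpy as np

# D65 white point and the XYZ -> linear-sRGB matrix
_WHITE = np.array([0.95047, 1.0, 1.08883])
_M = np.array([[ 3.2406, -1.5372, -0.4986],
               [-0.9689,  1.8758,  0.0415],
               [ 0.0557, -0.2040,  1.0570]])

def lab_to_rgb(lab):
    # lab: (..., 3) array with L in [0, 100], a/b roughly in [-128, 127];
    # returns gamma-encoded sRGB values in [0, 1].
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    f = np.stack([fx, fy, fz], axis=-1)
    d = 6.0 / 29.0
    t = np.where(f > d, f ** 3, 3.0 * d * d * (f - 4.0 / 29.0))
    xyz = t * _WHITE
    lin = np.clip(xyz @ _M.T, 0.0, 1.0)
    # linear light -> gamma-encoded sRGB
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * lin ** (1.0 / 2.4) - 0.055)
```

Because the quantization and smoothing were done on the perceptually uniform LAB channels, this conversion is the only place where the pipeline touches the display color space.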
In summary, the two-dimensional code generation method provided by the embodiment of the invention combines the two-dimensional code generation process with the cartoon rendering of the background image. It identifies the visually important key areas of the picture through feature recognition and saliency-region extraction, and adjusts the generation of the black and white modules in those areas through a threshold-setting method based on the two-dimensional code module generation process, thereby improving the attractiveness of the final two-dimensional code.
The embodiment of the invention also discloses a system for generating the two-dimensional code. As shown in fig. 7, the system includes the following modules:
A transformation module 801, configured to transform the background image into a background grayscale image.
Preferably, the background image is a cartoon image.
A Gaussian difference module 802, configured to process the background grayscale image by a Gaussian difference operator method based on edge tangential flow to obtain a contour image.
A fusion module 803, configured to perform fusion processing on the contour image, the background grayscale image, and the first two-dimensional code image to obtain a second two-dimensional code image.
A color quantization module 804, configured to perform color quantization processing on the second two-dimensional code image using the LAB uniform color space to obtain a third two-dimensional code image.
A color restoration module 805, configured to restore the color of the third two-dimensional code image to obtain a fourth two-dimensional code image.
Preferably, the transformation module 801 comprises:
A first conversion submodule, configured to convert the color of the background image from the RGB color space to the LAB color space.
A grayscale extraction submodule, configured to extract a grayscale layer from the background image converted into the LAB color space, obtaining the background grayscale image.
Preferably, the system further comprises:
A Gaussian blur module, configured to perform Gaussian blur processing on the background grayscale image before the step of obtaining the contour image.
Preferably, the gaussian difference module 802 includes:
A construction submodule, configured to construct the edge tangential flow on the background grayscale image to obtain an edge tangential flow image.
A first edge processing submodule, configured to process the edge tangential flow image along the edge tangential flow by the Gaussian difference operator method to obtain a first contour image.
A first binarization submodule, configured to binarize the first contour image to obtain a second contour image.
A first superposition submodule, configured to superpose the second contour image and the background image to obtain a third contour image.
A second edge processing submodule, configured to process the third contour image along the edge tangential flow by the Gaussian difference operator method to obtain a fourth contour image.
A second binarization submodule, configured to binarize the fourth contour image to obtain the contour image.
Preferably, the system further comprises:
A first region smoothing module, configured to perform region smoothing processing on the contour image after the step of obtaining the contour image.
Preferably, the fusion module 803 includes:
An extraction submodule, configured to perform visual saliency region extraction on the background grayscale image to obtain a visual saliency image.
A third binarization submodule, configured to binarize the background grayscale image to obtain a background binarized image.
A second superposition submodule, configured to superpose the contour image, the visual saliency image, and the first two-dimensional code image to obtain a fifth two-dimensional code image.
A fusion submodule, configured to obtain a sixth two-dimensional code image from the binarized color values of the fifth two-dimensional code image and of the first two-dimensional code image.
An encoding submodule, configured to encode the sixth two-dimensional code image according to the Reed-Solomon coding rule to obtain a second two-dimensional code image.
Preferably, the color quantization module 804 includes:
A second conversion submodule, configured to convert the second two-dimensional code image into the LAB color space.
A second region smoothing module, configured to perform region smoothing processing on the second two-dimensional code image converted into the LAB color space.
A color quantization submodule, configured to perform color quantization processing on the part of the region-smoothed second two-dimensional code image corresponding to the padding codewords, obtaining a third two-dimensional code image.
Preferably, the color restoration module 805 includes:
A reconstruction submodule, configured to perform color space reconstruction processing on the third two-dimensional code image.
A third conversion submodule, configured to convert the color of the reconstructed third two-dimensional code image from the LAB color space to the RGB color space, obtaining a fourth two-dimensional code image.
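The data flow through modules 801-805 described above can be sketched as a simple pipeline. The callables below are placeholders standing in for the modules; the function and key names are illustrative assumptions:

```python
def generate_beautified_qr(background, first_qr, modules):
    # modules: dict of callables standing in for modules 801-805
    gray = modules["transform"](background)            # module 801
    contour = modules["gaussian_diff"](gray)           # module 802
    qr2 = modules["fuse"](contour, gray, first_qr)     # module 803
    qr3 = modules["quantize"](qr2)                     # module 804
    return modules["restore"](qr3)                     # module 805
```

Each stage consumes only the outputs named in the system description, so the modules can be developed and tested independently and swapped for alternative implementations.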
In summary, the two-dimensional code generation system of the embodiment of the invention integrates the two-dimensional code generation process with the cartoon rendering of the background image. It identifies the visually important key areas of the picture through feature recognition and saliency-region extraction, and adjusts the generation of the black and white modules in those areas through a threshold-setting method based on the two-dimensional code module generation process, thereby improving the attractiveness of the final two-dimensional code.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A method for generating a two-dimensional code is characterized by comprising the following steps:
converting the background image into a background gray image;
processing the background gray level image by adopting a Gaussian difference operator method based on edge tangential flow to obtain a contour image;
fusing the contour image, the background gray image and the first two-dimensional code image to obtain a second two-dimensional code image;
performing color quantization processing on the second two-dimensional code image by adopting an LAB uniform color space to obtain a third two-dimensional code image;
restoring the color of the third two-dimensional code image to obtain a fourth two-dimensional code image;
the step of obtaining the second two-dimensional code image includes:
performing visual saliency region extraction on the background gray level image to obtain a visual saliency image;
carrying out binarization processing on the background gray level image to obtain a background binarization image;
superposing the outline image, the visual saliency image and the first two-dimensional code image to obtain a fifth two-dimensional code image;
obtaining a sixth two-dimensional code image according to the binarization color value of the fifth two-dimensional code image and the binarization color value of the first two-dimensional code image;
encoding the sixth two-dimensional code image according to a Reed-Solomon encoding rule to obtain a second two-dimensional code image;
wherein, the step of obtaining the sixth two-dimensional code image includes:
determining a binarization color value of each module of the fifth two-dimensional code image;
and according to the binarization color value of each module of the fifth two-dimensional code image and the binarization color value of each module of the first two-dimensional code image, carrying out information resetting of a bit stream on parts of the completion code and the error correcting code in each module of the fifth two-dimensional code image to obtain a sixth two-dimensional code image.
2. The method of claim 1, wherein the step of transforming the background image into a background grayscale image comprises:
converting the color of the background image from an RGB color space to an LAB color space;
and taking a gray level layer from the background image converted into the LAB color space to obtain the background gray level image.
3. The method of claim 1, wherein the step of obtaining a contour image is preceded by the method further comprising:
and performing Gaussian blur processing on the background gray level image.
4. The method of claim 1, wherein the step of obtaining a contour image comprises:
constructing the edge tangential flow on the background gray level image to obtain an edge tangential flow image;
processing the edge tangential flow image along the edge tangential flow by adopting the Gaussian difference operator method to obtain a first contour image;
carrying out binarization processing on the first contour image to obtain a second contour image;
superposing the second contour image and the background image to obtain a third contour image;
processing the third contour image along the edge tangential flow by adopting the Gaussian difference operator method to obtain a fourth contour image;
and carrying out binarization processing on the fourth contour image to obtain the contour image.
5. The method of claim 1, wherein after the step of obtaining a contour image, the method further comprises:
and performing region smoothing processing on the contour image.
6. The method of claim 1, wherein the step of obtaining the third two-dimensional code image comprises:
converting the second two-dimensional code image into an LAB color space;
performing region smoothing processing on the second two-dimensional code image converted into the LAB color space;
and carrying out color quantization processing on the part corresponding to the filling code of the second two-dimensional code image after the area smoothing processing to obtain a third two-dimensional code image.
7. The method according to claim 1, wherein the step of obtaining the fourth two-dimensional code image comprises:
performing color space reconstruction processing on the third two-dimensional code image;
and converting the color of the reconstructed third two-dimensional code image from an LAB color space to an RGB color space to obtain a fourth two-dimensional code image.
8. The method of claim 1, wherein: the background image is a cartoon image.
9. A system for generating a two-dimensional code image, comprising:
the transformation module is used for transforming the background image into a background gray image;
the Gaussian difference module is used for processing the background gray level image by adopting a Gaussian difference operator method based on edge tangential flow to obtain a contour image;
the fusion module is used for carrying out fusion processing on the contour image, the background gray image and the first two-dimensional code image to obtain a second two-dimensional code image;
the color quantization module is used for performing color quantization processing on the second two-dimensional code image by adopting an LAB uniform color space to obtain a third two-dimensional code image;
the color restoration module is used for restoring the color of the third two-dimensional code image to obtain a fourth two-dimensional code image;
the fusion module includes:
the extraction submodule is used for carrying out visual saliency region extraction on the background gray level image to obtain a visual saliency image;
the third binarization submodule is used for carrying out binarization processing on the background gray level image to obtain a background binarization image;
the second superposition submodule is used for superposing the outline image, the visual saliency image and the first two-dimensional code image to obtain a fifth two-dimensional code image;
the fusion submodule is used for obtaining a sixth two-dimensional code image according to the binarization color value of the fifth two-dimensional code image and the binarization color value of the first two-dimensional code image;
the coding submodule is used for coding the sixth two-dimensional code image according to a Reed-Solomon coding rule to obtain the second two-dimensional code image;
wherein, the obtaining of the sixth two-dimensional code image includes:
determining a binarization color value of each module of the fifth two-dimensional code image;
and according to the binarization color value of each module of the fifth two-dimensional code image and the binarization color value of each module of the first two-dimensional code image, carrying out information resetting of a bit stream on parts of the completion code and the error correcting code in each module of the fifth two-dimensional code image to obtain a sixth two-dimensional code image.
CN201810845760.0A 2018-07-27 2018-07-27 Two-dimensional code generation method and system Active CN110766117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810845760.0A CN110766117B (en) 2018-07-27 2018-07-27 Two-dimensional code generation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810845760.0A CN110766117B (en) 2018-07-27 2018-07-27 Two-dimensional code generation method and system

Publications (2)

Publication Number Publication Date
CN110766117A CN110766117A (en) 2020-02-07
CN110766117B true CN110766117B (en) 2022-12-13

Family

ID=69327936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810845760.0A Active CN110766117B (en) 2018-07-27 2018-07-27 Two-dimensional code generation method and system

Country Status (1)

Country Link
CN (1) CN110766117B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284035A (en) * 2021-06-01 2021-08-20 江苏鑫合易家信息技术有限责任公司 System and method for generating dynamic picture with two-dimensional code watermark
CN117131896B (en) * 2023-08-29 2024-03-08 宁波邻家网络科技有限公司 AI two-dimension code generation method and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914863A (en) * 2014-03-10 2014-07-09 西藏民族学院 Method for abstractly drawing color image
CN105095939A (en) * 2015-09-07 2015-11-25 郑州普天信息技术有限公司 Two-dimensional code vision optimization method
CN106599965A (en) * 2016-11-25 2017-04-26 北京矩石科技有限公司 Method and device for making image cartoony and fusing image with 2D code
CN106778995A (en) * 2016-11-25 2017-05-31 北京矩石科技有限公司 A kind of art up two-dimensional code generation method and device with image co-registration
CN108154467A (en) * 2017-12-28 2018-06-12 昆明冶金高等专科学校 Method and system are intended in a kind of linear wall die sinking
CN108229234A (en) * 2017-12-07 2018-06-29 北京航空航天大学 A kind of fusion is digitally coded can scan image generation method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Restructuring the Tridiagonal and Bidiagonal QR Algorithms for Performance; Field G. Van Zee et al.; ACM Transactions on Mathematical Software, Volume 40; 2014-04-03; pp. 1-34 *
QR code beautification algorithm based on regions of interest and an RS coding mechanism; Xu Xiaoyu et al.; Journal of Computer Applications; 2018-05-11 (No. 08); full text *
Cartoon stylization method for online real-time applications; Hong Chaoqun et al.; Journal of Xiamen University of Technology; 2015-02-28 (No. 01); full text *

Also Published As

Publication number Publication date
CN110766117A (en) 2020-02-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant