CN110909843B - Method, device, server and storage medium for modeling coded image - Google Patents

Method, device, server and storage medium for modeling coded image

Info

Publication number
CN110909843B
CN110909843B (application number CN201911150062.XA)
Authority
CN
China
Prior art keywords
code point
pixel
image
pixels
coordinates
Prior art date
Legal status
Active
Application number
CN201911150062.XA
Other languages
Chinese (zh)
Other versions
CN110909843A (en)
Inventor
程烨
唐巧提
Current Assignee
Quantum Cloud Code Fujian Technology Co ltd
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201911150062.XA
Publication of CN110909843A
Priority to PCT/CN2020/081547 (published as WO2021098111A1)
Application granted
Publication of CN110909843B

Classifications

    • G06K 19/06037 Record carriers for use with machines and with at least a part designed to carry digital markings, with optically detectable marking; multi-dimensional coding
    • G06K 19/06093 Record carriers with optically detectable marking; constructional details; the marking being constructed out of a plurality of similar markings, e.g. a plurality of barcodes randomly oriented on an object
    • G06K 19/0614 Record carriers with optically detectable marking; constructional details; the marking being selective to wavelength, e.g. color barcode or barcodes only visible under UV or IR
    • G06Q 30/0185 Certifying business or products; product, service or business identity fraud

Landscapes

  • Theoretical Computer Science (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention disclose a method, an apparatus, a server and a storage medium for modeling a coded image. The method comprises the following steps: acquiring a grayscale image with a preset resolution, the first pixels of the grayscale image being located at a plurality of pixel coordinates; generating a first code point image according to the preset resolution, the first code point image comprising a plurality of mutually spaced first code point pixels located at a plurality of first code point coordinates; matching the pixel coordinates with the first code point coordinates so as to set the gray value of each first code point pixel to the gray value of the first pixel at the corresponding coordinate; and generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray values of the first pixels at the corresponding positions. By amplitude-modulating the size of the code points as the pixel values of the grayscale image change, the method achieves the technical effect of a more lifelike rendering of gray-level variation.

Description

Method, device, server and storage medium for modeling coded image
Technical Field
Embodiments of the invention relate to the field of digital anti-counterfeiting, and in particular to a method, an apparatus, a server and a storage medium for modeling a coded image.
Background
The principle of digital anti-counterfeiting is to assign each networked product a unique code and store it in a central anti-counterfeiting database. When a consumer purchases a product made by an enterprise that has joined the digital anti-counterfeiting system, a digital anti-counterfeiting label can be seen on the product packaging. A group of codes consisting of multiple digits (16 bits or more) becomes visible only after the surface layer of the label is peeled off or its coating is scraped away; each code is unique and can be used only once.
Most existing digital anti-counterfeiting labels use coded identification images such as barcodes and two-dimensional codes. However, in such images the code points all have the same size, so only a single-grayscale appearance can be achieved.
Disclosure of Invention
The invention provides a method, an apparatus, a server and a storage medium for modeling a coded image, in which the size of the code points is amplitude-modulated as the pixel values of a grayscale image change, achieving the technical effect of a more lifelike rendering of gray-level variation.
In a first aspect, an embodiment of the present invention provides a method for modeling a coded image, including the following steps:
acquiring a gray level image with a preset resolution, wherein a first pixel of the gray level image is positioned at a plurality of pixel coordinates;
generating a first code point image according to a preset resolution, wherein the first code point image comprises a plurality of first code point pixels which are mutually spaced, and the first code point pixels are positioned at a plurality of first code point coordinates;
matching the pixel coordinates and the first code point coordinates to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
and generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray value of the first pixels at the corresponding positions.
Furthermore, the target code point image comprises a plurality of reference code point blocks, each reference code point block at least has one second code point pixel corresponding to the first code point pixel, and the number of the second code point pixels of each reference code point block is in negative correlation with the gray value of the first pixel at the position corresponding to the first code point coordinate.
Further, matching the pixel coordinates and the first code point coordinates to set the gray scale value of the first code point pixel as the gray scale value of the first pixel at the corresponding coordinate position comprises:
traversing the first code point image along the two-dimensional XY direction;
judging the gray value of a first code point pixel of the first code point image;
and when the gray value of the first code point pixel of the first code point image is 0, acquiring the first code point coordinate of the first code point pixel.
Further, generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray scale value of the first pixels at the corresponding positions includes:
confirming the number of second code point pixels included in each reference code point block according to the gray value of the first pixel of the gray image, wherein the confirming formula of the number of second code point pixels included in each reference code point block is as follows:
d=[(255–a)/c]
wherein d is the number of second code point pixels included in each reference code point block of the target code point image, a is the gray value of the first pixel of the gray scale image at the pixel coordinate corresponding to the first code point coordinate, and c is the coefficient.
Further, generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray scale value of the first pixels at the corresponding positions further includes:
generating a second code point image with a gray value set to 255 according to a preset resolution;
and performing pixel filling on the second code point pixel according to the first code point coordinate of the first code point pixel and the number of the second code point pixels included in the corresponding reference code point block to obtain a target code point image.
Further, the pixel filling of the second code point pixel according to the first code point coordinate of the first code point pixel and the number of the second code point pixels included in the corresponding reference code point block to obtain the target code point image includes:
confirming the coordinates of first code point pixels corresponding to the current reference code point block to be filled and the number of second code point pixels included by the current reference code point block;
and performing pixel filling on the second code point pixel in a preset line or direction based on the coordinate of the first code point pixel corresponding to the current reference code point block and the number of the second code point pixels.
Further, based on the coordinates of the first code point pixels and the number of the second code point pixels corresponding to the current reference code point chunk, performing pixel filling on the second code point pixels in a preset line or direction includes:
judging whether the number of second code point pixels included in each reference code point block of the target code point image is 0 or not;
when the number of second code point pixels included in a reference code point chunk of the target code point image is not 0, traversing a preset code point-pixel corresponding relation table along the two-dimensional XY direction;
when the number of pixels in the preset code point-pixel corresponding relation table is less than or equal to the number of second code point pixels included in the reference code point block of the target code point image, the gray value at the coordinate position corresponding to the pixels in the preset code point-pixel corresponding relation table is set to be 0, and the coordinate corresponding to the pixels in the preset code point-pixel corresponding relation table is an offset coordinate along the two-dimensional XY axis direction relative to the first code point coordinate of the target first code point pixel.
In a second aspect, an embodiment of the present invention further provides an apparatus for modeling a coded image, including:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a gray level image with a preset resolution, and a first pixel of the gray level image is positioned at a plurality of pixel coordinates;
the first code point generating module is used for generating a first code point image according to a preset resolution, the first code point image comprises a plurality of first code point pixels which are mutually spaced, and the first code point pixels are positioned at a plurality of first code point coordinates;
the matching module is used for matching the pixel coordinates and the first code point coordinates so as to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
and the second code point generating module is used for generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray value of the first pixels at the corresponding positions.
In a third aspect, an embodiment of the present invention further provides a server, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the method for modeling the encoded image in any of the above embodiments.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the method for modeling an encoded image in any of the above embodiments.
By amplitude-modulating the size of the code points as the pixel values of the grayscale image change, the invention solves the prior-art problems that the code points of coded identification images all have the same size and can only achieve a single-grayscale appearance, and achieves the technical effect that the coded image presents a more lifelike rendering of gray-level variation while enabling information tracing and product anti-counterfeiting, without damaging the integrity and aesthetics of the original packaging pattern design.
Drawings
FIG. 1 is a flowchart of a method for modeling an encoded image according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for modeling an encoded image according to a second embodiment of the present invention;
fig. 3 is a table of a preset code point-pixel correspondence relationship according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of a gray scale image according to a second embodiment of the present invention;
fig. 5 is a schematic diagram of a first code point image according to a second embodiment of the present invention;
fig. 6 is a partial data table in a preset code point-pixel correspondence table according to a second embodiment of the present invention;
fig. 7 is a schematic diagram of another code point image according to the second embodiment of the present invention;
fig. 8 is another partial data table in a preset code point-pixel correspondence table according to the second embodiment of the present invention;
fig. 9 is a schematic diagram of a target code point image according to a second embodiment of the present invention;
fig. 10 is a schematic structural diagram of an encoded image modeling apparatus according to a third embodiment of the present invention;
fig. 11 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but the orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, a first acquisition module may be referred to as a second acquisition module, and similarly, a second acquisition module may be referred to as a first acquisition module, without departing from the scope of the present application. The first acquisition module and the second acquisition module are both acquisition modules, but they are not the same acquisition module. The terms "first", "second", etc. are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more features. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
Fig. 1 is a flowchart of a method for modeling an encoded image according to an embodiment of the present invention. The present embodiment is applicable to the case of converting a grayscale image into an encoded image, and the method may be performed by a processor. As shown in fig. 1, the method for modeling a coded image of the present embodiment specifically includes the following steps:
step S110, obtaining a gray image with a preset resolution, wherein a first pixel of the gray image is located at a plurality of pixel coordinates;
specifically, the resolution refers to the number of pixels included in a unit inch, and in this embodiment, the preset resolution refers to the preset size of the grayscale image, that is, the width and height of the grayscale image.
Specifically, a grayscale image generally refers to an image in which each pixel has only one sample color. In the computer field, grayscale images are typically displayed as shades of gray ranging from the darkest black to the brightest white, although in theory the samples could be different shades of any color, or even different colors at different brightnesses. A grayscale image differs from a black-and-white image: in computer imaging, a black-and-white image has only the two colors black and white, whereas a grayscale image also has many levels of color depth between black and white. Outside the field of digital images, however, "black-and-white image" often also means "grayscale image"; a grayscale photograph, for example, is commonly called a "black-and-white photograph". In some articles on digital images, monochrome images are equivalent to grayscale images, while in others they are equivalent to black-and-white images. In this embodiment, a grayscale image is an image in which each pixel has only one sample color, and that sample color may take any of many levels of color depth between black and white. The pixel values of the grayscale image in this embodiment may range from 0 to 255; preferably, each pixel of the grayscale image in this embodiment takes an 8-bit gray value.
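As a concrete illustration of step S110 (an editorial sketch, not part of the patent text), the grayscale image can be held as a two-dimensional array of 8-bit values at the preset resolution. The sketch below assumes Python with NumPy and Pillow; the file name and the 300x200 resolution are illustrative only.

```python
# Minimal sketch of step S110: obtain a grayscale image at a preset resolution.
# Assumes NumPy and Pillow; "label.png" and the 300x200 resolution are hypothetical.
import numpy as np
from PIL import Image

PRESET_WIDTH, PRESET_HEIGHT = 300, 200  # preset resolution (width, height)

gray = Image.open("label.png").convert("L").resize((PRESET_WIDTH, PRESET_HEIGHT))
gray_image = np.asarray(gray, dtype=np.uint8)

# Each first pixel sits at a pixel coordinate (x, y); gray_image[y, x] is its
# 8-bit gray value in the range 0 (black) to 255 (white).
```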
Step S120, generating a first code point image according to a preset resolution, wherein the first code point image comprises a plurality of first code point pixels which are mutually spaced, and the first code point pixels are positioned at a plurality of first code point coordinates;
specifically, the first code point image is generated according to the resolution of the grayscale image in step S110, i.e., the width and height of the first code point image are the same as those of the grayscale image. A code point image is an image in which a plurality of code points are arranged in a predetermined rule with a code point as a basic unit, and a coded image is an image in which specific data information is coded in a predetermined rule.
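The patent leaves the exact layout of the first code point pixels to the predetermined coding rule, so the sketch below (an assumption for illustration only) simply places mutually spaced dots on a regular grid at the same resolution as the grayscale image; a real first code point image would position the dots according to the encoded data.

```python
# Minimal sketch of step S120: a first code point image at the preset resolution,
# with first code point pixels (gray value 0) spaced on a white (255) background.
# The regular 8-pixel spacing is illustrative; the real layout encodes data.
import numpy as np

def make_first_code_point_image(height: int, width: int, spacing: int = 8) -> np.ndarray:
    img = np.full((height, width), 255, dtype=np.uint8)  # white background
    img[::spacing, ::spacing] = 0                         # mutually spaced code points
    return img

first_code_points = make_first_code_point_image(200, 300)  # same height/width as the grayscale image
```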
Step S130, matching the pixel coordinates and the first code point coordinates to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
specifically, the gray scale value refers to the color depth of the dots in the encoded image, and generally ranges from 0 to 255, with white being 255 and black being 0. In this embodiment, the gray scale values of the first code point image may be 0 and 255, when the gray scale value of the first pixel at the coordinate corresponding to the first code point pixel is 0 and 255, the gray scale value of the first code point pixel at the coordinate corresponding to the first code point image may be directly set to 0 and 255, and when the gray scale value of the first pixel at the coordinate corresponding to the first code point image is between 0 and 255 instead of 0 and 255, the gray scale value of the first code point pixel may be appropriately adjusted according to the gray scale value of the first pixel (at this time, the gray scale value still takes 0 and 255, but the number of code point pixels in the reference code point block constituted by the first code point pixel may be appropriately adjusted according to the difference of the gray scale value of the first pixel), thereby ensuring that the appearance gray scale effect of the generated target code point image is close to the gray scale image.
Specifically, each first code point pixel in the first code point image may be traversed along the two-dimensional XY axis directions, and it is determined whether the gray value of each first code point pixel is 0. When the gray value of a first code point pixel is 0, that pixel corresponds to a code point, and its first code point coordinate is the coordinate of the corresponding code point. According to the obtained code point coordinate, the gray value of the first pixel at the pixel coordinate matching the first code point coordinate is read from the grayscale image, thereby determining the gray value of the first pixel at the position corresponding to the first code point pixel.
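To make the traversal and matching of step S130 concrete, the sketch below (an assumed implementation, not the patent's literal code) walks the first code point image along the XY directions, keeps the first code point coordinates whose gray value is 0, and reads the grayscale image at each matching pixel coordinate.

```python
# Minimal sketch of step S130: traverse the first code point image, find the first
# code point pixels (gray value 0), and read the gray value of the first pixel at
# the matching pixel coordinate of the grayscale image.
import numpy as np

def match_code_points(first_code_points: np.ndarray, gray_image: np.ndarray):
    assert first_code_points.shape == gray_image.shape   # same preset resolution
    height, width = first_code_points.shape
    matches = []
    for y in range(height):                               # traverse along the XY directions
        for x in range(width):
            if first_code_points[y, x] == 0:              # this pixel is a code point
                matches.append(((x, y), int(gray_image[y, x])))
    return matches  # list of (first code point coordinate (x, y), gray value a)
```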
Step S140, generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray-scale values of the first pixels at the corresponding positions.
In this embodiment, the target code point image includes a plurality of reference code point blocks, each of the reference code point blocks has at least one second code point pixel corresponding to the first code point pixel, and the number of the second code point pixels of each of the reference code point blocks is inversely related to the gray value of the first pixel at the position corresponding to the first code point coordinate.
In this embodiment, the presence of at least one second code point pixel corresponding to the first code point pixel in each reference code point block means that each reference code point block contains one code point. This code point is common to the first code point image and the target code point image, and its coordinate matches the first code point coordinate of the corresponding first code point pixel in the first code point image. The number of second code point pixels included in each reference code point block of the target code point image can be determined from the gray value of the first pixel at the position corresponding to the first code point pixel obtained in step S130, and the second code point image is then filled according to a preset code point-pixel correspondence table to obtain the target code point image. Here, filling means that when the gray value of the first code point pixel alone does not reach the gray value of the code point expected at the same coordinate of the target code point image, the gray values at coordinates offset from the code point along the XY axis directions are set to 0 so that the gray value at that coordinate of the target code point image reaches the expected level. Subsequently, it is determined whether the number of second code point pixels included in each reference code point block of the target code point image is 0. When the number is not 0, a preset code point-pixel correspondence table is traversed along the two-dimensional XY directions; for every entry of the table whose pixel number is less than or equal to the number of second code point pixels included in the reference code point block, the gray value at the coordinate corresponding to that entry is set to 0, the coordinate being an offset along the two-dimensional XY axis directions relative to the first code point coordinate of the corresponding first code point pixel.
Preferably, the number of second code point pixels included in each reference code point block can be determined by the following formula:
d=[(255–a)/c]
where d is the number of second code point pixels included in each reference code point block of the target code point image, a is the gray value of the first pixel of the grayscale image at the pixel coordinate corresponding to the first code point coordinate, and c is a coefficient (the square brackets denote rounding down). The coefficient c can be adjusted according to the density of code points in the first code point image; its value may range from 1 to 20, and in this embodiment c takes the value 5.714.
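Read with the square brackets as rounding down (which is consistent with the worked examples in embodiment two), the formula can be transcribed directly; the helper name below is an invention of this sketch, not of the patent.

```python
# Minimal sketch of the amplitude-modulation rule d = [(255 - a) / c], read as floor().
# c is a coefficient adjustable with the code point density (range 1-20); 5.714 is the
# value used in this embodiment.
import math

def code_point_pixel_count(a: int, c: float = 5.714) -> int:
    """Number of second code point pixels in a reference code point block for a
    first pixel with gray value a (0 = black, 255 = white)."""
    return math.floor((255 - a) / c)

# Darker first pixels give larger reference code point blocks (negative correlation),
# matching the worked examples of embodiment two:
assert code_point_pixel_count(228) == 4
assert code_point_pixel_count(236) == 3
```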
The beneficial effect of the first embodiment of the invention is that, by providing this method for modeling a coded image, the size of the code points is adjusted with the pixel value changes of the grayscale image, so that the resulting coded image exhibits a more lifelike rendering of gray-level variation.
Example two
The second embodiment of the invention is further optimized on the basis of the first embodiment. Fig. 2 is a flowchart of a method for modeling an encoded image according to a second embodiment of the present invention. As shown in fig. 2, the method for modeling a coded image of the present embodiment specifically includes the following steps:
step S210, a gray image with a preset resolution is obtained, and a first pixel of the gray image is located at a plurality of pixel coordinates.
Specifically, the resolution refers to the number of pixels included in a unit inch, and in this embodiment, the preset resolution refers to the preset size of the grayscale image, that is, the width and height of the grayscale image.
A grayscale image generally refers to an image in which each pixel has only one sample color. In the computer field, grayscale images are typically displayed as shades of gray ranging from the darkest black to the brightest white, although in theory the samples could be different shades of any color, or even different colors at different brightnesses. A grayscale image differs from a black-and-white image: in computer imaging, a black-and-white image has only the two colors black and white, whereas a grayscale image also has many levels of color depth between black and white. Outside the field of digital images, however, "black-and-white image" often also means "grayscale image"; a grayscale photograph, for example, is commonly called a "black-and-white photograph". In some articles on digital images, monochrome images are equivalent to grayscale images, while in others they are equivalent to black-and-white images. In this embodiment, a grayscale image is an image in which each pixel has only one sample color, and that sample color may take any of many levels of color depth between black and white. The pixel values of the grayscale image in this embodiment may range from 0 to 255; preferably, each pixel of the grayscale image in this embodiment takes an 8-bit gray value.
Step S220, generating a first code point image according to a preset resolution, wherein the first code point image comprises a plurality of first code point pixels which are mutually spaced, and the first code point pixels are positioned at a plurality of first code point coordinates;
specifically, the first code point image is generated according to the resolution of the grayscale image in step S210, i.e., the width and height of the first code point image are the same as those of the grayscale image. A code point image is an image in which a plurality of code points are arranged in a predetermined rule with a code point as a basic unit, and a coded image is an image in which specific data information is coded in a predetermined rule.
Step S231, traversing the first code point image along the two-dimensional XY direction;
step S232, judging the gray value of a first code point pixel of the first code point image;
step S233, when the gray value of the first code point pixel of the first code point image is 0, acquiring the first code point coordinate of the first code point pixel;
step S240, matching the pixel coordinates and the first code point coordinates to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
the grey values refer to the color depth of points in the encoded image, typically ranging from 0 to 255, 255 for white and 0 for black. In this embodiment, the gray scale values of the first code point image may be 0 and 255, when the gray scale value of the first pixel at the coordinate corresponding to the first code point pixel is 0 and 255, the gray scale value of the first code point pixel at the coordinate corresponding to the first code point image may be directly set to 0 and 255, and when the gray scale value of the first pixel at the coordinate corresponding to the first code point image is between 0 and 255 instead of 0 and 255, the gray scale value of the first code point pixel may be appropriately adjusted according to the gray scale value of the first pixel (at this time, the gray scale value still takes 0 and 255, but the number of code point pixels in the reference code point block constituted by the first code point pixel may be appropriately adjusted according to the difference of the gray scale value of the first pixel), thereby ensuring that the appearance gray scale effect of the generated target code point image is close to the gray scale image.
Specifically, each first code point pixel in the first code point image may be traversed along the two-dimensional XY axis directions, and it is determined whether the gray value of each first code point pixel is 0. When the gray value of a first code point pixel is 0, that pixel corresponds to a code point, and its first code point coordinate is the coordinate of the corresponding code point. According to the obtained code point coordinate, the gray value of the first pixel at the pixel coordinate matching the first code point coordinate is read from the grayscale image, thereby determining the gray value of the first pixel at the position corresponding to the first code point pixel.
In this embodiment, the target code point image includes a plurality of reference code point blocks, each of the reference code point blocks has at least one second code point pixel corresponding to the first code point pixel, and the number of the second code point pixels of each of the reference code point blocks is inversely related to the gray value of the first pixel at the position corresponding to the first code point coordinate.
Step S251, determining the number of second code point pixels included in each reference code point block of the target code point image according to the gray value of the first pixel of the gray image.
In the present embodiment, the confirmation formula of the number of second code point pixels included in each reference code point block is as follows:
d=[(255–a)/c]
wherein d is the number of second code point pixels included in each reference code point block of the target code point image, a is the gray value of the first pixel of the gray scale image at the pixel coordinate corresponding to the first code point coordinate, and c is the coefficient.
Specifically, c may be adjusted according to the density of code points in the first code point image, and a value range of c may be 1 to 20, and in this embodiment, c may take a value of 5.714.
Step S252, generating a second code point image with a gray value set to 255 according to the preset resolution;
specifically, in this embodiment, the grayscale value is set to 255, that is, the second code point image is a white background, and the second code point image is used to perform pixel filling according to the number of the first code point pixels and the second code point pixels, so as to obtain the target code point image.
Step S253, confirming the coordinates of first code point pixels corresponding to the current reference code point block to be filled and the number of second code point pixels included in the current reference code point block;
specifically, in the present embodiment, to be filled means that when the gradation value of the first code point pixel does not reach the gradation value of the code point that is expected to be obtained at the same coordinate on the target code point image, the gradation value at the coordinate shifted in the XY axis direction with respect to the code point is set to 0 so that the gradation value at the coordinate on the target code point image reaches the expectation.
Step S254, determining whether the number of second code point pixels included in each reference code point block of the target code point image is 0;
step S255, when the number of second code point pixels included in the reference code point block of the target code point image is not 0, traversing a preset code point-pixel corresponding relation table along the two-dimensional XY direction;
step S256, when the number of pixels in the preset code point-pixel correspondence table is less than or equal to the number of second code point pixels included in the reference code point block of the target code point image, setting a gray value at a coordinate corresponding to the pixel in the preset code point-pixel correspondence table as 0, where the coordinate corresponding to the pixel in the preset code point-pixel correspondence table is an offset coordinate along the two-dimensional XY axis direction with respect to the first code point coordinate of the target first code point pixel.
Specifically, fig. 3 is a preset code point-pixel correspondence table according to a second embodiment of the present invention. Fig. 4 is a schematic diagram of a gray scale image according to a second embodiment of the present invention. Fig. 5 is a schematic diagram of a second code point image for pixel filling according to a second embodiment of the present invention. Fig. 6 is a partial data table in a preset code point-pixel correspondence table according to a second embodiment of the present invention. Fig. 7 is a schematic diagram of another second code point image for pixel filling according to the second embodiment of the present invention. Fig. 8 is another partial data table in a preset code point-pixel correspondence table according to the second embodiment of the present invention. Fig. 9 is a schematic diagram of a target code point image according to a second embodiment of the present invention.
As shown in fig. 4, each square (i.e. a-p) in the grayscale image represents a pixel, and each pixel has its own gray value. As shown in figs. 3, 4, 5 and 6, each entry value in the table of fig. 3 represents the gray value of one pixel, the horizontal axis represents the X-offset coordinate corresponding to the pixel, and the vertical axis represents the Y-offset coordinate corresponding to the pixel. For example, assume that the coordinate of code point f in the first code point image of fig. 5 is (95,110) and that the gray value read at this coordinate (i.e. at f) in the grayscale image of fig. 4 is 228. The formula for confirming the number of second code point pixels included in each reference code point block of the target code point image is then applied:
d=[(255–a)/c]
wherein d is the number of second code point pixels included in each reference code point block of the target code point image, a is the gray value of the first pixel of the gray scale image at the pixel coordinate corresponding to the first code point coordinate, and c is the coefficient. When a is 228, c is 5.714, d is 4.
Specifically, the table shown in fig. 3 may be traversed along the two-dimensional XY axis directions. The entries whose values are less than or equal to 4 are pixels 1, 2, 3 and 4, as shown in fig. 6. The cells with dotted hatching in fig. 5 are the pixels to be filled. According to the table shown in fig. 6: the offset coordinate of pixel 1 is (0,0), so the gray value at coordinate (95+0,110+0), i.e. (95,110), in the second code point image is set to 0; the offset coordinate of pixel 2 is (0,1), so the gray value at (95+0,110+1), i.e. (95,111), is set to 0; the offset coordinate of pixel 3 is (1,0), so the gray value at (95+1,110+0), i.e. (96,110), is set to 0; finally, the offset coordinate of pixel 4 is (1,1), so the gray value at (95+1,110+1), i.e. (96,111), is set to 0. In other words, the pixels of the second code point image in fig. 5 are filled in the preset direction 1X shown in fig. 5; this filling order may change according to the pixel order set in the preset code point-pixel correspondence table, but the order does not affect the number of filled pixels.
For another example, as shown in figs. 3, 4, 7 and 8, assume that the coordinate of code point o in the first code point image of fig. 7 is (96,108) and that the gray value read at this coordinate (i.e. at o) in the grayscale image of fig. 4 is 236. The formula for confirming the number of second code point pixels included in each reference code point block of the target code point image is then applied:
d=[(255–a)/c]
wherein d is the number of second code point pixels included in each reference code point block of the target code point image, a is the gray value of the first pixel of the grayscale image at the pixel coordinate corresponding to the first code point coordinate, and c is the coefficient. When a is 236 and c is 5.714, d is 3.
That is, the entries in the table whose values are less than or equal to 3 are only pixels 1, 2 and 3, as shown in fig. 8. The cells with dotted hatching in fig. 7 are the pixels to be filled. According to the table shown in fig. 8: the offset coordinate of pixel 1 is (0,0), so the gray value at coordinate (96+0,108+0), i.e. (96,108), in the second code point image is set to 0; the offset coordinate of pixel 2 is (0,1), so the gray value at (96+0,108+1), i.e. (96,109), is set to 0; finally, the offset coordinate of pixel 3 is (1,0), so the gray value at (96+1,108+0), i.e. (97,108), is set to 0. In other words, the pixels of the second code point image in fig. 7 are filled in the preset direction 2X shown in fig. 7; this filling order may likewise change according to the pixel order set in the preset code point-pixel correspondence table, but again the order does not affect the number of filled pixels.
The final target code point image, shown in fig. 9, is obtained by filling the pixels for code point f in fig. 5 and code point o in fig. 7. In this embodiment, the reference code point blocks may be connected to one another or may be independent; whether they are connected or independent is determined by the specific code point-pixel correspondence table.
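Tying the sketches above together (reusing the hypothetical helper fill_target_image defined earlier, with the same assumed offset table), the two worked examples of this embodiment can be reproduced numerically:

```python
# Reproducing the two worked examples with the hypothetical fill_target_image above.
import numpy as np

matches = [((95, 110), 228),   # code point f: a = 228 -> d = 4
           ((96, 108), 236)]   # code point o: a = 236 -> d = 3
target = fill_target_image(matches, height=200, width=300)

# Code point f fills (95,110), (95,111), (96,110), (96,111);
# code point o fills (96,108), (96,109), (97,108), as in figs. 5 to 9.
filled = sorted((x, y) for y, x in zip(*np.nonzero(target == 0)))
print(filled)
```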
The beneficial effect of the second embodiment of the invention is that, by providing this method for modeling a coded image, the size of the code points is amplitude-modulated with the pixel value changes of the grayscale image. This solves the prior-art problems that the code points of coded identification images all have the same size and can only achieve a single-grayscale appearance, and achieves the technical effect that the coded image presents a more lifelike rendering of gray-level variation while enabling information tracing and product anti-counterfeiting, without damaging the integrity and aesthetics of the original packaging pattern design.
Example three
Fig. 10 is a schematic structural diagram of an apparatus for modeling an encoded image according to a third embodiment of the present invention. As shown in fig. 10, the apparatus 300 for modeling a coded image according to the present embodiment includes:
a first obtaining module 310, configured to obtain a grayscale image with a preset resolution, where a first pixel of the grayscale image is located at a plurality of pixel coordinates;
a first code point generating module 320, configured to generate a first code point image according to a preset resolution, where the first code point image includes a plurality of first code point pixels spaced from each other, and the first code point pixels are located at a plurality of first code point coordinates;
the matching module 330 is configured to match the pixel coordinates with the first code point coordinates, so as to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
the second code point generating module 340 is configured to generate a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray scale value of the first pixels at the corresponding positions.
In this embodiment, the target code point image includes a plurality of reference code point blocks, each of the reference code point blocks has at least one second code point pixel corresponding to the first code point pixel, and the number of the second code point pixels of each of the reference code point blocks is inversely related to the gray value of the first pixel at the position corresponding to the first code point coordinate.
In this embodiment, the apparatus 300 for modeling a coded image further comprises:
and the first traversal module is used for traversing the first code point image along the two-dimensional XY direction.
The first judging module is used for judging the gray value of the first code point pixel of the first code point image.
And the coordinate acquisition module is used for acquiring the first code point coordinate of the first code point pixel when the gray value of the first code point pixel of the first code point image is 0.
In this embodiment, the second code point generating module 340 includes:
the quantity acquisition unit is used for confirming the quantity of second code point pixels included by each reference code point block of the target code point image according to the gray value of the first pixel of the gray image;
the generating unit is used for generating a second code point image with a gray value set to be 255 according to the preset resolution;
the coordinate filling unit is used for confirming the coordinates of first code point pixels corresponding to the current reference code point block to be filled and the number of second code point pixels included by the current reference code point block;
a first judgment unit configured to judge whether the number of second code point pixels included in each reference code point block of the target code point image is 0;
and the first traversal unit is used for traversing the preset code point-pixel corresponding relation table along the two-dimensional XY direction when the number of the second code point pixels included in the reference code point block of the target code point image is not 0.
And the gray filling unit is used for setting the gray value at the coordinate corresponding to the pixel in the preset code point-pixel corresponding relation table as 0 when the number of the pixels in the preset code point-pixel corresponding relation table is less than or equal to the number of the second code point pixels included in the reference code point block of the target code point image, and the coordinate corresponding to the pixel in the preset code point-pixel corresponding relation table is an offset coordinate along the two-dimensional XY axis direction relative to the first code point coordinate of the target first code point pixel.
The beneficial effect of the third embodiment of the invention is that, by providing this apparatus for modeling a coded image, the size of the code points is amplitude-modulated with the pixel value changes of the grayscale image. This solves the prior-art problems that the code points of coded identification images all have the same size and can only achieve a single-grayscale appearance, and achieves the technical effect that the coded image presents a more lifelike rendering of gray-level variation while enabling information tracing and product anti-counterfeiting, without damaging the integrity and aesthetics of the original packaging pattern design.
Example four
Fig. 11 is a schematic structural diagram of a server according to the fourth embodiment of the present invention. As shown in fig. 11, the server includes a processor 410, a memory 420, an input device 430 and an output device 440; the number of processors 410 in the server may be one or more, and one processor 410 is taken as an example in fig. 11; the processor 410, the memory 420, the input device 430 and the output device 440 in the server may be connected by a bus or by other means, and connection by a bus is taken as the example in fig. 11.
The memory 420 is used as a computer-readable storage medium for storing software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the method for modeling a coded image in the embodiments of the present invention (for example, the first acquisition module, the first code point generating module, the matching module and the second code point generating module in the apparatus for modeling a coded image). By running the software programs, instructions and modules stored in the memory 420, the processor 410 executes the various functional applications and data processing of the server, that is, implements the method for modeling the coded image.
Namely:
acquiring a gray level image with a preset resolution, wherein a first pixel of the gray level image is positioned at a plurality of pixel coordinates;
generating a first code point image according to a preset resolution, wherein the first code point image comprises a plurality of first code point pixels which are mutually spaced, and the first code point pixels are positioned at a plurality of first code point coordinates;
matching the pixel coordinates and the first code point coordinates to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
and generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray value of the first pixels at the corresponding positions.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 420 may further include memory located remotely from processor 410, which may be connected to a server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the server. The output device 440 may include a display device such as a display screen.
Example five
An embodiment of the present invention further provides a storage medium including computer-executable instructions, which when executed by a computer processor, perform a method of modeling an encoded image, the method including:
acquiring a gray level image with a preset resolution, wherein a first pixel of the gray level image is positioned at a plurality of pixel coordinates;
generating a first code point image according to a preset resolution, wherein the first code point image comprises a plurality of first code point pixels which are mutually spaced, and the first code point pixels are positioned at a plurality of first code point coordinates;
matching the pixel coordinates and the first code point coordinates to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
and generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray value of the first pixels at the corresponding positions.
Of course, the storage medium provided by the embodiments of the present invention includes computer-executable instructions, and the computer-executable instructions are not limited to the above method operations, and may also perform related operations in a method for modeling an encoded image provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the above embodiment of the apparatus for modeling a coded image, the included units and modules are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (5)

1. A method of modelling an encoded image, comprising the steps of:
acquiring a gray level image with a preset resolution, wherein a first pixel of the gray level image is positioned at a plurality of pixel coordinates;
generating a first code point image according to the preset resolution, wherein the first code point image comprises a plurality of first code point pixels which are mutually spaced, and the first code point pixels are positioned at a plurality of first code point coordinates;
matching the pixel coordinates with the first code point coordinates to set the gray value of the first code point pixel as the gray value of the first pixel at the corresponding coordinate position;
generating a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray value of the first pixels at the corresponding positions, wherein the method comprises the following steps: generating a second code point image with a gray value set to 255 according to the preset resolution; confirming the coordinates of first code point pixels corresponding to the current reference code point block to be filled and the number of second code point pixels included by the current reference code point block; judging whether the number of second code point pixels included in each reference code point chunk of the target code point image is 0 or not; when the number of second code point pixels included in the reference code point chunk of the target code point image is not 0, traversing a preset code point-pixel corresponding relation table along the two-dimensional XY direction; when the number of pixels in the preset code point-pixel corresponding relation table is less than or equal to the number of second code point pixels included in a reference code point block of the target code point image, setting the gray value at the coordinate position corresponding to the pixels in the preset code point-pixel corresponding relation table as 0, wherein the coordinate corresponding to the pixels in the preset code point-pixel corresponding relation table is an offset coordinate along the two-dimensional XY axis direction relative to the first code point coordinate of the first code point pixels; the target code point image comprises a plurality of reference code point chunks, each reference code point chunk at least has one second code point pixel corresponding to the first code point pixel, and the number of the second code point pixels of each reference code point chunk is in negative correlation with the gray value of the first pixel at the position corresponding to the first code point coordinate.
2. The method of claim 1, wherein matching the pixel coordinates with the first code point coordinates so as to set the gray value of each first code point pixel to the gray value of the first pixel at the corresponding coordinate position comprises:
traversing the first code point image along the two-dimensional XY directions;
judging the gray value of each first code point pixel of the first code point image; and
when the gray value of a first code point pixel of the first code point image is 0, acquiring the first code point coordinate of that first code point pixel.
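The traversal of claim 2 can be sketched as follows; this is an illustrative reading only, assuming the first code point image is a two-dimensional NumPy array in which code point pixels carry gray value 0 before matching and all other pixels are non-zero, and that matching then copies the grayscale image's value onto each detected code point pixel as in claim 1.

import numpy as np

def match_code_points_to_gray(first_code_point_image: np.ndarray,
                              gray_image: np.ndarray) -> np.ndarray:
    """Traverse the first code point image in two-dimensional XY order; wherever a
    first code point pixel (gray value 0) is found, copy the gray value of the
    first pixel at the same coordinate onto that code point pixel."""
    matched = first_code_point_image.copy()
    height, width = first_code_point_image.shape
    for y in range(height):          # rows
        for x in range(width):       # columns within each row (XY order)
            if first_code_point_image[y, x] == 0:      # a first code point pixel
                matched[y, x] = gray_image[y, x]       # gray value of the first pixel
    return matched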
3. An apparatus for modeling a coded image, comprising:
a first acquisition module, configured to acquire a grayscale image with a preset resolution, wherein first pixels of the grayscale image are located at a plurality of pixel coordinates;
a first code point generating module, configured to generate a first code point image according to the preset resolution, wherein the first code point image comprises a plurality of first code point pixels spaced apart from one another, the first code point pixels being located at a plurality of first code point coordinates;
a matching module, configured to match the pixel coordinates with the first code point coordinates so as to set the gray value of each first code point pixel to the gray value of the first pixel at the corresponding coordinate position; and
a second code point generating module, configured to generate a corresponding target code point image according to the first code point coordinates of the first code point pixels and the gray values of the first pixels at the corresponding positions, which comprises: generating a second code point image whose gray values are set to 255 according to the preset resolution; determining the coordinates of the first code point pixel corresponding to the reference code point chunk currently to be filled and the number of second code point pixels to be included in that reference code point chunk; judging whether the number of second code point pixels included in each reference code point chunk of the target code point image is 0; when the number of second code point pixels included in a reference code point chunk of the target code point image is not 0, traversing a preset code point-pixel correspondence table along the two-dimensional XY directions; and when the number of pixels in the preset code point-pixel correspondence table is less than or equal to the number of second code point pixels included in that reference code point chunk of the target code point image, setting the gray values at the coordinate positions corresponding to those pixels in the preset code point-pixel correspondence table to 0, wherein the coordinates corresponding to the pixels in the preset code point-pixel correspondence table are offset coordinates along the two-dimensional XY axis directions relative to the first code point coordinates of the first code point pixels; wherein the target code point image comprises a plurality of reference code point chunks, each reference code point chunk has at least one second code point pixel corresponding to a first code point pixel, and the number of second code point pixels in each reference code point chunk is negatively correlated with the gray value of the first pixel at the position corresponding to the first code point coordinate.
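The module split in claim 3 mirrors the method steps of claim 1. The sketch below only illustrates that composition; the class and attribute names are hypothetical and the four callables stand in for the modules, none of which are named this way in the patent.

class CodedImageModelingApparatus:
    """Illustrative wiring of the four modules named in claim 3."""

    def __init__(self, acquire_gray, generate_first_code_points, match, generate_target):
        self.first_acquisition_module = acquire_gray                    # acquires the grayscale image
        self.first_code_point_generating_module = generate_first_code_points
        self.matching_module = match                                    # copies gray values onto code point pixels
        self.second_code_point_generating_module = generate_target      # fills reference code point chunks

    def model(self, preset_resolution):
        gray = self.first_acquisition_module(preset_resolution)
        first_code_points = self.first_code_point_generating_module(preset_resolution)
        matched = self.matching_module(gray, first_code_points)
        return self.second_code_point_generating_module(matched)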
4. A server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of modeling a coded image according to any one of claims 1-2.
5. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of modeling a coded image according to any one of claims 1-2.
CN201911150062.XA 2019-11-21 2019-11-21 Method, device, server and storage medium for modeling coded image Active CN110909843B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911150062.XA CN110909843B (en) 2019-11-21 2019-11-21 Method, device, server and storage medium for modeling coded image
PCT/CN2020/081547 WO2021098111A1 (en) 2019-11-21 2020-03-27 Method for coded image shaping, device, server, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911150062.XA CN110909843B (en) 2019-11-21 2019-11-21 Method, device, server and storage medium for modeling coded image

Publications (2)

Publication Number Publication Date
CN110909843A CN110909843A (en) 2020-03-24
CN110909843B true CN110909843B (en) 2021-05-14

Family

ID=69818439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911150062.XA Active CN110909843B (en) 2019-11-21 2019-11-21 Method, device, server and storage medium for modeling coded image

Country Status (2)

Country Link
CN (1) CN110909843B (en)
WO (1) WO2021098111A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909843B (en) * 2019-11-21 2021-05-14 程烨 Method, device, server and storage medium for modeling coded image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463067B (en) * 2014-12-04 2017-03-22 四川大学 Method for extracting macro blocks of Grid Matrix two-dimensional bar code
WO2016119196A1 (en) * 2015-01-30 2016-08-04 富士通株式会社 Image coding method, apparatus, and image processing device
CN104933386B (en) * 2015-06-12 2017-10-27 矽图(厦门)科技有限公司 The recognition methods of many GTG invisible two-dimensional codes
CN105760904B (en) * 2016-03-18 2018-11-09 上海矽感科技有限公司 The GM code marks of compatible all laser marking machines generate control method
WO2019043695A1 (en) * 2017-08-31 2019-03-07 Twine Solutions Ltd. Color detection algorithm
CN110909843B (en) * 2019-11-21 2021-05-14 程烨 Method, device, server and storage medium for modeling coded image

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201628690U (en) * 2010-03-15 2010-11-10 深圳市天朗时代科技有限公司 Print detecting system
CN103517070A (en) * 2013-07-19 2014-01-15 清华大学 Method and device for coding and decoding image
US9544467B2 (en) * 2014-10-20 2017-01-10 National Taipei University Of Technology Halftone data-bearing encoding system and halftone data-bearing decoding system
CN105005806A (en) * 2015-08-31 2015-10-28 立德高科(北京)数码科技有限责任公司 Matrix point diagram, generation method, generation system and anti-counterfeit label
CN105426944A (en) * 2015-11-17 2016-03-23 立德高科(北京)数码科技有限责任公司 Square lattice anti-counterfeit label group, and method and system for reading square lattice anti-counterfeit label group
CN105718979A (en) * 2016-01-14 2016-06-29 厦门纳纬信息技术有限公司 Method for generating two-dimensional code picture
CN105760915A (en) * 2016-02-02 2016-07-13 程烨 Anti-fake image generation method and device
CN105760919A (en) * 2016-02-06 2016-07-13 深圳市天朗时代科技有限公司 Dot matrix two-dimensional code coding and recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Design and Implementation of Dot Matrix Anti-Counterfeiting Codes" (《点阵防伪码的设计与实现》); Zhou Jinsong (周劲松); China Master's Theses Full-text Database (Electronic Journal); 2016-07-30; full text *

Also Published As

Publication number Publication date
CN110909843A (en) 2020-03-24
WO2021098111A1 (en) 2021-05-27

Similar Documents

Publication Publication Date Title
CN106778995B (en) Artistic two-dimensional code generation method and device fused with image
US10489970B2 (en) 2D image processing for extrusion into 3D objects
CN108615253B (en) Image generation method, device and computer readable storage medium
US10204447B2 (en) 2D image processing for extrusion into 3D objects
CN108345888A (en) A kind of connected domain extracting method and device
CN110909843B (en) Method, device, server and storage medium for modeling coded image
CN114998922B (en) Electronic contract generating method based on format template
CN114777792A (en) Path planning method and device, computer readable medium and electronic equipment
CN110532938B (en) Paper job page number identification method based on fast-RCNN
JP2019057066A (en) Line drawing automated coloring program, line drawing automated coloring device, and line drawing automated coloring method
CN104778657B (en) Two-dimensional image code fusion method and device
CN107621929B (en) Gray scale thermal printing method, thermal printer and readable storage medium
CN110557622B (en) Depth information acquisition method and device based on structured light, equipment and medium
CN110110829A (en) A kind of two dimensional code processing method and processing device
CN110824451A (en) Processing method and device of radar echo map, computer equipment and storage medium
JP3061812B2 (en) Pattern recognition method and apparatus
CN104504429A (en) Two-dimensional code generation method and device
CN110969678B (en) Drawing method, device, terminal equipment and storage medium for tiled circles
CN110009082B (en) Three-dimensional code optimization method, medium, computer device and apparatus
CN107481376B (en) Three-dimensional code unlocking method based on intelligent application
CN111669477A (en) Image processing method, system, device, equipment and computer storage medium
JP3957735B1 (en) Program, information storage medium, 2D code generation system, 2D code
CN110751053B (en) Vehicle color identification method, device, equipment and storage medium
RU2758668C1 (en) Method and system for protection of digital information displayed on the screen of electronic devices
CN108564659A (en) The expression control method and device of face-image, computing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: Room 402, block a, TCL building, no.6, Gaoxin South 1st Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Patentee after: Cheng Ye

Address before: Room 702, Science and Technology Building, International Electronic Commerce Industrial Park, Futian District, Shenzhen City, Guangdong Province, 518000

Patentee before: Cheng Ye

TR01 Transfer of patent right

Effective date of registration: 20240304

Address after: 350000 unit 03, 15 / F, building 1, Fumin Times Square, No. 66, Aofeng Road, Aofeng street, Taijiang District, Fuzhou City, Fujian Province

Patentee after: Quantum cloud code (Fujian) Technology Co.,Ltd.

Country or region after: China

Address before: Room 402, block a, TCL building, no.6, Gaoxin South 1st Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: Cheng Ye

Country or region before: China