CN116912831A - Method and system for processing acquired information of letter code anti-counterfeiting printed matter - Google Patents

Method and system for processing acquired information of letter code anti-counterfeiting printed matter

Info

Publication number
CN116912831A
CN116912831A (application CN202311190343.4A)
Authority
CN
China
Prior art keywords
image
label
tag
semantic
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311190343.4A
Other languages
Chinese (zh)
Inventor
何岗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Jiangwei Anti Counterfeiting Technology Co ltd
Original Assignee
Dongguan Jiangwei Anti Counterfeiting Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Jiangwei Anti Counterfeiting Technology Co ltd filed Critical Dongguan Jiangwei Anti Counterfeiting Technology Co ltd
Priority to CN202311190343.4A priority Critical patent/CN116912831A/en
Publication of CN116912831A publication Critical patent/CN116912831A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/018: Certifying business or products
    • G06Q 30/0185: Product, service or business identity fraud
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The application discloses a method and a system for processing the acquired information of a letter code anti-counterfeit printed matter. A label image of an anti-counterfeit label is first acquired; image analysis is then performed on the label image to obtain its image semantic features; and, based on those image semantic features, de-wrinkling processing is performed on the label image to generate a de-wrinkled label image. In this way, the de-wrinkling process can be performed on the basis of the semantic information in the label image of the anti-counterfeit label.

Description

Method and system for processing acquired information of letter code anti-counterfeiting printed matter
Technical Field
The application relates to the field of letter code anti-counterfeiting, in particular to a method and a system for processing acquired information of a letter code anti-counterfeiting printed matter.
Background
As letter code anti-counterfeiting technology is upgraded and iterated, letter code anti-counterfeit printed matter suited to various scenes and applications, such as anti-counterfeit labels, has also appeared. However, wrinkles are likely to occur during manufacture or use owing to factors such as the material of the printed matter, the manner of adhesion, and the ambient light.
Therefore, an acquired-information processing scheme for letter code anti-counterfeit printed matter is desired that solves the problem of wrinkling during acquisition.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems. The embodiments of the application provide a method and a system for processing the acquired information of a letter code anti-counterfeit printed matter, which can perform de-wrinkling processing based on the semantic information in the label image of an anti-counterfeit label.
According to one aspect of the present application, there is provided a method for processing acquired information of a letter code anti-counterfeit printed matter, comprising:
acquiring a label image of an anti-counterfeit label;
performing image analysis on the tag image to obtain image semantic features of the tag image; and
based on the image semantic features of the label image, performing de-wrinkling processing on the label image to generate a de-wrinkling label image.
According to another aspect of the present application, there is provided an acquisition information processing system of a letter code anti-counterfeit printed matter, comprising:
the image acquisition module is used for acquiring a label image of the anti-counterfeit label;
the image analysis module is used for carrying out image analysis on the tag image so as to obtain image semantic features of the tag image; and
the de-wrinkling processing module is used for performing de-wrinkling processing on the label image based on the image semantic features of the label image so as to generate a de-wrinkling label image.
Compared with the prior art, in the method and system for processing the acquired information of a letter code anti-counterfeit printed matter provided by the application, a label image of an anti-counterfeit label is first acquired; image analysis is then performed on the label image to obtain its image semantic features; and, based on those image semantic features, de-wrinkling processing is performed on the label image to generate a de-wrinkled label image. In this way, the de-wrinkling process can be performed on the basis of the semantic information in the label image of the anti-counterfeit label.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. The following drawings are not drawn to scale with respect to actual dimensions; emphasis is instead placed on illustrating the gist of the present application.
Fig. 1 is a flowchart of a method for processing acquired information of a letter code anti-counterfeit printed matter according to an embodiment of the present application.
Fig. 2 is a schematic diagram of an architecture of an information processing method for collecting information of a letter code anti-counterfeit printed material according to an embodiment of the application.
Fig. 3 is a flowchart of substep S120 of the method for processing acquired information of a letter code anti-counterfeit printed material according to an embodiment of the present application.
Fig. 4 is a flowchart of sub-step S121 of the method for processing acquired information of a letter code anti-counterfeit printed material according to an embodiment of the present application.
Fig. 5 is a flowchart of substep S122 of the method for processing the acquired information of the letter code anti-counterfeit printed material according to the embodiment of the application.
Fig. 6 is a flowchart of substep S130 of the method for processing the acquired information of the letter code anti-counterfeit printed material according to the embodiment of the application.
Fig. 7 is a block diagram of an information processing system for collecting information of a letter code anti-counterfeit printed matter according to an embodiment of the present application.
Fig. 8 is an application scenario diagram of an information processing method for collecting information of a letter code anti-counterfeit printed matter according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are also within the scope of the application.
As used herein, the terms "a," "an," and/or "the" are not limited to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
To address the above technical problems, the technical idea of the application is to perform de-wrinkling processing based on the semantic information in the label image of the anti-counterfeit label. It should be appreciated that the semantic information of an image provides important clues about its content. In anti-counterfeit label prints, wrinkling often results in blurred, distorted or unreadable information. By utilizing semantic information in the image, such as the main elements and structures of the label image, information such as characters, patterns and boundaries can be obtained, so that the content of the label can be better understood and repaired.
Fig. 1 is a flowchart of a method for processing acquired information of a letter code anti-counterfeit printed matter according to an embodiment of the present application. Fig. 2 is a schematic diagram of an architecture of an information processing method for collecting information of a letter code anti-counterfeit printed material according to an embodiment of the application. As shown in fig. 1 and fig. 2, the method for processing the acquired information of the letter code anti-counterfeiting printed matter according to the embodiment of the application comprises the following steps: s110, acquiring a label image of an anti-counterfeit label; s120, performing image analysis on the label image to obtain image semantic features of the label image; and S130, performing de-wrinkling processing on the label image based on the image semantic features of the label image to generate a de-wrinkling label image.
Specifically, in the technical scheme of the application, a label image of the anti-counterfeit label is first acquired. The label image can be obtained, for example, by manual shooting, in which the anti-counterfeit label is photographed with a camera, a mobile phone or similar equipment and the label image is stored in a computer or other storage device, or by scanning, in which the anti-counterfeit label is scanned with professional scanner equipment and the label image is stored in digital form in a computer or other storage device. Which acquisition method to use can be determined according to the actual situation and requirements; a small sketch of both options is given below.
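A minimal sketch of the acquisition step, assuming OpenCV; the default file name and camera index are illustrative assumptions, not values prescribed by the application.

```python
import cv2

def acquire_label_image(source="label.jpg", use_camera=False, camera_index=0):
    """Acquire an anti-counterfeit label image from a stored file (photo/scan) or a camera.

    `source`, `use_camera` and `camera_index` are illustrative defaults,
    not parameters prescribed by the application.
    """
    if use_camera:
        cap = cv2.VideoCapture(camera_index)          # open the capture device
        ok, image = cap.read()                        # grab a single frame
        cap.release()
        if not ok:
            raise RuntimeError("failed to capture a frame from the camera")
    else:
        image = cv2.imread(source, cv2.IMREAD_COLOR)  # load a stored photograph or scan
        if image is None:
            raise FileNotFoundError(f"could not read label image: {source}")
    return image
```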
Consider that, in a real scene, part of the label image may be in shadow. This may be caused by uneven lighting or by an occluding object during photographing, or because part of the anti-counterfeit label uses a reflective material, so that the label surface reflects light unevenly and forms shadows. In any case, the presence of shadows adversely affects semantic understanding and analysis of the label image, so in the technical scheme of the application the shadow portion of the label image is removed to obtain a de-shadowed label image.
Then, image blocking is performed on the de-shadowed label image to obtain a sequence of label local image blocks; the sequence of label local image blocks is passed through a ViT model containing an embedding layer to obtain a plurality of context label image block semantic feature vectors, and these vectors are arranged into a global label image block semantic feature map according to the positions used in the image blocking. That is, a global semantic understanding of the respective local regions of the de-shadowed label image is performed by the ViT model containing the embedding layer, so as to obtain the high-dimensional, implicitly associated semantic feature distribution of the de-shadowed label image. It is worth mentioning that arranging the context label image block semantic feature vectors according to the image blocking positions restores, to a certain extent, the spatial position information of the original label image, which is important for the subsequent de-wrinkling processing.
The global label image block semantic feature map is then passed through a de-wrinkling generator based on a generative adversarial network to obtain the de-wrinkled label image.
Accordingly, as shown in fig. 3, in step S120, image analysis is performed on the label image to obtain image semantic features of the label image, including: s121, performing image preprocessing on the label image to obtain a sequence of label local image blocks; and S122, carrying out semantic understanding on the sequence of the label local image blocks to obtain the image semantic features.
More specifically, as shown in fig. 4, in step S121, performing image preprocessing on the label image to obtain a sequence of label local image blocks includes: S1211, removing the shadow portion of the label image to obtain a de-shadowed label image; and S1212, performing image blocking on the de-shadowed label image to obtain the sequence of label local image blocks. It should be noted that the shadow portion of the label image may be removed in any of the following ways:
1. Threshold segmentation: a threshold is set, pixels whose grey values fall below the threshold are regarded as shadow, and their values are set to the background colour or another fixed value, thereby removing the shadow.
2. Colour space conversion: the image is converted from the RGB colour space to the HSV or another colour space, the shadow portion is identified from the brightness component or other features of that colour space, and it is then set to the background colour or another fixed value.
3. Morphological operations: the shadow portion is removed by morphological operations such as erosion, dilation, opening and closing; an appropriate operation can be selected according to the shape and size of the shadow.
4. Deep learning: a model such as a convolutional neural network (CNN) is trained to identify and remove the shadow portion; an existing deep learning model can be used for shadow detection and segmentation, after which the shadow is removed from the original image.
Image blocking is then performed on the de-shadowed label image to obtain the sequence of label local image blocks, for example as follows:
1. Determine the block size: the size of each label local image block is determined according to the actual requirements and the application scene; a fixed block size can be chosen, or the size can be adapted to the size and proportions of the label image.
2. Sliding-window blocking: the de-shadowed label image is partitioned with a sliding window, i.e. a rectangular window of fixed size that slides over the image with a fixed step, and the image block inside each window is taken as a local image block.
3. Extract the local image blocks: the image block at each window position is taken as one label local image block, and the blocks are stored in a sequence for subsequent processing and analysis.
4. Handle the boundaries: at the image boundary the window may not be fully covered; boundary processing can be performed as needed, for example by discarding the boundary portion or by padding the image boundary with filler pixels.
Through the above steps, the de-shadowed label image is segmented into a plurality of label local image blocks, giving a sequence in which each element represents one local image block, which facilitates subsequent processing and analysis of each block. A minimal sketch of one such combination, HSV-based shadow removal followed by sliding-window blocking, is given below.
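A sketch under stated assumptions, using OpenCV and NumPy; the brightness threshold, fill value, block size and stride are illustrative choices only, not values fixed by the application.

```python
import cv2
import numpy as np

def remove_shadow_hsv(image_bgr, value_threshold=90, background_value=255):
    """Shadow removal via colour-space conversion (option 2 above): pixels whose
    HSV brightness falls below a threshold are treated as shadow and overwritten
    with a fixed background value. Threshold and fill value are illustrative."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    shadow_mask = hsv[:, :, 2] < value_threshold          # V (brightness) channel
    deshadowed = image_bgr.copy()
    deshadowed[shadow_mask] = background_value            # fill shadow pixels
    return deshadowed

def block_image(image, block_size=64, stride=64):
    """Sliding-window blocking: replicate-pad the borders so every window fits,
    then collect each window as one label local image block, keeping its position."""
    h, w = image.shape[:2]
    pad_h = block_size - h if h < block_size else (-(h - block_size)) % stride
    pad_w = block_size - w if w < block_size else (-(w - block_size)) % stride
    padded = cv2.copyMakeBorder(image, 0, pad_h, 0, pad_w, cv2.BORDER_REPLICATE)
    blocks, positions = [], []
    for y in range(0, padded.shape[0] - block_size + 1, stride):
        for x in range(0, padded.shape[1] - block_size + 1, stride):
            blocks.append(padded[y:y + block_size, x:x + block_size])
            positions.append((y, x))                       # kept for later re-arrangement
    return blocks, positions

# Example: de-shadow a synthetic label image and split it into 64x64 blocks.
label = (np.random.rand(200, 300, 3) * 255).astype(np.uint8)
blocks, positions = block_image(remove_shadow_hsv(label))
```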
More specifically, as shown in fig. 5, in step S122, performing semantic understanding on the sequence of label local image blocks to obtain the image semantic features includes: S1221, extracting a plurality of context label image block semantic feature vectors from the sequence of label local image blocks on the basis of a deep neural network model; and S1222, arranging the plurality of context label image block semantic feature vectors, according to the image blocking positions, into a global label image block semantic feature map as the image semantic features. It should be appreciated that the semantic understanding and feature extraction of step S122 have the following benefits. Context capture: extracting a plurality of context label image block semantic feature vectors with a deep neural network model captures the contextual relationships and semantic information among the label image blocks, so that the overall content of the label image can be better understood. Global feature representation: arranging the context label image block semantic feature vectors into a global label image block semantic feature map according to the image blocking positions yields a semantic feature representation of the whole label image; this global representation reflects the features of the label image more comprehensively, including its structure, texture and edges, and improves the accuracy and reliability of subsequent processing tasks. Feature distribution optimization: arranging the vectors into a global feature map means that adjacent semantic features lie closer together in the map, which helps to extract feature representations with better discriminability and thus improves the recognition performance for the letter code anti-counterfeit printed matter. By performing semantic understanding and image semantic feature extraction in step S122, the context information can be fully utilized, a global feature representation can be obtained, and the feature distribution can be optimized, thereby improving the recognition accuracy and reliability of the letter code anti-counterfeit printed matter.
The deep neural network model is a ViT model containing an embedding layer. It should be appreciated that the ViT (Vision Transformer) model is a deep neural network model based on the Transformer architecture for image processing tasks. Traditional convolutional neural networks (CNNs) have been successful in image processing, but the ViT model offers a different approach by introducing the Transformer's self-attention mechanism into the image domain. The ViT model segments the input image into a series of local image blocks and processes them with a Transformer model. Specifically, the image is divided into image blocks of fixed size; the pixel values of each block are taken as input and converted by an embedding layer into embedding vectors that represent the semantic features of that block. The ViT model then uses self-attention to capture the relationships between image blocks: the self-attention mechanism learns the interactions between blocks and uses this interaction information to extract global image semantic features. By arranging the context label image block semantic feature vectors according to the image blocking positions, the global label image block semantic feature map is obtained and used as the image semantic features. The ViT model can process images of different sizes and can capture global image semantic information; with it, a plurality of context label image block semantic feature vectors can be extracted from the sequence of label local image blocks for further semantic understanding and processing.
In one specific example, extracting a plurality of context label image block semantic feature vectors from the sequence of label local image blocks on the basis of a deep neural network model includes: passing the sequence of label local image blocks through the ViT model containing the embedding layer to obtain the plurality of context label image block semantic feature vectors. It is worth mentioning that the purpose of extracting a plurality of context label image block semantic feature vectors from the sequence of label local image blocks is to obtain more global and richer image semantic information. These semantic feature vectors can be used for: image classification, where the feature vectors are fed to a classifier to decide which category the image belongs to; target detection, where the feature vectors are fed to a detection model to help identify and locate target objects in the image; image generation, where learning the semantic feature vectors of the label image blocks allows new images to be generated that are similar to, but differ somewhat from, the original image, which is useful in tasks such as image generation and style transfer; and deeper image understanding, where the feature vectors support tasks such as image segmentation, image description and image question answering, giving a more comprehensive understanding of the image content. Extracting a plurality of context label image block semantic feature vectors from the label local image blocks therefore provides richer image semantic information and a better foundation for various image processing tasks; a sketch of this extraction and of the re-arrangement into a feature map is given below.
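Below is one way such an extraction and re-arrangement could look in PyTorch, as a sketch under stated assumptions: a small ViT-style encoder whose embedding layer projects each label local image block into a token, whose self-attention layers produce the context label image block semantic feature vectors, and whose output tokens are re-arranged into a spatial feature map according to the blocking positions. The patch size, embedding width, depth and grid size are illustrative assumptions, not values given in the application.

```python
import torch
import torch.nn as nn

class TinyViTEncoder(nn.Module):
    """Minimal ViT-style encoder: patch embedding layer + Transformer encoder.
    Hyper-parameters below are illustrative, not taken from the application."""

    def __init__(self, patch_size=16, in_channels=3, embed_dim=128,
                 depth=4, num_heads=4, grid_size=(8, 8)):
        super().__init__()
        self.grid_h, self.grid_w = grid_size
        num_patches = self.grid_h * self.grid_w
        # Embedding layer: each flattened image block -> an embedding vector.
        self.patch_embed = nn.Linear(patch_size * patch_size * in_channels, embed_dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           dim_feedforward=embed_dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, patches):
        # patches: (B, N, patch_size*patch_size*C), one row per label local image block
        tokens = self.patch_embed(patches) + self.pos_embed   # per-block embeddings
        tokens = self.encoder(tokens)                          # self-attention mixes context
        # Re-arrange the context feature vectors back to the blocking positions:
        # (B, N, D) -> (B, D, grid_h, grid_w), i.e. the global feature map.
        b, n, d = tokens.shape
        return tokens.transpose(1, 2).reshape(b, d, self.grid_h, self.grid_w)

# Example: 64 blocks of 16x16 RGB pixels taken from one label image.
blocks = torch.rand(1, 64, 16 * 16 * 3)
global_feature_map = TinyViTEncoder()(blocks)   # shape (1, 128, 8, 8)
```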
Further, as shown in fig. 6, performing de-wrinkling processing on the label image based on its image semantic features to generate a de-wrinkled label image includes: S131, performing feature distribution optimization on the global label image block semantic feature map to obtain an optimized global label image block semantic feature map; and S132, passing the optimized global label image block semantic feature map through a de-wrinkling generator based on a generative adversarial network to obtain the de-wrinkled label image.
It should be appreciated that performing the de-wrinkling processing on the basis of the image semantic features of the label image can improve the sharpness and quality of the label image, thereby improving the recognition accuracy and reliability of the letter code anti-counterfeit printed matter. Specifically, the feature distribution optimization of step S131 makes the features of the label image more concentrated and accurate by adjusting the distribution of the global label image block semantic feature map, so that noise, blur and distortion in the image can be reduced and the sharpness and readability of the image improved. A generative adversarial network (GAN) is a machine learning model consisting of two parts, a generator and a discriminator, which are trained adversarially to generate realistic sample data. The main purpose of a GAN is to generate convincing synthetic data such as images, audio or text: the generator produces synthetic data samples, the discriminator distinguishes real data from the generated samples, and through repeated adversarial training the generator's ability to produce realistic samples gradually improves, so that the generated samples come ever closer to the real data distribution. The advantage of a GAN is that it can generate highly realistic and diverse data samples; by learning the distribution characteristics of real data, the generator can produce similar samples, giving it strong generative capability. In the de-wrinkling processing, the de-wrinkling generator based on a generative adversarial network can learn the texture, detail and structural characteristics of the label image and thus generate a clearer and more realistic label image; by learning the distribution characteristics of real label images, the de-wrinkled label image can be restored and the quality and readability of the image improved, so the generative adversarial network has important application value in the de-wrinkling processing. A minimal sketch of such a generator and its adversarial training signal is given below.
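The sketch assumes PyTorch; the layer sizes, the upsampling from an 8x8 feature map to a 128x128 image, and the loss are illustrative choices rather than the architecture fixed by the application.

```python
import torch
import torch.nn as nn

class DewrinkleGenerator(nn.Module):
    """Generator of a GAN-style de-wrinkling stage: it upsamples the global
    label image block semantic feature map into a de-wrinkled label image."""

    def __init__(self, in_channels=128, out_channels=3):
        super().__init__()
        def up(cin, cout):
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True))
        self.net = nn.Sequential(
            up(in_channels, 64), up(64, 32), up(32, 16), up(16, 8),
            nn.Conv2d(8, out_channels, kernel_size=3, padding=1),
            nn.Tanh())                         # pixel values in [-1, 1]

    def forward(self, feature_map):            # (B, 128, 8, 8) -> (B, 3, 128, 128)
        return self.net(feature_map)

class PatchDiscriminator(nn.Module):
    """Discriminator scoring local patches as real (wrinkle-free) or generated."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, 1, 1))          # one realism score per patch

    def forward(self, image):
        return self.net(image)

# One adversarial step (illustrative): the generator tries to make the
# discriminator score its output as a real, wrinkle-free label image.
gen, disc = DewrinkleGenerator(), PatchDiscriminator()
bce = nn.BCEWithLogitsLoss()
fake = gen(torch.rand(1, 128, 8, 8))
score = disc(fake)
g_loss = bce(score, torch.ones_like(score))
g_loss.backward()
```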
Further, in the technical solution of the application, when the context label image block semantic feature vectors are arranged into the global label image block semantic feature map according to the image blocking positions, each vector expresses the contextual semantic association features of one image block of the label image, so that each feature matrix of the global label image block semantic feature map along the channel dimension contains local feature values of all image blocks of the label image. This can cause the feature matrices along the channel dimension to exhibit manifold geometric differences in their overall feature manifold expression. As a result, when the global label image block semantic feature map is passed through the de-wrinkling generator based on a generative adversarial network to obtain the de-wrinkled label image, the adversarial generation may have difficulty converging and fitting the features effectively along the channel dimension, reducing the de-wrinkling effect of the generated de-wrinkled label image. Therefore, for each feature matrix of the global label image block semantic feature map along the channel dimension, denoted Mk, a channel-dimension traversal manifold convex optimization of the feature map is performed.
Accordingly, in a specific example, performing feature distribution optimization on the global label image block semantic feature map to obtain the optimized global label image block semantic feature map includes: performing, on each feature matrix of the global label image block semantic feature map along the channel dimension, a channel-dimension traversal manifold convex optimization of the feature map according to an optimization formula, so as to obtain the optimized global label image block semantic feature map; wherein, in the optimization formula, V1 and V2 are the column vector and the row vector, respectively, obtained by linear transformation of the global average pooling vector of the feature matrices of the global label image block semantic feature map; ‖·‖₂ denotes the spectral norm of a matrix, i.e. the square root of the largest eigenvalue of MᵀM; Mk is the k-th feature matrix of the global label image block semantic feature map along the channel dimension; ⊙ denotes position-wise multiplication; ⊗ denotes matrix multiplication; and Mk′ is the k-th feature matrix of the optimized global label image block semantic feature map along the channel dimension.
Here, the channel-dimension traversal manifold convex optimization of the global label image block semantic feature map determines a base dimension of the feature-matrix manifold by structurally modulating the maximum distribution density direction of the feature matrices, and traverses the feature-matrix manifold along the channel direction of the global label image block semantic feature map, so as to constrain, by stacking the base dimensions of the traversed manifold along the channel direction, the convex optimization of the continuity of the traversal manifold represented by each feature matrix Mk. This improves the geometric continuity of the high-dimensional feature manifold of the global label image block semantic feature map composed of the traversed manifolds of the feature matrices Mk, and thus improves the convergence of the global label image block semantic feature map through the de-wrinkling generator based on a generative adversarial network, that is, the de-wrinkling effect of the de-wrinkled label image is improved.
It should be noted that the global average pooling vector is the column vector obtained by applying average pooling to each feature matrix of the global label image block semantic feature map: for each feature matrix, a single value is obtained by averaging all of its elements, and these values, arranged in the order of the feature matrices, form the column vector, i.e. the global average pooling vector. In the optimization process, the global average pooling vector is linearly transformed into the two vectors V1 and V2, which are used to compute the channel-dimension traversal manifold convex optimization of the feature matrices in the optimization formula. In short, the global average pooling vector is a column vector obtained by average pooling over the global label image block semantic feature map and is used to optimize its feature distribution; a small sketch of these quantities is given below.
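The sketch below computes, for a feature map of shape (C, H, W), the global average pooling vector and two linearly transformed vectors corresponding to V1 (a column vector) and V2 (a row vector), together with their matrix product and its spectral norm. The projection dimensions, and the assumption that V1 ⊗ V2 has the same height and width as each feature matrix, are illustrative; the final recombination into the optimized feature matrices follows the optimization formula of the filing and is not reproduced here.

```python
import torch
import torch.nn as nn

def pooling_and_projections(feature_map):
    """Illustrative computation of the quantities named in the text for one
    global label image block semantic feature map of shape (C, H, W)."""
    C, H, W = feature_map.shape
    # Global average pooling vector: one mean value per feature matrix (channel),
    # arranged in channel order as a vector of length C.
    gap = feature_map.mean(dim=(1, 2))                      # shape (C,)
    to_col = nn.Linear(C, H)                                # illustrative linear transforms
    to_row = nn.Linear(C, W)
    v1 = to_col(gap).unsqueeze(1)                           # column vector, shape (H, 1)
    v2 = to_row(gap).unsqueeze(0)                           # row vector, shape (1, W)
    outer = v1 @ v2                                         # matrix product of V1 and V2, (H, W)
    spectral_norm = torch.linalg.matrix_norm(outer, ord=2)  # largest singular value
    return gap, v1, v2, outer, spectral_norm

gap, v1, v2, outer, s = pooling_and_projections(torch.rand(128, 8, 8))
```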
It is worth mentioning that a linear transformation is an operation that maps one vector or matrix to another through a linear mapping. In mathematics, a linear transformation satisfies two properties: closure under addition and closure under scalar multiplication. In particular, for vectors or matrices in a vector space, a linear transformation can be represented by matrix multiplication. Given a linear transformation matrix A and a vector or matrix X, the result Y of the linear transformation can be calculated as

Y = A X,

where A is the linear transformation matrix, X is the input vector or matrix, and Y is the output vector or matrix. Linear transformations have several important properties, including preserving vector-space addition and scalar multiplication, mapping the zero vector to the zero vector, and preserving linear combinations of vectors. The global average pooling vector is linearly transformed to obtain the two vectors V1 and V2, which are used to optimize the feature distribution of the global label image block semantic feature map.
It is worth mentioning that the spectral norm of a matrix is the norm that measures its largest singular value. For a real matrix A, the spectral norm can be calculated as

‖A‖₂ = σ_max(A) = sqrt( λ_max(AᵀA) ),

where σ_max(A) is the largest singular value of A and λ_max(AᵀA) is the largest eigenvalue of AᵀA. The spectral norm can be understood as the maximum factor by which the linear transformation corresponding to A magnifies any vector. The spectral norm has the following properties: non-negativity, ‖A‖₂ ≥ 0, with equality only when A is the zero matrix; homogeneity, ‖cA‖₂ = |c|·‖A‖₂ for any scalar c; the triangle inequality, ‖A + B‖₂ ≤ ‖A‖₂ + ‖B‖₂ for any two matrices A and B; and submultiplicativity, ‖AB‖₂ ≤ ‖A‖₂·‖B‖₂ for any two matrices A and B. The spectral norm is widely used in matrix theory, numerical analysis and machine learning, for example in estimating matrix condition numbers, in convergence analysis, and in matrix regularization.
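As a small numerical companion to the definition above, the snippet below computes the spectral norm both as the largest singular value and as the square root of the largest eigenvalue of AᵀA, and checks the triangle inequality and submultiplicativity numerically; the matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.random.rand(2, 2)

# Spectral norm = largest singular value = sqrt of the largest eigenvalue of A^T A.
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
via_eig = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))
assert np.isclose(sigma_max, via_eig)
assert np.isclose(sigma_max, np.linalg.norm(A, ord=2))   # NumPy's ord=2 matrix norm

# The properties listed above, checked numerically:
assert np.linalg.norm(A + B, 2) <= np.linalg.norm(A, 2) + np.linalg.norm(B, 2) + 1e-12
assert np.linalg.norm(A @ B, 2) <= np.linalg.norm(A, 2) * np.linalg.norm(B, 2) + 1e-12
```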
In summary, the method for processing the acquired information of a letter code anti-counterfeit printed matter according to the embodiment of the application has been described; it can perform de-wrinkling processing based on the semantic information in the label image of an anti-counterfeit label.
Fig. 7 is a block diagram of an information processing system 100 for collecting information about a letter code anti-counterfeit printed material according to an embodiment of the present application. As shown in fig. 7, an information processing system 100 for collecting information of a letter code anti-counterfeit printed matter according to an embodiment of the present application includes: an image acquisition module 110, configured to acquire a label image of an anti-counterfeit label; the image analysis module 120 is configured to perform image analysis on the tag image to obtain image semantic features of the tag image; and a de-wrinkling processing module 130, configured to perform de-wrinkling processing on the label image based on the image semantic features of the label image, so as to generate a de-wrinkling label image.
In one example, in the information processing system 100 for collecting the anti-counterfeit printed matter of the above-mentioned letter code, the image analysis module 120 includes: the image preprocessing unit is used for carrying out image preprocessing on the tag image so as to obtain a sequence of tag local image blocks; and the semantic understanding unit is used for carrying out semantic understanding on the sequence of the label local image blocks so as to obtain the image semantic features.
Here, those skilled in the art will understand that the specific functions and operations of the respective modules of the above-described system 100 for processing the acquired information of a letter code anti-counterfeit printed matter have already been described in detail in the description of the corresponding method with reference to fig. 1 to 6, and repeated description is therefore omitted.
As described above, the system 100 for processing the acquired information of the letter code anti-counterfeit printed matter according to the embodiment of the present application may be implemented in various wireless terminals, for example, a server or the like having an acquired information processing algorithm of the letter code anti-counterfeit printed matter. In one example, the acquired information processing system 100 of the letter code anti-counterfeit printed matter according to an embodiment of the present application may be integrated into the wireless terminal as a software module and/or a hardware module. For example, the information processing system 100 for collecting the code anti-counterfeit printed matter may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the information processing system 100 for collecting the anti-counterfeit printed matter of the letter code can be one of a plurality of hardware modules of the wireless terminal.
Alternatively, in another example, the information processing system 100 for collecting the letter code anti-counterfeit printed matter and the wireless terminal may be separate devices, and the information processing system 100 for collecting the letter code anti-counterfeit printed matter may be connected to the wireless terminal through a wired and/or wireless network, and transmit the interactive information according to the agreed data format.
Fig. 8 is an application scenario diagram of the method for processing the acquired information of a letter code anti-counterfeit printed matter according to an embodiment of the present application. As shown in fig. 8, in this application scenario, a label image of an anti-counterfeit label (e.g., D illustrated in fig. 8) is first acquired, and the label image is then input to a server (e.g., S illustrated in fig. 8) on which the acquired-information processing algorithm for the letter code anti-counterfeit printed matter is deployed; the server processes the label image with this algorithm to generate a de-wrinkled label image.
Furthermore, those skilled in the art will appreciate that the various aspects of the application may be illustrated and described in terms of a number of patentable categories or circumstances, including any novel and useful process, machine, product, or material, or any novel and useful improvement thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present application and is not to be construed as limiting thereof. Although a few exemplary embodiments of this application have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this application. Accordingly, all such modifications are intended to be included within the scope of present application. It is to be understood that the foregoing is illustrative of the present application and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the application.

Claims (8)

1. The method for processing the acquired information of the letter code anti-counterfeiting printed matter is characterized by comprising the following steps of:
acquiring a label image of an anti-counterfeit label;
performing image analysis on the tag image to obtain image semantic features of the tag image; and
based on the image semantic features of the label image, performing wrinkle removal processing on the label image to generate a wrinkle-removed label image;
the image analysis is performed on the tag image to obtain the image semantic feature of the tag image, including:
performing image preprocessing on the tag image to obtain a sequence of tag local image blocks; and
carrying out semantic understanding on the sequence of the label local image blocks to obtain the image semantic features;
the image preprocessing is performed on the label image to obtain a sequence of label local image blocks, including:
removing shadow parts in the tag image to obtain a shadow-removed tag image; and
and performing image blocking processing on the shadow-removed tag image to obtain a sequence of the tag local image blocks.
2. The method for processing the acquired information of the letter code anti-counterfeit printed matter according to claim 1, wherein the semantic understanding of the sequence of the tag partial image blocks to obtain the image semantic features comprises:
extracting a plurality of context label image block semantic feature vectors from the sequence of label local image blocks based on a deep neural network model; and
and arranging the semantic feature vectors of the plurality of context label image blocks into a global label image block semantic feature map as the image semantic features according to the image blocking processing positions.
3. The method for processing information collected from a letter code anti-counterfeit printed matter according to claim 2, wherein the deep neural network model is a ViT model including an embedded layer.
4. The method for processing the acquired information of the code anti-counterfeit printed matter according to claim 3, wherein extracting a plurality of context label image block semantic feature vectors from the sequence of label partial image blocks based on a deep neural network model comprises:
and passing the sequence of the tag local image blocks through the ViT model containing the embedded layer to obtain semantic feature vectors of the plurality of context tag image blocks.
5. The method for processing the acquired information of the letter code anti-counterfeit printed matter according to claim 4, wherein the performing the de-wrinkling process on the label image based on the image semantic features of the label image to generate a de-wrinkled label image comprises:
performing feature distribution optimization on the global tag image block semantic feature map to obtain an optimized global tag image block semantic feature map; and
and the optimized global label image block semantic feature map passes through a de-wrinkling generator based on an antagonism generation network to obtain the de-wrinkling label image.
6. The method for processing the acquired information of the letter code anti-counterfeit printed matter according to claim 5, wherein performing feature distribution optimization on the global label image block semantic feature map to obtain the optimized global label image block semantic feature map comprises:
performing a channel-dimension traversal manifold convex optimization of the feature map on each feature matrix of the global label image block semantic feature map along the channel dimension by using an optimization formula, to obtain the optimized global label image block semantic feature map;
wherein, in the optimization formula, V1 and V2 are the column vector and the row vector, respectively, obtained by linear transformation of the global average pooling vector of each feature matrix of the global label image block semantic feature map; ‖·‖₂ denotes the spectral norm of a matrix; Mk is the k-th feature matrix of the global label image block semantic feature map along the channel dimension; ⊙ denotes position-wise multiplication; ⊗ denotes matrix multiplication; and Mk′ is the k-th feature matrix of the optimized global label image block semantic feature map along the channel dimension.
7. An information processing system for collecting anti-counterfeiting printed matters of a letter code is characterized by comprising the following components:
the image acquisition module is used for acquiring a label image of the anti-counterfeit label;
the image analysis module is used for carrying out image analysis on the tag image so as to obtain image semantic features of the tag image; and
the de-wrinkling processing module is used for performing de-wrinkling processing on the label image based on the image semantic features of the label image so as to generate a de-wrinkling label image.
8. The system for processing information collected from a letter code anti-counterfeit printed matter according to claim 7, wherein said image analysis module comprises:
the image preprocessing unit is used for carrying out image preprocessing on the tag image so as to obtain a sequence of tag local image blocks; and
the semantic understanding unit is used for carrying out semantic understanding on the sequence of the label local image blocks so as to obtain the image semantic features.
CN202311190343.4A 2023-09-15 2023-09-15 Method and system for processing acquired information of letter code anti-counterfeiting printed matter Pending CN116912831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311190343.4A CN116912831A (en) 2023-09-15 2023-09-15 Method and system for processing acquired information of letter code anti-counterfeiting printed matter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311190343.4A CN116912831A (en) 2023-09-15 2023-09-15 Method and system for processing acquired information of letter code anti-counterfeiting printed matter

Publications (1)

Publication Number Publication Date
CN116912831A true CN116912831A (en) 2023-10-20

Family

ID=88360761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311190343.4A Pending CN116912831A (en) 2023-09-15 2023-09-15 Method and system for processing acquired information of letter code anti-counterfeiting printed matter

Country Status (1)

Country Link
CN (1) CN116912831A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767270A (en) * 2021-01-19 2021-05-07 中国科学技术大学 Fold document image correction system
US20220237832A1 (en) * 2019-10-11 2022-07-28 Swimc Llc Augmentation of digital images with simulated surface coatings
CN116168352A (en) * 2023-04-26 2023-05-26 成都睿瞳科技有限责任公司 Power grid obstacle recognition processing method and system based on image processing
CN116403226A (en) * 2023-04-13 2023-07-07 中国科学技术大学 Unconstrained fold document image correction method, system, equipment and storage medium
CN116664961A (en) * 2023-07-31 2023-08-29 东莞市将为防伪科技有限公司 Intelligent identification method and system for anti-counterfeit label based on signal code

Similar Documents

Publication Publication Date Title
CN109299274B (en) Natural scene text detection method based on full convolution neural network
CN109583483B (en) Target detection method and system based on convolutional neural network
CN107480585B (en) Target detection method based on DPM algorithm
CN111291629A (en) Method and device for recognizing text in image, computer equipment and computer storage medium
US20060062460A1 (en) Character recognition apparatus and method for recognizing characters in an image
CN111369550A (en) Image registration and defect detection method, model, training method, device and equipment
CN111223065A (en) Image correction method, irregular text recognition device, storage medium and equipment
CN108664970A (en) A kind of fast target detection method, electronic equipment, storage medium and system
CN111461101A (en) Method, device and equipment for identifying work clothes mark and storage medium
CN111553422A (en) Automatic identification and recovery method and system for surgical instruments
CN110610230A (en) Station caption detection method and device and readable storage medium
CN113297420A (en) Video image processing method and device, storage medium and electronic equipment
CN115424288A (en) Visual Transformer self-supervision learning method and system based on multi-dimensional relation modeling
CN111259792A (en) Face living body detection method based on DWT-LBP-DCT characteristics
CN113052234A (en) Jade classification method based on image features and deep learning technology
CN116912831A (en) Method and system for processing acquired information of letter code anti-counterfeiting printed matter
CN113065407B (en) Financial bill seal erasing method based on attention mechanism and generation countermeasure network
Rani et al. Object Detection in Natural Scene Images Using Thresholding Techniques
CN113705571A (en) Method and device for removing red seal based on RGB threshold, readable medium and electronic equipment
CN112364856A (en) Method and device for identifying copied image, computer equipment and storage medium
CN116704526B (en) Staff scanning robot and method thereof
Sulaiman et al. Image tampering detection using extreme learning machine
CN115063813B (en) Training method and training device of alignment model aiming at character distortion
Bogdan et al. DDocE: Deep Document Enhancement with Multi-scale Feature Aggregation and Pixel-Wise Adjustments
CN117689550A (en) Low-light image enhancement method and device based on progressive generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination