CN116305223B - Method and system for real-time making of self-service bottled water label data - Google Patents

Method and system for real-time making of self-service bottled water label data Download PDF

Info

Publication number
CN116305223B
CN116305223B CN202310579967.9A CN202310579967A CN116305223B CN 116305223 B CN116305223 B CN 116305223B CN 202310579967 A CN202310579967 A CN 202310579967A CN 116305223 B CN116305223 B CN 116305223B
Authority
CN
China
Prior art keywords
input layer
neurons
label data
self
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310579967.9A
Other languages
Chinese (zh)
Other versions
CN116305223A (en
Inventor
石原林
王雪峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Solanum Network Technology Co ltd
Original Assignee
Beijing Solanum Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Solanum Network Technology Co ltd filed Critical Beijing Solanum Network Technology Co ltd
Priority to CN202310579967.9A priority Critical patent/CN116305223B/en
Publication of CN116305223A publication Critical patent/CN116305223A/en
Application granted granted Critical
Publication of CN116305223B publication Critical patent/CN116305223B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/149Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • H04N1/32277Compression
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Mathematical Optimization (AREA)
  • Bioethics (AREA)
  • Mathematical Analysis (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of data encryption transmission, and provides a method and a system for making self-service bottled water label data in real time, wherein the method comprises the following steps: collecting initial label data; constructing a first transmission model of the label data, acquiring a first loss coefficient, and acquiring a first similarity and a second similarity; acquiring connection consistency of any two input units, acquiring connection relative deviation of any two input units, and acquiring information loss degree of each input unit according to the connection relative deviation, the first similarity and the second similarity; obtaining a second loss coefficient of the first transmission model according to the information loss degree, obtaining a comprehensive loss function, constructing a second transmission model, and obtaining received label data by inputting initial label data into the second transmission model; and completing the manufacture of the self-service label according to the received label data. The invention aims to solve the problems that the data transmission efficiency of the label is low and information is easy to leak.

Description

Method and system for real-time making of self-service bottled water label data
Technical Field
The invention relates to the technical field of data encryption transmission, in particular to a method and a system for making self-service bottled water label data in real time.
Background
In the process of real-time production of bottled water label data, a self-service label production center needs to receive a large amount of label data transmitted by a mobile phone end, and the transmission of the large amount of label data has higher requirements on network bandwidth, so that the label data needs to be compressed, and the transmission efficiency of the label data can be greatly improved by transmitting the compressed data; the label data is generated by personalized design of a user at a mobile phone end, and belongs to sensitive data, so that the safe storage and transmission of the data are required to be ensured in the label data transmission process, namely the label data are required to be encrypted, and the information security of the label data is ensured.
Because the label data is in the form of image data, in the prior art, run-length encoding is generally adopted to compress the label data, and meanwhile, an AES encryption algorithm is adopted to encrypt the label data, however, the two technologies need to compress and then encrypt the label data, so that the processing process of the label data before transmission is relatively complicated, the time consumption is relatively long, and the real-time transmission of the label data and the transmission of a large amount of label data are not facilitated; the self-coding network is a network structure capable of coding and decoding data, the label data is input into the self-coding network, the compression of the label data is realized through the coding part of the network, after the compressed data is transmitted to the receiving end, the receiving end realizes the reconstruction and decoding of the label data through the decoding part of the network; because the network structure is integrated, the network structure also serves as a key part when encoding compression and decoding reconstruction are carried out, namely, compression transmission of the label data is finished through the self-encoding network, and encryption of the label data transmission process is finished.
Disclosure of Invention
The invention provides a method and a system for manufacturing self-service bottled water label data in real time, which aim to solve the problems that the existing label data is low in transmission efficiency and easy to leak information, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for real-time making self-service bottled water label data, the method including the steps of:
collecting initial label data;
training a first self-coding network according to initial label data, inputting the initial label data into the trained first self-coding network, outputting reconstruction data of the initial label data, acquiring a first loss coefficient of the first self-coding network according to the initial label data and the reconstruction data, acquiring a coding vector of each input layer neuron and a reconstruction vector of each output layer neuron, acquiring a first similarity between any two input layer neurons according to the coding vectors of different input layer neurons, and acquiring a second similarity of each input layer neuron according to the coding vectors of the input layer neurons and the reconstruction vectors of the corresponding output layer neurons;
acquiring connection consistency of any two input layer neurons according to the connection relation between any two input layer neurons and hidden layer neurons, marking the ratio of the connection consistency between any two input layer neurons and the first similarity as the relative connection relation between the two input layer neurons, acquiring the connection relative deviation of any two input layer neurons according to the relative connection relation, and acquiring the information loss degree of each input layer neuron according to the connection relative deviation, the first similarity and the second similarity;
The sum of the information loss degrees of all the input layer neurons is recorded as a second loss coefficient, the sum of the first loss coefficient and the second loss coefficient is recorded as a comprehensive loss function, the loss function of the first self-coding network is modified into the comprehensive loss function, the modified first self-coding network is recorded as a second self-coding network, and the second self-coding network is trained through initial label data and the comprehensive loss function; inputting the initial label data into a trained second self-coding network, and marking the obtained output result as received label data;
and the self-help label manufacturing center completes the manufacturing of the self-help label according to the received label data.
Optionally, the method for obtaining the first loss coefficient of the first self-coding network according to the initial label data and the reconstruction data includes the following specific steps:
wherein,,a first loss factor representing a first self-encoding network,/->Representing a first self-encoding networkThe number of neurons of the input layer, +.>Indicate->The number of pixels encoding the input image by the input layer neurons,/a>Indicate->The number of pixels containing user design information in the part of the input layer neuron encoding the input image, < > >Indicate->The input layer neurons encode the input image +.>The number of pixels in the neighborhood of pixels containing user design information, < >>Representing the number of neighborhood pixels of a pixel, < +.>Indicate->The input layer neurons encode the input image +.>Pixel value of each pixel, +.>Indicate->Individual transportThe corresponding +.>The corresponding reconstructed output image part of the output layer neurons is +.>Pixel value of each pixel, +.>Representing absolute value;
and labeling the pixel points containing the user design information in the initial labeling data acquisition process.
Optionally, the method for obtaining the coding vector of each input layer neuron and the reconstruction vector of each output layer neuron includes the following specific steps:
marking any input layer neuron as a target input layer neuron, acquiring a part of the target input layer neuron for encoding the initial label data, connecting pixel values of each pixel point of the encoding part line by line end to obtain a vector corresponding to the target input layer neuron, and marking the vector as an encoding vector of the target input layer neuron for the initial label data; acquiring a coding vector of each input layer neuron for initial label data;
Marking any output layer neuron as a target output layer neuron, acquiring a decoded part of reconstruction data corresponding to the target output layer neuron, obtaining a vector corresponding to the target output layer neuron, and marking the vector as a reconstruction vector of the target output layer neuron for the initial label data; a reconstruction vector for each output layer neuron for the initial label data is obtained.
Optionally, the method for obtaining the connection consistency of any two input layer neurons includes the following specific steps:
wherein,,indicate->Input layer neurons and->Connection consistency of individual input layer neurons, < >>Indicate->Input layer neurons->Indicate->Input layer neurons->Indicate->The number of hidden layer neurons connected by the input layer neurons, < >>Indicate->Input layer neurons and->The number of hidden layer neurons to which the individual input layer neurons are commonly connected, < >>Represents the first part of the common connection>Hidden layer neurons and->The connection weights of the neurons of the individual input layers,represents the first part of the common connection>Hidden layer neurons and->Connection weights of neurons of the individual input layers, +.>Representing absolute value>A minimum value for avoiding a denominator of 0 is indicated.
Optionally, the method for obtaining the connection relative deviation of any two input layer neurons according to the relative connection relation includes the following specific steps:
taking any one relative connection relation as a target relative connection relation, acquiring a preset number of relative connection relations with the smallest difference with the target relative connection relation, and taking the reciprocal of the absolute value of the difference between the preset number of the relative connection relations with the smallest difference and the target relative connection relation as the relative density of the target relative connection relation; acquiring the relative density of each relative connection relation, and marking the relative connection relation with the maximum relative density as a standard connection relation, wherein the relative connection relation is expressed asThe method comprises the steps of carrying out a first treatment on the surface of the First->Input layer neurons and->Relative deviation of the connections of the neurons of the individual input layers +.>The calculation method of (1) is as follows:
wherein,,indicate->Input layer neurons and->The consistency of the connections of the neurons of the individual input layers,indicate->Input layer neurons and->First similarity of neurons of the input layer, < >>Representing standard connection relations,/-, and>representing absolute value>A minimum value for avoiding a denominator of 0 is indicated.
Optionally, the obtaining the information loss degree of each input layer neuron according to the connection relative deviation, the first similarity and the second similarity includes the following specific methods:
Acquisition and the firstFirst similarity of neurons of the input layer is maximum +.>Input layer neurons, noteIs->Similar input layer neurons of the individual input layer neurons, th->Information loss degree of neurons of the individual input layers +.>The calculation method of (1) is as follows:
wherein,,representing the number of neurons of a similar input layer, +.>Indicate->Input layer neurons->Indicate->Output layer neurons->Indicate->The +.o of the neurons of the input layer>A similar input layer neuron->Indicate->The +.o of the neurons of the input layer>Output layer neurons corresponding to the similar input layer neurons,/->Indicate->Input layer neurons and->The +.o of the neurons of the input layer>Relative deviation of the connections of neurons of the individual similar input layers, < >>Indicate->The output layer neurons and->The +.o of the neurons of the input layer>The connection relative deviation of the output layer neurons corresponding to the similar input layer neurons, +.>Indicate->The +.o of the neurons of the input layer>Second similarity of neurons of the similar input layer, < ->The normalization process is represented by the process of normalization,the normalized object is the degree of information loss for all input layer neurons.
In a second aspect, another embodiment of the present invention provides a system for real-time production of self-service bottled water label data, the system comprising:
The label data acquisition module acquires initial label data;
and the label data transmission module: training a first self-coding network according to initial label data, inputting the initial label data into the trained first self-coding network, outputting reconstruction data of the initial label data, acquiring a first loss coefficient of the first self-coding network according to the initial label data and the reconstruction data, acquiring a coding vector of each input layer neuron and a reconstruction vector of each output layer neuron, acquiring a first similarity between any two input layer neurons according to the coding vectors of different input layer neurons, and acquiring a second similarity of each input layer neuron according to the coding vectors of the input layer neurons and the reconstruction vectors of the corresponding output layer neurons;
acquiring connection consistency of any two input layer neurons according to the connection relation between any two input layer neurons and hidden layer neurons, marking the ratio of the connection consistency between any two input layer neurons and the first similarity as the relative connection relation between the two input layer neurons, acquiring the connection relative deviation of any two input layer neurons according to the relative connection relation, and acquiring the information loss degree of each input layer neuron according to the connection relative deviation, the first similarity and the second similarity;
The sum of the information loss degrees of all the input layer neurons is recorded as a second loss coefficient, the sum of the first loss coefficient and the second loss coefficient is recorded as a comprehensive loss function, the loss function of the first self-coding network is modified into the comprehensive loss function, the modified first self-coding network is recorded as a second self-coding network, and the second self-coding network is trained through initial label data and the comprehensive loss function; inputting the initial label data into a trained second self-coding network, and marking the obtained output result as received label data;
and the self-service label manufacturing center completes the manufacturing of the self-service label according to the received label data.
The beneficial effects of the invention are as follows: the invention obtains a first loss coefficient reflecting information loss through the difference between input data and output data in the existing first transmission model, respectively obtains first similarity among input units to reflect the similarity of different input units on coding characteristics, and simultaneously obtains second similarity between the input units and corresponding output units to reflect the information loss of each input unit; obtaining connection consistency between the input units according to connection relations between different input units and the hidden units, wherein a relative relation between the connection consistency and the first similarity represents connection relative deviation between the input units, and further quantification of information loss degree of each input unit is completed according to the connection relative deviation and the combination of the first similarity and the second similarity; the comprehensive loss function is obtained by combining the information loss degree with the first loss coefficient, a second transmission model is constructed by the comprehensive loss function, compression and transmission of the initial tag data are completed according to the second transmission model, so that the received tag data which are finally used for manufacturing cannot have more information loss compared with the initial tag data, the requirement of personalized design of a user is guaranteed, the compression and transmission of the initial tag data are carried out, meanwhile, the safe encryption of the tag data transmission process is guaranteed through the transmission model, the transmission of the real-time tag data is guaranteed, and meanwhile, the information safety of the tag data is protected.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
Fig. 1 is a flowchart of a method for real-time making of self-service bottled water label data according to an embodiment of the present invention;
fig. 2 is a block diagram of a system for real-time production of self-service bottled water label data according to a third embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The self-coding network is a network structure capable of coding and decoding data, the label data is input into the self-coding network, the compression of the label data is realized through the coding part of the network, after the compressed data is transmitted to the receiving end, the receiving end realizes the reconstruction and decoding of the label data through the decoding part of the network; because the network structure is integrated, the network structure also serves as a key part when encoding compression and decoding reconstruction are carried out, namely, compression transmission of the label data is finished through the self-encoding network, and encryption of the label data transmission process is finished.
Referring to fig. 1, a flowchart of a method for real-time making of self-service bottled water label data according to an embodiment of the invention is shown, the method includes the following steps:
and S001, collecting initial label data.
And collecting labels manufactured by users and marking the labels as initial label data for subsequent label data transmission.
Step S002, a first transmission model of the label data is constructed, a first loss coefficient of the first transmission model is obtained according to the difference between the input data and the output data, a first similarity between different input units is obtained, and meanwhile, a second similarity of each input unit is obtained according to the corresponding relation between the input units and the output units.
It should be noted that, in the existing first transmission model, the first loss coefficient reflecting the information loss is only obtained from the difference between the input data and the output data, the first loss coefficient does not consider the difference of the expression degrees of different input units and output units on the information in the encoding and decoding process, and the confidentiality degree of the label data transmission process is not enough, so that the label data is easy to be violently cracked to generate information leakage; therefore, the similarity between the input units needs to be analyzed, meanwhile, the similarity between the input units and the output units is analyzed, the information loss of different input units of the transmission model is reflected according to the similarity, and further, the loss function of the transmission model is further adjusted according to the information loss, so that the transmission model is more private and is difficult to crack, and information loss and leakage are caused.
Step S003, the connection consistency of any two input units is obtained according to the connection relation between any two input units and the hidden units, the connection relative deviation of any two input units is obtained according to the connection consistency and the first similarity between the input units, and the information loss degree of each input unit is obtained according to the connection relative deviation, the first similarity and the second similarity.
After the first similarity and the second similarity are obtained, connection consistency between the input units is obtained according to connection relations between different input units and the hidden units, the connection consistency is used for reflecting the similarity relation between the input units, the information loss degree in each input unit is obtained through mutual verification of the similar input units, a subsequent loss function is adjusted through the information loss degree, the transmission model is enabled to be attached to the label data more, and privacy of the label data transmission process is further improved through information loss quantification.
Step S004, a second loss coefficient of the first transmission model is obtained according to the information loss degree of each input unit, a comprehensive loss function is obtained according to the first loss coefficient and the second loss coefficient, a second transmission model is built, and the received label data is obtained by inputting the initial label data into the second transmission model.
It should be noted that, the comprehensive loss function is obtained by combining the first loss coefficient with the information loss degree of each input unit, and the second transmission model is trained by the comprehensive loss function and the initial label data, so that the information loss can be reduced when the second transmission model compresses and transmits the initial label data, meanwhile, the relationship between the second transmission model and the initial label data can be further ensured by acquiring the comprehensive loss function, the privacy of the second transmission model is improved, and the safe transmission in the label data transmission process is ensured.
And step S005, the self-help label manufacturing center completes the manufacturing of the self-help label according to the received label data.
After the self-help label making center obtains the received label data, the received label data is made, and the made label is pasted on bottled water through a self-help labeling machine, so that the user personalized design label is made.
The invention provides a second embodiment of a method for making self-service bottled water label data in real time, which specifically comprises the following steps:
and S101, collecting initial label data.
The purpose of the embodiment is to make and paste the personalized label made by the user, the user enters a self-help labeling machine label making program at the mobile phone end through code scanning, the personalized label is made, the label data made by the user are transmitted to a self-help label making center for making, and the made personalized label is pasted into bottled water; therefore, the personalized label manufactured by the user needs to be obtained firstly and can be directly obtained through a self-help title machine label manufacturing program, the personalized label manufactured by the user is marked as initial label data, and the initial label data is transmitted to a self-help label manufacturing center and manufactured later.
Thus, the initial tag data to be transmitted is obtained.
Step S102, a first self-coding network of the label data is constructed, a first loss coefficient of the first self-coding network is obtained according to the difference between the input data and the output data, a first similarity between neurons of different input layers is obtained, and meanwhile, a second similarity of each neuron of the input layers is obtained according to the corresponding relation between the neurons of the input layers and the neurons of the output layers.
It should be noted that, the loss function of the self-coding network mainly reflects the difference between the input image and the reconstructed output image, the neurons of different input layers have different expressions on the image, and the coding relationship has the difference, and the larger the difference between the coding relationships is, the larger the loss on the image information is; the method comprises the steps of utilizing the expression degree of input layer neurons and output layer neurons on image information and the deviation between corresponding coding relations of the input layer neurons to reflect the loss of information before the input image and the reconstruction of the output image, and further obtaining a comprehensive loss function; constructing and training a second self-coding network according to the comprehensive loss function, and completing transmission of initial label data through the second self-coding network; therefore, it is first necessary to construct a first self-encoding network based on the existing self-encoding network structure, obtain a first loss coefficient of the first self-encoding network, and obtain the similarity between the neurons of the input layer and the neurons of the output layer.
Specifically, a first self-coding network is firstly constructed, and the first self-coding network is constructed by adopting the existing VGG-net structure, wherein the first self-coding network is the first transmission model in the first embodiment; inputting the initial label data into a first self-coding network to obtain output data, wherein a loss function adopts a mean square error function between the input data and the output data; completing the construction and training of the first self-coding network by converging the loss function; inputting the initial label data into a trained first self-coding network, and outputting reconstruction data of the initial label data; it should be noted that, the number of the input layer neurons and the number of the output layer neurons in the trained first self-coding network are equal, the number of the hidden layer neurons is smaller than the number of the input layer neurons and the number of the output layer neurons, the input layer neurons are the input units in the first embodiment, the output layer neurons are the output units in the first embodiment, and the hidden layer neurons are the hidden units in the first embodiment.
It should be further noted that, the self-coding network training basis is to perform feedback adjustment on the weight of the network structure according to the loss function, so that the quality of the loss function affects the compression and reconstruction effects of the self-coding network on the image; the loss function of a general self-coding neural network is mainly used for representing the information loss of a reconstructed image according to the difference between an input image and an output image, but in practice, the expression degree of neurons of different input layers on image information is different, so that the influence degree of neurons of different output layers on the loss function is different; it is therefore necessary to obtain a first loss factor of the first self-encoding network from the initial tag data and the reconstructed initial tag data.
Specifically, a first loss coefficient of a first self-coding network is obtained according to the initial label data and the reconstructed initial label dataThe specific calculation method of (a) is as follows:
wherein,,representing the number of input layer neurons in the first self-encoding network,/and>indicate->The number of pixels encoding the input image by the input layer neurons,/a>Indicate->The number of pixels containing user design information in the part of the input layer neuron encoding the input image, < >>Indicate->The input layer neurons encode the input image +.>The number of pixels in the neighborhood of pixels containing user design information, < >>Representing the number of neighborhood pixels of a pixel, the present embodiment employs +.>Calculating, namely analyzing eight neighborhood pixel points of the pixel points; />Indicate->The input layer neurons encode the input image +.>Pixel value of each pixel, +.>Indicate->The corresponding +.>The corresponding reconstructed output image part of the output layer neurons is +.>Pixel value of each pixel, +.>Representing absolute value; the pixel points containing the user design information are already marked in the process of obtaining the initial label data, in this embodiment, the user performs personalized label design on a white area, wherein the pixel value corresponding to white is 255, and then the pixel value in the initial label data is not equal to 255 The pixel points of the image sensor are the pixel points containing the user design information; the more the number of pixels containing user design information in the input layer neuron coding portion, the greater the impact on the loss; the pixel points containing user design information in the neighborhood, the larger the pixel value difference between the input layer neuron and the output layer neuron is, the larger the information loss of the output data is, so that a first loss coefficient of the first self-coding network is constructed.
Further, for any input layer neuron, acquiring a part of the input layer neuron for encoding an input image, wherein the input image is initial label data, connecting pixel values of each pixel point of the encoding part line by line end to obtain a vector corresponding to the input layer neuron, and marking the vector as an encoding vector of the input layer neuron for the initial label data; acquiring the coding vector of each input layer neuron for the initial label data according to the method; for any output layer neuron, the decoded part in the reconstructed image corresponding to the output layer neuron is also obtained, and then a vector corresponding to the output layer neuron is obtained and is recorded as a reconstructed vector of the output layer neuron for the initial label data; and obtaining a reconstruction vector of each output layer neuron for the initial label data according to the method.
Further, for any two input layer neurons, calculating cosine similarity between the coding vectors of the two input layer neurons for the initial label data, and recording the cosine similarity as first similarity between the two input layer neurons; since the number of the input layer neurons is equal to that of the output layer neurons, a corresponding relation exists between the input layer neurons and the output layer neurons with the same ordinal number, and therefore, for any one input layer neuron and the corresponding output layer neuron, the cosine similarity between the coding vector of the input layer neuron and the reconstruction vector of the output layer neuron is calculated, and the cosine similarity is recorded as the second similarity of the input layer neuron; the first similarity between any two input layer neurons and the second similarity of each input layer neuron are obtained according to the method.
Thus, according to the initial label data and the first self-coding network, a first loss coefficient, a first similarity between the neurons of the input layer and a second similarity of each neuron of the input layer are obtained.
Step S103, obtaining connection consistency of any two input layer neurons according to the connection relation between any two input layer neurons and hidden layer neurons, obtaining connection relative deviation of any two input layer neurons according to the connection consistency and first similarity between the input layer neurons, and obtaining information loss degree of each input layer neuron according to the connection relative deviation, the first similarity and the second similarity.
It should be noted that, after the first loss coefficient is obtained, the whole white area is not filled in the user design process, so that the input image part encoded by part of the input layer neurons has similar appearance, that is, the first similarity between the input layer neurons is larger, and the encoding relationship between the input layer neurons with larger first similarity also has similar characteristics; in the self-coding network, the coding characteristics of the input layer neurons depend on the connection relation between the input layer neurons and the hidden layer neurons, and the coding relation between the similar input layer neurons can be represented by using the connection relation between the input layer neurons and the hidden layer neurons; the difference of the connection relations reflects the difference of the coding relations, the larger the difference is, the more inconsistent the coding relations are, and the worse the similar input layer neuron coding effect is, namely the larger the loss of information in the coding process is, so that the larger the loss function reflected by the corresponding input layer neuron is.
Specifically, for the firstInput layer neurons and->A plurality of input layer neurons, wherein->Connection consistency of neurons of two input layers +.>The calculation method of (1) is as follows:
wherein,,indicate- >Input layer neurons->Indicate->Input layer neurons->Indicate->The number of hidden layer neurons connected by the input layer neurons, < >>Indicate->Input layer neurons and->The number of hidden layer neurons to which the individual input layer neurons are commonly connected, < >>Represents the first part of the common connection>Hidden layer neurons and->Individual input layer neuronsConnection weight of->Represents the first part of the common connection>Hidden layer neurons and->Connection weights of neurons of the individual input layers, +.>Representing absolute value>Representing a minimum value for avoiding denominator 0, the present embodiment employs +.>Calculating; it should be noted that->Is based on->And->Calculate connection consistency, th->Input layer neurons and->The connection consistency of the neurons of the individual input layers is denoted +.>That is, the consistency of the connection between two input layer neurons varies with the reference of the input layer neurons; the greater the number of commonly connected hidden layer neurons, and the smaller the difference between the connection weights of commonly connected hidden layer neurons corresponding to different input layer neuronsThe smaller the difference of the connection relation is, the larger the connection consistency is; and obtaining the connection consistency between each input layer neuron and each other input layer neuron according to the method.
It should be further noted that, the greater the first similarity between the neurons of the input layer, the greater the connection consistency, the first similarity and the connection consistency are in a proportional relationship, however, there are hidden neurons which are not commonly connected between the neurons of the input layer and have the greater first similarity, and the smaller the connection consistency is, the greater the deviation of the coding feature is, the more serious the image information loss is caused, so the connection relative deviation can be obtained through the quantification of the first similarity and the connection consistency.
Specifically, the ratio of the connection consistency between any two input layer neurons and the first similarity is recorded as the relative connection relation between the two input layer neurons, and all the relative connection relations are obtained; it should be noted that, in the calculation process of the relative connection relationship, since the first similarity is 0, in the calculation process of the actual relative connection relationship, the first similarity and the minimum value are obtained firstThe result of the ratio of the connection consistency to the sum value is recorded as the relative connection relation, wherein +.>The method comprises the steps of carrying out a first treatment on the surface of the Taking any one relative connection relation as an example, obtaining the preset number of the relative connection relations with the smallest difference with the relative connection relation, calculating the preset number by adopting 4 in the embodiment, and taking the reciprocal of the absolute value of the difference between the four relative connection relations with the smallest difference and the relative connection relation as the relative density of the relative connection relation; obtaining the relative density of each relative connection relation, and marking the relative connection relation with the maximum relative density as a standard connection relation, wherein the standard connection relation is expressed as +. >The method comprises the steps of carrying out a first treatment on the surface of the Then->Input layer neurons and->Relative deviation of the connections of the neurons of the individual input layers +.>The calculation method of (1) is as follows:
wherein,,indicate->Input layer neurons and->The consistency of the connections of the neurons of the individual input layers,indicate->Input layer neurons and->First similarity of neurons of the input layer, < >>Representing standard connection relations,/-, and>representing absolute value>Representing a minimum value for avoiding denominator 0, the present embodiment employs +.>Calculating; the larger the difference between the relative connection relation and the standard connection relation of the two input layer neurons is, the larger the coding characteristic deviation of the two input layer neurons is, the larger the connection relative deviation is, and the larger the information loss of the image is; acquiring the connection relative deviation of any two input layer neurons according to the method, and acquiring the connection relative deviation of any two output layer neurons according to the method; it should be noted that, the connection relative deviation of the output layer neurons is obtained based on the cosine similarity between the reconstructed vectors of the output layer neurons and the connection relationship between the output layer neurons and the hidden layer neurons.
It should be further noted that, the input layer neurons and the output layer neurons of the corresponding relationship represent the information loss of the image, and the difference relationship between the neurons of different similarities reflects the information loss of the image, so that the information loss degree of each input layer neuron is obtained through the second similarity of the input layer neurons and the combination of the first similarity and the connection relative deviation in a comprehensive quantization manner.
Specifically, by the firstFor example, the input layer neurons are obtained with the greatest degree of similarity to their first degree of similarity +.>The input layer neurons, denoted by->Similar input layer neurons of the input layer neurons, which are similar input units in embodiment one, this embodiment adopts +.>Calculation is performed, then->Information loss degree of neurons of the individual input layers +.>The calculation method of (1) is as follows:
wherein,,representing the number of neurons of a similar input layer, +.>Indicate->Input layer neurons->Indicate->Output layer neurons->Indicate->The +.o of the neurons of the input layer>A similar input layer neuron->Indicate->The +.o of the neurons of the input layer>Output layer neurons corresponding to the similar input layer neurons,/->Indicate->Input layer neurons and->The +.o of the neurons of the input layer>Relative deviation of the connections of neurons of the individual similar input layers, < >>Indicate->The output layer neurons and->The +.o of the neurons of the input layer>The connection relative deviation of the output layer neurons corresponding to the similar input layer neurons, +.>Indicate->The +.o of the neurons of the input layer>Second similarity of neurons of the similar input layer, < ->Representing normalization processing, wherein linear normalization is adopted, and the normalization object is the information loss degree of all the neurons of the input layer; the larger the relative deviation of the connections of the input layer neurons and the output layer neurons, the larger the deviation between the coding features and the decoding features between the two input layer neurons and the corresponding output layer neurons, the more for the +. >The greater the likelihood of information loss for the individual input layer neurons, the greater the degree of information loss; the smaller the second similarity of the neurons of the similar input layer, the greater the information loss of the neurons of the similar input layer themselves, for the +.>The greater the probability of information loss caused by the neurons of the input layer, the greater the degree of information loss; the information loss degree of each input layer neuron is obtained according to the method.
Therefore, the quantification of the information loss degree of each input layer neuron is completed by acquiring the connection consistency, the relative connection relation and the connection relative deviation among the input layer neurons and combining the first similarity and the second similarity.
Step S104, obtaining a second loss coefficient of the first self-coding network according to the information loss degree of each input layer neuron, obtaining a comprehensive loss function according to the first loss coefficient and the second loss coefficient, constructing a second self-coding network, and obtaining received label data by inputting initial label data into the second self-coding network.
It should be noted that, at this time, the first loss coefficient of the first self-encoding network and the information loss degree of each input layer neuron have been obtained, a loss function is comprehensively constructed by combining the first loss coefficient with the information loss degree, the first self-encoding network is adjusted according to the comprehensive loss function to obtain the second self-encoding network, and then the transmission of the initial label data is completed by converging the comprehensive loss function.
Specifically, the sum of the information loss degrees of all the neurons of the input layer is recorded as a second loss coefficient, the sum of the first loss coefficient and the second loss coefficient is recorded as a comprehensive loss function, the loss function of the first self-coding network is modified into the comprehensive loss function, the modified first self-coding network is recorded as a second self-coding network, initial label data are input into the second self-coding network to obtain an output result, the comprehensive loss function is converged through the initial label data and the output result, and the training of the second self-coding network is completed; inputting the initial label data into a trained second self-coding network, and marking the obtained output result as received label data, wherein the initial label data is input into the trained second self-coding network, and an image corresponding to a hidden layer positioned at the center in the network structure is compressed data of the initial label data; the self-service label making center obtains the received label data according to the trained decoding part in the second self-coding network by transmitting the compressed data from the mobile phone end to the self-service label making center.
So far, the transmission of the initial label data is completed, and the self-service label manufacturing center receives the label data.
Step S105, the self-service label manufacturing center completes manufacturing of the self-service label according to the received label data.
After the self-help label making center obtains the received label data, the received label data is made, and the made label is pasted on bottled water through a self-help labeling machine, so that the user personalized design label is made.
Referring to fig. 2, a block diagram of a system for real-time making of self-service bottled water label data according to a third embodiment of the present invention is shown, where the system includes:
the label data acquisition module S201 acquires initial label data.
The tag data transmission module S202:
(1) Constructing a first transmission model of the label data, acquiring a first loss coefficient of the first transmission model according to the difference between the input data and the output data, acquiring a first similarity between different input units, and acquiring a second similarity of each input unit according to the corresponding relation between the input unit and the output unit;
(2) Acquiring connection consistency of any two input units according to the connection relation between any two input units and the hidden unit, acquiring connection relative deviation of any two input units according to the connection consistency and the first similarity between the input units, and acquiring the information loss degree of each input unit according to the connection relative deviation, the first similarity and the second similarity;
(3) And obtaining a second loss coefficient of the first transmission model according to the information loss degree of each input unit, obtaining a comprehensive loss function according to the first loss coefficient and the second loss coefficient, constructing a second transmission model, and obtaining received label data by inputting initial label data into the second transmission model.
And the label data making module S203 is used for completing the making of the self-service label according to the received label data by the self-service label making center.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (5)

1. The method for making the self-service bottled water label data in real time is characterized by comprising the following steps of:
collecting initial label data;
training a first self-coding network according to initial label data, inputting the initial label data into the trained first self-coding network, outputting reconstruction data of the initial label data, acquiring a first loss coefficient of the first self-coding network according to the initial label data and the reconstruction data, acquiring a coding vector of each input layer neuron and a reconstruction vector of each output layer neuron, acquiring a first similarity between any two input layer neurons according to the coding vectors of different input layer neurons, and acquiring a second similarity of each input layer neuron according to the coding vectors of the input layer neurons and the reconstruction vectors of the corresponding output layer neurons;
Acquiring connection consistency of any two input layer neurons according to the connection relation between any two input layer neurons and hidden layer neurons, marking the ratio of the connection consistency between any two input layer neurons and the first similarity as the relative connection relation between the two input layer neurons, acquiring the connection relative deviation of any two input layer neurons according to the relative connection relation, and acquiring the information loss degree of each input layer neuron according to the connection relative deviation, the first similarity and the second similarity;
the sum of the information loss degrees of all the input layer neurons is recorded as a second loss coefficient, the sum of the first loss coefficient and the second loss coefficient is recorded as a comprehensive loss function, the loss function of the first self-coding network is modified into the comprehensive loss function, the modified first self-coding network is recorded as a second self-coding network, and the second self-coding network is trained through initial label data and the comprehensive loss function; inputting the initial label data into a trained second self-coding network, and marking the obtained output result as received label data;
the self-help label manufacturing center completes the manufacturing of the self-help label according to the received label data;
The method for acquiring the first loss coefficient of the first self-coding network according to the initial label data and the reconstruction data comprises the following specific steps:
wherein (1)>A first loss factor representing a first self-encoding network,/->Representing the number of input layer neurons in the first self-encoding network,/and>indicate->The number of pixels encoding the input image by the input layer neurons,/a>Indicate->The number of pixels containing user design information in the part of the input layer neuron encoding the input image, < >>Indicate->The input layer neurons encode the input image +.>The number of pixels in the neighborhood of pixels containing user design information, < >>Representing the number of neighborhood pixels of a pixel, < +.>Indicate->The input layer neurons encode the input image +.>Pixel value of each pixel, +.>Indicate->The corresponding +.>The corresponding reconstructed output image part of the output layer neurons is +.>Pixel value of each pixel, +.>Representing absolute value;
the pixel points containing the user design information are marked in the initial labeling data acquisition process;
the specific method for obtaining the coding vector of each input layer neuron and the reconstruction vector of each output layer neuron comprises the following steps:
Marking any input layer neuron as a target input layer neuron, acquiring a part of the target input layer neuron for encoding the initial label data, connecting pixel values of each pixel point of the encoding part line by line end to obtain a vector corresponding to the target input layer neuron, and marking the vector as an encoding vector of the target input layer neuron for the initial label data; acquiring a coding vector of each input layer neuron for initial label data;
marking any output layer neuron as a target output layer neuron, acquiring a decoded part of reconstruction data corresponding to the target output layer neuron, obtaining a vector corresponding to the target output layer neuron, and marking the vector as a reconstruction vector of the target output layer neuron for the initial label data; a reconstruction vector for each output layer neuron for the initial label data is obtained.
2. The method for real-time making self-service bottled water label data according to claim 1, wherein the method for obtaining the connection consistency of any two input layer neurons comprises the following specific steps:
wherein (1)>Indicate->Input layer neurons and->Connection consistency of individual input layer neurons, < >>Indicate- >Input layer neurons->Indicate->Input layer neurons->Indicate->The number of hidden layer neurons connected by the input layer neurons, < >>Indicate->Input layer neurons and->The number of hidden layer neurons to which the individual input layer neurons are commonly connected, < >>Represents the first part of the common connection>Hidden layer neuron and the firstInput numberConnection weight of layer neurons, +.>Represents the first part of the common connection>Hidden layer neurons and->Connection weights of neurons of the individual input layers, +.>Representing absolute value>A minimum value for avoiding a denominator of 0 is indicated.
3. The method for real-time production of self-service bottled water label data according to claim 1, wherein the method for obtaining the connection relative deviation of any two input layer neurons according to the relative connection relation comprises the following specific steps:
taking any one relative connection relation as a target relative connection relation, acquiring a preset number of relative connection relations with the smallest difference with the target relative connection relation, and taking the reciprocal of the absolute value of the difference between the preset number of the relative connection relations with the smallest difference and the target relative connection relation as the relative density of the target relative connection relation; acquiring the relative density of each relative connection relation, and marking the relative connection relation with the maximum relative density as a standard connection relation, wherein the relative connection relation is expressed as The method comprises the steps of carrying out a first treatment on the surface of the First->Input layer neurons and the firstRelative deviation of the connections of the neurons of the individual input layers +.>The calculation method of (1) is as follows:
wherein (1)>Indicate->Input layer neurons and->Connection consistency of individual input layer neurons, < >>Indicate->Input layer neurons and->First similarity of neurons of the input layer, < >>Representing standard connection relations,/-, and>representing absolute value>A minimum value for avoiding a denominator of 0 is indicated.
4. The method for real-time making self-service bottled water label data according to claim 1, wherein the obtaining the information loss degree of each input layer neuron according to the connection relative deviation, the first similarity and the second similarity comprises the following specific steps:
acquisition and the firstFirst similarity of neurons of the input layer is maximum +.>The input layer neurons, denoted by->Similar input layer neurons of the individual input layer neurons, th->Information loss degree of neurons of the individual input layers +.>The calculation method of (1) is as follows:
wherein (1)>Representing the number of neurons of a similar input layer, +.>Indicate->Input layer neurons->Indicate->Output layer neurons->Indicate->The +.o of the neurons of the input layer>A similar input layer neuron- >Indicate->The +.o of the neurons of the input layer>Output layer neurons corresponding to the similar input layer neurons,/->Indicate->Input layer neurons and->The +.o of the neurons of the input layer>Relative deviation of the connections of neurons of the individual similar input layers, < >>Indicate->The output layer neurons and->The +.o of the neurons of the input layer>The connection relative deviation of the output layer neurons corresponding to the similar input layer neurons, +.>Indicate->The +.o of the neurons of the input layer>Second similarity of neurons of the similar input layer, < ->The normalization process is represented, and the normalization object is the information loss degree of all the neurons of the input layer.
5. A system for real-time production of self-service bottled water label data, the system comprising:
the label data acquisition module acquires initial label data;
and the label data transmission module: training a first self-coding network according to initial label data, inputting the initial label data into the trained first self-coding network, outputting reconstruction data of the initial label data, acquiring a first loss coefficient of the first self-coding network according to the initial label data and the reconstruction data, acquiring a coding vector of each input layer neuron and a reconstruction vector of each output layer neuron, acquiring a first similarity between any two input layer neurons according to the coding vectors of different input layer neurons, and acquiring a second similarity of each input layer neuron according to the coding vectors of the input layer neurons and the reconstruction vectors of the corresponding output layer neurons;
Acquiring connection consistency of any two input layer neurons according to the connection relation between any two input layer neurons and hidden layer neurons, marking the ratio of the connection consistency between any two input layer neurons and the first similarity as the relative connection relation between the two input layer neurons, acquiring the connection relative deviation of any two input layer neurons according to the relative connection relation, and acquiring the information loss degree of each input layer neuron according to the connection relative deviation, the first similarity and the second similarity;
the sum of the information loss degrees of all the input layer neurons is recorded as a second loss coefficient, the sum of the first loss coefficient and the second loss coefficient is recorded as a comprehensive loss function, the loss function of the first self-coding network is modified into the comprehensive loss function, the modified first self-coding network is recorded as a second self-coding network, and the second self-coding network is trained through initial label data and the comprehensive loss function; inputting the initial label data into a trained second self-coding network, and marking the obtained output result as received label data;
the self-service label manufacturing center completes the manufacturing of the self-service label according to the received label data;
The method for acquiring the first loss coefficient of the first self-coding network according to the initial label data and the reconstruction data comprises the following specific steps:
wherein (1)>A first loss factor representing a first self-encoding network,/->Representing the number of input layer neurons in the first self-encoding network,/and>indicate->Individual input layer neuronsThe number of pixels encoding the input image, is->Indicate->The number of pixels containing user design information in the part of the input layer neuron encoding the input image, < >>Indicate->The input layer neurons encode the input image +.>The number of pixels in the neighborhood of pixels containing user design information, < >>Representing the number of neighborhood pixels of a pixel, < +.>Indicate->The input layer neurons encode the input image +.>Pixel value of each pixel, +.>Indicate->The corresponding +.>The corresponding reconstructed output image part of the output layer neurons is +.>Pixel value of each pixel, +.>Representing absolute value;
the pixel points containing the user design information are marked in the initial labeling data acquisition process;
the specific method for obtaining the coding vector of each input layer neuron and the reconstruction vector of each output layer neuron comprises the following steps:
Marking any input layer neuron as a target input layer neuron, acquiring a part of the target input layer neuron for encoding the initial label data, connecting pixel values of each pixel point of the encoding part line by line end to obtain a vector corresponding to the target input layer neuron, and marking the vector as an encoding vector of the target input layer neuron for the initial label data; acquiring a coding vector of each input layer neuron for initial label data;
marking any output layer neuron as a target output layer neuron, acquiring a decoded part of reconstruction data corresponding to the target output layer neuron, obtaining a vector corresponding to the target output layer neuron, and marking the vector as a reconstruction vector of the target output layer neuron for the initial label data; a reconstruction vector for each output layer neuron for the initial label data is obtained.
CN202310579967.9A 2023-05-23 2023-05-23 Method and system for real-time making of self-service bottled water label data Active CN116305223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310579967.9A CN116305223B (en) 2023-05-23 2023-05-23 Method and system for real-time making of self-service bottled water label data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310579967.9A CN116305223B (en) 2023-05-23 2023-05-23 Method and system for real-time making of self-service bottled water label data

Publications (2)

Publication Number Publication Date
CN116305223A CN116305223A (en) 2023-06-23
CN116305223B true CN116305223B (en) 2023-08-04

Family

ID=86815310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310579967.9A Active CN116305223B (en) 2023-05-23 2023-05-23 Method and system for real-time making of self-service bottled water label data

Country Status (1)

Country Link
CN (1) CN116305223B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180024950A (en) * 2016-08-31 2018-03-08 주식회사 디자인모올 Records paper for disaster site
CN109443382A (en) * 2018-10-22 2019-03-08 北京工业大学 Vision SLAM closed loop detection method based on feature extraction Yu dimensionality reduction neural network
CN112381180A (en) * 2020-12-09 2021-02-19 杭州拓深科技有限公司 Power equipment fault monitoring method based on mutual reconstruction single-class self-encoder
CN112766223A (en) * 2021-01-29 2021-05-07 西安电子科技大学 Hyperspectral image target detection method based on sample mining and background reconstruction
CN114139522A (en) * 2021-11-09 2022-03-04 北京理工大学 Key information identification method based on level attention and label guided learning
CN114864108A (en) * 2022-07-05 2022-08-05 深圳市圆道妙医科技有限公司 Processing method and processing system for syndrome and prescription matching data
WO2023019329A1 (en) * 2021-08-19 2023-02-23 Souza Silveira Danielle Cainy Structural arrangement for an identification tag

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180024950A (en) * 2016-08-31 2018-03-08 주식회사 디자인모올 Records paper for disaster site
CN109443382A (en) * 2018-10-22 2019-03-08 北京工业大学 Vision SLAM closed loop detection method based on feature extraction Yu dimensionality reduction neural network
CN112381180A (en) * 2020-12-09 2021-02-19 杭州拓深科技有限公司 Power equipment fault monitoring method based on mutual reconstruction single-class self-encoder
CN112766223A (en) * 2021-01-29 2021-05-07 西安电子科技大学 Hyperspectral image target detection method based on sample mining and background reconstruction
WO2023019329A1 (en) * 2021-08-19 2023-02-23 Souza Silveira Danielle Cainy Structural arrangement for an identification tag
CN114139522A (en) * 2021-11-09 2022-03-04 北京理工大学 Key information identification method based on level attention and label guided learning
CN114864108A (en) * 2022-07-05 2022-08-05 深圳市圆道妙医科技有限公司 Processing method and processing system for syndrome and prescription matching data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于人工智能的视觉感知解码理论与方法研究;黄伟;CNKI博士学位论文数据库;全文 *

Also Published As

Publication number Publication date
CN116305223A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN111901829B (en) Wireless federal learning method based on compressed sensing and quantitative coding
Pareek et al. IntOPMICM: intelligent medical image size reduction model
WO2021208247A1 (en) Mimic compression method and apparatus for video image, and storage medium and terminal
WO2018150083A1 (en) A method and technical equipment for video processing
CN109919864A (en) A kind of compression of images cognitive method based on sparse denoising autoencoder network
CN110870310A (en) Image encoding method and apparatus
Zhang et al. Image robust adaptive steganography adapted to lossy channels in open social networks
Duan et al. Optimizing JPEG quantization table for low bit rate mobile visual search
Narmatha et al. A LS-compression scheme for grayscale images using pixel based technique
CN111641826A (en) Method, device and system for encoding and decoding data
CN116233445B (en) Video encoding and decoding processing method and device, computer equipment and storage medium
CN115426075A (en) Encoding transmission method of semantic communication and related equipment
CN116305223B (en) Method and system for real-time making of self-service bottled water label data
CN113660386B (en) Color image encryption compression and super-resolution reconstruction system and method
CN116600119B (en) Video encoding method, video decoding method, video encoding device, video decoding device, computer equipment and storage medium
CN115880762B (en) Human-machine hybrid vision-oriented scalable face image coding method and system
CN117242493A (en) Point cloud decoding, upsampling and model training method and device
CN115766965B (en) Test paper image file processing method and storage medium
WO2023184980A1 (en) Codebook-based method for performing image compression using self-encoding machine
CN113554719B (en) Image encoding method, decoding method, storage medium and terminal equipment
CN113132755B (en) Method and system for encoding extensible man-machine cooperative image and method for training decoder
Bao et al. Image Compression for Wireless Sensor Network: A Model Segmentation‐Based Compressive Autoencoder
Flamich et al. Compression without quantization
CN104735459B (en) Compression method, system and the video-frequency compression method of video local feature description
CN117793289A (en) Video transmission method, video reconstruction method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant