CN112862073A - Compressed data analysis method and device, storage medium and terminal - Google Patents

Compressed data analysis method and device, storage medium and terminal Download PDF

Info

Publication number
CN112862073A
CN112862073A (application CN202110150563.9A; granted as CN112862073B)
Authority
CN
China
Prior art keywords
compressed data
data
compressed
data analysis
analysis model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110150563.9A
Other languages
Chinese (zh)
Other versions
CN112862073B (en)
Inventor
田永鸿
马力
彭佩玺
邢培银
高文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202110150563.9A priority Critical patent/CN112862073B/en
Publication of CN112862073A publication Critical patent/CN112862073A/en
Application granted granted Critical
Publication of CN112862073B publication Critical patent/CN112862073B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a compressed data analysis method, an apparatus, a storage medium and a terminal. The method comprises the following steps: acquiring target data and compressing it to generate compressed data; inputting the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data; and outputting an analysis result corresponding to the target data. By combining a neural network with the selective batch normalization module and the feature constraint module to analyze compressed data, the embodiments of the application improve the analysis performance of the model and thereby the accuracy of the analysis result.

Description

Compressed data analysis method and device, storage medium and terminal
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a compressed data analysis method, a compressed data analysis apparatus, a storage medium, and a terminal.
Background
With the rapid development of machine learning, its techniques are applied ever more widely. Deep neural networks are widely used to discriminate and analyze different types of data in artificial intelligence and have made notable progress on unstructured data. In natural language processing in particular, recurrent neural networks and their variants perform well in speech recognition and in extracting features from speech and text. In the field of images, deep convolutional networks and their variants are widely applied in intelligent security, healthcare and other domains, and have made great progress in extracting features from pictures.
In current model-training schemes, training data is collected and compressed, then fed into the model for iterative training; once training is complete, the resulting model is deployed in the actual scene.
Disclosure of Invention
The embodiment of the application provides a compressed data analysis method and device, a storage medium and a terminal. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides a compressed data analysis method, where the method includes:
acquiring target data, compressing the target data and generating compressed data;
inputting the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data;
and outputting an analysis result corresponding to the target data.
Optionally, the pre-trained compressed data analysis model is generated according to the following steps:
collecting training data samples;
creating a compressed data analysis model;
preprocessing the compressed data analysis model to generate a processed compressed data analysis model;
acquiring a plurality of training data from training data samples, compressing the training data to generate compressed training data samples;
inputting uncompressed data samples and compressed training data samples in the training data samples into a processed compressed data analysis model for training, and outputting loss values of the model;
a pre-trained compressed data analysis model is generated based on the loss values.
Optionally, preprocessing the compressed data analysis model to generate a processed compressed data analysis model includes:
locating a normalization layer in the compressed data analysis model;
loading a pre-created selective batch normalization module and a feature constraint module;
replacing the positioned normalization layer with a selective batch normalization module to generate a compressed data analysis model after replacement;
locating a loss function in the compressed data analysis model after the replacement;
and mapping the characteristic constraint module into a loss function to generate a processed compressed data analysis model.
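The replacement step above can be illustrated with a framework-free sketch. The classes below are a minimal, hypothetical interpretation of the selective batch normalization module — one independent normalization branch per compression level, with the level choosing the branch; the patent does not fix the internal layout, and a real implementation would replace the normalization layers of a deep network rather than operate on plain lists of floats.

```python
import math

class BatchNorm1D:
    """Plain batch normalization over a batch of feature vectors
    (lists of floats): each feature dimension is normalized to zero
    mean and unit variance within the batch."""
    def __init__(self, eps=1e-5):
        self.eps = eps

    def __call__(self, batch):
        n, dim = len(batch), len(batch[0])
        out = [[0.0] * dim for _ in range(n)]
        for d in range(dim):
            col = [row[d] for row in batch]
            mean = sum(col) / n
            var = sum((v - mean) ** 2 for v in col) / n
            std = math.sqrt(var + self.eps)
            for i in range(n):
                out[i][d] = (col[i] - mean) / std
        return out


class SelectiveBatchNorm:
    """Hypothetical reading of the 'selective batch normalization
    module': one independent normalization branch per compression
    level; the input's level selects which branch is applied."""
    def __init__(self, levels):
        self.branches = {lvl: BatchNorm1D() for lvl in levels}

    def __call__(self, batch, level):
        return self.branches[level](batch)
```

In use, batches of features from data compressed at level 1 would pass through `branches[1]`, keeping per-level statistics separate, which is one plausible way to realize the routing the text describes.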
Optionally, generating a pre-trained compressed data analysis model based on the loss value includes:
when the loss value reaches a preset minimum threshold value, generating a pre-trained compressed data analysis model;
or
And when the loss value does not reach the preset minimum threshold, adjusting the parameters of the model and repeating the step of inputting the uncompressed data samples and the compressed training data samples into the processed compressed data analysis model for training, until the loss value of the model reaches the preset minimum threshold and the pre-trained compressed data analysis model is generated.
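The threshold-driven loop just described can be sketched as follows. The toy model, learning rate, and threshold value below are placeholders for illustration, not values taken from the patent.

```python
def train_until_threshold(loss_fn, step_fn, params, threshold, max_iters=10_000):
    """Train until the loss reaches the preset minimum threshold,
    otherwise adjust the parameters and train again, mirroring the
    loop in the text."""
    loss = loss_fn(params)
    for _ in range(max_iters):
        if loss <= threshold:
            break
        params = step_fn(params)   # "adjust parameters of the model"
        loss = loss_fn(params)
    return params, loss

# Toy stand-in for the analysis model: find w with w * 2 ≈ 6.
loss_fn = lambda w: (w * 2.0 - 6.0) ** 2
step_fn = lambda w: w - 0.1 * 4.0 * (w * 2.0 - 6.0)   # gradient descent step
w, final_loss = train_until_threshold(loss_fn, step_fn, 0.0, threshold=1e-6)
```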
Optionally, during training the compressed data analysis model obtains the normalization module for each data sample from the selective batch normalization module based on the compression level of the compressed training data sample.
Optionally, the target data includes at least one of the following data: audio data, video data, picture data.
Optionally, the constraints applied by the feature constraint module include at least a probability-distribution spatial divergence constraint.
In a second aspect, an embodiment of the present application provides a compressed data analysis apparatus, including:
the data compression module is used for acquiring target data and compressing the target data to generate compressed data;
the data training module is used for inputting the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data;
and the result output module is used for outputting an analysis result corresponding to the target data.
In a third aspect, embodiments of the present application provide a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
In the embodiments of the application, a compressed data analysis apparatus first acquires target data and compresses it to generate compressed data; it then inputs the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data; finally, it outputs an analysis result corresponding to the target data. By combining a neural network with the selective batch normalization module and the feature constraint module to analyze compressed data, the application improves the analysis performance of the model and thereby the accuracy of the analysis result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flowchart of a compressed data analysis method according to an embodiment of the present application;
FIG. 2 is a process diagram of a compressed data analysis process provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of compressed data analysis model training in a compressed data analysis method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an apparatus for analyzing compressed data according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meanings of the above terms can be understood by those skilled in the art according to the specific case. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
In the technical scheme provided by the application, the compressed data is analyzed by combining the neural network with the selective batch normalization module and the feature constraint module, so that the analysis performance of the model is improved, the accuracy of the analysis result is further improved, and the detailed description is given by adopting an exemplary embodiment.
The compressed data analysis method provided by the embodiment of the present application will be described in detail below with reference to fig. 1 to 3. The method may be implemented by a computer program running on a compressed data analysis apparatus based on the von Neumann architecture. The computer program may be integrated into the application or may run as a separate tool application. The compressed data analysis apparatus in the embodiment of the present application may be a user terminal, including but not limited to: personal computers, tablet computers, handheld devices, in-vehicle devices, wearable devices, and computing devices or other processing devices connected to a wireless modem. User terminals may be called different names in different networks, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, wireless communication device, user agent, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), or terminal equipment in a 5G network or a future evolution network.
Referring to fig. 1, a schematic flow chart of a compressed data analysis method is provided in an embodiment of the present application. As shown in fig. 1, the method of the embodiment of the present application may include the following steps:
s101, acquiring target data, compressing the target data and generating compressed data;
the target data is data to be analyzed currently, and the target data comprises at least one of the following data: audio data, video data, picture data. The compressed data is data obtained by compressing target data by using a compression technology. Compression includes picture compression, video compression, and audio compression.
In one possible implementation, when the application is applied to picture classification, the target data is a picture to be classified. An image acquisition device first captures an image in real time and transmits it, by wire or wirelessly, to a back-end image processing device. When the image processing device detects an image transmission request, it acquires the image transmitted in real time by the acquisition device; upon acquiring the image, it loads an image compression tool pre-stored in memory and compresses the acquired image with it to generate compressed data. Picture compression formats include JPEG, WebP, and the like.
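As a rough, self-contained illustration of compressing data at selectable levels, the sketch below uses Python's `zlib`. Note the hedge: `zlib` is lossless, whereas the image codecs named above (JPEG, WebP) are lossy; it merely stands in so the example runs without image libraries, with the `level` argument playing the role of the compression grade.

```python
import zlib

def compress_at_level(data: bytes, level: int) -> bytes:
    """Compress with a tunable level, 1 (light) .. 9 (heavy).
    Stand-in for an image codec's quality setting; the level acts
    as the 'compression grade' that later selects a normalization
    branch."""
    return zlib.compress(data, level)

payload = b"example image bytes " * 200   # hypothetical raw data
light = compress_at_level(payload, 1)
heavy = compress_at_level(payload, 9)
```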
In another possible implementation, when the application is applied to speech recognition, the target data is speech to be recognized. A microphone first captures the current voice data and sends it, by wire or wirelessly, to a speech processing device. When the speech processing device detects a voice transmission instruction, it acquires the voice data captured by the microphone; upon acquiring it, the device loads a pre-stored voice compression tool and compresses the voice data with it to generate compressed data.
In another possible implementation, when the application is applied to video recognition, the target data is a video clip to be recognized. A camera first captures video clip data over a period of time and sends it, by wire or wirelessly, to a video processing device. When the video processing device detects a video transmission instruction, it acquires the video clip data captured by the camera; upon acquiring it, the device loads a pre-stored video compression tool and compresses the video data with it to generate compressed data.
S102, inputting the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data;
the pre-trained compressed data analysis model is a mathematical model generated after machine learning. The core module of the mathematical model comprises a selective batch normalization module and a feature constraint module.
Generally, model training proceeds as follows: first, training data samples are collected and a compressed data analysis model is created; the model is then preprocessed to generate a processed compressed data analysis model; next, a number of training data items are taken from the training data samples and compressed to generate compressed training data samples; finally, the uncompressed data samples and the compressed training data samples are input into the processed model for training, the loss value of the model is output, and the pre-trained compressed data analysis model is generated based on that loss value.
Further, when preprocessing the data analysis model to generate the processed model: the normalization layers in the compressed data analysis model are located; a pre-created selective batch normalization module and a feature constraint module are loaded; the located normalization layers are replaced by the selective batch normalization module to generate the replaced model; the loss function in the replaced model is located; and finally the feature constraint module is mapped into the loss function to generate the processed compressed data analysis model.
Further, when generating the pre-trained model based on the loss value: if the loss value reaches the preset minimum threshold, the pre-trained compressed data analysis model is generated; otherwise, the parameters of the model are adjusted and the step of inputting the uncompressed data samples and compressed training data samples into the processed model for training is repeated, until the loss value reaches the preset minimum threshold and the pre-trained compressed data analysis model is generated.
In an embodiment applied to picture classification, the compressed data model may be WRN-28-10. First the CIFAR-100 dataset (a common baseline) is obtained; then part of its data samples are compressed to generate compressed data; next, the normalization modules in the original WRN-28-10 are replaced by the selective batch normalization module to generate a modified WRN-28-10; finally, both the compressed and the uncompressed data are input into the modified WRN-28-10 for training, with pictures of different compression levels routed to different normalization modules. A loss value is output; when it reaches the preset threshold, training ends and the trained WRN-28-10 is generated.
It should be noted that a feature constraint module is also added during the above training, constraining the difference between the features the model produces for an uncompressed picture and those it produces for the corresponding compressed picture to be as small as possible.
In one possible implementation, after the compressed data is generated it may be input into the pre-trained compressed data analysis model. At input time, the compression level (degree) of the compressed data is first obtained, where a higher level means heavier compression; the normalization module corresponding to that level is then selected from the selective batch normalization module inside the model; and the analysis result is output after model processing.
For example, to classify picture A: first input picture A into the classification model; during analysis, the corresponding normalization module is selected according to its compression degree; the model output is then the analysis result. For picture classification, that result is the confidence for each class.
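The level-based selection just described can be realized with the nearest-neighbor selection mode mentioned later in this description: pick the branch trained on the level closest to the input's level. The branch names and level values below are hypothetical.

```python
def select_branch(level, branches):
    """Nearest-neighbor selection: return the trained compression
    level whose value is closest to the input's level."""
    return min(branches, key=lambda trained: abs(trained - level))

# Hypothetical branches keyed by the compression level they were trained on.
branches = {0: "bn_uncompressed", 2: "bn_light", 5: "bn_medium", 9: "bn_heavy"}
chosen = select_branch(4, branches)   # nearest trained level to 4 is 5
```

Other selection modes named in the text (linear, bilinear, bicubic combination) would instead blend the statistics of neighboring branches rather than pick a single one.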
In another possible implementation, a neural network need not be used. Still taking picture classification as an example: suppose an SVM is originally used for classification. When the technique of the present application is applied, the original data is compressed, selective normalization is applied to the features extracted by the original method, and a feature constraint term is added to the original SVM objective.
It should be noted that, in both the training state and the test state, the normalization modules adopted by the selective batch normalization module include, but are not limited to, batch normalization. For data of different compression degrees, the compression process may differ in compression algorithm, compression parameters, or compression equipment. The selection modes adopted by the selective batch normalization module include, but are not limited to, the nearest-neighbor method, the linear combination method, the bilinear combination method, and the bicubic combination method. In the test state, the module selects the corresponding normalization module according to the compression degree of the test data. The features constrained by the feature constraint module include all features used for analysis, and the constraints include a feature-space distance constraint and a probability-distribution spatial divergence constraint.
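The probability-distribution divergence constraint can be sketched with a KL divergence between the feature distributions of an uncompressed input and its compressed counterpart. The patent does not fix the exact divergence, so KL here is only one plausible choice, and the feature values below are made up for illustration.

```python
import math

def softmax(xs):
    """Turn raw feature scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): one possible probability-distribution spatial
    divergence between uncompressed and compressed features; adding
    it to the loss pushes the two distributions together."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical feature scores for the same sample, raw vs. compressed.
feat_raw = softmax([2.0, 1.0, 0.5])
feat_cmp = softmax([1.8, 1.1, 0.4])
penalty = kl_divergence(feat_raw, feat_cmp)
```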
And S103, outputting an analysis result corresponding to the target data.
The analysis result is the value output by the model after a series of processing steps.
In general, the values output by the model may be used for various tasks such as classification, clustering, or similarity calculation.
For example, as shown in fig. 2 (a schematic diagram of the compressed data analysis process provided in the embodiment of the present application), the process can be divided into three modules: a data compression module, a selective batch normalization module, and a feature constraint module. The original data in the data compression module is first compressed to obtain compressed data; a normalization layer is then selected from the selective batch normalization module according to the compression level of the compressed data; and during processing, the features of the compressed data and of the original data are constrained by the feature constraint module.
In the embodiments of the application, a compressed data analysis apparatus first acquires target data and compresses it to generate compressed data; it then inputs the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data; finally, it outputs an analysis result corresponding to the target data. By combining a neural network with the selective batch normalization module and the feature constraint module to analyze compressed data, the application improves the analysis performance of the model and thereby the accuracy of the analysis result.
Referring to fig. 3, a flow chart of a training method of a pre-trained compressed data analysis model is provided for the embodiment of the present application. As shown in fig. 3, the method of the embodiment of the present application may include the following steps:
S201, collecting training data samples;
S202, creating a compressed data analysis model;
S203, locating the normalization layer in the compressed data analysis model;
S204, loading a pre-created selective batch normalization module and a feature constraint module;
S205, replacing the located normalization layer with the selective batch normalization module to generate the replaced compressed data analysis model;
S206, locating the loss function in the replaced compressed data analysis model;
S207, mapping the feature constraint module into the loss function to generate the processed compressed data analysis model;
S208, acquiring a plurality of training data from the training data samples and compressing them to generate compressed training data samples;
S209, inputting the uncompressed data samples and the compressed training data samples into the processed compressed data analysis model for training, and outputting the loss value of the model;
S210, when the loss value reaches the preset minimum threshold, generating the pre-trained compressed data analysis model;
S211, or, when the loss value does not reach the preset minimum threshold, adjusting the parameters of the model and repeating the step of inputting the uncompressed data samples and the compressed training data samples into the processed compressed data analysis model for training, until the loss value of the model reaches the preset minimum threshold and the pre-trained compressed data analysis model is generated.
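Steps S206-S207 map the feature constraint module into the loss function; in effect, the training objective becomes the task loss plus a weighted feature-consistency penalty. A minimal sketch, where the weight `lam` is a hypothetical hyperparameter not specified in the patent:

```python
def combined_loss(task_loss, constraint_penalty, lam=0.1):
    """Total objective once the feature constraint module is mapped
    into the loss: the original task loss plus a weighted penalty on
    the discrepancy between features of uncompressed data and of its
    compressed counterpart."""
    return task_loss + lam * constraint_penalty

# Hypothetical values standing in for a real training step's outputs.
total = combined_loss(task_loss=0.42, constraint_penalty=0.30, lam=0.1)
```

The loss value compared against the preset minimum threshold in S210/S211 would then be this combined value, so training only stops once both the task error and the feature discrepancy are small.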
In the embodiments of the application, a compressed data analysis apparatus first acquires target data and compresses it to generate compressed data; it then inputs the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data; finally, it outputs an analysis result corresponding to the target data. By combining a neural network with the selective batch normalization module and the feature constraint module to analyze compressed data, the application improves the analysis performance of the model and thereby the accuracy of the analysis result.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Referring to fig. 4, a schematic structural diagram of a compressed data analysis apparatus according to an exemplary embodiment of the present invention is shown. The compressed data analysis device may be implemented as all or part of the terminal by software, hardware, or a combination of both. The device 1 comprises a data compression module 10, a data training module 20 and a result output module 30.
The data compression module 10 is configured to obtain target data, compress the target data, and generate compressed data;
a data training module 20, configured to input the compressed data into a pre-trained compressed data analysis model, the model having been trained with a feature constraint module and with normalization modules selected from a selective batch normalization module according to the compression level of the compressed training data;
and the result output module 30 is used for outputting an analysis result corresponding to the target data.
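Purely as an illustration of how the three modules above could be wired together, the sketch below stubs out the analysis model and uses zlib as a stand-in compressor. The module names echo the patent text, but every internal detail here is an assumption, not the patented implementation:

```python
import zlib

class DataCompressionModule:
    """Obtains target data and compresses it to generate compressed data."""
    def run(self, target_data: bytes) -> bytes:
        return zlib.compress(target_data)

class DataTrainingModule:
    """Feeds the compressed data to a pre-trained analysis model (stubbed here)."""
    def __init__(self, model):
        self.model = model
    def run(self, compressed: bytes):
        return self.model(compressed)

class ResultOutputModule:
    """Outputs the analysis result corresponding to the target data."""
    def run(self, result):
        return result

class CompressedDataAnalysisDevice:
    """Device 1: data compression module 10, data training module 20, result output module 30."""
    def __init__(self, model):
        self.compressor = DataCompressionModule()
        self.trainer = DataTrainingModule(model)
        self.output = ResultOutputModule()
    def analyze(self, target_data: bytes):
        compressed = self.compressor.run(target_data)
        result = self.trainer.run(compressed)
        return self.output.run(result)

# Usage: a stand-in "model" that just reports the compressed size.
device = CompressedDataAnalysisDevice(model=lambda blob: {"size": len(blob)})
print(device.analyze(b"example target data" * 100))
```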
It should be noted that when the compressed data analysis apparatus provided in the foregoing embodiment executes the compressed data analysis method, the division into the above functional modules is merely illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the compressed data analysis apparatus and the compressed data analysis method provided in the above embodiments belong to the same concept; for details of the implementation process, refer to the method embodiments, which are not repeated here.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
The present invention also provides a computer readable medium having stored thereon program instructions which, when executed by a processor, implement the compressed data analysis method provided by the various method embodiments described above.
The present invention also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the compressed data analysis method of the above-described method embodiments.
Please refer to fig. 5, which provides a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 5, terminal 1000 can include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
The communication bus 1002 is used to enable communication among these components.
The user interface 1003 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. Using various interfaces and lines, the processor 1001 connects the components throughout the electronic device 1000, and performs the functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and by invoking the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one of the hardware forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU renders and draws the content to be displayed on the display screen; and the modem handles wireless communication. Alternatively, the modem may not be integrated into the processor 1001 and may instead be implemented as a separate chip.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 5, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a compressed data analysis application program.
In the terminal 1000 shown in fig. 5, the user interface 1003 is mainly used to provide an input interface for the user and to acquire the data input by the user, and the processor 1001 may be configured to call the compressed data analysis application stored in the memory 1005 and specifically perform the following operations:
acquiring target data, compressing the target data and generating compressed data;
inputting the compressed data into a pre-trained compressed data analysis model, where the model is generated by selecting, based on the compression grade of the compressed training data, different normalization modules from a selective batch normalization module, together with a feature constraint module, for training;
and outputting an analysis result corresponding to the target data.
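The step of "selecting different normalization modules based on the compression grade" can be sketched as a normalization module that keeps one set of running statistics per compression level and dispatches on the level of each incoming batch. This is one hypothetical reading of the selective batch normalization module, not the patent's exact design:

```python
import statistics

class SelectiveBatchNorm:
    """Keeps separate normalization statistics per compression level (assumed design)."""
    def __init__(self, levels, eps=1e-5):
        # one running (mean, var) estimate per compression level
        self.stats = {lvl: {"mean": 0.0, "var": 1.0, "n": 0} for lvl in levels}
        self.eps = eps

    def __call__(self, batch, level):
        s = self.stats[level]                       # select the branch by compression level
        mean = statistics.fmean(batch)
        var = statistics.pvariance(batch, mu=mean)
        # update the running statistics of this level only
        s["n"] += 1
        s["mean"] += (mean - s["mean"]) / s["n"]
        s["var"] += (var - s["var"]) / s["n"]
        # normalize the batch with the current batch statistics
        return [(x - mean) / (var + self.eps) ** 0.5 for x in batch]

sbn = SelectiveBatchNorm(levels=["low", "high"])
out = sbn([1.0, 2.0, 3.0, 4.0], level="high")
print(out)  # a zero-mean batch; only the "high"-level statistics were updated
```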
In one embodiment, before acquiring the target data, the processor 1001 further performs the following operations:
collecting training data samples;
creating a compressed data analysis model;
preprocessing the data analysis model to generate a processed compressed data analysis model;
acquiring a plurality of training data from training data samples, compressing the training data to generate compressed training data samples;
inputting uncompressed data samples and compressed training data samples in the training data samples into a processed compressed data analysis model for training, and outputting loss values of the model;
a pre-trained compressed data analysis model is generated based on the loss values.
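The training steps above can be sketched as follows. This is an assumption-laden stand-in: zlib compression levels represent the compression grades, and the "model" and "loss" are stubs, since the patent does not fix those details:

```python
import random
import zlib

def collect_training_samples(n=8):
    """Step 1: collect training data samples (random bytes as placeholders)."""
    random.seed(0)
    return [bytes(random.randrange(256) for _ in range(64)) for _ in range(n)]

def compress_samples(samples, levels=(1, 9)):
    """Steps 4: compress the training data at several grades, keeping each grade label."""
    return [(zlib.compress(s, level=lvl), lvl) for s in samples for lvl in levels]

def train_step(model, uncompressed, compressed_pairs):
    """Step 5: feed uncompressed and compressed samples to the model, output a loss value."""
    losses = [model(blob, lvl) for blob, lvl in compressed_pairs]
    losses += [model(s, None) for s in uncompressed]
    return sum(losses) / len(losses)

samples = collect_training_samples()
pairs = compress_samples(samples)
# stub "model": its "loss" is just a normalized length, for illustration only
loss = train_step(lambda blob, lvl: len(blob) / 64, samples, pairs)
print(round(loss, 3))
```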
In one embodiment, when the processor 1001 performs preprocessing on the data analysis model to generate a processed compressed data analysis model, the following operations are specifically performed:
locating a normalization layer in the compressed data analysis model;
loading a pre-created selective batch normalization module and a feature constraint module;
replacing the located normalization layer with a selective batch normalization module to generate a compressed data analysis model after replacement;
locating a loss function in the compressed data analysis model after the replacement;
and mapping the characteristic constraint module into a loss function to generate a processed compressed data analysis model.
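A minimal sketch of this preprocessing, under stated assumptions: the model is represented as a plain list of named layers, `SelectiveBatchNorm` is a placeholder class, and the feature constraint is mapped into the loss as a weighted KL-divergence term, one possible probability-distribution divergence constraint (the constraint form mentioned in claim 7), not necessarily the patented one:

```python
import math

class SelectiveBatchNorm:  # placeholder for the pre-created selective batch normalization module
    name = "selective_bn"

def kl_divergence(p, q):
    # feature constraint: divergence between two feature probability distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def preprocess(model_layers, base_loss):
    # steps 1-2: locate every normalization layer, swap in the selective batch norm module
    replaced = [SelectiveBatchNorm() if name == "batch_norm" else name
                for name in model_layers]
    # steps 3-4: locate the loss function and map the feature constraint into it
    def constrained_loss(pred, target, feat_p, feat_q, weight=0.1):
        return base_loss(pred, target) + weight * kl_divergence(feat_p, feat_q)
    return replaced, constrained_loss

layers = ["conv1", "batch_norm", "relu", "conv2", "batch_norm", "fc"]
new_layers, loss_fn = preprocess(layers, base_loss=lambda p, t: abs(p - t))
print([getattr(l, "name", l) for l in new_layers])
print(loss_fn(0.8, 1.0, [0.5, 0.5], [0.4, 0.6]))
```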
In one embodiment, the processor 1001, when executing the generation of the pre-trained compressed data analysis model based on the loss value, specifically performs the following operations:
when the loss value reaches a preset minimum threshold value, generating a pre-trained compressed data analysis model;
or
when the loss value does not reach the preset minimum threshold value, adjusting parameters of the model, and continuing to execute the step of inputting uncompressed data samples in the training data samples and the compressed training data samples into the processed compressed data analysis model for training, until the loss value of the model reaches the preset minimum threshold value, and then generating a pre-trained compressed data analysis model.
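The convergence rule described here, train and adjust parameters until the loss value reaches the preset minimum threshold, can be sketched as a simple loop. The "training round" stub, which merely halves the loss each round, is an assumption standing in for the real training step:

```python
def train_one_round(prev_loss):
    # stub: a round of parameter adjustment and retraining shrinks the loss
    return prev_loss * 0.5

def train_until_converged(initial_loss, threshold, max_rounds=100):
    """Repeat training until the loss value reaches the preset minimum threshold."""
    loss, rounds = initial_loss, 0
    while loss > threshold and rounds < max_rounds:
        loss = train_one_round(loss)   # adjust parameters and compute the new loss value
        rounds += 1
    return loss, rounds                # the model counts as pre-trained once loss <= threshold

final_loss, rounds = train_until_converged(initial_loss=1.0, threshold=0.01)
print(final_loss, rounds)
```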
Those skilled in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing associated hardware. The program for compressed data analysis may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit the scope of the present application; the present application is not limited thereto, and equivalent variations and modifications may be made to it.

Claims (10)

1. A method of compressed data analysis, the method comprising:
acquiring target data, compressing the target data and generating compressed data;
inputting the compressed data into a pre-trained compressed data analysis model; the pre-trained compressed data analysis model is generated by selecting, based on the compression grade of compressed training data, different normalization modules from a selective batch normalization module, together with a feature constraint module, for training;
and outputting an analysis result corresponding to the target data.
2. The method of claim 1, wherein generating the pre-trained compressed data analysis model comprises:
collecting training data samples;
creating a compressed data analysis model;
preprocessing the data analysis model to generate a processed compressed data analysis model;
acquiring a plurality of training data from the training data samples, and compressing the training data to generate compressed training data samples;
inputting uncompressed data samples in the training data samples and the compressed training data samples into the processed compressed data analysis model for training, and outputting loss values of the model;
generating a pre-trained compressed data analysis model based on the loss values.
3. The method of claim 2, wherein the pre-processing the data analysis model to generate a processed compressed data analysis model comprises:
locating a normalization layer in the compressed data analysis model;
loading a pre-created selective batch normalization module and a feature constraint module;
replacing the located normalization layer with the selective batch normalization module to generate a compressed data analysis model after replacement;
locating a loss function within the replaced compressed data analysis model;
and mapping the characteristic constraint module to a loss function to generate a processed compressed data analysis model.
4. The method of claim 2, wherein generating a pre-trained compressed data analysis model based on the loss values comprises:
when the loss value reaches a preset minimum threshold value, generating a pre-trained compressed data analysis model; or
when the loss value does not reach a preset minimum threshold value, adjusting parameters of the model, and continuing to execute the step of inputting uncompressed data samples in the training data samples and the compressed training data samples into the processed compressed data analysis model for training, until the loss value of the model reaches the preset minimum threshold value, and then generating a pre-trained compressed data analysis model.
5. The method of claim 4, wherein the compressed data analysis model is trained to obtain a normalization module for each data sample from a selective batch normalization module based on a compression level of the compressed training data sample.
6. The method of claim 1, wherein the target data comprises at least one of: audio data, video data, picture data.
7. The method of claim 3, wherein the manner in which the feature constraint module constrains comprises at least a probability distribution spatial divergence constraint.
8. A compressed data analysis apparatus, the apparatus comprising:
the data compression module is used for acquiring target data, compressing the target data and generating compressed data;
the data training module is used for inputting the compressed data into a pre-trained compressed data analysis model; the pre-trained compressed data analysis model is generated by selecting, based on the compression grade of compressed training data, different normalization modules from a selective batch normalization module, together with a feature constraint module, for training;
and the result output module is used for outputting the analysis result corresponding to the target data.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to perform the method steps according to any of claims 1-7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-7.
CN202110150563.9A 2021-02-03 2021-02-03 Compressed data analysis method and device, storage medium and terminal Active CN112862073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110150563.9A CN112862073B (en) 2021-02-03 2021-02-03 Compressed data analysis method and device, storage medium and terminal


Publications (2)

Publication Number Publication Date
CN112862073A true CN112862073A (en) 2021-05-28
CN112862073B CN112862073B (en) 2022-11-18

Family

ID=75987780


Country Status (1)

Country Link
CN (1) CN112862073B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565057A (en) * 2022-03-15 2022-05-31 中科三清科技有限公司 Machine learning-based grading field identification method and device, storage medium and terminal

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101828399A (en) * 2007-10-15 2010-09-08 高通股份有限公司 Scalable video coding techniques for scalable bitdepths
US9742435B1 (en) * 2016-06-21 2017-08-22 Vmware, Inc. Multi-stage data compression for time-series metric data within computer systems
US20170262962A1 (en) * 2016-03-11 2017-09-14 Qualcomm Incorporated Systems and methods for normalizing an image
CN108696649A (en) * 2017-04-06 2018-10-23 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and computer readable storage medium
CN111144561A (en) * 2018-11-05 2020-05-12 杭州海康威视数字技术股份有限公司 Neural network model determining method and device
CN112005255A (en) * 2018-05-03 2020-11-27 国际商业机器公司 Hierarchical random anonymization of data
CN112052916A (en) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 Data processing method and device based on neural network and readable storage medium
CN112119408A (en) * 2019-08-29 2020-12-22 深圳市大疆创新科技有限公司 Method for acquiring image quality enhancement network, image quality enhancement method, image quality enhancement device, movable platform, camera and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GORDON K. SMYTH et al.: "Statistical Issues in cDNA Microarray Data Analysis", METHODS IN MOLECULAR BIOLOGY *
HYEONSEOB NAM et al.: "Batch-Instance Normalization for Adaptively Style-Invariant Neural Networks", 32ND CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2018), MONTRÉAL, CANADA *
ZHANG DEYUAN et al.: "BN-cluster: an instance analysis of ensemble algorithms based on batch normalization", Journal of Shenyang Aerospace University *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant