CN116205283A - Data processing method, device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN116205283A (application CN202310478200.7A)
- Authority
- CN
- China
- Prior art keywords
- processing
- data
- data processing
- layer
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Neurology (AREA)
- Image Analysis (AREA)
Abstract
The application provides a data processing method, a data processing apparatus, an electronic device, and a computer-readable storage medium. In one specific embodiment, the method comprises the following steps: inputting data to be processed into a pre-trained data processing model, where the data processing model is trained based on training samples and quantization parameters, and the quantization parameters are determined from the output that a specific layer of the data processing model produces for the training samples; and processing the data to be processed through the data processing model to obtain target data adapted to it. Because the quantization parameters of the data processing model are determined from the specific layer's output for the training samples, the model performance of the data processing model is improved, and the accuracy of the processed data is improved accordingly.
Description
Technical Field
The present invention relates to the field of data processing, and in particular, to a data processing method, apparatus, electronic device, and computer readable storage medium.
Background
With the development of computer technology, neural network models are increasingly widely applied, for example to image processing, video processing, and speech recognition. A neural network model is first trained on sample data: its parameters are continually adjusted according to the difference between the actual output produced for the sample data and the desired output, until the model can be used to process real data.
In the related art, some parameters of the data processing model are typically set manually, based on experience, and are not modified afterwards. This limits the performance of the data processing model and, in turn, the accuracy of the processed data.
Disclosure of Invention
An object of an embodiment of the present application is to provide a data processing method, apparatus, electronic device, and computer readable storage medium, so as to improve accuracy of processing data.
In a first aspect, an embodiment of the present application provides a data processing method, including: inputting data to be processed into a pre-trained data processing model, where the data processing model is trained based on training samples and quantization parameters, and the quantization parameters are determined from the output that a specific layer of the data processing model produces for the training samples; and processing the data to be processed through the data processing model to obtain target data adapted to it. Because the quantization parameters of the data processing model are determined from the specific layer's output for the training samples, the model performance of the data processing model is improved, and the accuracy of the processed data is improved accordingly.
Optionally, the training process of the data processing model includes: dividing the output of the specific layer into at least two paths, where one path is used by the at least one processing layer after the specific layer to continue processing the training samples, and the other path is used to determine the quantization parameters respectively corresponding to the at least one processing layer; for each processing layer, continuing to process the training samples in combination with the quantization parameters corresponding to that layer; and training the data processing model based on the output of the last processing layer and the desired output. Because the quantization parameters are determined from the input training samples, they are better suited to processing those samples; having each processing layer after the specific layer continue processing the training samples in combination with its own quantization parameters therefore improves the model performance of the data processing model.
Optionally, the data processing model is obtained based on quantization-aware training; and determining the quantization parameters respectively corresponding to the at least one processing layer includes: for each processing layer, if the quantization parameter is to be propagated forward, determining it through a first preset function; and if the quantization parameter is to be back-propagated, determining it through a second preset function. In this way, the quantization parameter of each processing layer can be determined through different preset functions depending on whether it is propagated forward or backward, giving the quantization-aware training process a wider range of application.
Optionally, the data processing model is obtained based on post-training quantization; and determining the quantization parameters respectively corresponding to the at least one processing layer includes: determining different quantization parameter combinations for each training sample; training a processing structure based on the training samples and the quantization parameter combinations meeting a preset requirement; and determining, through the processing structure, the quantization parameters respectively corresponding to the at least one processing layer. In this way, the processing structure is trained on the training samples and on quantization parameter combinations meeting the preset requirement, so the quantization parameters it outputs after training can further improve the model performance of the data processing model obtained through post-training quantization.
Optionally, training the processing structure based on the training samples and the quantization parameter combination meeting the preset requirement includes: taking the training samples as the input of the processing structure and the quantization parameter combination meeting the preset requirement as the desired output; and training the processing structure based on the actual output for the training samples and the desired output. By taking a quantization parameter combination that satisfies the preset requirement as the desired output, the quantization parameters output by the trained processing structure approach that desired output, further improving the model performance of the data processing model obtained through post-training quantization.
Optionally, the processing structure used to determine the quantization parameters respectively corresponding to the at least one processing layer includes any one of the following: a convolution structure, a pooling structure, or a fully connected layer structure. Several processing structures that can determine the quantization parameters of a processing layer are thus provided; one path of the data output by the specific layer serves as the input of the processing structure, which then processes it in the corresponding manner.
Optionally, the data processing model is trained by embedded hardware. The embedded hardware is integrated with a functional module capable of determining the quantization parameter of each processing layer, so that the calculation rate during training of the data processing model can be improved, and the model performance is improved.
In a second aspect, an embodiment of the present application provides a data processing apparatus, including: an input module for inputting data to be processed into a pre-trained data processing model, where the data processing model is trained based on training samples and quantization parameters, and the quantization parameters are determined from the output that a specific layer of the data processing model produces for the training samples; and a processing module for processing the data to be processed through the data processing model to obtain target data adapted to it. Because the quantization parameters of the data processing model are determined from the specific layer's output for the training samples, the model performance of the data processing model is improved, and the accuracy of the processed data is improved accordingly.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing computer readable instructions that, when executed by the processor, perform the steps of the method as provided in the first aspect above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as provided in the first aspect above.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 2 is a block diagram of a data processing model according to an embodiment of the present application;
FIG. 3 is a block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device for performing a data processing method according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
It should be noted that embodiments or technical features of embodiments in the present application may be combined without conflict.
In the related art, the low performance of the data processing model leads to low accuracy of the processed data. To solve this problem, the present application provides a data processing method, apparatus, electronic device, and computer-readable storage medium: data is processed through a data processing model whose model parameters are determined from the output produced for the training samples. Because the model parameters are adapted to the training samples, they are better suited to the data processing, which improves the accuracy of the processed data.
In some application scenarios, the data processing method may be applied to a terminal device such as a computer or a tablet computer, and may also be applied to a server, a server cluster, or a cloud platform that provides a data processing service. For convenience, the application is described below as applied to a server.
The drawbacks of the above related-art solutions are results obtained by the inventor after practice and careful study. Therefore, the discovery of the above problems, as well as the solutions that the embodiments of the present invention propose for them below, should both be regarded as contributions of the inventor to the present invention.
Referring to fig. 1, a flowchart of a data processing method according to an embodiment of the present application is shown. As shown in fig. 1, the data processing method may include the following steps 101 to 102.
Step 101, inputting the data to be processed into a pre-trained data processing model. The data to be processed may include, for example, image data, video data, voice data, and the like.
In some application scenarios, the server may process the data to be processed using a data processing model. The data processing model may include, for example, a classification model, an image denoising model (Image Signal Processor, ISP), and the like.
The data processing model can be obtained through training samples and quantization parameters. The quantization parameters may include hyperparameters and other model parameters; the hyperparameters may include, for example, the number of weight quantization bits and the number of feature quantization bits.
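As an illustration of how these bit-width hyperparameters affect processing, the following minimal sketch applies symmetric uniform ("fake") quantization at two bit widths; the function and the example values are hypothetical and not the patent's exact scheme:

```python
import numpy as np

def quantize(x, num_bits):
    """Symmetric per-tensor uniform quantization ("fake quant"):
    round onto a num_bits grid, then map back to floats.
    Illustrative only, not the patent's exact scheme."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

weights = np.array([0.5, -1.0, 0.25, 0.75])
w8 = quantize(weights, 8)   # fine grid: small rounding error
w2 = quantize(weights, 2)   # coarse grid: large rounding error
```

More bits give a finer grid and smaller rounding error, which is why choosing per-layer bit widths well matters for model performance.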
Further, the data processing model may include a plurality of processing layers, where each processing layer may perform a corresponding processing on the training sample, and after the processing, an output of the current processing layer for the training sample may be obtained. For example, for the fourth processing layer, it may perform a processing operation such as compression on the picture, and then may output a compression result for the training sample.
Then, one of the processing layers of the data processing model may be designated as the specific layer. For example, designating the third layer as a particular layer, designating the first layer as a particular layer, and so on.
In some application scenarios, because hyperparameters need to be set manually and empirically, the hyperparameters of the first one or first few processing layers of the data processing model may be set in this way, and the last of these specified layers is then determined to be the specific layer. The quantization parameters are then determined from the output that this specified specific layer produces for the training samples.
Step 102, processing the data to be processed through the data processing model to obtain target data adapted to the data to be processed.
After the server inputs the data to be processed into the data processing model, the data processing model can process the data to be processed to obtain the adaptive target data.
In some application scenarios, the data to be processed may include, for example, image data to be processed, and after the image data to be processed is input into the data processing model, the data processing model may process the image data to obtain the target data.
These application scenarios may include, for example, security scenarios: the data processing model processes an input captured image, such as an image of the entrance of a hall, and outputs person data found in the image, such as the gender and number of people entering.
These application scenarios may also include, for example, autonomous driving. Similarly, the data processing model processes an input captured image, such as an image of the scene directly in front of the vehicle, and outputs object data found in the image, such as whether there is another vehicle or a pedestrian ahead.
In this embodiment, the quantization parameters of the data processing model are determined from the specific layer's output for the training samples, so the model performance of the data processing model is improved, and the accuracy of the processed data is improved accordingly. It should be noted that the measure of model performance may differ between data processing models. For example, for a classification network, model performance may be measured by the accuracy of data classification; for an image denoising model, it may be measured by the Peak Signal-to-Noise Ratio (PSNR).
In some alternative implementations, the data processing model is trained by embedded hardware.
In the related art, network model accelerators reduce the quantization bit width to cut the amount of computation and the memory footprint, and thereby increase the computation rate. Doing so, however, reduces the performance of the model.
In some application scenarios, the data processing model described above may be trained by embedded hardware. The embedded hardware is integrated with a functional module capable of determining the quantization parameter of each processing layer, so that the calculation rate during training of the data processing model can be improved, and the model performance is improved.
In other application scenarios, the data processing model may instead be trained by a computer, a server, or the like; the training process is the same as, or similar to, training with embedded hardware. For ease of description, the following is set forth in terms of training on embedded hardware.
In some alternative implementations, the training process of the data processing model may include:
firstly, dividing the output of the specific layer into at least two paths; one path is used for enabling at least one processing layer behind the specific layer to continuously process training samples, and the other path is used for determining quantization parameters corresponding to the at least one processing layer respectively;
in some application scenarios, embedded hardware may divide the output of a specific layer specified. The output of the specific layer can be divided into multiple paths, wherein one path is used as the input of other processing layers after the specific layer, and the other path is used for determining quantization parameters respectively corresponding to the processing layers after the specific layer.
In these application scenarios, the divided paths may carry the same data or different data. For example, if the output of the specific layer is "ABCD", one path may carry "AB" and the other "CD"; alternatively, both paths may carry "ABCD".
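The two-path split described above can be sketched as follows; the function, the string stand-in for a layer's output, and the mode names are illustrative assumptions only (a real model would split feature tensors, not strings):

```python
def split_output(layer_out, mode="duplicate"):
    """Split the specific layer's output into two paths.

    'duplicate' sends the full output down both paths; 'partition'
    gives each path a different half, matching the "ABCD" -> "AB"/"CD"
    example above. Purely illustrative.
    """
    if mode == "partition":
        mid = len(layer_out) // 2
        return layer_out[:mid], layer_out[mid:]
    return layer_out, layer_out

# One path feeds the next processing layer; the other feeds the
# structure that determines the quantization parameters.
main_path, quant_path = split_output("ABCD", mode="partition")
```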
In some optional implementations, the processing structure for determining quantization parameters respectively corresponding to the at least one processing layer includes any one of the following: convolution structure, pooling structure, full connection layer structure.
That is, in the training process of the data processing model, the embedded hardware can determine the quantization parameter corresponding to each processing layer through a convolution structure, a pooling structure or a full connection layer structure.
When the processing structure is any one of a convolution structure, a pooling structure and a full-connection layer structure, one path of data output by a specific layer can be used as input, and then the data is processed by a corresponding processing mode.
Then, for each processing layer, continuing to process the training samples by combining quantization parameters corresponding to the processing layer;
the training samples may include, for example, image data, video data, voice data, and the like. In some application scenarios, training samples may be adaptively selected, for example. For example, if the data processing model is mainly used for processing image data, the image data may be selected as a training sample; if the data processing model is mainly used for processing video data, the video data can be selected as training samples.
For each processing layer of the data processing model, the embedded hardware can continue to process training samples in combination with the quantization parameters corresponding to the processing layer.
In some application scenarios, the training samples may be processed in a time-multiplexed manner; that is, by the time a processing layer processes the training samples, its corresponding quantization parameter has been determined, so the layer can process the samples in combination with that parameter. For example, the output of the specific third processing layer may be fed into the selected processing structure, which outputs the quantization parameter for the fourth processing layer; when the fourth processing layer processes the training samples, it can obtain this quantization parameter and continue processing accordingly. Note that time multiplexing must ensure that each processing layer can obtain its quantization parameter by the time it processes the training samples.
Finally, training to obtain the data processing model based on the output of the last processing layer and the expected output.
After each processing layer continues processing the training sample, corresponding output data can be obtained. Thus, the resulting data processing model can be trained based on the output of the last processing layer and the desired output. The desired output may be considered a preferred output that may be obtained for training samples.
In some application scenarios, a loss value between the output of the last processing layer and the desired output may be calculated, for example, by a loss function, and the data processing model may be trained in a direction in which the loss value decreases or remains unchanged.
In some application scenarios, the data processing model may, for example, consist of N processing layers deployed as shown in fig. 2. As shown in fig. 2, if the specific layer is the third processing layer, its output may be divided into two paths of data: one path serves as the input of the fourth processing layer, which continues processing the training samples, and the other path serves as the input of the fully connected layers used to determine the quantization parameters (the number of fully connected layers may equal the number of processing layers after the specific layer), which output the quantization parameter of each such processing layer. The training samples can then be processed in combination with the quantization parameters respectively corresponding to the processing layers, and the training output data corresponding to the training samples is output.
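A rough sketch of the fig. 2 arrangement, with one fully connected head per processing layer after the specific layer; the dimensions, candidate bit widths, and random weights are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
BIT_CHOICES = [2, 4, 8]      # assumed candidate bit widths
N_LAYERS_AFTER = 3           # assumed number of layers after the specific layer

# Output of the specific (e.g. third) processing layer for one sample.
specific_out = rng.normal(size=16)

# One fully connected head per subsequent processing layer: each head
# scores the candidate bit widths for its layer, and argmax selects one.
heads = [rng.normal(size=(16, len(BIT_CHOICES))) for _ in range(N_LAYERS_AFTER)]
per_layer_bits = [BIT_CHOICES[int(np.argmax(specific_out @ w))] for w in heads]
# per_layer_bits[i] is the bit width that layer (specific + 1 + i) would use
# while continuing to process the training sample.
```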
In this implementation, the output of a specific layer may be divided into multiple paths of data, where one path of data is used to continue processing training samples, and the other path of data is used to determine quantization parameters, and since the quantization parameters are determined based on the input training samples, the quantization parameters are more suitable for processing training samples, and then each processing layer after the specific layer is combined with the quantization parameters of the processing layer to continue processing training samples, so that the model performance of the data processing model can be improved.
In some alternative implementations, the data processing model is obtained based on quantization-aware training; and determining the quantization parameters respectively corresponding to the at least one processing layer includes: for each processing layer, if the quantization parameter is to be propagated forward, determining it through a first preset function; and if the quantization parameter is to be back-propagated, determining it through a second preset function.
In some application scenarios, for a processing layer, if the quantization parameter corresponding to the processing layer is to be propagated forward, the quantization parameter may be determined through a first preset function. The first preset function may, for example, be the argmax function, which selects the candidate with the maximum score.
In other application scenarios, for a processing layer, if the quantization parameter of the processing layer is to be back-propagated, the quantization parameter may be determined through a second preset function. The second preset function may, for example, be the normalized exponential function softmax. Specifically, one path of the specific layer's output can be processed through softmax, and the result can then be differentiated, so that the quantization parameter can be determined.
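The two preset functions might be contrasted as follows; the candidate bit widths and scores are hypothetical, and this is only one plausible reading of the argmax/softmax pairing described above:

```python
import numpy as np

CHOICES = np.array([2.0, 4.0, 8.0])   # assumed candidate bit widths

def softmax(z):
    e = np.exp(z - z.max())           # shift for numerical stability
    return e / e.sum()

def forward_select(scores):
    """First preset function: hard argmax selection in the forward pass."""
    return CHOICES[int(np.argmax(scores))]

def backward_select(scores):
    """Second preset function: soft, softmax-weighted selection, whose
    derivative with respect to the scores exists, unlike argmax's."""
    return float(softmax(scores) @ CHOICES)

scores = np.array([0.1, 2.0, 0.3])
hard = forward_select(scores)    # the bit width actually used
soft = backward_select(scores)   # a differentiable surrogate near it
```

The hard selection gives an actual bit width to run with, while the soft version supplies usable gradients during back propagation.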
In this implementation, the quantization parameter of each processing layer can be determined through different preset functions depending on whether it is propagated forward or backward, giving the quantization-aware training process a wider range of application.
In some alternative implementations, the data processing model is obtained based on post-training quantization; and determining the quantization parameters respectively corresponding to the at least one processing layer may include the following steps:
in some application scenarios, for different training samples, the data processing model processes the training samples by using quantization parameters of different combinations, so that different processing results can be obtained. Thus, for each training sample, a plurality of quantization parameter combinations corresponding thereto may be determined to select therefrom the preferred quantization parameter combination corresponding thereto.
Step 2, training a processing structure based on the training samples and the quantization parameter combinations meeting the preset requirement.
for each training sample, the embedded hardware determines different quantization parameter combinations, and then selects the quantization parameter combinations meeting preset requirements. The preset requirements may include, for example, bringing the model representation closest to a floating point model. Here, for example, different combinations of quantization parameters may be back-propagated to determine quantization parameters that bring the model representation closest to the floating point model.
After determining the quantization parameter combination meeting the preset requirement, the embedded hardware can train the processing structure in combination with the training samples.
And 3, determining quantization parameters corresponding to the at least one processing layer respectively through the processing structure.
After the embedded hardware is trained to obtain the processing structure, the quantization parameter corresponding to each processing layer can be determined through the processing structure.
In this implementation, the processing structure is trained based on the training samples and the quantization parameter combinations meeting the preset requirement, so the quantization parameters output by the trained processing structure can further improve the model performance of the data processing model obtained through post-training quantization.
In some optional implementations, training the processing structure in step 2 based on the training samples and the quantization parameter combination meeting the preset requirement includes: taking the training samples as the input of the processing structure and the quantization parameter combination meeting the preset requirement as the desired output; and training the processing structure based on the actual output for the training samples and the desired output.
In some application scenarios, when training the processing structure, a training sample may be used as its input and the quantization parameter combination meeting the preset requirements as its expected output, so that the structure is trained from the actual output for the training sample and the expected output.
In these application scenarios, for example, a loss value between the actual output of the processing structure and the expected output can be computed by a loss function, and the processing structure can be trained in the direction in which the loss value decreases or remains unchanged.
In this implementation, by taking quantization parameter combinations that meet the preset requirements as the expected output of the processing structure, the quantization parameters output by the trained structure are driven toward that expected output, which in turn further improves the performance of the data processing model obtained through post-training quantization.
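A toy sketch of this loss-driven training, with the processing structure reduced to a linear map and stand-in data (all names here are illustrative assumptions; the patent permits convolution, pooling, or fully connected structures):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: each "training sample" is a feature vector; the "expected
# output" is the quantization parameter combination selected for it.
X = rng.normal(size=(64, 4))
true_w = np.array([0.5, -0.2, 0.1, 0.3])
Y = X @ true_w           # stand-in for the selected parameter combinations

w = np.zeros(4)          # the "processing structure" (a linear map here)
lr = 0.05
losses = []
for _ in range(200):
    pred = X @ w                        # actual output
    grad = 2 * X.T @ (pred - Y) / len(X)
    w -= lr * grad                      # step in the loss-decreasing direction
    losses.append(np.mean((pred - Y) ** 2))
```

After convergence, the actual output approaches the expected output, mirroring the patent's requirement that the trained structure's quantization parameters approach the selected combinations.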
Referring now to FIG. 3, a block diagram of a data processing apparatus provided in an embodiment of the present application is shown; the apparatus may be a module, a program segment, or code on an electronic device. It should be understood that the apparatus corresponds to the method embodiment of FIG. 1 described above and can perform the steps involved in that embodiment; for its specific functions, reference may be made to the foregoing description, and detailed descriptions are omitted here as appropriate to avoid redundancy.
Optionally, the data processing apparatus includes an input module 301 and a processing module 302. The input module 301 is configured to input data to be processed into a pre-trained data processing model, where the data processing model is trained based on training samples and quantization parameters, and the quantization parameters are determined from the output of a specific layer of the data processing model for the training samples. The processing module 302 is configured to obtain, through the data processing model, target data adapted to the data to be processed.
Optionally, the training process of the data processing model includes: dividing the output of the specific layer into at least two paths, where one path is used by the at least one processing layer after the specific layer to continue processing the training samples, and the other path is used to determine the quantization parameters corresponding to each of the at least one processing layer; for each processing layer, continuing to process the training samples in combination with the quantization parameters corresponding to that layer; and training the data processing model based on the output of the last processing layer and the expected output.
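The two-path arrangement described above can be sketched as follows; this is a toy illustration with assumed names (the specific layer is a ReLU, the subsequent layers are small matrix products, and the scale rule is max-abs, none of which the patent prescribes):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 4))

def specific_layer(x):
    return np.maximum(x, 0.0)          # stand-in for the "specific layer"

def layer_scale(act, bits=8):
    """Derive a per-layer quantization scale from observed activations."""
    return float(np.abs(act).max()) / (2 ** (bits - 1) - 1)

def fake_quantize(t, scale, bits=8):
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(t / scale), -qmax, qmax) * scale

x = rng.normal(size=(8, 16))
split = specific_layer(x)              # output of the specific layer

# Path 2: run the subsequent processing layers once to collect a
# quantization parameter for each of them.
layers = [lambda t: t @ W1, lambda t: np.maximum(t @ W2, 0.0)]
scales, act = [], split
for layer in layers:
    act = layer(act)
    scales.append(layer_scale(act))

# Path 1: the same split output continues through the processing layers,
# now combined with the per-layer quantization parameters.
out = split
for layer, s in zip(layers, scales):
    out = fake_quantize(layer(out), s)
```

The final `out` would then be compared with the expected output to train the model, as the paragraph above describes.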
Optionally, the data processing model is obtained through quantization-aware training, and determining the quantization parameters corresponding to each of the at least one processing layer includes: for each processing layer, determining the quantization parameter through a first preset function during forward propagation, and determining it through a second preset function during back-propagation.
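The patent leaves the two preset functions unspecified; a common choice in quantization-aware training is round-to-nearest on the forward pass and a straight-through estimator on the backward pass. The sketch below assumes that pairing (all function names are illustrative):

```python
import numpy as np

QMAX = 127  # symmetric int8 range

def quant_forward(x, scale):
    """First preset function (forward pass): round-to-nearest
    quantization followed by dequantization."""
    return np.clip(np.round(x / scale), -QMAX, QMAX) * scale

def quant_backward(grad_out, x, scale):
    """Second preset function (backward pass): straight-through
    estimator -- rounding has zero gradient almost everywhere, so the
    incoming gradient is passed through unchanged wherever the input
    lies inside the representable range, and zeroed outside it."""
    inside = np.abs(x / scale) <= QMAX
    return grad_out * inside

x = np.array([0.1, -0.02, 5.0])   # 5.0 falls outside the range at this scale
scale = 0.01
y = quant_forward(x, scale)       # quantized forward activations
g = quant_backward(np.ones_like(x), x, scale)
```

Using different functions for the two directions is what lets gradients flow through the otherwise non-differentiable rounding step.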
Optionally, the data processing model is obtained through post-training quantization, and determining the quantization parameters corresponding to each of the at least one processing layer includes: determining different quantization parameter combinations for each training sample; training a processing structure based on the training samples and the quantization parameter combinations meeting the preset requirements; and determining, through the processing structure, the quantization parameters corresponding to each of the at least one processing layer.
Optionally, training the processing structure based on the training samples and the quantization parameter combinations meeting the preset requirements includes: taking the training sample as the input of the processing structure and the quantization parameter combination meeting the preset requirements as the expected output, and training the processing structure based on the actual output for the training sample and the expected output.
Optionally, the processing structure used to determine the quantization parameters corresponding to each of the at least one processing layer includes any one of: a convolution structure, a pooling structure, or a fully connected layer structure.
Optionally, the data processing model is trained by embedded hardware.
It should be noted that, for convenience and brevity of description, a person skilled in the art will clearly understand that the specific working process of the system or apparatus described above may be understood with reference to the corresponding process in the foregoing method embodiment, and is not repeated here.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of an electronic device for executing a data processing method according to an embodiment of the present application. The electronic device may include: at least one processor 401 (e.g., a CPU), at least one communication interface 402, at least one memory 403, and at least one communication bus 404. The communication bus 404 is used to enable direct connection and communication among these components. The communication interface 402 is used for signaling or data communication with other node devices. The memory 403 may be a high-speed RAM or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one storage device located remotely from the processor. The memory 403 stores computer-readable instructions which, when executed by the processor 401, may cause the electronic device to perform the method process described above with reference to FIG. 1.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 4, or have a different configuration than shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof.
Embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, may perform a method process performed by an electronic device in the method embodiment shown in fig. 1.
Embodiments of the present application provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the method embodiments described above. For example, the method may comprise: inputting data to be processed into a pre-trained data processing model, where the data processing model is trained based on training samples and quantization parameters, and the quantization parameters are determined from the output of a specific layer of the data processing model for the training samples; and obtaining, through the data processing model, target data adapted to the data to be processed.
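As a hedged illustration of how such a quantized data processing model might turn data to be processed into target data, the sketch below runs a toy integer-only linear layer; the names, shapes, and max-abs scale rule are assumptions, not the patent's implementation:

```python
import numpy as np

def to_int8(x, scale):
    """Quantize a float tensor to int8 with a symmetric per-tensor scale."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

rng = np.random.default_rng(2)
x = rng.normal(size=(1, 4))        # "data to be processed"
w = rng.normal(size=(4, 3))        # weights of one quantized processing layer

sx = float(np.abs(x).max()) / 127
sw = float(np.abs(w).max()) / 127

# Integer matrix multiply accumulated in int32, then dequantized with
# the combined scale -- the usual integer-only inference pattern.
acc = to_int8(x, sx).astype(np.int32) @ to_int8(w, sw).astype(np.int32)
y_quant = acc * (sx * sw)          # "target data" (dequantized)
y_float = x @ w                    # floating-point reference
```

Well-chosen per-layer scales keep `y_quant` close to the floating-point reference, which is precisely what the quantization parameters determined during training are meant to ensure.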
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative: the division of units is merely a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Further, units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, the functional modules in the various embodiments of the present application may be integrated to form a single part, each module may exist alone, or two or more modules may be integrated into a single part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing describes merely exemplary embodiments of the present application and is not intended to limit its scope; various modifications and variations may occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.
Claims (10)
1. A method of data processing, comprising:
inputting data to be processed into a pre-trained data processing model; the data processing model is trained based on training samples and quantization parameters, and the quantization parameters are determined from the output of a specific layer of the data processing model for the training samples;
and obtaining, through the data processing model, target data adapted to the data to be processed.
2. The method of claim 1, wherein the training process of the data processing model comprises:
dividing the output of the specific layer into at least two paths; one path is used for enabling at least one processing layer behind the specific layer to continuously process training samples, and the other path is used for determining quantization parameters corresponding to the at least one processing layer respectively;
for each processing layer, continuing to process the training samples by combining quantization parameters corresponding to the processing layer;
the data processing model is trained based on the output of the last processing layer and the desired output.
3. The method of claim 2, wherein the data processing model is derived based on quantized perceptual training; and
determining quantization parameters respectively corresponding to the at least one processing layer, including:
for each processing layer, determining the quantization parameter through a first preset function during forward propagation; and
determining the quantization parameter through a second preset function during back-propagation.
4. The method of claim 2, wherein the data processing model is based on post-training quantization; and
determining quantization parameters respectively corresponding to the at least one processing layer, including:
determining a different combination of quantization parameters for each training sample;
training to obtain a processing structure based on the training sample and a quantization parameter combination meeting preset requirements;
and determining quantization parameters respectively corresponding to the at least one processing layer through the processing structure.
5. The method of claim 4, wherein training the resulting processing structure based on training samples and quantization parameter combinations that meet preset requirements comprises:
taking the training sample as input of a processing structure, and taking a quantization parameter combination meeting preset requirements as expected output;
the processing structure is trained based on the actual output for the training samples and the desired output.
6. The method according to any one of claims 2-5, wherein the processing structure for determining quantization parameters respectively corresponding to the at least one processing layer comprises any one of:
convolution structure, pooling structure, full connection layer structure.
7. The method of any of claims 2-5, wherein the data processing model is trained by embedded hardware.
8. A data processing apparatus, comprising:
the input module is used for inputting data to be processed into a pre-trained data processing model; the data processing model is trained based on training samples and quantization parameters, and the quantization parameters are determined from the output of a specific layer of the data processing model for the training samples;
and the processing module is used for obtaining, through the data processing model, target data adapted to the data to be processed.
9. An electronic device comprising a processor and a memory storing computer readable instructions that, when executed by the processor, perform the method of any of claims 1-7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, performs the method according to any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310478200.7A CN116205283B (en) | 2023-04-28 | 2023-04-28 | Data processing method, device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116205283A true CN116205283A (en) | 2023-06-02 |
CN116205283B CN116205283B (en) | 2023-08-25 |
Family
ID=86517595
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310478200.7A Active CN116205283B (en) | 2023-04-28 | 2023-04-28 | Data processing method, device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116205283B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021099677A (en) * | 2019-12-23 | 2021-07-01 | Kddi株式会社 | Learning device, learning method and program |
CN113792567A (en) * | 2020-08-24 | 2021-12-14 | 京东安联财产保险有限公司 | Data processing method and training method of data processing model |
US20220148290A1 (en) * | 2019-02-25 | 2022-05-12 | Nec Corporation | Method, device and computer storage medium for data analysis |
CN114580630A (en) * | 2022-03-01 | 2022-06-03 | 厦门大学 | Neural network model training method and graph classification method for AI chip design |
CN115512175A (en) * | 2022-08-12 | 2022-12-23 | 北京亮道智能汽车技术有限公司 | Model training method, point cloud data processing device, point cloud data processing equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN116205283B (en) | 2023-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110612538B (en) | Generating discrete potential representations of input data items | |
CN110796154B (en) | Method, device and equipment for training object detection model | |
CN108156519B (en) | Image classification method, television device and computer-readable storage medium | |
US11521038B2 (en) | Electronic apparatus and control method thereof | |
US11798254B2 (en) | Bandwidth limited context based adaptive acquisition of video frames and events for user defined tasks | |
KR20210088656A (en) | Methods, devices, devices and media for image generation and neural network training | |
CN112101543A (en) | Neural network model determination method and device, electronic equipment and readable storage medium | |
CN113987269A (en) | Digital human video generation method and device, electronic equipment and storage medium | |
CN115115540A (en) | Unsupervised low-light image enhancement method and unsupervised low-light image enhancement device based on illumination information guidance | |
CN116205283B (en) | Data processing method, device, electronic equipment and computer readable storage medium | |
CN111405293B (en) | Video transmission method and device | |
CN110570877A (en) | Sign language video generation method, electronic device and computer readable storage medium | |
CN112396100B (en) | Optimization method, system and related device for fine-grained classification model | |
KR102120669B1 (en) | Image classification apparatus using feedback method for reducing uncertainty of image classification result in neural net architectures and method thereof | |
CN116823869A (en) | Background replacement method and electronic equipment | |
CN116167926A (en) | Model training method and contrast adjustment method | |
CN113590168B (en) | Method, device, equipment, medium and program product for upgrading embedded equipment | |
Akutsu et al. | End-to-End Deep ROI Image Compression | |
CN116361658B (en) | Model training method, task processing method, device, electronic equipment and medium | |
CN113554179B (en) | Information processing system | |
CN114095728B (en) | End-to-end video compression method, device and computer readable storage medium | |
CN115482422B (en) | Training method of deep learning model, image processing method and device | |
CN111953974B (en) | Motion parameter candidate list construction method and device and computer equipment | |
US20220178814A1 (en) | Method for calculating a density of stem cells in a cell image, electronic device, and storage medium | |
CN115906941A (en) | Neural network self-adaptive exiting method, device, equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||