CN114926706A - Data processing method, device and equipment - Google Patents


Info

Publication number
CN114926706A
Authority
CN
China
Prior art keywords
model
weight
target
carrier object
written
Prior art date
Legal status
Pending
Application number
CN202210560452.XA
Other languages
Chinese (zh)
Inventor
曹佳炯
丁菁汀
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210560452.XA
Publication of CN114926706A

Classifications

    • G06V 10/774 — Image or video recognition using machine learning; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06V 10/764 — Image or video recognition using classification, e.g. of video objects
    • G06V 10/82 — Image or video recognition using neural networks

Abstract

The embodiments of this specification provide a data processing method, device and equipment, wherein the method includes the following steps: obtaining a target model to be steganographically processed, and determining the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; determining a coding mode corresponding to each model weight based on its importance, and coding the model weight in that coding mode to obtain a coded model weight; writing the coded model weights and the model structure of the target model into a carrier object to obtain a written carrier object; and sending the written carrier object to a target device, where the written carrier object triggers the target device to recover the target model from the written carrier object and process a target service based on the target model.

Description

Data processing method, device and equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a data processing method, device and equipment.
Background
With the rapid development of computer technology, the application scenarios of artificial intelligence systems are increasingly extensive, such as face recognition and automatic driving. The core of an artificial intelligence system is a model constructed by a deep learning algorithm, and to improve the security of the system, privacy protection processing needs to be performed on that model.
However, as model structures grow increasingly complex, the demand on the writable steganographic space of the carrier object is high, which results in a poor model steganography effect; moreover, the large amount of data to be processed makes model steganography inefficient. A solution is therefore needed that can improve both the steganography effect and the steganography efficiency in model steganography scenarios.
Disclosure of Invention
The embodiment of the specification aims to provide a solution for improving a model steganography effect and model steganography efficiency in a model steganography scene.
In order to implement the above technical solution, the embodiments of the present specification are implemented as follows:
In a first aspect, an embodiment of this specification provides a data processing method, including: obtaining a target model to be steganographically processed, and determining the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; determining a coding mode corresponding to each model weight based on its importance, and coding the model weight in that coding mode to obtain a coded model weight; writing the coded model weights and the model structure of the target model into a carrier object to obtain a written carrier object; and sending the written carrier object to a target device, where the written carrier object triggers the target device to recover the target model from the written carrier object and process a target service based on the target model.
In a second aspect, an embodiment of this specification provides a data processing method, including: receiving a written carrier object sent by a server, wherein the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, each coded model weight is obtained by coding the model weight in a coding mode that the server determines from the importance of that weight, and the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; extracting from the written carrier object, based on a preset extraction model, the coded model weights and the model structure of the target model; and determining the target model based on the coded model weights and the model structure, and processing the target service based on the target model.
In a third aspect, an embodiment of this specification provides a data processing apparatus, including: a model obtaining module, configured to obtain a target model to be steganographically processed and determine the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; a weight coding module, configured to determine a coding mode corresponding to each model weight based on its importance and code the model weight in that coding mode to obtain a coded model weight; an information writing module, configured to write the coded model weights and the model structure of the target model into a carrier object to obtain a written carrier object; and a data sending module, configured to send the written carrier object to a target device, where the written carrier object triggers the target device to recover the target model from the written carrier object and process the target service based on the target model.
In a fourth aspect, an embodiment of this specification provides a data processing apparatus, including: an object receiving module, configured to receive a written carrier object sent by a server, where the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, each coded model weight is obtained by coding the model weight in a coding mode that the server determines from the importance of that weight, and the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; a data extraction module, configured to extract from the written carrier object, based on a preset extraction model, the coded model weights and the model structure of the target model; and a model determining module, configured to determine the target model based on the coded model weights and the model structure, and to process the target service based on the target model.
In a fifth aspect, an embodiment of this specification provides a data processing device, including: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: obtain a target model to be steganographically processed, and determine the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; determine a coding mode corresponding to each model weight based on its importance, and code the model weight in that coding mode to obtain a coded model weight; write the coded model weights and the model structure of the target model into a carrier object to obtain a written carrier object; and send the written carrier object to a target device, where the written carrier object triggers the target device to recover the target model from the written carrier object and process a target service based on the target model.
In a sixth aspect, an embodiment of this specification provides a data processing device, including: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: receive a written carrier object sent by a server, wherein the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, each coded model weight is obtained by coding the model weight in a coding mode that the server determines from the importance of that weight, and the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; extract from the written carrier object, based on a preset extraction model, the coded model weights and the model structure of the target model; and determine the target model based on the coded model weights and the model structure, and process the target service based on the target model.
In a seventh aspect, an embodiment of this specification provides a storage medium for storing computer-executable instructions that, when executed, implement the following process: obtaining a target model to be steganographically processed, and determining the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; determining a coding mode corresponding to each model weight based on its importance, and coding the model weight in that coding mode to obtain a coded model weight; writing the coded model weights and the model structure of the target model into a carrier object to obtain a written carrier object; and sending the written carrier object to a target device, where the written carrier object triggers the target device to recover the target model from the written carrier object and process a target service based on the target model.
In an eighth aspect, an embodiment of this specification provides a storage medium for storing computer-executable instructions that, when executed, implement the following process: receiving a written carrier object sent by a server, wherein the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, each coded model weight is obtained by coding the model weight in a coding mode that the server determines from the importance of that weight, and the importance of a model weight represents the degree to which that weight influences the model accuracy of the target model; extracting from the written carrier object, based on a preset extraction model, the coded model weights and the model structure of the target model; and determining the target model based on the coded model weights and the model structure, and processing the target service based on the target model.
Drawings
To more clearly illustrate the embodiments of this specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some of the embodiments described in this specification; for those skilled in the art, other drawings can be derived from them without creative effort.
FIG. 1A is a flow chart of an embodiment of a data processing method of the present specification;
FIG. 1B is a schematic process diagram of a data processing method of the present specification;
FIG. 2 is a schematic diagram of an importance determination process for model weights of the present specification;
FIG. 3 is a schematic process diagram of another data processing method of the present specification;
FIG. 4 is a schematic process diagram of another data processing method of the present specification;
FIG. 5 is a schematic diagram of another importance determination process for model weights of the present specification;
FIG. 6A is a flow chart of another embodiment of a data processing method of the present specification;
FIG. 6B is a schematic process diagram of another data processing method of the present specification;
FIG. 7 is a schematic block diagram of an embodiment of a data processing apparatus of the present specification;
FIG. 8 is a schematic block diagram of another embodiment of a data processing apparatus of the present specification;
FIG. 9 is a schematic structural diagram of a data processing device of the present specification.
Detailed Description
The embodiment of the specification provides a data processing method, a data processing device and data processing equipment.
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are obviously only a part of the embodiments of this specification, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this specification without creative effort shall fall within the protection scope of this specification.
Example one
As shown in fig. 1A and 1B, the execution subject of the method may be a server, which may be an independent server or a server cluster composed of multiple servers. The method may specifically include the following steps:
In S102, a target model to be steganographically processed is obtained, and the importance of each model weight of the target model is determined.
The target model to be steganographically processed may be a model obtained by training, based on historical service data, a model constructed by a deep learning algorithm, and may be used for processing a predetermined service. The importance of a model weight represents the degree to which that weight influences the model accuracy of the target model. For example, the target model may be obtained by training, based on historical resource transfer service data, a classification model constructed by a neural network learning algorithm, and may be used for determining whether there is a risk in executing a resource transfer service; the importance of a model weight then represents the degree to which that weight influences the classification accuracy of the target model, that is, the higher the importance of a model weight, the more that weight influences the classification accuracy of the target model.
In implementation, with the rapid development of computer technology, the application scenarios of artificial intelligence systems are increasingly extensive, such as face recognition and automatic driving. The core of an artificial intelligence system is a model constructed by a deep learning algorithm, and to improve the security of the system, privacy protection processing needs to be performed on that model. For example, the structure of the model may be processed by adding unnecessary operations (e.g., adding a number A to a model weight and then subtracting A), and the processed model may be written into a carrier object, so that an attacker cannot accurately locate the effective structure of the model from the carrier object, thereby achieving privacy protection for the model.
However, because too many unnecessary operations need to be added, and model structures grow increasingly complex, the demand on the writable steganographic space of the carrier object is high, which results in a poor model steganography effect; moreover, the large amount of data to be processed makes model steganography inefficient. A solution is therefore needed that can improve both the steganography effect and the steganography efficiency in model steganography scenarios. To this end, the embodiments of this specification provide a technical solution that can solve the above problems; see the following for details.
Take as an example a target model, to be steganographically processed, that determines whether there is a risk in executing a resource transfer service. The target model may be obtained by the server through training based on a predetermined amount of historical resource transfer service data, and the target device may be a client used by an organization that provides the resource transfer service to users.
As shown in fig. 2, take as an example a target model that is a classification model constructed by a 3-layer convolutional neural network in which each convolutional layer contains 3 filters for identifying specific features of the data. A first classification accuracy of the target model may be determined based on a sample data set; then the weight values of the filters of each convolutional layer are set to 0 in turn, and a second classification accuracy of the zeroed target model is calculated based on the sample data set; finally, the importance of the filters of each convolutional layer (i.e., the importance of the model weights of each convolutional layer) may be determined based on the first and second classification accuracies.
Specifically, as shown in fig. 2, the sample data set may be input into the target model for classification to obtain a first classification result, and the first classification accuracy may be obtained from the class labels of the sample data set and the first classification result. Weight 1 of filter 1, weight 2 of filter 2 and weight 3 of filter 3 of the first convolutional layer may then be set to 0, the sample data set may be input into the zeroed target model to obtain a second classification result, and the second classification accuracy may be obtained from the class labels and the second classification result. Finally, the importance of filters 1, 2 and 3 of the first convolutional layer may be determined based on the first and second classification accuracies. The importance of the model weights of each convolutional layer may be obtained in the same way; for example, the importance of filters 1, 2 and 3 of the first convolutional layer may be importance 1, the importance of filters 4, 5 and 6 of the second convolutional layer may be importance 2, and the importance of filters 7, 8 and 9 of the third convolutional layer may be importance 3.
The above method for determining the importance of each model weight of the target model is only one optional, realizable method. In an actual application scenario there may be many different determination methods, and different methods may be selected according to the actual application scenario; the embodiments of this specification do not specifically limit this.
In addition, the importance of a model weight may be represented in a variety of ways, such as a score or a level. As described above, the importance of filters 1, 2 and 3 of the first convolutional layer may be importance score 1, or may be importance level 1; different representation methods may be selected according to the actual application scenario, and the embodiments of this specification do not limit this.
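The zeroing-based importance estimate described above can be sketched as follows. This is a minimal illustration in Python with NumPy, not the patent's exact procedure: `model_fn`, the per-layer weight dictionary, and the accuracy-drop score are hypothetical names chosen for the sketch.

```python
import numpy as np

def accuracy(model_fn, weights, X, y):
    """Fraction of samples that model_fn(weights, x) classifies correctly."""
    preds = [model_fn(weights, x) for x in X]
    return float(np.mean([p == t for p, t in zip(preds, y)]))

def layer_importance(model_fn, weights, X, y):
    """Importance of each layer = drop in accuracy when its weights are zeroed."""
    base_acc = accuracy(model_fn, weights, X, y)       # "first classification accuracy"
    scores = {}
    for name in weights:
        zeroed = dict(weights)                          # shallow copy of the layer dict
        zeroed[name] = np.zeros_like(weights[name])     # zero this layer's weights
        scores[name] = base_acc - accuracy(model_fn, zeroed, X, y)
    return scores
```

A layer whose zeroing leaves accuracy unchanged gets score 0, i.e. low importance, matching the idea that importance measures influence on model accuracy.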
In S104, a coding mode corresponding to each model weight is determined based on the importance of each model weight of the target model, and the model weights are coded in their coding modes to obtain coded model weights.
In implementation, the coding modes corresponding to different model weights can be determined from the importance of those weights. For example, taking the importance of a model weight to be an importance level, in the target model shown in fig. 2, assume that the importance of filters 1, 2 and 3 of the first convolutional layer is importance level 1, the importance of filters 4, 5 and 6 of the second convolutional layer is importance level 2, and the importance of filters 7, 8 and 9 of the third convolutional layer is importance level 3, where importance level 1 is greater than importance level 2 and importance level 2 is greater than importance level 3. That is, the filters of the first convolutional layer influence the classification accuracy of the target model more than the filters of the second convolutional layer, which in turn influence it more than the filters of the third convolutional layer.
If importance levels 1 and 2 correspond to coding mode 1 and importance level 3 corresponds to coding mode 2, the weight values of the filters of the first and second convolutional layers may be coded in coding mode 1, and the weight values of the filters of the third convolutional layer may be coded in coding mode 2, to obtain the coded model weights of the target model.
The precision of the coded data obtained by coding mode 1 is higher than that obtained by coding mode 2; that is, coding mode 1 codes the more important model weights with high precision, while coding mode 2 codes the less important model weights with low precision. This improves the effective utilization of the carrier space and improves data processing efficiency.
The above method of determining the coding mode corresponding to a model weight is only one optional, realizable method; in an actual application scenario there may be many different determination methods, which may be selected according to the actual application scenario, and the embodiments of this specification do not specifically limit this.
In addition, there may be many possible coding modes for the model weights. For example, coding mode 1 may be Fp32 coding and coding mode 2 may be Fp16 coding, so that the more important model weights are coded with high precision in Fp32 and the less important model weights are coded with low precision in Fp16.
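The importance-to-precision mapping can be illustrated with NumPy dtypes, since `float32`/`float16` correspond to Fp32/Fp16 storage. The `LEVEL_TO_DTYPE` table and function names below are assumptions for the sketch, not part of the patent:

```python
import numpy as np

# Hypothetical mapping from importance level (1 = most important) to a dtype:
# high-importance weights keep full Fp32 precision, low-importance ones are
# stored as Fp16, halving their footprint in the carrier.
LEVEL_TO_DTYPE = {1: np.float32, 2: np.float32, 3: np.float16}

def encode_weights(weights, levels):
    """Code each layer's weights with the dtype chosen by its importance level."""
    encoded = {}
    for name, w in weights.items():
        dtype = LEVEL_TO_DTYPE[levels[name]]
        encoded[name] = np.asarray(w, dtype=dtype).tobytes()  # raw bytes for embedding
    return encoded
```

Two Fp32 values occupy 8 bytes while the same two values in Fp16 occupy 4, which is exactly the carrier-space saving the text describes for low-importance weights.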
In S106, the coded model weights and the model structure of the target model are written into a carrier object to obtain the written carrier object.
The carrier object may be any object into which data to be hidden can be written, such as an image, a video or an audio clip. For example, the carrier object may be a carrier image, and the data to be hidden may be written into some pixel points of the carrier image such that the Peak Signal-to-Noise Ratio (PSNR) of the resulting written carrier image is not lower than a preset noise threshold, i.e., the written carrier image retains a good visual effect.
In implementation, the model structure of the target model may be coded based on a preset coding rule, where the preset coding rule may be a rule for coding based on the type and size of the target model. For example, assume the target model is a classification model constructed by a 3-layer convolutional neural network in which each convolutional layer contains 3 filters for identifying specific features of the data, i.e., the target model is a 3 x 3 convolutional neural network model. The coded model structure obtained by coding the model structure based on the preset coding rule may then be 0011, where "0" indicates that the target model is a convolutional neural network model (e.g., "0" may indicate a convolutional neural network model and "1" a decision tree model), and "011" indicates the size of the target model (i.e., the convolution kernel of the target model is 3 x 3).
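The "0011" structure code above can be sketched as a small bit-string encoder. The type table and the 3-bit size field below are hypothetical choices made only to reproduce the example, not the patent's actual coding rule:

```python
MODEL_TYPE_BITS = {"cnn": "0", "decision_tree": "1"}  # assumed type mapping from the text

def encode_structure(model_type, kernel_size, size_bits=3):
    """Code model type and size as a bit string: CNN with 3x3 kernels -> '0011'."""
    return MODEL_TYPE_BITS[model_type] + format(kernel_size, f"0{size_bits}b")

def decode_structure(bits, size_bits=3):
    """Inverse of encode_structure: recover (model_type, kernel_size)."""
    inv = {v: k for k, v in MODEL_TYPE_BITS.items()}
    return inv[bits[0]], int(bits[1:1 + size_bits], 2)
```

With this rule the type bit and size field round-trip, which is what lets the target device rebuild the model structure after extraction.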
Taking the carrier object to be a carrier image as an example, assume that the original information of each pixel point is 8 bits and that, provided the PSNR of the carrier image stays no lower than the preset noise threshold, each pixel point can provide 4 bits for writing the data to be hidden. The coded model weights and the coded model structure may then be written into the 4 bits of a number of pixel points of the carrier object. For example, if the data volume of the coded model weights and coded model structure is n bits, n/4 pixel points may be selected from the carrier image (randomly, sequentially, or based on a preset selection rule, etc.) to store them.
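A minimal sketch of the 4-bits-per-pixel embedding described above: the low nibble of each selected 8-bit pixel is replaced by payload bits while the high nibble is preserved. Pixel selection is simplified here to sequential order; the real scheme may select pixels randomly or by a preset rule, as the text notes.

```python
import numpy as np

def embed_bits(pixels, payload_bits):
    """Write 4 payload bits into the low nibble of each selected 8-bit pixel."""
    assert len(payload_bits) % 4 == 0
    out = pixels.copy()
    for i in range(len(payload_bits) // 4):
        nibble = int(payload_bits[4 * i:4 * i + 4], 2)
        out[i] = (out[i] & 0xF0) | nibble     # keep high 4 bits, replace low 4
    return out

def extract_bits(pixels, n_bits):
    """Read n_bits back from the low nibbles of the leading pixels."""
    return "".join(format(p & 0x0F, "04b") for p in pixels[: n_bits // 4])
```

Because only the low nibble of each pixel changes, the per-pixel error is at most 15 out of 255, which is what keeps the written image's PSNR high.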
The above takes the carrier object to be a carrier image as an example of how to write the coded model weights and the model structure of the target model into the carrier object to obtain the written carrier object. In an actual application scenario, the carrier object may take many different forms. For example, if the carrier object is a carrier video, the coded model weights may be written into the pixel points of a certain frame of the video and the coded model structure into a certain section of its audio. Different data writing methods may be selected for different carrier objects, and the embodiments of this specification do not specifically limit this.
In S108, the written carrier object is sent to the target device.
The written carrier object may be used to trigger the target device to obtain a target model based on the written carrier object, so as to process the target service based on the target model.
In implementation, for example, the target model may be a model for determining whether there is a risk in executing a resource transfer service, and the target service may be the resource transfer service. The server may send the written carrier object to the target device; the target device may perform extraction processing on the written carrier object to obtain the coded model weights and the model structure of the target model, and determine the target model based on them.
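On the receiving side, once the payload bytes have been extracted from the carrier, the coded weights can be rebuilt if the layer layout (names, dtypes, element counts) is known — here assumed to be shared via the decoded model structure. This is an illustrative sketch, not the patent's preset extraction model:

```python
import numpy as np

def recover_weights(payload, layout):
    """Rebuild per-layer weight arrays from the extracted byte payload.

    `layout` is a list of (name, dtype, count) entries, assumed to be known
    to both server and target device (e.g. derived from the model structure).
    """
    weights, offset = {}, 0
    for name, dtype, count in layout:
        nbytes = np.dtype(dtype).itemsize * count
        weights[name] = np.frombuffer(payload[offset:offset + nbytes], dtype=dtype)
        offset += nbytes
    return weights
```

Mixed-precision layers decode naturally: an Fp32 layer consumes 4 bytes per weight and an Fp16 layer 2, so the offsets line up with the server-side coding.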
The embodiment of this specification provides a data processing method: obtain a target model to be steganographically processed and determine the importance of each of its model weights, where the importance of a weight represents the degree to which it influences the model accuracy of the target model; determine a coding mode for each model weight based on its importance, and code the weight in that mode to obtain a coded model weight; write the coded model weights and the model structure of the target model into a carrier object to obtain a written carrier object; and send the written carrier object to a target device, triggering the target device to recover the target model from it and process a target service based on the target model. In this way, the server codes each model weight in the coding mode corresponding to it, avoiding the introduction of extra data and thereby improving the efficiency of model steganography; and because the coding mode is determined by the importance of the weight, the coding precision of highly important weights can be raised, improving the effect of model steganography.
Example two
The execution subject of the method may be a server, where the server may be an independent server or a server cluster composed of multiple servers. The method may specifically comprise the steps of:
in practical applications, the processing manner of S102 may be various, and as shown in fig. 3, an alternative implementation manner is provided below, which may specifically refer to the processing of S302 to S318 described below.
In S302, a historical model to be steganographically written is obtained.
For example, the target model may be a classification model constructed from a 3-layer convolutional neural network in which each convolutional layer includes 3 filters for identifying specific features of the data. The historical model may likewise be a classification model constructed from a 3-layer convolutional neural network with 3 filters per convolutional layer, but the weight values of the filters of the historical model differ from those of the filters of the target model.
In S304, the weight value of each model weight of the historical model is input into the first model, and the attention score of each model weight of the historical model is obtained.
The first model may be constructed from a preset number of fully connected layers with an attention mechanism; for example, the first model may be a meta-network model constructed from a preset number of convolutional layers with an attention mechanism. When there is a large amount of input information, the attention mechanism may be used to focus on the information more critical to the current task and reduce the attention paid to other information.
In an implementation, the weight value of each model weight of the historical model may be input into the first model, and the attention score of each model weight may be determined through the attention mechanism of the first model. The greater the attention score of a model weight, the more critical that model weight is to the historical model, that is, the greater its importance.
In S306, a target weight value for each model weight of the history model is determined based on the attention score and the weight value for each model weight of the history model.
In an implementation, the product of the attention score and the weight value of each model weight of the historical model may be determined as the target weight value of the model weight.
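The step above can be sketched in a few lines. This is a minimal illustration, assuming the attention scores and weight values are simple arrays; the specific numbers are illustrative only, not from the patent:

```python
import numpy as np

def target_weight_values(weights: np.ndarray, attention: np.ndarray) -> np.ndarray:
    """S306: target weight value = attention score x original weight value."""
    return attention * weights

w = np.array([0.5, -0.2, 0.8])   # weight values of the historical model (illustrative)
a = np.array([0.9, 0.1, 0.5])    # attention scores from the first model (illustrative)
print(target_weight_values(w, a))
```

The element-wise product is the only operation the step specifies; the changed historical model of S308 is then the historical model with these target weight values substituted in.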
In S308, the changed history model is determined based on the target weight value of the history model.
In S310, a second training sample is obtained.
For example, if the historical model is a classification model, approximately 1 month of user behavior data (such as resource transfer data of users) may be obtained as the second training sample.
In S312, a second training sample is input into the changed historical model, and the model accuracy of the changed historical model is determined based on the sample label of the second training sample and the output result of the changed historical model.
In implementation, if the history model is a classification model, the second training sample may be input into the changed history model to obtain a classification result for the second training sample, and then the model accuracy of the changed history model is determined based on the sample label of the second training sample and the classification result of the second training sample.
In S314, it is determined whether the first model converges based on the model accuracy and the parameter sparsity of the changed historical model; if not, the first model continues to be trained based on the model weights of the historical model until the first model converges, so as to obtain the trained first model.
The parameter sparsity rate may be determined by the proportion of zeros among the parameters.
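A minimal sketch of how the parameter sparsity rate might be computed, assuming it is simply the fraction of (near-)zero parameters; the tolerance `eps` is an assumption, since the patent does not state how exact zeros are detected:

```python
import numpy as np

def sparsity_rate(params: np.ndarray, eps: float = 1e-8) -> float:
    """Fraction of parameters whose magnitude is (near) zero."""
    return float(np.mean(np.abs(params) < eps))

p = np.array([0.0, 0.3, 0.0, -0.1, 0.0, 0.0])
print(round(sparsity_rate(p), 4))  # 0.6667
```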
In S316, the weight value of each model weight of the target model is input into the pre-trained first model, so as to obtain the attention score of each model weight of the target model.
In S318, the attention score of each model weight of the target model is determined as the importance score of each model weight of the target model.
In addition, in practical applications, the processing manner of S102 may be various, as shown in fig. 4, and an optional implementation manner is provided below, which may specifically refer to the processing of S402 to S410 described below.
In S402, a first training sample is acquired.
For example, if the target model is a classification model, approximately 1 month of user behavior data (such as resource transfer data of users) may be acquired as the first training sample.
In S404, a first training sample is input into the target model, and a first model accuracy of the target model is determined based on the sample label of the first training sample and the output result of the target model.
In S406, a weight value of a target weight of the target model is set to a first weight value, and the changed target model is determined based on the set target weight.
The target weight is any model weight of the target model, and the first weight value may be different weight values according to different actual application scenarios, which is not specifically limited in the embodiments of the present specification.
In S408, the first training sample is input into the changed target model, and the second model accuracy of the changed target model is determined based on the sample label of the first training sample and the output result of the changed target model.
In S410, an importance score for the target weight of the target model is determined based on the first model accuracy and the second model accuracy.
In an implementation, as shown in fig. 5, take as an example a target model that is a classification model constructed from a 3-layer convolutional neural network in which each convolutional layer includes 3 filters for identifying specific features of the data. A first training sample may be input into the target model, and a first model accuracy of the target model may be determined based on the sample label of the first training sample and the output result of the target model. The weight value of filter 1 of the first convolutional layer may then be set to the first weight value, with the weight values of the remaining filters unchanged, to obtain the changed target model. The first training sample may be input into the changed target model, the second model accuracy of the changed target model may be determined based on the sample label of the first training sample and the output result of the changed target model, and finally the difference between the first model accuracy and the second model accuracy may be determined as the importance score of filter 1 of the first convolutional layer.
The importance scores for each of the remaining filters may be determined in turn based on the above-described method of determining the importance score for filter 1 of the first-layer convolutional layer.
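The S402 to S410 loop is an ablation scheme: score each weight (or filter) by the accuracy drop when it is replaced by the first weight value. A minimal sketch follows, in which the `evaluate` function and the toy "accuracy" are pure stand-ins for a real model evaluation, not anything the patent specifies:

```python
def ablation_importance(weights, evaluate, first_weight_value=0.0):
    """Score each weight by the accuracy drop when it is ablated (S402-S410)."""
    base_acc = evaluate(weights)                 # first model accuracy (S404)
    scores = []
    for i in range(len(weights)):
        changed = list(weights)
        changed[i] = first_weight_value          # S406: set one weight to the first weight value
        # S410: importance = first model accuracy - second model accuracy
        scores.append(round(base_acc - evaluate(changed), 6))
    return scores

# Toy stand-in for real evaluation: "accuracy" is the sum of positive
# filter contributions (an assumption purely for illustration).
evaluate = lambda ws: sum(max(w, 0.0) for w in ws)
print(ablation_importance([0.4, -0.1, 0.7], evaluate))  # [0.4, 0.0, 0.7]
```

With a real model, `evaluate` would run the first training sample through the (changed) model and compare the output against the sample labels.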
In addition, in practical applications, the processing manner of S410 may be various, and an alternative implementation manner is provided as follows, which may specifically refer to the following processing from step one to step three:
step one, determining a first score of a target weight of a target model based on a first model accuracy and a second model accuracy to determine a first score of each model weight of the target model.
In implementation, the first score of the target weight may be determined according to a difference between the first model accuracy and the second model accuracy corresponding to the target weight, and then the first score of each model weight may be determined according to a method for determining the first score of the target weight.
And step two, respectively inputting the weight value of each model weight of the target model into the pre-trained first model to obtain the attention score of each model weight of the target model.
In practice, the first model and the training process of the first model may be as described in the above S302 to S318, and are not described herein again.
And step three, determining the importance score of each model weight based on the first score and the attention score of each model weight of the target model.
In implementation, the importance score of each model weight may be determined according to the sum of the first score and the attention score of each model weight, for example, the sum of the first score and the attention score of each model weight may be determined as the importance score of each model weight, and the sum of the first score and the attention score of each model weight may be ranked and the importance score of each model weight may be determined according to the ranking result.
The determination method of the importance score of the model weight may be various, and different determination methods may be selected according to actual application scenarios, which is not specifically limited in the embodiments of the present specification.
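The two combination strategies named in step three (summing the scores, or ranking the summed scores) can be sketched as follows; this is one plausible reading of "determined according to the ranking result", since the patent leaves the exact rank-to-score mapping open:

```python
def importance_by_sum(first_scores, attention_scores):
    """Importance = first (ablation) score + attention score."""
    return [f + a for f, a in zip(first_scores, attention_scores)]

def importance_by_rank(first_scores, attention_scores):
    """Importance = rank of the summed score (higher rank = more important)."""
    total = importance_by_sum(first_scores, attention_scores)
    order = sorted(range(len(total)), key=lambda i: total[i])
    ranks = [0] * len(total)
    for rank, i in enumerate(order):
        ranks[i] = rank
    return ranks

print(importance_by_rank([0.4, 0.0, 0.7], [0.9, 0.1, 0.5]))  # [2, 0, 1]
```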
After the importance of each model weight is determined based on the above-described S302 to S318 or S402 to S410, the following S320 to S326 may be further continuously performed as shown in fig. 4 or fig. 5.
In S320, the model weight of the target model is divided into a first class weight and a second class weight based on the importance of each model weight of the target model.
In implementation, the model weights of the target model may be divided into a first class weight and a second class weight according to a preset selection ratio and the importance of each model weight. For example, the importance of each model weight may be ranked, and the top 25% of the model weights may be determined as the first class weight, and the remaining 75% of the model weights may be determined as the second class weight.
Or, the model weights of the target model can be divided into a first class weight and a second class weight according to a preset importance threshold and the importance of each model weight. For example, a model weight having an importance greater than a preset importance threshold may be determined as a first type of weight, and a model weight having an importance not greater than the preset importance threshold may be determined as a second type of weight.
The method for dividing the model weight of the target model may be various, and different dividing methods may be selected according to actual application scenarios, which is not specifically limited in the embodiments of the present specification.
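The ratio-based division of S320 can be sketched as below; the 25% ratio is the example ratio from the text, and the tie-breaking and `max(1, ...)` floor are assumptions:

```python
def split_by_ratio(importance, ratio=0.25):
    """Indices of the first-class (top `ratio` by importance) and second-class weights."""
    order = sorted(range(len(importance)), key=lambda i: -importance[i])
    k = max(1, round(len(importance) * ratio))
    return set(order[:k]), set(order[k:])

first, second = split_by_ratio([0.9, 0.1, 0.5, 0.3], ratio=0.25)
print(sorted(first), sorted(second))  # [0] [1, 2, 3]
```

The threshold-based alternative is the same idea with `importance[i] > threshold` deciding class membership instead of the rank cutoff.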
In S322, a preset first encoding mode corresponding to the first class weight is obtained, and the first class weight is encoded based on the preset first encoding mode to obtain the encoded first class weight.
In an implementation, for example, the first encoding manner may be an Fp32 encoding manner. Assuming that the default weight type of the first class of weights is the Fp64 type, the type of the first class of weights may be directly converted from Fp64 to Fp32, so that the precision loss of the converted first class of weights (i.e., the encoded first class of weights) is less than a preset loss threshold; that is, the precision of the first class of weights is only slightly affected.
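In NumPy terms, the Fp64-to-Fp32 conversion and the loss-threshold check might look like this; the threshold value is an assumption for illustration:

```python
import numpy as np

def encode_first_class(weights_fp64: np.ndarray, loss_threshold: float = 1e-6):
    """Re-type high-importance weights from 64-bit to 32-bit floats (S322)."""
    encoded = weights_fp64.astype(np.float32)
    # Precision loss from the narrowing conversion must stay below the threshold.
    loss = float(np.max(np.abs(encoded.astype(np.float64) - weights_fp64)))
    assert loss < loss_threshold, "precision loss exceeds the preset loss threshold"
    return encoded

w = np.array([0.123456789, -0.5, 0.25], dtype=np.float64)
enc = encode_first_class(w)
print(enc.dtype, enc.nbytes)  # float32 12
```

For weights of typical magnitude the rounding error of a float64-to-float32 cast is on the order of 1e-8 of the value, comfortably under such a threshold.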
In S324, a preset second encoding manner corresponding to the second class weight is obtained, and the second class weight is encoded based on the preset second encoding manner, so as to obtain an encoded second class weight.
In practical applications, the processing manner of S324 may be various. An alternative implementation manner is provided below, which may specifically refer to the following steps one to four:
step one, clustering processing is carried out on the second class weights to obtain a plurality of sub-classes, and each sub-class corresponds to one or more second class weights.
In implementation, the second class of weights may be clustered based on a preset clustering algorithm (e.g., K-Means algorithm, DBSCAN algorithm, etc.) to obtain a plurality of sub-classes.
And step two, determining the target label corresponding to each sub-category based on the corresponding relation between the preset label and the preset weight value and the second class weight corresponding to each sub-category.
And step three, updating the weight values of the second class of weights corresponding to the sub-classes based on the target labels corresponding to the sub-classes to obtain the processed second class weights.
And fourthly, coding the processed second class weight based on a preset second coding mode to obtain the coded second class weight.
In an implementation, for example, assuming that the second class of weights includes weight 1, weight 2, weight 3, weight 4 and weight 5, the weight values of the 5 weights may be converted into Fp32 types, and then the 5 weights are clustered, so that 2 sub-categories, namely sub-category 1 including weight 1, weight 3 and weight 4, and sub-category 2 including weight 2 and weight 5, may be obtained.
The center point of each sub-category can be obtained, then the label corresponding to the center point of each sub-category is determined based on the corresponding relation between the preset label and the preset weight value, and the label is determined to be the target label corresponding to the sub-category. The correspondence between the preset label and the preset weight value may be as shown in table 1 below.
TABLE 1

Label    Weight value
0        0.01 - 0.05
1        0.06 - 0.10
2        0.11 - 0.15
For example, the weight value at the center point of the sub-category 1 may be 0.02, and then, according to the correspondence between the preset tag and the preset weight value in table 1, it may be determined that the tag corresponding to the weight value is 0, and then the target tag corresponding to the sub-category 1 is 0.
The determination method of the target tag of each sub-category is an optional and realizable determination method, and in an actual application scenario, there may be a plurality of different determination methods, and different determination methods may be selected according to different actual application scenarios, which is not specifically limited in the embodiment of the present specification.
After the target label of each sub-category is determined, the target label of each sub-category may be determined as the weight value of the second-class weights corresponding to that sub-category. For example, if the target label corresponding to sub-category 1 is label 1, the weight values of weight 1, weight 3, and weight 4 corresponding to sub-category 1 may be updated to label 1, so that the processed weight value of each second-class weight is its corresponding label value.
The processed second class weight may be encoded based on a preset second encoding manner to obtain an encoded second class weight, for example, the processed second class weight may be encoded by an Int8 encoding manner to obtain an encoded second class weight, and specifically, the tag value corresponding to the second class weight may be encoded by an Int8 encoding manner to obtain an encoded second class weight.
In this way, the storage space occupied by a target label is smaller than that occupied by the original model weight: after the target label is encoded in the Int8 encoding mode, one model weight occupies only 8 bits of storage space, which saves storage space and improves data processing efficiency.
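Steps one to four can be sketched end to end. This is a hedged illustration: the minimal 1-D k-means-style clustering stands in for whatever preset clustering algorithm is used, the two-cluster split mirrors the weight 1 to weight 5 example, and the label ranges mirror Table 1:

```python
import numpy as np

def encode_second_class(weights, label_ranges):
    """Cluster low-importance weights, label each cluster, store labels as int8 (S324)."""
    weights = np.asarray(weights, dtype=np.float32)  # step one setup: convert to Fp32
    # Minimal 1-D k-means-style split into two sub-categories.
    centers = np.array([weights.min(), weights.max()], dtype=np.float64)
    for _ in range(10):
        assign = np.argmin(np.abs(weights[:, None] - centers[None, :]), axis=1)
        centers = np.array([weights[assign == c].mean() for c in (0, 1)])
    labels = np.empty(len(weights), dtype=np.int8)   # step four: 8-bit storage per weight
    for c in (0, 1):
        # Step two: the label whose preset range contains the cluster center point.
        lab = next(l for l, (lo, hi) in label_ranges.items() if lo <= centers[c] <= hi)
        labels[assign == c] = lab                    # step three: update weights to the label
    return labels

ranges = {0: (0.01, 0.05), 1: (0.06, 0.10), 2: (0.11, 0.15)}  # Table 1
print(encode_second_class([0.02, 0.08, 0.03, 0.01, 0.09], ranges))  # [0 1 0 0 1]
```

A production implementation would use a library clustering routine (e.g. K-Means) and handle centers that fall outside every preset range.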
In S326, an encoded model weight is obtained based on the encoded first class weight and the encoded second class weight.
The number of pixel points required to store an encoded first-class weight is greater than the number of pixel points required to store an encoded second-class weight.
In implementation, based on the above steps, take the carrier object to be a carrier image in which each pixel point has 4 bits available for storing the data to be steganographically written: storing an encoded first-class weight occupies 32 bits, storing an encoded second-class weight occupies 8 bits, the first-class weights account for 25% of the weights, and the second-class weights account for the remaining 75%. For N model weights, storing them therefore requires 25% x N x 32/4 + 75% x N x 8/4 = 3.5N pixel points, whereas storing the N model weights directly would require N x 32/4 = 8N pixel points. Obviously, dividing the model weights and encoding the two classes in different encoding manners saves storage space, while encoding the model weights of higher importance in a higher-precision encoding manner ensures the model effect of the target model.
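The pixel budget above can be checked directly; the parameter defaults encode the example figures from the text (4 steganographic bits per pixel, 32-bit first-class weights, 8-bit second-class weights, 25% split):

```python
def pixels_needed(n_weights, first_ratio=0.25, bits_per_pixel=4,
                  first_bits=32, second_bits=8):
    """Pixels needed when weights are split into two encoding classes."""
    first = n_weights * first_ratio * first_bits / bits_per_pixel
    second = n_weights * (1 - first_ratio) * second_bits / bits_per_pixel
    return first + second

N = 1000
print(pixels_needed(N))  # 3500.0, i.e. 3.5N pixel points
print(N * 32 / 4)        # 8000.0, i.e. 8N pixel points without the split
```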
In S106, the encoded model weights and the model structure of the target model are written into the carrier object, and the written carrier object is obtained.
Wherein the carrier object may comprise a first carrier object and a second carrier object, the written carrier object comprising the first carrier object to which the encoded model weights are written and the second carrier object to which the model structure of the target model is written.
In implementation, to achieve security at the carrier level, the model weights and the model structure of the target model may be written into different carrier objects: the model structure of the target model may be written into a second carrier object, and the model weights of the target model (that is, the encoded model weights) may be written into a first carrier object. The types of the first carrier object and the second carrier object may be the same or different; for example, the first carrier object may be a carrier image, and the second carrier object may be a different carrier image, a carrier video, or the like.
In addition, the encoded first class weight, the encoded second class weight, and the corresponding relationship between the preset tag and the preset weight value may be written into the first carrier object, so that after receiving the first carrier object, the target device may reversely solve the original weight value corresponding to the processed second class weight according to the corresponding relationship between the preset tag and the preset weight value.
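Because the label/weight-value correspondence travels with the first carrier object, the target device can invert it. One reasonable reconstruction, assumed here since the patent leaves the exact inverse mapping open, is to map each label back to the midpoint of its weight range:

```python
def decode_labels(labels, label_ranges):
    """Map each stored label back to the midpoint of its preset weight range."""
    mid = {l: round((lo + hi) / 2, 4) for l, (lo, hi) in label_ranges.items()}
    return [mid[l] for l in labels]

ranges = {0: (0.01, 0.05), 1: (0.06, 0.10), 2: (0.11, 0.15)}  # Table 1
print(decode_labels([0, 1, 0, 2], ranges))  # [0.03, 0.08, 0.03, 0.13]
```

The reconstruction is lossy by design: every second-class weight in a sub-category decodes to the same value, which is exactly the precision traded away for the 8-bit storage.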
In practical applications, the processing manner of S106 may be various, and an alternative implementation manner is provided as follows, which may specifically refer to the following steps one to five:
step one, acquiring the coded historical weight and the historical carrier object.
And step two, inputting the coded historical weight and the historical carrier object into a steganography model to obtain the written historical carrier object.
Wherein the steganographic model may be a model constructed by a deep learning algorithm for steganographic writing data into the carrier.
And step three, inputting the written historical carrier object into a weight extraction model to obtain the extracted historical weight.
And step four, determining whether the steganographic model converges based on the historical carrier object, the written historical carrier object, the encoded historical weight, and the extracted historical weight; if not, training the steganographic model and the extraction model based on the encoded historical weight and the historical carrier object until the steganographic model and the extraction model converge, so as to obtain the trained steganographic model and the trained extraction model.
In implementation, the steganographic model may be trained using stochastic gradient descent (SGD) until convergence.
Taking the historical carrier object as a carrier image as an example, the loss function in the training process may include two parts: first, the difference between the historical carrier object and the written historical carrier object, measured by the peak signal-to-noise ratio (PSNR); second, the difference between the encoded historical weight and the extracted historical weight.
And fifthly, inputting the coded model weight and the first carrier object into a pre-trained steganography model to obtain the first carrier object written with the coded model weight.
In S108, the written carrier object is sent to the target device.
The written carrier object is used for triggering the target device to obtain the target model based on the written carrier object, so as to process the target service based on the target model.
The embodiment of the specification provides a data processing method, which includes: obtaining a target model to be steganographically written; determining the importance of each model weight of the target model, wherein the importance of a model weight may be used to represent the degree of influence of the model weight on the model accuracy of the target model; determining, based on the importance of each model weight of the target model, the encoding mode corresponding to each model weight, and encoding the model weights based on those encoding modes to obtain the encoded model weights; writing the encoded model weights and the model structure of the target model into a carrier object to obtain a written carrier object; and sending the written carrier object to a target device, triggering the target device to obtain the target model based on the written carrier object so as to process a target service based on the target model. Therefore, the server side can encode the model weights according to the encoding mode corresponding to each model weight, which avoids introducing extra data and can improve the efficiency of model steganography; determining the corresponding encoding mode according to the importance of each model weight can improve the encoding precision of the model weights with high importance and improve the effect of model steganography.
EXAMPLE III
As shown in fig. 6A and 6B, the execution subject of the method may be a target device, and the target device may be a server, where the server may be an independent server or a server cluster composed of multiple servers. The method specifically comprises the following steps:
in S602, the written carrier object sent by the server is received.
The written carrier object may be obtained by the server writing the model structure of the target model and the encoded model weights into the carrier object. The encoded model weights may be obtained by the server encoding each model weight in the encoding mode determined from the importance of each model weight of the target model, where the importance of a model weight may be used to represent the degree of influence of the model weight on the model accuracy of the target model.
In S604, the written carrier object is extracted based on a preset extraction model, and a model weight of the encoded target model and a model structure of the target model are obtained.
The preset extraction model may be a trained extraction model sent by the server, and the extraction model may be an extraction model obtained by training the steganographic model and the extraction model in the above second embodiment.
In implementation, the target device may further receive an extraction model sent by the server, and extract the written carrier object based on the extraction model to obtain a model weight of the encoded target model and a model structure of the target model.
Furthermore, if the carrier objects comprise a first carrier object and a second carrier object, and the written carrier object comprises a first carrier object written with the encoded model weights and a second carrier object written with the model structure of the target model, then the target device may perform an extraction process on the first carrier object written with the encoded model weights based on the extraction model to obtain the encoded model weights.
The second carrier object, into which the model structure of the target model is written, may be processed according to a preset parsing rule to obtain an encoded model structure. The target device may then determine the model structure of the target model according to a preset encoding rule sent by the server and the encoded model structure, and input the obtained model weights into the model structure to obtain the target model.
In S606, a target model is determined based on the encoded model weight and the model structure of the target model, and a target service is processed based on the target model.
The embodiment of the present specification provides a data processing method, which receives a written carrier object sent by a server. The written carrier object may be obtained by the server writing the model structure of the target model and the encoded model weights into the carrier object, where the encoded model weights may be obtained by the server encoding each model weight in the encoding mode determined from the importance of each model weight of the target model, and the importance of a model weight may be used to represent the degree of influence of the model weight on the model accuracy of the target model. The written carrier object is subjected to extraction processing based on a preset extraction model to obtain the encoded model weights of the target model and the model structure of the target model, the target model is determined based on the encoded model weights and the model structure of the target model, and a target service is processed based on the target model. Therefore, the server side can encode the model weights according to the encoding mode corresponding to each model weight, which avoids introducing extra data and can improve the efficiency of model steganography; determining the corresponding encoding mode according to the importance of each model weight can improve the encoding precision of the model weights with high importance and improve the effect of model steganography.
Example four
Based on the same idea, the data processing method provided in the embodiment of the present specification further provides a data processing apparatus, as shown in fig. 7.
The data processing apparatus includes: a model obtaining module 701, a weight coding module 702, an information writing module 703 and a data sending module 704, wherein:
the model obtaining module 701 is used for obtaining a target model to be steganographically, and determining the importance of each model weight of the target model, wherein the importance of the model weight is used for representing the influence degree of the model weight on the model accuracy of the target model;
a weight coding module 702, configured to determine, based on the importance of each model weight of the target model, a coding mode corresponding to each model weight, and code the model weight based on the coding mode to obtain a coded model weight;
an information writing module 703, configured to write the encoded model weight and the model structure of the target model into a carrier object, so as to obtain a written carrier object;
a data sending module 704, configured to send the written carrier object to a target device, where the written carrier object is used to trigger the target device to obtain the target model based on the written carrier object, so as to process a target service based on the target model.
In an embodiment of the present specification, the carrier objects comprise a first carrier object and a second carrier object, and the written carrier objects comprise the first carrier object written with the encoded model weights and the second carrier object written with the model structure of the target model.
In this embodiment of the present specification, the model obtaining module 701 is configured to:
obtaining a first training sample;
inputting the first training sample into the target model, and determining a first model accuracy of the target model based on a sample label of the first training sample and an output result of the target model;
setting a weight value of a target weight of the target model as a first weight value, and determining a changed target model based on the set target weight, wherein the target weight is any one model weight of the target model;
inputting the first training sample into the changed target model, and determining a second model accuracy of the changed target model based on a sample label of the first training sample and an output result of the changed target model;
determining an importance score for the target weight of the target model based on the first model accuracy and the second model accuracy.
In this embodiment of the present specification, the model obtaining module 701 is configured to:
determining a first score of a target weight for the target model based on the first model accuracy and the second model accuracy to determine a first score of each model weight for the target model;
respectively inputting the weight value of each model weight of the target model into a pre-trained first model to obtain the attention score of each model weight of the target model;
determining an importance score for each model weight of the target model based on the first score and the attention score for the each model weight.
In this embodiment of the present specification, the model obtaining module 701 is configured to:
respectively inputting the weight value of each model weight of the target model into a pre-trained first model to obtain the attention score of each model weight of the target model, wherein the first model is constructed from a preset number of fully connected layers with an attention mechanism;
determining an attention score for each model weight of the target model as an importance score for each model weight of the target model.
In an embodiment of this specification, the apparatus further includes:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a historical model to be steganographically;
the first determining module is used for respectively inputting the weight value of each model weight of the historical model into the first model to obtain the attention value of each model weight of the historical model;
a second determining module, configured to determine a target weight value for each model weight of the historical model based on the attention score and the weight value for each model weight of the historical model;
the third determining module is used for determining the changed history model based on the target weight value of the history model;
the second acquisition module is used for acquiring a second training sample;
a fourth determining module, configured to input the second training sample into the changed historical model, and determine a model accuracy of the changed historical model based on a sample label of the second training sample and an output result of the changed historical model;
and the first judging module is used for determining whether the first model is converged or not based on the model accuracy and the parameter sparsity of the changed historical model, and if not, continuing to train the first model based on the model weight of the historical model until the first model is converged to obtain the trained first model.
In this embodiment of the present specification, the weight encoding module 702 is configured to:
dividing the model weight of the target model into a first class weight and a second class weight based on the importance of each model weight of the target model;
acquiring a preset first coding mode corresponding to the first class weight, and coding the first class weight based on the preset first coding mode to obtain a coded first class weight;
acquiring a preset second coding mode corresponding to the second class weight, and coding the second class weight based on the preset second coding mode to obtain a coded second class weight;
obtaining the encoded model weight based on the encoded first class weight and the encoded second class weight;
wherein the number of pixels required to store the encoded first-class weights is greater than the number of pixels required to store the encoded second-class weights.
In this embodiment of the present specification, the weight encoding module 702 is configured to:
clustering the second class of weights to obtain a plurality of subcategories, wherein each subcategory corresponds to one or more second class of weights;
determining a target label corresponding to each subcategory based on a preset correspondence between labels and weight values and on the second-class weights corresponding to that subcategory;
updating the weight value of the second class of weight corresponding to the sub-category based on the target label corresponding to the sub-category to obtain the processed second class of weight;
coding the processed second class of weights based on the preset second coding mode to obtain the coded second class of weights;
the writing the encoded model weights to a carrier object includes:
and writing the encoded first-class weights, the encoded second-class weights, and the preset correspondence between labels and weight values into the first carrier object.
In this embodiment of the present specification, the information writing module 703 is configured to input the encoded model weight and the first carrier object into a pre-trained steganography model, so as to obtain the first carrier object written with the encoded model weight.
In an embodiment of this specification, the apparatus further includes:
a third acquisition module, configured to acquire encoded historical weights and a historical carrier object;
an object acquisition module, configured to input the encoded historical weights and the historical carrier object into the steganographic model to obtain a written historical carrier object;
a weight extraction module, configured to input the written historical carrier object into a weight extraction model to obtain extracted historical weights;
and a second judging module, configured to determine, based on the historical carrier object, the written historical carrier object, the encoded historical weights, and the extracted historical weights, whether the steganographic model has converged, and if not, to train the steganographic model and the weight extraction model based on the encoded historical weights and the historical carrier object until both converge, so as to obtain the trained steganographic model.
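The specification trains a learned steganographic model paired with a weight extraction model. As a simplified stand-in, the write/extract round trip can be illustrated with classical least-significant-bit (LSB) embedding — this is only an analogy for what the learned models do, not the trained models themselves:

```python
import numpy as np

def write_bits_lsb(carrier, bits):
    """Hide a bit sequence in the least significant bit of each pixel of
    the carrier image (a stand-in for the learned steganographic model)."""
    flat = carrier.flatten().astype(np.uint8)  # flatten() copies the carrier
    if len(bits) > flat.size:
        raise ValueError("carrier too small for the payload")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(carrier.shape)

def extract_bits_lsb(written, n_bits):
    """Recover the first n_bits (a stand-in for the weight extraction model)."""
    return (written.flatten()[:n_bits] & 1).tolist()
```

As with the trained models, the written carrier differs from the original by at most one intensity level per pixel, while the payload is recovered exactly.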
The embodiment of the specification provides a data processing device. The device acquires a target model to be steganographically embedded and determines the importance of each model weight of the target model, where the importance of a model weight characterizes the degree to which that weight influences the model accuracy of the target model. Based on the importance of each model weight, the device determines a coding mode corresponding to that weight and encodes the weight accordingly to obtain encoded model weights. It then writes the encoded model weights and the model structure of the target model into a carrier object to obtain a written carrier object, and sends the written carrier object to a target device; the written carrier object triggers the target device to recover the target model from the written carrier object and process a target service with it. In this way, the server encodes each model weight with the coding mode matched to that weight, which avoids introducing extra data and improves the efficiency of model steganography; and because the coding mode is chosen according to the importance of the weight, the coding precision of highly important weights is raised, improving the effect of model steganography.
EXAMPLE five
Based on the same idea, an embodiment of the present specification further provides a data processing apparatus corresponding to the data processing method described above, as shown in fig. 8.
The data processing apparatus includes: an object receiving module 801, a data extraction module 802, and a model determination module 803, wherein:
an object receiving module 801, configured to receive a written carrier object sent by a server, where the written carrier object is obtained by the server writing the model structure of the target model and the encoded model weights into a carrier object; the encoded model weights are obtained by the server encoding each model weight with a coding mode determined from the importance of that weight, and the importance of a model weight characterizes the degree to which that weight influences the model accuracy of the target model;
a data extraction module 802, configured to extract the written carrier object based on a preset extraction model, so as to obtain a model weight of the encoded target model and a model structure of the target model;
a model determining module 803, configured to determine the target model based on the encoded model weight and the model structure of the target model, and process a target service based on the target model.
The embodiment of the present specification provides a data processing apparatus. The apparatus receives a written carrier object sent by a server, where the written carrier object may be obtained by the server writing the model structure of a target model and the encoded model weights into a carrier object; the encoded model weights may be obtained by the server encoding each model weight with a coding mode determined from the importance of that weight, and the importance of a model weight may characterize the degree to which that weight influences the model accuracy of the target model. The apparatus extracts the written carrier object with a preset extraction model to obtain the encoded model weights and the model structure of the target model, determines the target model from them, and processes a target service based on the target model. In this way, the server encodes each model weight with the coding mode matched to that weight, which avoids introducing extra data and improves the efficiency of model steganography; and because the coding mode is chosen according to the importance of the weight, the coding precision of highly important weights is raised, improving the effect of model steganography.
EXAMPLE six
Based on the same idea, embodiments of the present specification further provide a data processing apparatus, as shown in fig. 9.
A data processing apparatus may vary widely in configuration or performance and may include one or more processors 901 and a memory 902, where the memory 902 may store one or more applications or data. The memory 902 may be transient storage or persistent storage. An application program stored in the memory 902 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the data processing apparatus. Further, the processor 901 may be configured to communicate with the memory 902 and to execute, on the data processing apparatus, the series of computer-executable instructions in the memory 902. The data processing apparatus may also include one or more power supplies 903, one or more wired or wireless network interfaces 904, one or more input/output interfaces 905, and one or more keyboards 906.
In particular, in this embodiment, the data processing apparatus includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the data processing apparatus, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
obtaining a target model to be steganographically embedded, and determining the importance of each model weight of the target model, wherein the importance of a model weight characterizes the degree to which that weight influences the model accuracy of the target model;
determining a coding mode corresponding to each model weight based on the importance of each model weight of the target model, and coding the model weight based on the coding mode to obtain a coded model weight;
writing the coded model weight and the model structure of the target model into a carrier object to obtain a written carrier object;
and sending the written carrier object to a target device, wherein the written carrier object is used for triggering the target device to obtain the target model based on the written carrier object, so as to process a target service based on the target model.
Optionally, the carrier objects comprise a first carrier object and a second carrier object, and the written carrier objects comprise the first carrier object written with the encoded model weights and the second carrier object written with the model structure of the target model.
Optionally, the determining the importance of each model weight of the target model includes:
obtaining a first training sample;
inputting the first training sample into the target model, and determining a first model accuracy of the target model based on a sample label of the first training sample and an output result of the target model;
setting a weight value of a target weight of the target model as a first weight value, and determining a changed target model based on the set target weight, wherein the target weight is any one model weight of the target model;
inputting the first training sample into the changed target model, and determining a second model accuracy of the changed target model based on a sample label of the first training sample and an output result of the changed target model;
determining an importance score for the target weight of the target model based on the first model accuracy and the second model accuracy.
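The perturbation procedure above — set one target weight to a first weight value, re-evaluate, and compare the two model accuracies — can be sketched as follows; the toy linear classifier and the choice of zero as the first weight value are illustrative assumptions:

```python
import numpy as np

def accuracy(model_weights, xs, labels):
    """Toy linear classifier: predict 1 when w.x > 0."""
    preds = (np.asarray(xs) @ np.asarray(model_weights) > 0).astype(int)
    return float(np.mean(preds == np.asarray(labels)))

def importance_scores(weights, xs, labels, first_weight_value=0.0):
    """Score each weight by the accuracy drop after setting it to the
    first weight value and re-evaluating the changed model."""
    base = accuracy(weights, xs, labels)  # first model accuracy
    scores = []
    for i in range(len(weights)):
        changed = list(weights)
        changed[i] = first_weight_value  # set the target weight
        scores.append(base - accuracy(changed, xs, labels))  # drop = importance
    return scores
```

A weight whose removal costs the most accuracy receives the highest importance score, matching the intent of the first and second model accuracies above.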
Optionally, said determining an importance score for a target weight of the target model based on the first model accuracy rate and the second model accuracy rate comprises:
determining a first score for the target weight of the target model based on the first model accuracy and the second model accuracy, so as to obtain a first score for each model weight of the target model;
respectively inputting the weight value of each model weight of the target model into a pre-trained first model to obtain the attention score of each model weight of the target model;
determining an importance score for each model weight of the target model based on the first score and the attention score for the each model weight.
Optionally, the determining the importance of each model weight of the target model includes:
respectively inputting the weight value of each model weight of the target model into a pre-trained first model to obtain the attention score of each model weight of the target model, wherein the first model is constructed from a preset number of fully connected layers with attention mechanisms;
determining an attention score for each model weight of the target model as an importance score for each model weight of the target model.
Optionally, before the weight values of the model weights of the target model are respectively input into the pre-trained first model to obtain the attention score of each model weight of the target model, the method further includes:
acquiring a historical model to be steganographically embedded;
respectively inputting the weight value of each model weight of the historical model into the first model to obtain the attention score of each model weight of the historical model;
determining a target weight value for each model weight of the historical model based on the attention score and the weight value for each model weight of the historical model;
determining a changed historical model based on the target weight value of the historical model;
acquiring a second training sample;
inputting the second training sample into the changed historical model, and determining the model accuracy of the changed historical model based on the sample label of the second training sample and the output result of the changed historical model;
and determining whether the first model is converged or not based on the model accuracy and the parameter sparsity of the changed historical model, and if not, continuing training the first model based on the model weight of the historical model until the first model is converged to obtain the trained first model.
Optionally, the determining, based on the importance of each model weight of the target model, a coding mode corresponding to each model weight, and the coding the model weight based on the coding mode to obtain an encoded model weight includes:
dividing the model weight of the target model into a first class weight and a second class weight based on the importance of each model weight of the target model;
acquiring a preset first coding mode corresponding to the first class weight, and coding the first class weight based on the preset first coding mode to obtain a coded first class weight;
acquiring a preset second coding mode corresponding to the second class weight, and coding the second class weight based on the preset second coding mode to obtain a coded second class weight;
obtaining the encoded model weight based on the encoded first class weight and the encoded second class weight;
wherein the number of pixels required to store the encoded first-class weights is greater than the number of pixels required to store the encoded second-class weights.
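One way to realize the two coding modes is uniform quantization at different bit widths, so that an encoded first-class weight occupies more pixel bytes than an encoded second-class weight; the 16-bit/8-bit split and the weight range [-1, 1] below are assumptions for illustration:

```python
def encode_weight(w, n_bits, w_min=-1.0, w_max=1.0):
    """Uniformly quantize w into n_bits, returned as one byte per 8 bits,
    so a 16-bit (first-class) code occupies twice as many pixel bytes
    as an 8-bit (second-class) code."""
    levels = (1 << n_bits) - 1
    q = round((min(max(w, w_min), w_max) - w_min) / (w_max - w_min) * levels)
    return [(q >> (8 * i)) & 0xFF for i in range(n_bits // 8)]

def decode_weight(code, n_bits, w_min=-1.0, w_max=1.0):
    """Invert encode_weight back to a floating-point weight value."""
    levels = (1 << n_bits) - 1
    q = sum(b << (8 * i) for i, b in enumerate(code))
    return w_min + q / levels * (w_max - w_min)
```

The wider code reconstructs the high-importance weight with proportionally finer precision, which is the stated purpose of giving first-class weights more pixels.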
Optionally, the encoding the second class of weights based on the preset second encoding mode to obtain encoded second class of weights includes:
clustering the second class of weights to obtain a plurality of subcategories, wherein each subcategory corresponds to one or more second class of weights;
determining a target label corresponding to each subcategory based on a preset correspondence between labels and weight values and on the second-class weights corresponding to that subcategory;
updating the weight value of the second class of weight corresponding to the sub-category based on the target label corresponding to the sub-category to obtain the processed second class of weight;
coding the processed second class weight based on the preset second coding mode to obtain the coded second class weight;
the writing the encoded model weights to a carrier object includes:
and writing the coded first class weight, the coded second class weight and the corresponding relation between the preset label and the preset weight value into the first carrier object.
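The clustering and label-substitution step for the second-class weights can be sketched as follows; assigning each weight to the nearest preset weight value is a simple stand-in for the clustering described above, and the label table is hypothetical:

```python
def cluster_second_class_weights(weights, label_to_value):
    """Map each low-importance weight to the preset label whose preset
    weight value is nearest, grouping the weights into subcategories.
    label_to_value is the preset label/weight-value correspondence that
    would also be written into the first carrier object."""
    labels, processed = [], []
    for w in weights:
        label = min(label_to_value, key=lambda k: abs(label_to_value[k] - w))
        labels.append(label)  # target label of the weight's subcategory
        processed.append(label_to_value[label])  # updated weight value
    return labels, processed
```

Because only the small label table plus one label per weight must be stored, the second-class weights consume fewer pixels than the first-class weights.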
Optionally, the writing the encoded model weights into a carrier object to obtain a written carrier object includes:
and inputting the coded model weight and the first carrier object into a pre-trained steganography model to obtain the first carrier object written with the coded model weight.
Optionally, before the inputting the encoded model weight and the first carrier object into a pre-trained steganographic model to obtain the first carrier object written with the encoded model weight, the method further includes:
acquiring the coded historical weight and a historical carrier object;
inputting the coded historical weight and the historical carrier object into the steganographic model to obtain a written historical carrier object;
inputting the written historical carrier object into a weight extraction model to obtain an extracted historical weight;
and determining whether the steganographic model has converged based on the historical carrier object, the written historical carrier object, the encoded historical weights, and the extracted historical weights, and if not, training the steganographic model and the weight extraction model based on the encoded historical weights and the historical carrier object until both converge, so as to obtain the trained steganographic model.
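Convergence of the steganographic and weight extraction models is judged from four quantities: the historical carrier object, the written historical carrier object, the encoded historical weights, and the extracted historical weights. A natural training objective combining them — the weighting and the mean-squared-error form are assumptions, not specified here — is:

```python
import numpy as np

def joint_steganography_loss(carrier, written_carrier,
                             encoded_weights, extracted_weights,
                             alpha=1.0, beta=1.0):
    """Combined objective: the written carrier should stay close to the
    original carrier (imperceptibility), while the extraction model
    should recover the encoded weights (fidelity)."""
    carrier_loss = float(np.mean((np.asarray(carrier, float)
                                  - np.asarray(written_carrier, float)) ** 2))
    weight_loss = float(np.mean((np.asarray(encoded_weights, float)
                                 - np.asarray(extracted_weights, float)) ** 2))
    return alpha * carrier_loss + beta * weight_loss
```

Training both models to drive this joint loss below a threshold would realize the convergence check described above.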
Further, in particular embodiments, the data processing apparatus includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the data processing apparatus, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
receiving a written carrier object sent by a server, wherein the written carrier object is obtained by the server writing the model structure of the target model and the encoded model weights into the carrier object; the encoded model weights are obtained by the server encoding each model weight with a coding mode determined from the importance of that weight, and the importance of a model weight characterizes the degree to which that weight influences the model accuracy of the target model;
extracting the written carrier object based on a preset extraction model to obtain the model weight of the coded target model and the model structure of the target model;
and determining the target model based on the coded model weight and the model structure of the target model, and processing the target service based on the target model.
The embodiment of the specification provides a data processing device. The device acquires a target model to be steganographically embedded and determines the importance of each model weight of the target model, where the importance of a model weight characterizes the degree to which that weight influences the model accuracy of the target model. Based on the importance of each model weight, the device determines a coding mode corresponding to that weight and encodes the weight accordingly to obtain encoded model weights. It then writes the encoded model weights and the model structure of the target model into a carrier object to obtain a written carrier object, and sends the written carrier object to the target device; the written carrier object triggers the target device to recover the target model from the written carrier object and process a target service with it. In this way, the server encodes each model weight with the coding mode matched to that weight, which avoids introducing extra data and improves the efficiency of model steganography; and because the coding mode is chosen according to the importance of the weight, the coding precision of highly important weights is raised, improving the effect of model steganography.
EXAMPLE seven
The embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the data processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present specification provides a computer-readable storage medium whose program, when executed, acquires a target model to be steganographically embedded and determines the importance of each model weight of the target model, where the importance of a model weight may characterize the degree to which that weight influences the model accuracy of the target model. Based on the importance of each model weight, a coding mode corresponding to that weight is determined, and the weight is encoded accordingly to obtain encoded model weights. The encoded model weights and the model structure of the target model are written into a carrier object to obtain a written carrier object, which is sent to a target device; the written carrier object triggers the target device to recover the target model from the written carrier object and process a target service with it. In this way, the server encodes each model weight with the coding mode matched to that weight, which avoids introducing extra data and improves the efficiency of model steganography; and because the coding mode is chosen according to the importance of the weight, the coding precision of highly important weights is raised, improving the effect of model steganography.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a process flow). However, as technology has advanced, many of today's improvements in process flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved process flow into a hardware circuit. Thus, it cannot be said that an improvement in a process flow cannot be realized by a physical hardware module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Indeed, the means for performing the various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the corresponding description of the method embodiment.
The above description is only an example of the present disclosure, and is not intended to limit the present disclosure. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (17)

1. A method of data processing, comprising:
acquiring a target model to be steganographically hidden, and determining the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
determining a coding mode corresponding to each model weight based on the importance of each model weight of the target model, and coding the model weight based on the coding mode to obtain a coded model weight;
writing the coded model weight and the model structure of the target model into a carrier object to obtain a written carrier object;
and sending the written carrier object to target equipment, wherein the written carrier object is used for triggering the target equipment to obtain the target model based on the written carrier object so as to process a target service based on the target model.
2. The method according to claim 1, the carrier object comprising a first carrier object and a second carrier object, the written carrier object comprising a first carrier object to which the encoded model weights are written and a second carrier object to which the model structure of the target model is written.
3. The method of claim 2, the determining the importance of each model weight of the target model, comprising:
obtaining a first training sample;
inputting the first training sample into the target model, and determining a first model accuracy of the target model based on a sample label of the first training sample and an output result of the target model;
setting a weight value of a target weight of the target model as a first weight value, and determining a changed target model based on the set target weight, wherein the target weight is any one model weight of the target model;
inputting the first training sample into the changed target model, and determining a second model accuracy of the changed target model based on a sample label of the first training sample and an output result of the changed target model;
determining an importance score for the target weight of the target model based on the first model accuracy and the second model accuracy.
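The ablation measure of claim 3 — set one weight to a fixed value, re-evaluate, and score the weight by the accuracy drop — can be sketched as follows. This is an illustrative reading only: the flattened weight layout, the `evaluate` callback, and zero as the "first weight value" are assumptions, not details fixed by the claim.

```python
import numpy as np

def weight_importance(weights, evaluate, zero_value=0.0):
    """Score each weight by the accuracy drop when it is replaced.

    weights:  1-D array of model weights (hypothetical flattened layout)
    evaluate: callable mapping a weight vector to a model accuracy,
              standing in for running the first training sample through
              the (changed) target model and comparing with its label
    """
    base_acc = evaluate(weights)              # "first model accuracy"
    scores = np.empty_like(weights)
    for i in range(len(weights)):
        perturbed = weights.copy()
        perturbed[i] = zero_value             # the "first weight value"
        # "second model accuracy": larger drop means a more important weight
        scores[i] = base_acc - evaluate(perturbed)
    return scores
```

A weight whose removal barely moves the accuracy gets a score near zero and is a natural candidate for the coarser second-class encoding used later in the claims.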
4. The method of claim 3, the determining an importance score for a target weight of the target model based on the first model accuracy rate and the second model accuracy rate, comprising:
determining a first score of a target weight for the target model based on the first model accuracy and the second model accuracy to determine a first score of each model weight for the target model;
respectively inputting the weight value of each model weight of the target model into a pre-trained first model to obtain the attention score of each model weight of the target model;
determining an importance score for each model weight of the target model based on the first score and the attention score for the each model weight.
5. The method of claim 2, the determining the importance of each model weight of the target model, comprising:
respectively inputting the weight value of each model weight of the target model into a pre-trained first model to obtain the attention score of each model weight of the target model, wherein the first model is constructed from a preset number of fully connected layers with an attention mechanism;
determining an attention score for each model weight of the target model as an importance score for each model weight of the target model.
6. The method of claim 5, before the inputting the weight values of the model weights of the target model into the pre-trained first model respectively to obtain the attention score of each model weight of the target model, the method further comprising:
acquiring a historical model to be steganographically hidden;
respectively inputting the weight value of each model weight of the historical model into the first model to obtain the attention score of each model weight of the historical model;
determining a target weight value for each model weight of the historical model based on the attention score and the weight value for each model weight of the historical model;
determining a changed historical model based on the target weight value of the historical model;
acquiring a second training sample;
inputting the second training sample into the changed historical model, and determining the model accuracy of the changed historical model based on the sample label of the second training sample and the output result of the changed historical model;
and determining whether the first model converges based on the model accuracy and the parameter sparsity of the changed historical model, and if not, continuing to train the first model based on the model weights of the historical model until the first model converges, to obtain the trained first model.
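Claim 6 stops training the attention model on a joint criterion over accuracy and parameter sparsity. A minimal sketch of one plausible gating step (the "target weight value" from attention score and weight value) and the stopping rule; the mean-attention threshold and the numeric targets are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def gate_weights(weights, attention):
    # One reading of claim 6's "target weight value": hard-gate the
    # history model's weights, zeroing those whose attention score
    # falls below the mean (an illustrative threshold).
    mask = attention >= attention.mean()
    sparsity = 1.0 - mask.mean()          # fraction of weights removed
    return weights * mask, sparsity

def converged(accuracy, sparsity, acc_floor=0.98, sparsity_target=0.5):
    # Keep training the first model until the gated history model is
    # still accurate enough AND sparse enough (hypothetical thresholds).
    return accuracy >= acc_floor and sparsity >= sparsity_target
```

The two-sided criterion prevents the trivial solutions: an attention model that keeps everything (accurate but dense) or discards everything (sparse but useless).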
7. The method according to claim 2, wherein the determining, based on the importance of each model weight of the target model, a coding mode corresponding to each model weight, and coding the model weight based on the coding mode to obtain a coded model weight comprises:
dividing the model weight of the target model into a first class weight and a second class weight based on the importance of each model weight of the target model;
acquiring a preset first coding mode corresponding to the first class weight, and coding the first class weight based on the preset first coding mode to obtain a coded first class weight;
acquiring a preset second coding mode corresponding to the second class weight, and coding the second class weight based on the preset second coding mode to obtain a coded second class weight;
obtaining the encoded model weight based on the encoded first class weight and the encoded second class weight;
wherein the number of pixels required to store the encoded first-class weights is greater than the number of pixels required to store the encoded second-class weights.
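The split-and-encode step of claim 7 can be illustrated with per-class quantization at different bit depths, so the important first-class weights consume more carrier capacity (more pixels per weight) than the second class. The quantile split and the 16-/4-bit depths are assumptions for illustration; the patent does not fix the two coding modes:

```python
import numpy as np

def encode_weights(weights, importance, split=0.5, hi_bits=16, lo_bits=4):
    """Split weights into first (important) and second class, then
    quantize each class at a different bit depth (illustrative values)."""
    thresh = np.quantile(importance, 1.0 - split)
    first = importance >= thresh                    # first-class mask

    def quantize(vals, bits):
        levels = 2 ** bits - 1
        lo, hi = vals.min(), vals.max()
        scale = (hi - lo) or 1.0                    # guard constant class
        return np.round((vals - lo) / scale * levels).astype(int)

    codes = np.empty(len(weights), dtype=int)
    codes[first] = quantize(weights[first], hi_bits)   # fine-grained
    codes[~first] = quantize(weights[~first], lo_bits) # coarse
    return codes, first
```

A 16-bit code needs four times the carrier pixels of a 4-bit code under, say, 4-bits-per-pixel embedding, matching the claim's statement that the first class costs more pixels to store.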
8. The method according to claim 7, wherein the encoding the second class of weights based on the preset second encoding manner to obtain the encoded second class of weights comprises:
clustering the second class of weights to obtain a plurality of subcategories, wherein each subcategory corresponds to one or more second class weights;
determining a target label corresponding to each sub-category based on the corresponding relation between a preset label and a preset weight value and a second class weight corresponding to each sub-category;
updating the weighted value of the second class of weights corresponding to the subcategory based on the target label corresponding to the subcategory to obtain the processed second class of weights;
coding the processed second class weight based on the preset second coding mode to obtain the coded second class weight;
the writing the encoded model weights to a carrier object comprises:
and writing the coded first class weight, the coded second class weight and the corresponding relation between the preset label and the preset weight value into the first carrier object.
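Claim 8's clustering of the second-class weights into sub-categories, each replaced by a label from a label-to-value table, can be sketched with a tiny 1-D k-means; per the claim, both the per-weight labels and the table would then be written into the first carrier object. The cluster count, the initialization, and k-means itself are illustrative assumptions:

```python
import numpy as np

def label_encode(second_class, n_clusters=4, iters=20):
    """Cluster low-importance weights and keep only a small label per
    weight plus a label->value table; the decoder rebuilds approximate
    weights as table[label]."""
    # Evenly spaced initial centers over the weight range (assumption).
    centers = np.linspace(second_class.min(), second_class.max(), n_clusters)
    for _ in range(iters):
        # Assign each weight to its nearest center ("sub-category").
        labels = np.argmin(np.abs(second_class[:, None] - centers[None, :]),
                           axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = second_class[labels == k].mean()
    # The "correspondence between preset label and preset weight value".
    table = {k: float(centers[k]) for k in range(n_clusters)}
    return labels, table
```

Replacing each second-class weight by a short cluster label is what makes the coarse coding mode cheap: only the small table needs full precision.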
9. The method of claim 8, wherein writing the encoded model weights to a carrier object, resulting in a written carrier object, comprises:
and inputting the coded model weight and the first carrier object into a pre-trained steganography model to obtain the first carrier object written with the coded model weight.
10. The method according to claim 9, before the inputting the encoded model weights and the first carrier object into a pre-trained steganographic model to obtain the first carrier object written with the encoded model weights, further comprising:
acquiring the coded historical weight and a historical carrier object;
inputting the coded historical weight and the historical carrier object into the steganography model to obtain a written historical carrier object;
inputting the written historical carrier object into a weight extraction model to obtain an extracted historical weight;
and determining whether the steganography model converges based on the historical carrier object, the written historical carrier object, the coded historical weight, and the extracted historical weight, and if not, training the steganography model and the extraction model based on the coded historical weight and the historical carrier object until the steganography model and the extraction model converge, to obtain the trained steganography model.
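The convergence check of claim 10 weighs how far the written carrier drifts from the original against how faithfully the extraction model recovers the encoded weights. A plausible combined training signal, sketched with mean squared errors; the MSE terms and the alpha/beta weights are illustrative assumptions, not the patent's loss:

```python
import numpy as np

def stego_loss(cover, stego, payload, extracted, alpha=1.0, beta=1.0):
    """Joint objective for steganography + extraction model training.

    cover/stego:       historical carrier before/after writing
    payload/extracted: coded weights written in vs. weights recovered
    """
    carrier_dist = np.mean((stego - cover) ** 2)     # imperceptibility
    recon_err = np.mean((extracted - payload) ** 2)  # recoverability
    return alpha * carrier_dist + beta * recon_err
```

Training both models against this single objective is what lets the pair converge jointly: the writer learns perturbations the extractor can decode, while staying close to the original carrier.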
11. A method of data processing, comprising:
receiving a written carrier object sent by a server, wherein the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, the coded model weights are obtained by the server coding each model weight in a coding mode determined based on the importance of each model weight of the target model, and the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
extracting the written carrier object based on a preset extraction model to obtain the coded model weights of the target model and the model structure of the target model;
and determining the target model based on the coded model weight and the model structure of the target model, and processing the target service based on the target model.
12. A data processing apparatus comprising:
the model obtaining module is used for obtaining a target model to be steganographically hidden and determining the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
the weight coding module is used for determining a coding mode corresponding to each model weight based on the importance of each model weight of the target model, and coding the model weight based on the coding mode to obtain a coded model weight;
an information writing module, configured to write the encoded model weight and the model structure of the target model into a carrier object, so as to obtain a written carrier object;
and the data sending module is used for sending the written carrier object to target equipment, and the written carrier object is used for triggering the target equipment to obtain the target model based on the written carrier object so as to process the target service based on the target model.
13. A data processing apparatus comprising:
an object receiving module, configured to receive a written carrier object sent by a server, where the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, the coded model weights are obtained by the server coding each model weight in a coding mode determined based on the importance of each model weight of the target model, and the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
the data extraction module is used for extracting the written carrier object based on a preset extraction model to obtain the coded model weights of the target model and the model structure of the target model;
and the model determining module is used for determining the target model based on the coded model weight and the model structure of the target model and processing the target service based on the target model.
14. A data processing apparatus, the data processing apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquiring a target model to be steganographically hidden, and determining the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
determining a coding mode corresponding to each model weight based on the importance of each model weight of the target model, and coding the model weight based on the coding mode to obtain a coded model weight;
writing the coded model weight and the model structure of the target model into a carrier object to obtain a written carrier object;
and sending the written carrier object to target equipment, wherein the written carrier object is used for triggering the target equipment to obtain the target model based on the written carrier object so as to process a target service based on the target model.
15. A data processing apparatus, the data processing apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
receiving a written carrier object sent by a server, wherein the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, the coded model weights are obtained by the server coding each model weight in a coding mode determined based on the importance of each model weight of the target model, and the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
extracting the written carrier object based on a preset extraction model to obtain the coded model weights of the target model and the model structure of the target model;
and determining the target model based on the coded model weight and the model structure of the target model, and processing the target service based on the target model.
16. A storage medium for storing computer-executable instructions that when executed perform the following:
acquiring a target model to be steganographically hidden, and determining the importance of each model weight of the target model, wherein the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
determining a coding mode corresponding to each model weight based on the importance of each model weight of the target model, and coding the model weight based on the coding mode to obtain a coded model weight;
writing the coded model weight and the model structure of the target model into a carrier object to obtain a written carrier object;
and sending the written carrier object to target equipment, wherein the written carrier object is used for triggering the target equipment to obtain the target model based on the written carrier object so as to process a target service based on the target model.
17. A storage medium for storing computer-executable instructions that when executed perform the following:
receiving a written carrier object sent by a server, wherein the written carrier object is obtained by the server writing the model structure of a target model and coded model weights into a carrier object, the coded model weights are obtained by the server coding each model weight in a coding mode determined based on the importance of each model weight of the target model, and the importance of a model weight represents the degree to which the model weight affects the model accuracy of the target model;
extracting the written carrier object based on a preset extraction model to obtain the coded model weights of the target model and the model structure of the target model;
and determining the target model based on the coded model weight and the model structure of the target model, and processing the target service based on the target model.
CN202210560452.XA 2022-05-23 2022-05-23 Data processing method, device and equipment Pending CN114926706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210560452.XA CN114926706A (en) 2022-05-23 2022-05-23 Data processing method, device and equipment


Publications (1)

Publication Number Publication Date
CN114926706A true CN114926706A (en) 2022-08-19

Family

ID=82810406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210560452.XA Pending CN114926706A (en) 2022-05-23 2022-05-23 Data processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN114926706A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115238250A (en) * 2022-09-15 2022-10-25 支付宝(杭州)信息技术有限公司 Model processing method, device and equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
US6327392B1 (en) * 1999-01-28 2001-12-04 Sharp Laboratories Of America, Inc. Method of visual progressive coding
CN110087075A (en) * 2019-04-22 2019-08-02 浙江大华技术股份有限公司 A kind of coding method of image, code device and computer storage medium
WO2021047471A1 (en) * 2019-09-10 2021-03-18 阿里巴巴集团控股有限公司 Image steganography method and apparatus, and image extraction method and apparatus, and electronic device
US20210287074A1 (en) * 2020-03-12 2021-09-16 Semiconductor Components Industries, Llc Neural network weight encoding
CN113657107A (en) * 2021-08-19 2021-11-16 长沙理工大学 Natural language information hiding method based on sequence to steganographic sequence
CN113961962A (en) * 2021-10-11 2022-01-21 百保(上海)科技有限公司 Model training method and system based on privacy protection and computer equipment


Non-Patent Citations (1)

Title
GAO Peixian; WEI Lixian; LIU Jia; LIU Mingming: "Improved convolutional neural network structure for image steganalysis", Computer Engineering, no. 10, 27 February 2018 (2018-02-27), pages 309-313 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination