CN112862067A - Method and device for processing business by utilizing business model based on privacy protection - Google Patents


Info

Publication number
CN112862067A
CN112862067A
Authority
CN
China
Prior art keywords
neural network, loading, sub, processing, parameters
Prior art date
Legal status
Granted
Application number
CN202110050937.XA
Other languages
Chinese (zh)
Other versions
CN112862067B (en)
Inventor
曹佳炯
丁菁汀
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202110050937.XA
Publication of CN112862067A
Application granted
Publication of CN112862067B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of this specification provides a method and an apparatus for processing a service using a service model based on privacy protection, where the service model includes N sequentially connected neural network layers. The method comprises performing multiple loading processes for the N neural network layers, where any t-th loading process includes: loading into memory the network parameters of L consecutive neural network layers following the layers targeted by the previous loading process, where L < N; acquiring the input data corresponding to the loaded layers; processing the input data with the loaded network parameters of the L layers to obtain the output result of the current loading process, which serves as the input data of the next loading process; and clearing the loaded network parameters of the L layers from memory. After the multiple loading processes, the output result of the last of the N neural network layers is determined as the service processing result of the service model.

Description

Method and device for processing business by utilizing business model based on privacy protection
Technical Field
One or more embodiments of the present specification relate to the field of machine learning and the field of data security, and in particular, to a method and an apparatus for processing a service using a service model based on privacy protection.
Background
In recent years, artificial intelligence systems with deep learning models at their core have developed rapidly and are now widely applied in scenarios such as travel, recommendation, and payment. At the same time, the security of the deep learning model itself has become critical: once a model is cracked, a chain reaction can follow, leading to a series of security problems. Model privacy protection has therefore been a popular research direction in recent years.
It is therefore desirable to have an improved scheme for protecting the privacy and security of models and business data while services are processed using a service model.
Disclosure of Invention
The embodiments in this specification aim to provide a method that more effectively protects the privacy of a deep learning model and reduces its memory footprint, addressing deficiencies in the prior art.
According to a first aspect, a method for processing a service by using a service model based on privacy protection is provided, where the service model includes N neural network layers connected in sequence, and the method includes:
performing multiple loading processes for the N neural network layers, where any t-th loading process includes the following steps:
loading into memory the network parameters of L consecutive neural network layers following the layers targeted by the previous loading process, where L < N;
acquiring input data corresponding to the loaded neural network layer;
processing the input data by using the loaded network parameters of the L neural network layers to obtain an output result of the current loading processing as input data of the next loading processing;
clearing the memory storage of the loaded network parameters of the L neural network layers;
and after the multiple loading processing, determining the output result of the last neural network layer in the N neural network layers as the service processing result of the service model.
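The steps of the first aspect can be sketched as follows. This is an illustrative toy, not the patent's implementation: the model is a stack of dense ReLU layers, `fetch_layer_params` is a hypothetical stand-in for reading one layer's parameters from protected storage, and the chunk size L, dimensions, and random weights are invented for the demo.

```python
# Sketch of layer-wise ("hierarchical decoupling") loading: N layers are
# loaded into memory L at a time; each chunk is cleared before the next
# load, so only a fragment of the model ever resides in memory.
import gc
import numpy as np

rng = np.random.default_rng(0)
N, L, DIM = 6, 2, 4                      # N layers total, load L per pass

# Stand-in for per-layer parameters held in protected storage.
_storage = {t: {"W": rng.standard_normal((DIM, DIM)),
                "b": rng.standard_normal(DIM)} for t in range(N)}

def fetch_layer_params(t):               # hypothetical secure fetch
    return _storage[t]

def run_model(x):
    data = x
    for start in range(0, N, L):         # one "loading process" per chunk
        chunk = [fetch_layer_params(t) for t in range(start, min(start + L, N))]
        for p in chunk:                  # forward pass through the loaded layers
            data = np.maximum(p["W"] @ data + p["b"], 0.0)
        del chunk                        # clear the loaded parameters ...
        gc.collect()                     # ... and reclaim the memory
    return data                          # output of the last layer = model output

result = run_model(np.ones(DIM))
print(result.shape)                      # (4,)
```

At most L layers' parameters are live between `fetch_layer_params` and `del chunk`, which is the memory-footprint property the claim relies on.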
In one embodiment, the method further includes, before performing the multiple loading processes for the N neural network layers, obtaining sample data of a service sample through data acquisition and/or data preprocessing.
In one embodiment, when the tth loading process is a first loading process, the consecutive L neural network layers are first L neural network layers of the N neural network layers;
the obtaining of the input data corresponding to the loaded neural network layer includes obtaining sample data of the service sample as the input data corresponding to the loaded neural network layer.
In one embodiment, the service sample comprises one of: a picture, audio, text, or a business object, the business object comprising one of: a user, a merchant, or a commodity.
In one embodiment, the network parameters include one or more of weight parameters, bias parameters, structural parameters; the structural parameters comprise one or more of input neuron number, output neuron number and activation function.
In one embodiment, among the multiple loading processes there are at least two in which different numbers of neural network layers are loaded.
In one embodiment, processing the input data by using the loaded network parameters of the L neural network layers to obtain the current output result includes:
for each two continuous layers sequentially connected in the L loaded neural network layers, determining an output result of the previous layer according to the network parameters of the previous layer and input data thereof, and taking the output result as the input data of the next layer;
and determining the output result of the last layer in the L neural network layers loaded at the time according to the connection sequence as the output result of the loading processing at the time.
In one embodiment, the business model runs in a trusted execution environment.
In one embodiment, clearing the memory storage of the network parameters of the L neural network layers comprises: and clearing the loaded network parameters of the L neural network layers from the memory through a memory release command and/or a garbage collection mechanism.
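The "memory release command and/or garbage collection mechanism" is runtime-specific; as one hedged illustration, in CPython it could be realized by dropping the reference and invoking the collector (a minimal sketch under that assumption, not a mechanism prescribed by the patent):

```python
# Minimal CPython sketch: release loaded parameters from memory.
import gc
import numpy as np

params = np.zeros((1024, 1024))   # ~8 MB of "loaded network parameters"
del params                        # memory-release step: drop the last reference
collected = gc.collect()          # garbage-collection step for anything cyclic
```

With reference counting, the array's buffer is typically freed at `del`; `gc.collect()` additionally sweeps reference cycles and returns the number of objects collected.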
According to a second aspect, there is provided a method for processing a service by using a service model based on privacy protection, wherein the service model includes N sequentially connected neural network layers, including a first neural network layer, the method includes:
sequentially loading each neural network layer according to the connection sequence of the N neural network layers, wherein the loading process for the first neural network layer comprises a plurality of times of sub-loading processes, and any ith sub-loading process comprises:
loading a plurality of network parameters in a plurality of network parameters contained in the first neural network layer into a memory;
acquiring a plurality of data elements corresponding to the plurality of network parameters in the input data of the first neural network layer;
processing the data elements by using the loaded network parameters to obtain a sub-processing result of the sub-loading processing;
clearing the memory storage of the loaded network parameters;
after the multiple times of sub-loading processing, collecting the sub-processing results of each sub-loading processing to obtain the output result of the first neural network layer, and using the output result as the input data of the next neural network layer;
and determining the output result of the last neural network layer in the N neural network layers as the service processing result of the service model.
In one embodiment, the loading process for the first neural network layer further includes, before the multiple sub-loading processes, dividing the several network parameters contained in the first neural network layer and the several data elements contained in its input data into several sub-parameter sets and several sub-element sets according to the correlations between those parameters and data elements, the sub-parameter sets corresponding one to one with the sub-element sets;
loading a plurality of network parameters of a plurality of network parameters included in the first neural network layer into a memory, including: selecting a target sub-parameter set from the plurality of sub-parameter sets, and loading parameters of the target sub-parameter set into a memory as the plurality of network parameters;
obtaining a plurality of data elements corresponding to the plurality of network parameters in the input data of the first neural network layer includes: and acquiring elements in the target sub-element set corresponding to the target sub-parameter set as the data elements.
In one embodiment, selecting the target sub-parameter set from the plurality of sub-parameter sets comprises randomly selecting the target sub-parameter set from among those sub-parameter sets that have not yet been loaded and processed.
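The random-selection embodiment can be sketched as below. The dict layout, ids, and seed are hypothetical illustrations; the point is only that each pass draws the next target sub-parameter set uniformly from those not yet loaded.

```python
# Sketch: pick each target sub-parameter set at random from the
# not-yet-loaded sets of one neural network layer.
import random

random.seed(7)                                   # deterministic for the demo
sub_parameter_sets = {0: "params_0", 1: "params_1",
                      2: "params_2", 3: "params_3"}
unloaded = sorted(sub_parameter_sets)            # ids of sets not yet loaded
load_order = []
while unloaded:
    target = random.choice(unloaded)             # random target sub-parameter set
    load_order.append(target)                    # "load" and process it ...
    unloaded.remove(target)                      # ... then mark it as done
print(load_order)
```

Randomizing the order means an attacker snapshotting memory at a fixed point cannot even predict which fragment of the layer will be present.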
In an embodiment, the method further includes, before sequentially performing loading processing on each neural network layer according to the connection order of the N neural network layers, obtaining sample data of a service sample by data acquisition and/or data preprocessing.
In one embodiment, the loading processing is sequentially performed on each neural network layer, including loading the network parameters and the input data thereof included in the first neural network layer into a memory;
the input data of the first neural network layer comprises the acquired sample data of the service sample.
In one embodiment, the service sample comprises one of: a picture, audio, text, or a business object, the business object comprising one of: a user, a merchant, or a commodity.
In one embodiment, the network parameters include one or more of weight parameters, bias parameters, structural parameters; the structural parameters comprise one or more of input neuron number, output neuron number and activation function.
In one embodiment, the business model runs in a trusted execution environment.
In one embodiment, clearing the memory storage of the network parameters loaded this time includes: and clearing the loaded network parameters from the memory through a memory release command and/or a garbage collection mechanism.
According to a third aspect, there is provided an apparatus for performing service processing by using a service model based on privacy protection, where the service model includes N neural network layers connected in sequence, the apparatus including:
a first loading unit configured to perform multiple loading processes for the N neural network layers, including, for any t-th loading process:
a layer parameter loading subunit configured to load network parameters of consecutive L neural network layers after the neural network layer targeted for the previous loading processing into a memory, where L < N;
the layer input data acquisition subunit is configured to acquire input data corresponding to the loaded neural network layer;
the layer result determining subunit is configured to process the input data by using the loaded network parameters of the L neural network layers to obtain an output result of the current loading process, and the output result is used as input data of the next loading process;
a clearing subunit configured to clear the memory storage of the loaded network parameters of the L neural network layers;
and the result determining unit is configured to determine an output result of the last neural network layer of the N neural network layers as a service processing result of the service model after the multiple loading processing.
According to a fourth aspect, there is provided an apparatus for performing service processing by using a service model based on privacy protection, the service model including N sequentially connected neural network layers, including a first neural network layer, the apparatus including:
a loading unit configured to sequentially perform the loading process on each neural network layer according to the connection order of the N neural network layers, where the loading process for the first neural network layer includes multiple sub-loading processes; the loading unit includes, for any i-th sub-loading process:
a sub-parameter loading subunit configured to load a plurality of network parameters included in the first neural network layer into a memory;
a sub-element loading sub-unit configured to obtain a plurality of data elements corresponding to the plurality of network parameters in the input data of the first neural network layer;
the sub-processing result acquisition subunit is configured to process the data elements by using the loaded network parameters to obtain a sub-processing result of the sub-loading processing;
a clearing subunit configured to clear the memory storage of the loaded network parameters;
and
a layer result determining subunit configured to collect sub-processing results of each sub-loading processing after the plurality of sub-loading processing to obtain an output result of the first neural network layer and use the output result as input data of a next neural network layer;
and the result determining unit is configured to determine an output result of the last neural network layer in the N neural network layers as a service processing result of the service model.
According to a fifth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first or second aspect.
According to a sixth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has stored therein executable code, and the processor, when executing the executable code, implements the method of the first or second aspect.
By using one or more of the methods, apparatuses, computing device, and storage medium of the above aspects, the model privacy leakage caused by obtaining model parameters through memory hooking (HOOK) can be countered more effectively, and the memory footprint of the model can be effectively reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are merely some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram illustrating a method for business processing using a business model based on privacy protection according to an embodiment of the present description;
FIG. 2 is a schematic diagram illustrating yet another method for business processing using a business model based on privacy protection in accordance with an embodiment of the present description;
FIG. 3 illustrates a flow diagram of a method for business processing using a business model based on privacy protection in accordance with an embodiment of the present description;
FIG. 4 illustrates a flow diagram of yet another method for business processing using a business model based on privacy protection in accordance with an embodiment of the present description;
FIG. 5 illustrates a block diagram of an apparatus for business processing using a business model based on privacy protection in accordance with an embodiment of the present description;
FIG. 6 is a block diagram illustrating yet another apparatus for business processing using a business model based on privacy protection in accordance with an embodiment of the present description.
Detailed Description
The solution provided by the present specification will be described below with reference to the accompanying drawings.
At present, two types of methods are mainly used to protect model privacy. The first is model encryption and model obfuscation: the model is encrypted and transformed so that it is difficult to crack. The second is to run the model in a safer environment; the main current approach is to run the model in a Trusted Execution Environment (TEE), which raises system-level security and increases the difficulty of cracking the model.
After studying the existing privacy protection methods, the inventors considered that they still have some disadvantages. With the first category of methods, it cannot be excluded that an attacker cracks the model faster through brute-force methods. With the second category, since TEEs have been widely studied, it cannot be excluded that the model is accessed directly by hooking the memory (HOOK), which brings a risk of model leakage. "Hooking the memory" refers to a technical means of obtaining the content stored in memory by cracking the encrypted or unencrypted memory of a local or remote system. Generally, memory space in a TEE environment is encrypted, while memory space in a non-TEE environment is unencrypted but does not directly expose its data to the outside; in either case, however, the stored content can be obtained once a certain cracking cost has been spent. In addition, some model operating environments, such as TEE environments, impose strict limits on the memory usage of the model; if the memory usage exceeds the limit, the model cannot run.
In view of the above problems, the embodiments in this specification provide a method and an apparatus for performing business processing using a business model based on privacy protection. With this method and apparatus, both the privacy leakage risk of a deep learning model and its memory footprint at run time can be reduced. The basic idea is to apply hierarchical decoupling and same-layer decoupling to the deep learning model. Hierarchical decoupling means that the neural network layers of the model are loaded into memory in multiple passes, in their connection order, with only one or a few layers loaded each time. On each load, the previously loaded layers are cleared from memory, and only the currently loaded layers and their input data are retained; the output result of those layers is computed from the loaded layers and their input data, and retained as the input data for the next load. When the last neural network layer of the model has been loaded into memory and its output obtained, that output serves as the output of the whole deep learning model.
Same-layer decoupling builds on hierarchical decoupling: the network parameters and input data of a single neural network layer are themselves loaded into memory in multiple passes. On each load, the previously loaded parameters and input data of the layer are cleared from memory, and only some of the layer's parameters, together with the portion of input data that is computationally correlated with them, are retained; a processing or calculation sub-result is obtained from these partial parameters and partial input data. After all the loads, the sub-results are collected to obtain the output result of the layer.
The basic idea of the method is further explained below.
Fig. 1 is a schematic diagram of a method for processing a service using a service model based on privacy protection according to an embodiment of this specification. As shown in Fig. 1, the business model has N neural network layers, the 1st to the N-th in their connection order. Starting from the 1st layer, the layers are loaded into memory in sequence, with only one (as shown in Fig. 1) or several (fewer than N) of the N layers loaded at a time. In any one pass, the t-th load, the network parameters of the t-th neural network layer (shown as layer t in Fig. 1; the load is not limited to a single layer) are loaded into memory, while the network parameters of the previously loaded layers (for example, those of layer t-1) are cleared from memory. The output result of the currently loaded layer is obtained from the output result of the previously loaded layer (for example, the output of layer t-1) and the currently loaded network parameters (those of layer t), and is used as the input data of the next loaded layer. After the 1st to N-th layers have been loaded in sequence, the output result of the N-th layer is taken as the output of the deep learning model.
As can be seen from the above, by hierarchically decoupling the deep learning model, only the network parameters of some of its layers are kept in memory at any moment, since the layers are loaded into memory in multiple passes in their connection order. If an attacker uses memory hooking to obtain the structure and parameters of the model, the attacker can therefore obtain only the structure and parameter information of the part of the model currently in memory, that is, fragmentary information, rather than the complete structure and parameter information of the model.
On the other hand, because only the structure and parameter information of part of the neural network layers is loaded into memory each time, i.e., only a subset of the whole deep learning model, the memory occupied while the model runs can be far smaller than the memory needed to load the entire model in one step as in conventional methods, greatly reducing the model's run-time memory footprint.
Fig. 2 is a schematic diagram of another method for processing a service using a service model based on privacy protection according to an embodiment of this specification. As shown in Fig. 2, the business model has N neural network layers, the 1st to the N-th. For any one of the layers (the t-th), its network parameters and input data are each divided into several groups and matched one to one according to their correlation, where correlation means that, in the processing or calculation of the layer, any divided input-data group is related only to its matched network-parameter group. The input-data groups and their matched network-parameter groups are then loaded into memory in multiple passes. Only one (as shown in Fig. 2) or several of the network-parameter groups are loaded at a time, while the previously loaded parameters are cleared from memory. A sub-calculation result of the current load (for example, sub-calculation result zi in Fig. 2) is obtained from the network-parameter group loaded this time (for example, network parameter set xi in Fig. 2) and its matched input-data group (for example, network input set yi in Fig. 2). After every network-parameter group and input-data group has been loaded and its corresponding sub-result obtained, the sub-results are collected to obtain the output result of the layer, which serves as the input data of the next layer. After the output results of the 1st to N-th layers have been obtained in sequence, the output of the N-th layer is taken as the output of the deep learning model.
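Same-layer decoupling can be sketched for a layer whose computation is separable into independent groups, for instance a grouped or block-diagonal layer. The group count, widths, and random weights below are invented for illustration; the patent leaves the grouping to the correlation structure of the layer.

```python
# Sketch of same-layer decoupling: each sub-parameter set is matched
# one-to-one with a sub-element set of the input, loaded alone, used to
# compute a sub-result, and cleared before the next sub-loading process.
import gc
import numpy as np

rng = np.random.default_rng(1)
G, DIM = 3, 2                                    # 3 groups, each of width 2

# One sub-parameter set per group, matched with one sub-element set.
sub_param_sets = [rng.standard_normal((DIM, DIM)) for _ in range(G)]
x = np.ones(G * DIM)                             # the layer's input data
sub_element_sets = [x[g * DIM:(g + 1) * DIM] for g in range(G)]

sub_results = [None] * G
for g in range(G):                               # one sub-loading process each
    W_g = sub_param_sets[g]                      # load only this group's params
    sub_results[g] = W_g @ sub_element_sets[g]   # process matched elements only
    del W_g                                      # clear before the next load
    gc.collect()

layer_output = np.concatenate(sub_results)       # collect the sub-results
print(layer_output.shape)                        # (6,)
```

At most one group's parameters are in memory at a time, so a memory snapshot captures at most 1/G of the layer.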
It should be noted that, since the network parameters loaded each time are related only to their matched input-data group during the calculation, the order in which a layer's network-parameter groups and matched input-data groups are loaded into memory may be random or, of course, ordered.
In addition, in one embodiment, if the network parameters and input data of a neural network layer cannot be divided into several groups according to correlation, as with the network parameters and input data of a fully connected layer, they may instead be loaded into memory as a single network-parameter group and a single input-data group.
As can be seen from the above, by decoupling the deep learning model within each layer, the network parameters and input data of each layer are loaded into memory in multiple passes, and only part of a layer's parameters are kept in memory at any moment. If an attacker uses memory hooking to obtain the model's structure and parameters, the attacker can obtain only partial structure and parameter information of one neural network layer of the model. Compared with the hierarchically decoupled model described above, the model information an attacker can obtain from a same-layer-decoupled model is therefore even more fragmentary; the complete structure and parameter information still cannot be obtained. Moreover, while the same-layer-decoupled model runs, the amount of data loaded into memory at once is further reduced (from the parameters and input data of some layers to part of the parameters and input data of a single layer), further reducing the model's run-time memory footprint.
The specific process of the method is further described below.
FIG. 3 illustrates a flow diagram of a method for business processing using a business model based on privacy protection in accordance with an embodiment of the present description. The service model includes N neural network layers connected in sequence, as shown in fig. 3, the method at least includes the following steps:
in step 31, the loading process is performed for N neural network layers in multiple times.
In this step, the loading process is performed on the N neural network layers multiple times, loading one layer or several layers (fewer than N) each time. The number of layers loaded may be the same or may differ from pass to pass. Thus, in one embodiment, among the multiple loading processes there may be at least two in which different numbers of neural network layers are loaded.
According to different embodiments, the business model may have different specific operating environments. In one embodiment, the business model may run in a trusted execution environment. In one embodiment, the business model may run in an untrusted execution environment.
Among the multiple loading processes in step 31, any tth loading process includes steps 311 to 314:
in step 311, loading network parameters of L consecutive neural network layers following the neural network layers targeted by the previous loading process into a memory, where L < N;
in this step, when the current loading is the first loading, the first L neural network layers of the service model are loaded into the memory. Thus, in one embodiment, when the tth loading process is the first loading process, the consecutive L neural network layers are the first L neural network layers of the N neural network layers;
when the current loading is not the first loading, the loaded consecutive L neural network layers are the L neural network layers following those loaded in the previous loading process.
In step 312, input data corresponding to the loaded neural network layer is obtained;
in this step, when the current loading is the first loading, the acquired input data corresponding to the loaded neural network layers may be the input data of the service model.
When the current loading is not the first loading, the acquired input data corresponding to the loaded neural network layers may be the output result of the previous loading process.
In step 313, processing the input data using the network parameters of the loaded L neural network layers to obtain the output result of the current loading process, which serves as the input data of the next loading process;
in this step, when only one neural network layer is loaded, that is, when L = 1, the input data of this layer can be directly processed using this layer's network parameters to obtain the output result of the current loading process, and this output result is used as the input data of the next loading process;
when multiple neural network layers are loaded, that is, when 1 < L < N, the output results of the loaded neural network layers can be obtained in sequence, and the output result of the last layer is used as the output result of the current loading process. Specifically, in one embodiment, for every two consecutive layers, in connection order, among the loaded L neural network layers, the output result of the earlier layer may be determined from its network parameters and input data, and the determined output result used as the input data of the later layer; and the output result of the last layer, in connection order, among the loaded L neural network layers is determined as the output result of the current loading process.
In general, the network parameters of a neural network layer may be of several types. Thus, in one embodiment, the network parameters of a loaded neural network layer may include one or more of a weight parameter, a bias parameter, and a structural parameter; the structural parameters may include one or more of the number of input neurons, the number of output neurons, and the activation function.
In step 314, memory storage of the network parameters of the currently loaded L neural network layers is cleared;
in this step, after the output result of the currently loaded L neural network layers has been obtained in step 313, the network parameters of these L neural network layers are cleared from the memory. According to different embodiments, the network parameters in the memory may be cleared in different specific ways; for example, in one embodiment, the network parameters of the currently loaded L neural network layers may be cleared from the memory through a memory release command and/or a garbage collection mechanism.
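As a minimal sketch of this clearing step in Python (the parameter names and sizes are invented for the example and are not from this specification), releasing the memory can amount to dropping every reference to the loaded parameters and invoking the garbage collection mechanism:

```python
import gc

import numpy as np

# Hypothetical sketch of step 314, assuming the loaded parameters are held
# in a plain dict: dropping all references and invoking garbage collection
# so the weights no longer reside in process memory.
params = {"w": np.ones((256, 256)), "b": np.zeros(256)}
out = float(params["w"].sum())  # ... the layer's computation would go here ...

params.clear()  # memory release: remove references to the parameter arrays
del params      # no references remain
gc.collect()    # garbage collection mechanism reclaims the buffers
```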
In one embodiment, the input data of one or more of the loaded L neural network layers can also be cleared.
Then, in step 32, after multiple loading processes, the output result of the last neural network layer of the N neural network layers is determined as the business process result of the business model.
In this step, since the service model is composed of N neural network layers, the output result of the last neural network layer is the output result of the service model.
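The overall flow of FIG. 3 can be sketched as follows. This is an illustrative toy, not the patented implementation: the helper names are invented, a plain Python list stands in for the model's parameter storage, and ReLU stands in for each layer's activation. At most L layers' parameters are resident in memory at any time.

```python
import gc

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def run_model(param_store, x, L=2):
    """Process input x through all N layers, loading L layers per pass."""
    n = len(param_store)
    for start in range(0, n, L):
        # Step 311: load the next L consecutive layers' parameters.
        loaded = [(np.array(w), np.array(b))
                  for w, b in param_store[start:start + L]]
        # Steps 312-313: each layer's output feeds the next loaded layer.
        for w, b in loaded:
            x = relu(x @ w + b)
        # Step 314: clear the loaded parameters from memory.
        del loaded
        gc.collect()
    # Step 32: the last layer's output is the business processing result.
    return x

# Toy 4-layer model with identity weights: the non-negative input passes through.
store = [(np.eye(3).tolist(), [0.0, 0.0, 0.0]) for _ in range(4)]
result = run_model(store, np.array([1.0, 2.0, 3.0]), L=2)
```

Varying `L` between passes corresponds to the embodiment in which at least two loading processes load different numbers of layers.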
According to an embodiment, before step 31, the method may further include obtaining sample data of a service sample through data acquisition and/or data preprocessing. In one embodiment, when the tth loading process is the first loading process, the consecutive L neural network layers in step 311 are the first L neural network layers of the N neural network layers; in step 312, the sample data of the service sample may be obtained as the input data corresponding to the loaded neural network layers. In one embodiment, the service sample may include one of: a picture, audio, text, or a business object, the business object including one of: a user, a merchant, or a commodity.
FIG. 4 illustrates a flow diagram of yet another method for business processing using a business model based on privacy protection in accordance with an embodiment of the present description. The service model includes N neural network layers connected in sequence, among them a first neural network layer. As shown in FIG. 4, the method includes at least the following steps:
in step 41, sequentially performing loading processing on each neural network layer according to a connection order of the N neural network layers, wherein the loading processing on the first neural network layer includes performing sub-loading processing for multiple times, and any ith sub-loading processing includes steps 411 to 414; steps 411-414 will be described in detail later.
In general, in the computation of a neural network layer, not all of the layer's network parameters are associated with all elements of its input data. For example, in a convolution operation, a neuron of the convolutional layer is usually associated with only a few elements of the input data.
Therefore, according to an embodiment, before the multiple sub-loading processes, the loading process for the first neural network layer may further include dividing the plurality of network parameters included in the first neural network layer and the plurality of data elements included in its input data into a plurality of sub-parameter sets and a plurality of sub-element sets according to the correlations between the network parameters and the data elements, where the sub-parameter sets correspond one-to-one to the sub-element sets.
Specifically, taking the first neural network layer as a convolutional layer as an example, the network parameters of the convolutional layer may be divided into a plurality of sub-parameter sets according to the one or more neurons they belong to, the data elements included in the input data of the convolutional layer may be divided into a plurality of sub-element sets according to their correlation with those neurons in the convolution computation, and each sub-parameter set may be matched with its corresponding sub-element set according to this computational correlation.
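As a hypothetical illustration of this grouping for a one-dimensional convolution (the array values and helper name are invented for the example): each output neuron depends only on its receptive field, so the input elements split naturally into sub-element sets matched to the per-neuron parameter applications, and processing the groups separately reproduces the undivided convolution.

```python
import numpy as np

def receptive_fields(x, k):
    # One sub-element set per output neuron of a 1-D convolution
    # with kernel size k, stride 1, and no padding.
    return [x[j:j + k] for j in range(len(x) - k + 1)]

kernel = np.array([1.0, 0.0, -1.0])      # shared convolution weights
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # layer input

groups = receptive_fields(x, len(kernel))
y = np.array([g @ kernel for g in groups])  # convolve group by group

# Matches the undivided computation over the full input.
assert np.allclose(y, np.correlate(x, kernel, mode="valid"))
```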
In one embodiment, the network parameters may include one or more of a weight parameter, a bias parameter, and a structural parameter, wherein the structural parameters may include one or more of the number of input neurons, the number of output neurons, and the activation function.
The business model may also have different specific operating environments, according to different embodiments. In one embodiment, the business model may run in a trusted execution environment. In one embodiment, the business model may run in an untrusted execution environment.
The steps 411 to 414 are specifically:
in step 411, several network parameters among the plurality of network parameters included in the first neural network layer are loaded into the memory;
according to the above embodiment, a target sub-parameter set may be selected from the plurality of sub-parameter sets obtained by grouping, and its parameters loaded into the memory as the several network parameters. More specifically, in one embodiment, the target sub-parameter set may be randomly selected from the sets, among the plurality of sub-parameter sets, that have not yet been loaded. In another embodiment, across the sub-loading processes, each sub-parameter set may be selected in turn as the target sub-parameter set, thereby obtaining the several network parameters of each sub-loading process.
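The two selection strategies can be sketched as follows (the helper names are invented for illustration): random selection draws the target sub-parameter set from those not yet loaded, while sequential selection walks through the sets in order. Either way, every set is loaded exactly once.

```python
import random

def pick_random(unloaded):
    # Randomly select a target sub-parameter set index from those
    # not yet loaded, and mark it as loaded.
    idx = random.choice(sorted(unloaded))
    unloaded.discard(idx)
    return idx

def pick_sequential(n):
    # Select each sub-parameter set in turn as the target.
    return list(range(n))

unloaded = {0, 1, 2, 3}
random_order = [pick_random(unloaded) for _ in range(4)]
# Every set is selected exactly once, in an unpredictable order.
assert sorted(random_order) == pick_sequential(4) and not unloaded
```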
According to another embodiment, the plurality of network parameters of the first neural network layer are not grouped in advance; in that case, in this step, a part of the plurality of network parameters may be selected as the several network parameters to load.
In step 412, the data elements corresponding to the loaded network parameters are obtained from the input data of the first neural network layer.
Specifically, in the embodiment of dividing parameter groups, the elements in the target sub-element set corresponding to the target sub-parameter set may be obtained as the data elements.
In step 413, processing the data elements using the loaded network parameters to obtain the sub-processing result of the current sub-loading process;
in the embodiment with grouped parameters, the target sub-element set can be processed using the loaded target sub-parameter set to obtain the sub-processing result of the current sub-loading process;
in step 414, clearing the memory storage of the loaded network parameters;
according to the above embodiment, the sub-parameter set loaded this time can be cleared from the memory. In one example, the sub-element set loaded this time may also be cleared from the memory.
In different embodiments, the network parameters in the memory may be cleared in different specific manners, for example, in one embodiment, the network parameters loaded this time may be cleared from the memory through a memory release command and/or a garbage collection mechanism.
Step 41 further includes step 415, specifically:
after multiple times of sub-loading processing, the sub-processing results of each sub-loading processing are collected to obtain the output result of the first neural network layer, and the output result is used as the input data of the next neural network layer.
In essence, the output result of a neural network layer is the result of processing the layer's input data with its network parameters. Therefore, the complete output result of the neural network layer can be determined by aggregating the sub-processing results, each of which is obtained by processing the corresponding part of the input data with part of the network parameters according to their computational correlation.
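The sub-loading loop for a single layer can be sketched with a small dense layer (names and the row-wise grouping are illustrative assumptions, not taken from this specification): the weight rows are grouped by output neuron, each group is loaded, applied, and cleared, and the partial results are concatenated into the layer's full output.

```python
import gc

import numpy as np

def layer_output_by_groups(weight_rows, x, group_size=2):
    """Compute one dense layer's output, loading group_size output-neuron
    rows at a time and aggregating the partial results (FIG. 4 sketch)."""
    parts = []
    for start in range(0, len(weight_rows), group_size):
        w_group = np.array(weight_rows[start:start + group_size])  # step 411
        parts.append(w_group @ x)  # step 413: sub-processing result
        del w_group                # step 414: clear from memory
        gc.collect()
    return np.concatenate(parts)   # step 415: aggregate into the layer output

W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
x = np.array([2.0, 3.0])
y = layer_output_by_groups(W, x)
assert np.allclose(y, np.array(W) @ x)  # matches the undivided computation
```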
Then, in step 42, the output result of the last neural network layer in the N neural network layers is determined as the business processing result of the business model.
In this step, since the service model is composed of N neural network layers, the output result of the last neural network layer is the output result of the service model.
In one embodiment, before step 41, sample data of the service sample may also be obtained through data acquisition and/or data preprocessing. In one example, the input data of the first neural network layer may be the acquired sample data of the service sample. In yet another example, the service sample may include one of: a picture, audio, text, or a business object, the business object including one of: a user, a merchant, or a commodity.
Fig. 5 is a block diagram illustrating an apparatus for performing business processing using a business model based on privacy protection according to an embodiment of the present disclosure, the business model including N neural network layers connected in sequence. As shown in fig. 5, the apparatus 500 includes:
a first loading unit configured to perform multiple loading processes for the N neural network layers, the first loading unit including the following subunits for performing any tth loading process:
a layer parameter loading subunit configured to load network parameters of consecutive L neural network layers after the neural network layer targeted for the previous loading processing into a memory, where L < N;
the layer input data acquisition subunit is configured to acquire input data corresponding to the loaded neural network layer;
the layer result determining subunit is configured to process the input data by using the loaded network parameters of the L neural network layers to obtain an output result of the current loading process, and the output result is used as input data of the next loading process;
a clearing subunit configured to clear the memory storage of the loaded network parameters of the L neural network layers;
and the result determining unit is configured to determine an output result of the last neural network layer of the N neural network layers as a service processing result of the service model after the multiple loading processing.
In an embodiment, the apparatus may further include a sample obtaining unit configured to obtain sample data of the service sample through data acquisition and/or data preprocessing before performing multiple loading processes on the N neural network layers.
In one embodiment, the layer parameter loading subunit may be further configured to, when the tth loading process is a first loading process, determine that the consecutive L neural network layers are first L neural network layers of the N neural network layers;
the layer input data obtaining subunit may be further configured to obtain sample data of the service sample as input data corresponding to the loaded neural network layer.
In one embodiment, the traffic samples may include one of: picture, audio, text, business object, the business object comprising one of: user, merchant, commodity.
In one embodiment, the network parameters may include one or more of a weight parameter, a bias parameter, a structure parameter; the structural parameters may include one or more of the number of input and output neurons, and activation functions.
In one embodiment, the first loading unit may be further configured such that, of the multiple loading processes, there are at least two loading processes in which the number of neural network layers loaded is different.
In one embodiment, the layer result determination subunit may be further configured to:
for each two continuous layers sequentially connected in the L loaded neural network layers, determining an output result of the previous layer according to the network parameters of the previous layer and input data thereof, and taking the output result as the input data of the next layer;
and determining the output result of the last layer in the L neural network layers loaded at the time according to the connection sequence as the output result of the loading processing at the time.
In one embodiment, the business model may run in a trusted execution environment.
In an embodiment, the purging subunit may be further configured to purge the currently loaded network parameters of the L neural network layers from the memory through a memory release command and/or a garbage collection mechanism.
Fig. 6 is a block diagram illustrating an apparatus for performing business processing using a business model based on privacy protection according to an embodiment of the present disclosure. The service model includes N neural network layers connected in sequence, including a first neural network layer, as shown in fig. 6, the apparatus 600 includes:
a loading unit 61, configured to sequentially perform loading processing on each neural network layer according to the connection order of the N neural network layers, where the loading processing for the first neural network layer includes multiple sub-loading processes; the loading unit includes the following subunits for performing any ith sub-loading process:
a sub-parameter loading sub-unit 611 configured to load a plurality of network parameters included in the first neural network layer into a memory;
a sub-element loading sub-unit 612 configured to obtain a plurality of data elements corresponding to the plurality of network parameters in the input data of the first neural network layer;
a sub-processing result obtaining sub-unit 613 configured to process the plurality of data elements by using the plurality of loaded network parameters to obtain a sub-processing result of the sub-loading processing at this time;
a clearing subunit 614, configured to clear the memory storage of the plurality of network parameters loaded this time;
and
a layer result determining subunit 615, configured to, after the multiple times of sub-loading processing, collect sub-processing results of each time of sub-loading processing, obtain an output result of the first neural network layer, and use it as input data of a next neural network layer;
a result determining unit 62 configured to determine an output result of the last neural network layer of the N neural network layers as a business processing result of the business model.
In one embodiment, the load unit may further include:
before performing sub-loading processing for multiple times, the parameter grouping unit is configured to divide the multiple network parameters and the multiple data elements into multiple sub-parameter sets and multiple sub-element sets according to correlations between the multiple network parameters included in the first neural network layer and the multiple data elements included in input data thereof, where the sub-parameter sets and the sub-element sets are in one-to-one correspondence;
the sub-parameter loading sub-unit is further configured to select a target sub-parameter set from the plurality of sub-parameter sets and load parameters of the target sub-parameter set into a memory as the plurality of network parameters;
and the sub-element loading sub-unit is further configured to acquire elements in the target sub-element set corresponding to the target sub-parameter set as the data elements.
In one embodiment, the sub-parameter loading sub-unit may be further configured to randomly select the target sub-parameter set from the sets, among the plurality of sub-parameter sets, that have not yet been loaded.
In an embodiment, the apparatus may further include a sample obtaining unit configured to obtain sample data of the service sample through data acquisition and/or data preprocessing.
In one embodiment, the loading unit may be further configured to load the network parameters and the input data thereof included in the first neural network layer into the memory;
the input data of the first neural network layer comprises the acquired sample data of the service sample.
In one embodiment, the traffic samples may include one of: picture, audio, text, business object, the business object comprising one of: user, merchant, commodity.
In one embodiment, the network parameters may include one or more of weight parameters, bias parameters, structural parameters; the structural parameters comprise one or more of input neuron number, output neuron number and activation function.
In one embodiment, the business model may run in a trusted execution environment.
In one embodiment, the purge subunit may be further configured to clear the loaded network parameters from the memory through a memory release command and/or a garbage collection mechanism.
Another aspect of the present specification provides a computer readable storage medium having a computer program stored thereon, which, when executed in a computer, causes the computer to perform any one of the above methods.
Another aspect of the present specification provides a computing device comprising a memory having stored therein executable code, and a processor that, when executing the executable code, implements any of the methods described above.
It is to be understood that the terms "first," "second," and the like, herein are used for descriptive purposes only and not for purposes of limitation, to distinguish between similar concepts.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (22)

1. A method for processing a service by using a service model based on privacy protection, wherein the service model comprises N neural network layers which are connected in sequence, and the method comprises the following steps:
and aiming at the N neural network layers, carrying out multiple loading treatments, wherein any t-th loading treatment comprises the following steps:
loading network parameters of continuous L neural network layers after the neural network layer targeted by the previous loading processing into a memory, wherein L is less than N;
acquiring input data corresponding to the loaded neural network layer;
processing the input data by using the loaded network parameters of the L neural network layers to obtain an output result of the current loading processing as input data of the next loading processing;
clearing the memory storage of the loaded network parameters of the L neural network layers;
and after the multiple loading processing, determining the output result of the last neural network layer in the N neural network layers as the service processing result of the service model.
2. The method according to claim 1, wherein before performing the multiple loading processes for the N neural network layers, further comprising obtaining sample data of the traffic sample by data acquisition and/or data preprocessing.
3. The method according to claim 2, wherein, when the tth loading process is a first loading process, the consecutive L neural network layers are the first L neural network layers of the N neural network layers;
the obtaining of the input data corresponding to the loaded neural network layer includes obtaining sample data of the service sample as the input data corresponding to the loaded neural network layer.
4. The method of claim 2, wherein the traffic sample comprises one of: picture, audio, text, business object, the business object comprising one of: user, merchant, commodity.
5. The method of claim 1, wherein the network parameters include one or more of weight parameters, bias parameters, structural parameters; the structural parameters comprise one or more of input neuron number, output neuron number and activation function.
6. The method of claim 1, wherein there are at least two loading processes of the multiple loading processes, wherein the number of neural network layers loaded differs.
7. The method of claim 1, wherein processing the input data using the loaded network parameters of the L neural network layers to obtain an output result of the current loading process comprises:
for each two continuous layers sequentially connected in the L loaded neural network layers, determining an output result of the previous layer according to the network parameters of the previous layer and input data thereof, and taking the output result as the input data of the next layer;
and determining the output result of the last layer in the L neural network layers loaded at the time according to the connection sequence as the output result of the loading processing at the time.
8. The method according to claim 1, wherein the business model runs in a trusted execution environment (TEE).
9. The method of claim 1, wherein clearing the memory storage of the network parameters for the L neural network layers comprises: and clearing the loaded network parameters of the L neural network layers from the memory through a memory release command and/or a garbage collection mechanism.
10. A method for processing a service by using a service model based on privacy protection, wherein the service model comprises N neural network layers which are sequentially connected, wherein the N neural network layers comprise a first neural network layer, and the method comprises the following steps:
sequentially loading each neural network layer according to the connection sequence of the N neural network layers, wherein the loading process for the first neural network layer comprises a plurality of times of sub-loading processes, and any ith sub-loading process comprises:
loading a plurality of network parameters in a plurality of network parameters contained in the first neural network layer into a memory;
acquiring a plurality of data elements corresponding to the plurality of network parameters in the input data of the first neural network layer;
processing the data elements by using the loaded network parameters to obtain a sub-processing result of the sub-loading processing;
clearing the memory storage of the loaded network parameters;
after the multiple times of sub-loading processing, collecting the sub-processing results of each sub-loading processing to obtain the output result of the first neural network layer, and using the output result as the input data of the next neural network layer;
and determining the output result of the last neural network layer in the N neural network layers as the service processing result of the service model.
11. The method according to claim 10, wherein the loading process for the first neural network layer further comprises, before performing the sub-loading process a plurality of times, dividing the plurality of network parameters included in the first neural network layer and the plurality of data elements included in its input data into a plurality of sub-parameter sets and a plurality of sub-element sets according to correlations between the network parameters and the data elements, the sub-parameter sets corresponding one-to-one to the sub-element sets;
loading a plurality of network parameters of a plurality of network parameters included in the first neural network layer into a memory, including: selecting a target sub-parameter set from the plurality of sub-parameter sets, and loading parameters of the target sub-parameter set into a memory as the plurality of network parameters;
obtaining a plurality of data elements corresponding to the plurality of network parameters in the input data of the first neural network layer includes: and acquiring elements in the target sub-element set corresponding to the target sub-parameter set as the data elements.
12. The method of claim 11, wherein selecting a target sub-parameter set from the plurality of sub-parameter sets comprises randomly selecting the target sub-parameter set from the sets, among the plurality of sub-parameter sets, that have not yet been loaded.
13. The method according to claim 10, wherein before the loading process is performed on each of the N neural network layers in sequence according to the connection order of the N neural network layers, the method further comprises obtaining sample data of the service sample through data acquisition and/or data preprocessing.
14. The method according to claim 13, wherein the loading process is performed on each neural network layer in sequence, and comprises loading network parameters and input data thereof included in a first neural network layer into a memory;
the input data of the first neural network layer comprises the acquired sample data of the service sample.
15. The method of claim 13, wherein the traffic sample comprises one of: picture, audio, text, business object, the business object comprising one of: user, merchant, commodity.
16. The method of claim 10, wherein the network parameters include one or more of weight parameters, bias parameters, structural parameters; the structural parameters comprise one or more of input neuron number, output neuron number and activation function.
17. The method of claim 10, wherein the business model runs in a trusted execution environment.
18. The method of claim 10, wherein clearing the memory storage of the plurality of network parameters of the current load comprises: and clearing the loaded network parameters from the memory through a memory release command and/or a garbage collection mechanism.
19. An apparatus for performing a business process using a business model based on privacy protection, the business model including N neural network layers connected in sequence, the apparatus comprising:
a first loading unit configured to perform multiple loading processes for the N neural network layers, the first loading unit including the following subunits for performing any tth loading process:
a layer parameter loading subunit configured to load network parameters of consecutive L neural network layers after the neural network layer targeted for the previous loading processing into a memory, where L < N;
the layer input data acquisition subunit is configured to acquire input data corresponding to the loaded neural network layer;
the layer result determining subunit is configured to process the input data by using the loaded network parameters of the L neural network layers to obtain an output result of the current loading process, and the output result is used as input data of the next loading process;
a clearing subunit configured to clear the memory storage of the loaded network parameters of the L neural network layers;
and the result determining unit is configured to determine an output result of the last neural network layer of the N neural network layers as a service processing result of the service model after the multiple loading processing.
20. An apparatus for performing a business process using a business model based on privacy protection, the business model including N neural network layers connected in sequence, including a first neural network layer, the apparatus comprising:
a loading unit configured to sequentially perform loading processing on each neural network layer according to the connection order of the N neural network layers, wherein the loading processing for the first neural network layer includes performing sub-loading processing a plurality of times; the loading unit includes the following subunits for performing any ith sub-loading process:
a sub-parameter loading subunit configured to load, into a memory, a plurality of network parameters included in the first neural network layer;
a sub-element loading subunit configured to acquire, from the input data of the first neural network layer, a plurality of data elements corresponding to the plurality of network parameters;
a sub-processing result acquisition subunit configured to process the data elements using the loaded network parameters to obtain a sub-processing result of the current sub-loading processing;
a clearing subunit configured to clear the loaded network parameters from the memory;
and,
a layer result determining subunit configured to, after the plurality of sub-loading processings, collect the sub-processing results of the respective sub-loading processings to obtain an output result of the first neural network layer, the output result serving as input data for the next neural network layer;
and the result determining unit is configured to determine the output result of the last of the N neural network layers as a business processing result of the business model.
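Claim 20 pushes the chunking inside a single layer: each i-th sub-loading processing loads only some of the layer's parameters plus the matching input elements, computes a partial result, clears the parameters, and the partial results are collected into the layer's output. As a minimal sketch, assuming the first layer is a linear map y = W·x so that column chunks of W pair naturally with element chunks of x (the chunking granularity and all names are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
dim_in, dim_out = 6, 4
W = rng.standard_normal((dim_out, dim_in))   # first-layer parameters
x = rng.standard_normal(dim_in)              # first-layer input data

def first_layer_chunked(W, x, chunk=2):
    """Each i-th sub-loading processing loads only `chunk` columns of W
    and the matching elements of x, computes a partial result, then
    clears the loaded parameters."""
    sub_results = []
    for i in range(0, W.shape[1], chunk):
        W_i = W[:, i:i + chunk].copy()       # sub-parameter loading subunit
        x_i = x[i:i + chunk]                 # sub-element loading subunit
        sub_results.append(W_i @ x_i)        # sub-processing result subunit
        W_i = None                           # clearing subunit
    # Layer result determining subunit: collect the sub-processing results.
    return np.sum(sub_results, axis=0)

y = first_layer_chunked(W, x)
```

Summing the partial products over column chunks reproduces the full matrix-vector product, so the layer output is unchanged while only a fraction of W is ever resident in memory at once.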
21. A computer-readable storage medium, having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-18.
22. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, causes the processor to perform the method of any of claims 1-18.
CN202110050937.XA 2021-01-14 2021-01-14 Method and device for processing business by utilizing business model based on privacy protection Active CN112862067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110050937.XA CN112862067B (en) 2021-01-14 2021-01-14 Method and device for processing business by utilizing business model based on privacy protection


Publications (2)

Publication Number Publication Date
CN112862067A true CN112862067A (en) 2021-05-28
CN112862067B CN112862067B (en) 2022-04-12

Family

ID=76005755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110050937.XA Active CN112862067B (en) 2021-01-14 2021-01-14 Method and device for processing business by utilizing business model based on privacy protection

Country Status (1)

Country Link
CN (1) CN112862067B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239225A (en) * 2014-09-04 2014-12-24 浪潮(北京)电子信息产业有限公司 Method and device for managing heterogeneous hybrid memory
CN107578098A (en) * 2017-09-01 2018-01-12 中国科学院计算技术研究所 Neural network processor based on systolic arrays
CN110322000A (en) * 2018-03-30 2019-10-11 倍加科技股份有限公司 Method and computer installation, the recording medium of compressed data identification model
CN110866589A (en) * 2018-08-10 2020-03-06 高德软件有限公司 Operation method, device and framework of deep neural network model
US20200228336A1 (en) * 2018-03-07 2020-07-16 Private Identity Llc Systems and methods for privacy-enabled biometric processing
CN111897815A (en) * 2020-07-15 2020-11-06 中国建设银行股份有限公司 Service processing method and device
CN112148693A (en) * 2020-10-19 2020-12-29 腾讯科技(深圳)有限公司 Data processing method, device and storage medium



Similar Documents

Publication Publication Date Title
CN112003870B (en) Network encryption traffic identification method and device based on deep learning
CN109922032B (en) Method, device, equipment and storage medium for determining risk of logging in account
CN111818093B (en) Neural network system, method and device for risk assessment
WO2017148196A1 (en) Anomaly detection method and device
CN103019929A (en) Computer software analysis system
CN106789837A (en) Network anomalous behaviors detection method and detection means
CN113379042A (en) Business prediction model training method and device for protecting data privacy
CN115314265B (en) Method and system for identifying TLS (transport layer security) encryption application based on traffic and time sequence
CN114490302B (en) Threat behavior analysis method based on big data analysis and server
CN112862067B (en) Method and device for processing business by utilizing business model based on privacy protection
CN112154415A (en) Efficient event management in a mainframe computer system
CN118018260A (en) Network attack detection method, system, equipment and medium
CN117892340A (en) Federal learning attack detection method, system and device based on feature consistency
CN110880150A (en) Community discovery method, device, equipment and readable storage medium
US6810357B2 (en) Systems and methods for mining model accuracy display for multiple state prediction
CN115567305B (en) Sequential network attack prediction analysis method based on deep learning
CN117171757A (en) Model construction method for software vulnerability discovery and software vulnerability discovery method
CN116938536A (en) Network attack object detection method, system, device, equipment and medium
CN116232656A (en) Internet of vehicles intrusion detection model training method, detection method and equipment based on generation of countermeasure network
CN115470504A (en) Data risk analysis method and server combined with artificial intelligence
Madani et al. Study on the different types of neural networks to improve the classification of ransomwares
CN108108615A (en) Using detection method, device and detection device
CN112560085A (en) Privacy protection method and device of business prediction model
CN116821966B (en) Privacy protection method, device and equipment for training data set of machine learning model
CN114978616B (en) Construction method and device of risk assessment system, and risk assessment method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant