CN113468130A - Blockchain-based federated learning model compression defense method and device - Google Patents

Blockchain-based federated learning model compression defense method and device

Info

Publication number
CN113468130A
Authority
CN
China
Prior art keywords
model
super node
compression
pruning
blockchain
Prior art date
Legal status
Pending
Application number
CN202110552251.0A
Other languages
Chinese (zh)
Inventor
蔡亮
李伟
邱炜伟
张帅
匡立中
Current Assignee
Hangzhou Qulian Technology Co Ltd
Original Assignee
Hangzhou Qulian Technology Co Ltd
Priority date: 2021-05-20
Filing date: 2021-05-20
Publication date: 2021-10-01
Application filed by Hangzhou Qulian Technology Co Ltd

Classifications

    • G06F 16/1744 Redundancy elimination performed by the file system using compression, e.g. sparse files
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F 21/56 Computer malware detection or handling, e.g. anti-virus arrangements
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Virology (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a blockchain-based federated learning model compression defense method and device, comprising the following steps: performing local training of the model at each edge node, pruning and compressing the model according to the rank of its output feature maps, recording a model compression index table, and locally fine-tuning the pruned model; partitioning the pruned models into super nodes, determining the proof of work (PoW) of each pruned model from its model compression index table, aggregating the pruned models within each super node according to their PoW to obtain a super node aggregation model, and determining the PoW of each super node from the PoW of its pruned models; and selecting 1 super node, according to the super node PoW, as the server holding the blockchain bookkeeping right, the server aggregating all super node aggregation models according to their PoW to obtain a global model and recording it in the blockchain ledger, so as to improve the robustness of the model.

Description

Blockchain-based federated learning model compression defense method and device
Technical Field
The invention belongs to the field of federated learning security, and particularly relates to a blockchain-based federated learning model compression defense method and device.
Background
To solve the data islanding problem, federated learning has emerged as a potential solution. Its main innovation is to provide a distributed machine learning framework with privacy-protection features, able to coordinate thousands of participants in a distributed manner to iteratively train a particular machine learning model. Meanwhile, to improve the communication efficiency between server and clients, the uploaded and downloaded models are compressed; however, after compression, potential security risks remain in the models.
Although federated learning can increase training speed using uniform quantization compression, its fixed compression pattern may inadvertently open a new attack surface. In particular, because access to individual parties' data is restricted by privacy concerns or regulatory constraints, label-flipping attacks and backdoor attacks on the shared model trained by federated learning become easier to mount. The root cause is that federated learning lacks a trust mechanism for its clients: client credit has no uniform score evaluation, which seriously hinders the selection of high-quality clients, so the security risk after model compression persists.
Blockchain is the underlying technology of Bitcoin and is applied to solving secure-storage and trust problems for various kinds of data. By integrating a blockchain, federated learning can record model compression records and model updates in a secure, highly tamper-resistant and auditable manner, providing accountability and non-repudiation for the system framework.
At present, research on deep learning model compression falls mainly into the following directions. More refined model design: many current networks adopt modular designs of large depth and width, which introduces considerable parameter redundancy, so much research targets model design itself; designs such as SqueezeNet and MobileNet use more detailed and efficient structures that can greatly reduce model size while retaining good performance. Model pruning: networks with complex structures perform very well, but their parameters are also redundant, so an effective criterion can be found for a trained network and unimportant connections or filters can be pruned to reduce redundancy. Kernel sparsification: during training, weight updates are induced to be sparser; a sparse matrix can then be stored in a more compact format such as CSC, but sparse matrix operations are inefficient on hardware platforms and easily bandwidth-bound, so the resulting acceleration is not significant.
Disclosure of Invention
In view of the foregoing, an object of the present invention is to provide a blockchain-based federated learning model compression defense method and apparatus, which use blockchain and compression techniques to prevent the model from being attacked during federated learning and thereby improve the robustness of the model.
In a first aspect, an embodiment of the present invention provides a blockchain-based federated learning model compression defense method, comprising the following steps:
performing local training of the model at each edge node, pruning and compressing the model according to the rank of its output feature maps, recording a model compression index table, and locally fine-tuning the pruned model;
partitioning the pruned models into super nodes, determining the proof of work (PoW) of each pruned model from its model compression index table, aggregating the pruned models within each super node according to their PoW to obtain a super node aggregation model, and determining the PoW of each super node from the PoW of its pruned models;
and selecting 1 super node, according to the super node PoW, as the server holding the blockchain bookkeeping right, the server aggregating all super node aggregation models according to their PoW to obtain a global model and recording the global model in the blockchain ledger.
In a second aspect, an embodiment of the present invention provides a blockchain-based federated learning model compression defense apparatus, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the blockchain-based federated learning model compression defense method of the first aspect when executing the computer program.
The beneficial effects of the technical solution provided by the above embodiments at least include:
(1) By exploiting the decentralized operation of the blockchain, the global model is no longer updated by a single fixed server, which effectively prevents a server from deliberately degrading the model weights; after decentralization, a poisoned model at an edge node cannot directly attack the target global model. With super nodes, models adopting the same attack strategy are clustered together, preventing a cluster of malicious edge nodes from exerting a large influence on the overall model.
(2) To continuously reinforce the presence of a poisoning backdoor, the update values of a poisoned model put more weight on extracting the backdoor's features, and the larger the rank of a feature map, the more likely it is retained; the filter selection of the pruned model can therefore reveal the presence of a poisoning patch. On this basis, a poisoned model is observed through its model compression index table: the feature-extraction tendency of the model can be read from the index table, and the proof of work computed from the index table is used to screen out poisoned models.
(3) A proof-of-work mechanism is adopted to compete for the bookkeeping right of the blockchain, determining a temporary server to record the global model, so that the global model is prevented from being tampered with.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow diagram of the blockchain-based federated learning model compression defense method in an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
At present, federated learning is applied in more and more fields, but because its communication overhead is large, models need to be compressed before being uploaded and downloaded. Meanwhile, a backdoor can be implanted into the model by training and optimizing a poisoned model, which makes poisoning attacks highly covert and greatly increases the cost of training the model. Based on this, and considering how to maximize the effect of model compression without affecting model training, this embodiment adopts a blockchain-based federated learning model compression defense method and device that prevent the model from being attacked in federated learning through blockchain and compression techniques, thereby improving the robustness of the model.
FIG. 1 is a flow diagram of the blockchain-based federated learning model compression defense method in an embodiment. As shown in FIG. 1, the method provided in the embodiment includes the following steps:
S101, each edge node performs local training of the model.
Before local model training, each edge node needs model initialization, specifically including setting the overall number of training rounds E, the local data set D, the total number M of edge nodes participating in federated learning, and the number M_t of edge nodes participating in training in round t.
During local model training, edge node C_k trains on its local data set D_k and generates a local model w_{t+1}^k, namely:

$$w_{t+1}^{k} = \arg\min_{w}\ \sum_{(x,y) \in D_k} L\big(w; x, y\big), \qquad w \text{ initialized from } w_t$$

where (x, y) are a sample and its label from the local data set D_k, L computes the cross-entropy loss between the predicted result and the real result, w_{t+1}^k is the updated weight parameter of the local model, and w_t is the model to be trained in round t, generally the issued global model. k is the index of the local model; since edge nodes, local data sets and local models are in one-to-one correspondence, k also indexes the edge node and the local data set.
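For illustration, the local training step can be sketched as follows. This is a minimal PyTorch sketch, not the patent's own code: the model class, data loader, learning rate and epoch count are assumptions.

```python
import torch
import torch.nn as nn

def local_train(model: nn.Module, loader, epochs: int = 1, lr: float = 0.01) -> nn.Module:
    """Train the issued global model w_t on local data D_k to obtain w_{t+1}^k."""
    criterion = nn.CrossEntropyLoss()   # L: cross-entropy between prediction and label
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:             # (x, y): samples and labels drawn from D_k
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model                        # the local model w_{t+1}^k
```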
S102, pruning and compressing the model according to the rank of the output feature maps, recording a model compression index table, and locally optimizing the pruned model after pruning.
In an embodiment, pruning and compressing the model according to the output feature maps comprises:
computing the rank of each feature map and, given a threshold, removing the filters whose feature-map rank is below the threshold, or removing the filter whose feature map has the minimum rank, so as to prune and compress the model and obtain the pruned model.
In the embodiment, the following formula is adopted to find the filters whose feature maps have small rank:

$$\delta_{ij} = \mathbb{E}_{x \sim P(x)}\big[\mathrm{Rank}\big(o_{ij}(x)\big)\big]$$

where δ_ij is the evaluation of the j-th feature map o_ij(x) of the i-th layer and serves as an effective information metric, Rank(·) is the rank-computing operation, x is the input data, and P(x) is the probability distribution the input data obeys. The larger the rank of a feature map, the more information it contains and the more important the corresponding filter.
In the embodiment, pruning of the model is realized by filtering out the filters corresponding to low-rank feature maps; specifically, the mapping parameters contained in a filter are removed to remove the filter. Once the mapping parameters of a filter are removed and the model is optimized, input sample data no longer passes through that mapping channel, i.e., the filter of that channel has been filtered out. When a filter is filtered out, a model compression index table recording the mapping parameters corresponding to the filter must be kept; this index table serves as the basis for computing the proof of work.
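A sketch of the rank-based filter scoring and index-table construction described above follows, in the spirit of HRank-style pruning. The batch average stands in for the expectation over P(x), and the index-table format is an assumption for illustration.

```python
import torch

@torch.no_grad()
def feature_map_ranks(feature_maps: torch.Tensor) -> torch.Tensor:
    """feature_maps: (batch, channels, H, W), the output o_i(x) of layer i.
    Returns delta_ij: the rank of each channel's map, averaged over the batch."""
    b, c, _, _ = feature_maps.shape
    ranks = torch.zeros(c)
    for j in range(c):
        # Rank() of the j-th feature map, averaged over inputs x ~ P(x)
        ranks[j] = torch.stack(
            [torch.linalg.matrix_rank(feature_maps[i, j].float()) for i in range(b)]
        ).float().mean()
    return ranks

def prune_layer(ranks: torch.Tensor, threshold: float) -> dict:
    """Filters whose feature-map rank falls below the threshold are removed;
    the kept/removed indices form one entry of the model compression index table."""
    removed = (ranks < threshold).nonzero(as_tuple=True)[0].tolist()
    kept = (ranks >= threshold).nonzero(as_tuple=True)[0].tolist()
    return {"removed": removed, "kept": kept}
```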
In order to reduce the loss of model accuracy caused by removing filters, in the embodiment the pruned model is also locally optimized to fine-tune it, as follows:

$$\tilde{w}_{t+1}^{k} = \arg\min_{w}\ \sum_{(x,y) \in D_k} L\big(w; x, y\big), \qquad w \text{ initialized from the pruned } w_{t+1}^{k}$$

where \tilde{w}_{t+1}^{k} denotes the optimized local model parameters.
S103, partitioning the optimized pruned models into super nodes, and determining the proof of work of each pruned model from its model compression index table.
In the embodiment, pruned models with similar training data sets form Q super nodes, and at least K/Q pruned models are needed to form one super node. The model compression index table at the current moment serves as the key for computing the proof of work (PoW): in every training round, the index table recorded during pruning is written into the blockchain ledger, and the PoW of a pruned model is updated by comparing the consistency of the current round's index table with the previous round's. In specific implementation, when the current round's model compression index table is inconsistent with the previous round's, the PoW of the pruned model is reduced.
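The per-model PoW update can be sketched as below; the per-layer comparison granularity and the reward and penalty constants are assumptions, since the embodiment only states that inconsistency reduces the PoW.

```python
def update_pow(prev_pow: float, prev_table: dict, curr_table: dict,
               reward: float = 1.0, penalty: float = 1.0) -> float:
    """Compare the per-layer index-table entries of two successive rounds;
    inconsistency lowers the PoW of the pruned model."""
    consistent = all(
        prev_table.get(layer) == curr_table.get(layer) for layer in curr_table
    )
    if consistent:
        return prev_pow + reward        # assumed gain for stable compression behaviour
    return max(prev_pow - penalty, 0.0) # a drifting index table reduces the PoW
```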
S104, aggregating the pruned models within each super node according to their PoW to obtain a super node aggregation model.
After the super nodes are obtained, the pruned models are aggregated within each super node according to their PoW to obtain the super node aggregation model, specifically:
all pruned models contained in a super node are weighted-summed with the reciprocals of their PoW as weights to obtain the super node aggregation model, which represents the super node in the aggregation of the global model; namely:

$$W_{t+1}^{s} = \sum_{n=1}^{Q_s} \frac{1/\mathrm{PoW}_{t+1}^{s,n}}{\sum_{m=1}^{Q_s} 1/\mathrm{PoW}_{t+1}^{s,m}}\ \tilde{w}_{t+1}^{s,n}$$

where \tilde{w}_{t+1}^{s,n} denotes the n-th pruned model in the s-th super node and PoW_{t+1}^{s,n} its proof of work, n indexes the pruned models within the super node, Q_s is the total number of pruned models in the s-th super node, and W_{t+1}^{s} denotes the super node aggregation model.
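A minimal sketch of this intra-super-node aggregation follows, assuming the pruned models have been mapped back to a common parameterization via their index tables so that their state dicts share the same keys and shapes.

```python
import torch

def aggregate_supernode(models: list[dict], pows: list[float]) -> dict:
    """models: state_dicts of the Q_s pruned models in super node s;
    pows: their proofs of work PoW_{t+1}^{s,n}.
    Weights are the normalized reciprocals 1/PoW, following the formula above."""
    inv = [1.0 / p for p in pows]
    total = sum(inv)
    weights = [v / total for v in inv]
    agg = {k: torch.zeros_like(v, dtype=torch.float32) for k, v in models[0].items()}
    for w, state in zip(weights, models):
        for k in agg:
            agg[k] += w * state[k].float()
    return agg                          # the super node aggregation model W_{t+1}^s
```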
S105, determining the PoW of each super node from the PoW of its pruned models.
In an embodiment, determining the PoW of a super node from the PoW of its pruned models comprises:
selecting the minimum PoW among all pruned models contained in the super node as the update amount, and adding it to the super node's PoW value of the previous round to obtain the super node's PoW for the current round; namely:

$$\mathrm{PoW}_{t+1}^{s} = \mathrm{PoW}_{t}^{s} + \min_{n}\ \mathrm{PoW}_{t+1}^{s,n}$$

where PoW_{t+1}^{s,n} denotes the proof of work determined from the model compression index table of the n-th pruned model, min(·) is the minimum function, and PoW_t^s and PoW_{t+1}^s denote the PoW of the super node in the previous and current round, respectively.

From this formula it can be seen that the PoW of a super node is determined by the most dissimilar model compression index table (the lowest per-model PoW) it contains: PoW_{t+1}^{s} accumulates the PoW gains of the models trained in each round, summed over the rounds, and the lower the similarity of the models contained in the super node, the smaller the PoW gain obtained in the current round.
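This update reduces to a single accumulation step, sketched here for clarity:

```python
def supernode_pow(prev_supernode_pow: float, member_pows: list[float]) -> float:
    """PoW_{t+1}^s = PoW_t^s + min_n PoW_{t+1}^{s,n}: the smallest per-model PoW
    inside the super node is taken as this round's gain."""
    return prev_supernode_pow + min(member_pows)
```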
S106, selecting 1 super node, according to the super node PoW, as the server holding the blockchain bookkeeping right.
In an embodiment, selecting 1 super node as the server holding the blockchain bookkeeping right according to the super node PoW comprises:
selecting, from all super nodes, the super node with the largest PoW as the server holding the blockchain bookkeeping right; namely, the super node parameter W corresponding to the server is determined by the following formula:

$$W = W_{t+1}^{s^{*}}, \qquad s^{*} = \arg\max_{s}\ \mathrm{PoW}_{t+1}^{s}$$
and S107, the server side aggregates all super node aggregation models according to the workload certification of the super nodes to obtain a global model, and records the global model in a block chain account book.
In the embodiment, the server aggregating all super node aggregation models according to the super node PoW to obtain the global model comprises:
weighted-summing the super node aggregation models other than the one corresponding to the server, with the reciprocals of the super node PoW as weights, and adding this weighted sum to the server's own super node aggregation model to obtain the global model; namely:

$$W_{t+1} = W_{t+1}^{s^{*}} + \sum_{s \neq s^{*}} \frac{1/\mathrm{PoW}_{t+1}^{s}}{\sum_{s' \neq s^{*}} 1/\mathrm{PoW}_{t+1}^{s'}}\ W_{t+1}^{s}$$
That is, after the server is selected, the super node aggregation model corresponding to the server serves as the main model of the update, and the remaining super node aggregation models, as secondary models, are simply folded in according to their weights.
The updated aggregation model is recorded into the blockchain ledger and issued to every edge node as the source model for the next round of edge training; the process iterates until the overall number of training rounds E is reached, yielding the final federated learning model.
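Steps S106 and S107 can be tied together in one sketch. The normalization of the reciprocal weights and the `ledger.append` call standing in for the blockchain write are assumptions.

```python
import torch

def global_round(supernode_models: list[dict], supernode_pows: list[float],
                 ledger: list) -> dict:
    """Pick the highest-PoW super node as the temporary bookkeeping server,
    fold the remaining super-node models in with reciprocal-PoW weights,
    and record the result on the ledger."""
    server = max(range(len(supernode_pows)), key=lambda s: supernode_pows[s])
    others = [s for s in range(len(supernode_models)) if s != server]
    inv = {s: 1.0 / supernode_pows[s] for s in others}
    total = sum(inv.values())
    global_model = {k: v.clone().float() for k, v in supernode_models[server].items()}
    for s in others:
        for k in global_model:
            global_model[k] += (inv[s] / total) * supernode_models[s][k].float()
    ledger.append({"global_model": global_model})  # stand-in for the blockchain record
    return global_model
```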
The blockchain-based federated learning model compression defense method can be applied to the telecommunications, finance and medical-health fields. In telecommunications, the method can converge model updates of smart devices and integrate them into a common AI model while protecting data privacy, improving the generalization ability of the model. In finance, financial-institution collaborators jointly establish an anti-money-laundering model: each performs local training on its own anti-money-laundering samples, and the common model is optimized without revealing local data. In medical health, each hospital performs local model training on its own patient visit records, and a more effective disease prediction model is trained jointly after all parameters are aggregated.
Regarding the privacy constraints of federated learning, the local data of an edge node cannot be accessed. Moreover, an edge node has no effective way to inspect the server's model compression method, so the security of the trained compressed model cannot otherwise be ensured. In view of this, the blockchain-based federated learning model compression defense method provided in the embodiment first uses the decentralized operation of the blockchain to prevent the server from being compromised, and uses super nodes to prevent malicious clients from clustering, thereby defending the compressed model under federated learning. Second, it introduces an incentive mechanism and workload verification through the uploaded index table of the compressed model: after the low-rank filters are pruned, it is observed whether the ledger index table of the next round is consistent with the historical record, and if not, the corresponding PoW value is reduced. Finally, the bookkeeper is re-selected according to the PoW values, and the trained model serves as the server-side model for the next update. Furthermore, from the perspective of a decentralized ledger, every node joining the Bitcoin network maintains a complete ledger; the Bitcoin blockchain solves the ledger consistency problem through competitive bookkeeping, and a proof-of-work mechanism is adopted to decide the result of the competition.
Embodiments also provide a blockchain-based federated learning model compression defense device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the blockchain-based federated learning model compression defense method when executing the computer program.
In a specific application, the memory may be a near-end volatile memory such as RAM, a non-volatile memory such as ROM, FLASH, a floppy disk or a mechanical hard disk, or remote cloud storage. The processor may be a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP) or a field-programmable gate array (FPGA); that is, the steps of the blockchain-based federated learning model compression defense method may be implemented by these processors.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modification, supplement or equivalent substitution made within the scope of the principles of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A blockchain-based federated learning model compression defense method, characterized by comprising the following steps:
performing local training of the model at each edge node, pruning and compressing the model according to the rank of its output feature maps, recording a model compression index table, and locally fine-tuning the pruned model;
partitioning the pruned models into super nodes, determining the proof of work (PoW) of each pruned model from its model compression index table, aggregating the pruned models within each super node according to their PoW to obtain a super node aggregation model, and determining the PoW of each super node from the PoW of its pruned models;
and selecting 1 super node, according to the super node PoW, as the server holding the blockchain bookkeeping right, the server aggregating all super node aggregation models according to their PoW to obtain a global model and recording the global model in the blockchain ledger.
2. The blockchain-based federated learning model compression defense method of claim 1, wherein pruning and compressing the model according to the output feature maps comprises:
computing the rank of each feature map and, given a threshold, removing the filters whose feature-map rank is below the threshold, or removing the filter whose feature map has the minimum rank, so as to prune and compress the model and obtain the pruned model.
3. The blockchain-based federated learning model compression defense method of claim 2, wherein the filter corresponding to a feature map contains mapping parameters, and the mapping parameters are removed to realize the removal of the filter.
4. The blockchain-based federated learning model compression defense method of claim 1, wherein aggregating the pruned models within a super node according to their PoW to obtain the super node aggregation model comprises:
weighted-summing all pruned models contained in the super node with the reciprocals of their PoW as weights to obtain the super node aggregation model.
5. The blockchain-based federated learning model compression defense method of claim 1, wherein determining the PoW of a super node from the PoW of the pruned models comprises:
selecting the minimum PoW among all pruned models contained in the super node as the update amount, and adding it to the super node's PoW value of the previous round to obtain the super node's PoW for the current round.
6. The blockchain-based federated learning model compression defense method of claim 1, wherein selecting 1 super node as the server holding the blockchain bookkeeping right according to the super node PoW comprises:
selecting, from all super nodes, the super node with the largest PoW as the server holding the blockchain bookkeeping right.
7. The blockchain-based federated learning model compression defense method of claim 1, wherein the server aggregating all super node aggregation models according to the super node PoW to obtain the global model comprises:
weighted-summing the super node aggregation models other than the one corresponding to the server, with the reciprocals of the super node PoW as weights, and adding this weighted sum to the server's own super node aggregation model to obtain the global model.
8. The blockchain-based federated learning model compression defense method of claim 1, wherein in each training round, when the model is pruned and compressed, the recorded model compression index table is written into the blockchain ledger, and the PoW of the pruned model is updated by comparing the consistency of the current round's model compression index table with the previous round's.
9. The blockchain-based federated learning model compression defense method of claim 8, wherein when the current round's model compression index table is inconsistent with the previous round's, the PoW of the pruned model is reduced.
10. A blockchain-based federated learning model compression defense apparatus, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the blockchain-based federated learning model compression defense method of any one of claims 1 to 9 when executing the computer program.
CN202110552251.0A 2021-05-20 2021-05-20 Blockchain-based federated learning model compression defense method and device Pending CN113468130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110552251.0A CN113468130A (en) 2021-05-20 2021-05-20 Blockchain-based federated learning model compression defense method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110552251.0A CN113468130A (en) 2021-05-20 2021-05-20 Blockchain-based federated learning model compression defense method and device

Publications (1)

Publication Number Publication Date
CN113468130A 2021-10-01

Family

ID=77871125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110552251.0A Pending CN113468130A (en) 2021-05-20 2021-05-20 Blockchain-based federated learning model compression defense method and device

Country Status (1)

Country Link
CN (1) CN113468130A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114492831A * 2021-12-23 2022-05-13 北京百度网讯科技有限公司 Method and device for generating federated learning model
CN114492831B * 2021-12-23 2023-04-07 北京百度网讯科技有限公司 Method and device for generating federated learning model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination