CN114861817A - Multi-source heterogeneous data fusion method based on federal learning

Multi-source heterogeneous data fusion method based on federal learning

Info

Publication number
CN114861817A
CN114861817A (application CN202210581519.8A)
Authority
CN
China
Prior art keywords
feature
heterogeneous data
module
model
feature extraction
Prior art date
Legal status
Pending
Application number
CN202210581519.8A
Other languages
Chinese (zh)
Inventor
侯瑞春 (Hou Ruichun)
魏振辉 (Wei Zhenhui)
Current Assignee
Ocean University of China
Original Assignee
Ocean University of China
Priority date
Filing date
Publication date
Application filed by Ocean University of China
Priority to CN202210581519.8A
Publication of CN114861817A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a multi-source heterogeneous data fusion method based on federated learning, which aims to solve the problems of high network bandwidth consumption and the risk of user data leakage in existing heterogeneous data fusion methods, and comprises the following steps: in an initialization stage, the central control node randomly initializes network parameters for the feature extraction module, the feature fusion module and the feature decision module and sends the network parameters to the edge nodes; in the model training stage, each edge node selects the corresponding feature extraction module according to its local data set structure and trains the selected feature extraction module, the received feature fusion module and the received feature decision module with its local data set; after training finishes, the trained model is returned to the central control node; in the model aggregation stage, the central control node aggregates the trained models with an average aggregation algorithm to form a shared model with global heterogeneous data features and issues the shared model to the edge nodes again for a new round of training.

Description

Multi-source heterogeneous data fusion method based on federated learning
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a method for fusing multi-source heterogeneous data.
Background
Heterogeneous data fusion is a technique for fusing data of different structures originating from different data sources. Its goal is to merge and share data information resources, hardware device resources and human resources across data of different structures.
Existing heterogeneous data fusion methods mainly adopt a centralized data processing model with cloud computing at its core; its advantage is that heterogeneous data fusion can be run at the cloud-server level, which reduces the maintenance and deployment cost of services. However, this approach to heterogeneous data fusion has the following problems:
First, all data from every edge device must be uploaded to the cloud for unified processing, which is not only inefficient but also incurs extra bandwidth overhead and increases network latency;
Second, as users' privacy awareness grows, edge-device data is liable to be leaked while it is uploaded over the communication link, so the security of personal privacy cannot be guaranteed.
Disclosure of Invention
The invention aims to provide a multi-source heterogeneous data fusion method based on federated learning, in order to solve the problems that existing heterogeneous data fusion methods occupy a large amount of network bandwidth and expose user data to a high risk of leakage.
In order to solve the technical problems, the invention adopts the following technical scheme:
a multi-source heterogeneous data fusion method based on federal learning comprises an initialization stage, a model training stage and a model aggregation stage; in the initialization stage, the central control node randomly initializes network parameters for the feature extraction module, the feature fusion module and the feature decision module and sends the initialized feature extraction module, the initialized feature fusion module and the initialized feature decision module to the edge node; in the model training stage, the edge node selects a corresponding feature extraction module according to the local data set structure, and trains the selected feature extraction module, the received feature fusion module and the received feature decision module by using the local data set; after training is finished, returning the trained feature extraction module, feature fusion module and feature decision module to the central control node for model aggregation; in the model aggregation stage, the central control node aggregates the trained models by adopting an average aggregation algorithm, and then issues the aggregated feature extraction module, the feature fusion module and the feature decision module to the edge node again for a new round of training.
In some embodiments of the present application, in the model training stage, the condition for ending edge-node training is preferably that the number of local training rounds exceeds the number of training rounds specified by the central control node.
In some embodiments of the present application, the feature extraction module preferably comprises audio and visual feature sub-networks and a text feature sub-network. The audio and visual feature sub-networks respectively adopt the COVAREP acoustic analysis framework and the FACET facial expression analysis framework to sample and extract features from the audio and visual information in the data set. The text feature sub-network first preprocesses spoken words with global word vectors (GloVe) in its coding part, then uses a long short-term memory (LSTM) network to learn a time-dependent language representation, feeds this representation into a CNN convolutional neural network, and extracts local features of the text information through the convolution kernels in the convolutional layer.
In some embodiments of the present application, a memory unit W having a heterogeneous data feature space is preferably introduced in the feature fusion module, with each mode of the memory unit W corresponding to the spatial mapping of one heterogeneous data feature. When the heterogeneous data features are fused, the heterogeneous data features of one modality are mode-multiplied (combined via the tensor mode product) with the corresponding feature space of the memory unit W to obtain a memory unit carrying that modality's heterogeneous data features; the heterogeneous data features of the remaining modalities are then mode-multiplied in turn with the corresponding feature space of the memory unit carrying the previous modality's features to obtain a memory unit carrying the next modality's features.
In some embodiments of the present application, for tri-modal features, the fusion operation of the feature fusion module may be divided into three stages: in the first stage, the memory unit W is mode-multiplied along the first order with the first-modality heterogeneous data features to obtain a new memory unit W1 carrying the first-modality features; in the second stage, the new memory unit W1 is mode-multiplied along the second order with the second-modality heterogeneous data features to obtain a memory unit W2 carrying two modalities' features; in the third stage, the memory unit W2 is mode-multiplied along the third order with the third-modality heterogeneous data features to obtain a memory unit W3 carrying three modalities' features.
In some embodiments of the present application, the feature decision module preferably makes decisions on the basis of global features by applying a fully connected layer of a CNN convolutional neural network to the fused data, where the decisions include the prediction of a regression model and the probability prediction of a classification model; in the regression model module, the error between the target value and the predicted value is preferably measured with an L1-norm loss function.
In some embodiments of the present application, because each edge node trains the feature extraction module using an adaptive selection mechanism, in the model aggregation stage the central control node first merges the feature extraction sub-networks selected and trained by each edge node so that features extracted from data of the same modality remain similar, and then aggregates the feature extraction module, the feature fusion module and the feature decision module with an average aggregation algorithm to obtain a shared model with global heterogeneous data features.
Compared with the prior art, the invention has the following advantages and positive effects:
1. The method adapts better to multi-source heterogeneous data. Unlike traditional algorithms, it does not require all types of heterogeneous data to be input simultaneously when the model is trained, so it is better suited to the different types of edge nodes in federated learning.
2. The method better protects users' data privacy. The heterogeneous data on an edge node does not need to be sent to the central control node for training, which avoids the risk of private user data being leaked while it is uploaded over the communication link.
3. The method greatly reduces the transmission bandwidth. Each edge node only transmits the parameters of the feature extraction sub-networks corresponding to the heterogeneous data it owns, rather than the model parameters for extracting features of all heterogeneous data types, so the method occupies little network bandwidth, transfers data efficiently and introduces no obvious network delay.
Other features and advantages of the present invention will become more apparent from the detailed description of the embodiments of the present invention when taken in conjunction with the accompanying drawings.
Drawings
Fig. 1 is an overall architecture flowchart of an embodiment of the multi-source heterogeneous data fusion method based on federated learning according to the present invention;
FIG. 2 is a flow chart of a model training phase.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
Federated learning is a distributed learning framework in which raw data is collected and stored on a number of edge nodes and model training is executed at those nodes; the trained model is then progressively optimized through interaction between the edge nodes and a central control node.
Based on this federated learning framework, the embodiment designs a multi-source heterogeneous data fusion system mainly comprising edge devices, Internet-of-Things equipment and a cloud server. The Internet-of-Things equipment may be a gateway, a router or the like; the edge device may be a client computer, a client server or the like. The edge devices serve as the edge nodes of the federated learning framework and are interconnected, through the Internet of Things, with the cloud server serving as the central control node; on the premise of no data interchange, the system addresses the fusion of the different types of heterogeneous data collected by the edge devices.
Specifically, a federated learning algorithm is introduced on the edge devices: each edge device trains the learning model issued by the cloud server with its own local data and uploads the trained model to the cloud server, where a generalized shared model is aggregated. Because the edge device interacts with the cloud server through the model rather than the data, the risk of user data leakage is avoided and the user privacy and security problem is resolved.
The overall design flow of the multi-source heterogeneous data fusion method of the embodiment is described in detail below with reference to fig. 1.
The mathematical model underlying the multi-source heterogeneous data fusion algorithm of this embodiment mainly comprises a feature extraction module, a feature fusion module and a feature decision module. The feature extraction module is composed mainly of feature extraction sub-networks designed for the heterogeneous data of the various modalities.
In the initialization stage, a central control node (cloud server) randomly initializes network parameters of a feature extraction module, a feature fusion module and a feature decision module in a model, and then sends the initialized feature extraction module, feature fusion module and feature decision module to each edge node (edge device).
In the model training stage, after an edge node receives the model issued by the central control node, it selects the corresponding feature extraction module according to the structure of the data set on the local node and trains the feature extraction module, the feature fusion module and the feature decision module with the local data set. The end condition for a round of edge-node training is that the number of local training rounds exceeds the number of rounds given by the central control node. After training finishes, the edge nodes return their trained models to the central control node for model aggregation, as sketched below.
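For illustration only, the following Python sketch shows one possible shape of this edge-node training step, under the assumption that the model is passed around as dictionaries of NumPy parameter arrays with "extractors", "fusion" and "decision" entries; the EdgeNode class, its local_step() placeholder and the dictionary layout are assumptions made for the example and are not taken from the embodiment.

    import copy
    import numpy as np

    class EdgeNode:
        """Illustrative edge node; the class and its local_step() are assumptions."""
        def __init__(self, local_dataset, modalities):
            self.local_dataset = local_dataset      # raw data never leaves the node
            self.modalities = modalities            # e.g. {"audio", "visual"}

        def local_step(self, params):
            # stand-in for one pass of gradient-based training on the local data set
            return {k: v - 0.01 * np.random.randn(*v.shape) for k, v in params.items()}

        def local_train(self, global_model, given_rounds):
            model = copy.deepcopy(global_model)
            # adaptively keep only the extraction sub-networks matching the local data structure
            model["extractors"] = {m: model["extractors"][m] for m in self.modalities}
            rounds = 0
            while rounds <= given_rounds:           # stop once the local count exceeds the given number
                model["fusion"] = self.local_step(model["fusion"])
                model["decision"] = self.local_step(model["decision"])
                for m in self.modalities:
                    model["extractors"][m] = self.local_step(model["extractors"][m])
                rounds += 1
            return model                            # only model parameters are returned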
In the model aggregation stage, the central control node can directly apply an average aggregation algorithm to the trained feature fusion module and feature decision module. For the feature extraction module, each edge node adaptively selects the feature extraction sub-networks corresponding to its own heterogeneous data types for training, so during model aggregation the feature extraction sub-networks selected and trained by the edge nodes are merged first and then averaged with the average aggregation algorithm, which ensures that the features extracted from heterogeneous data of the same modality remain similar. Finally, the central control node issues the updated model to the edge nodes again for a new round of training.
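Continuing the sketch, the central control node's side of the interaction could then look as follows; run_federated_rounds, the aggregate_fn parameter and the round counts are illustrative assumptions (an example aggregate_fn is sketched in the model aggregation discussion further below).

    def run_federated_rounds(initial_model, edge_nodes, aggregate_fn, num_rounds=10, given_rounds=5):
        """Sketch of the initialization -> local training -> aggregation cycle."""
        global_model = initial_model                        # randomly initialised F, I and C
        for _ in range(num_rounds):
            # each edge node trains locally and returns only model parameters
            trained = [node.local_train(global_model, given_rounds) for node in edge_nodes]
            global_model = aggregate_fn(trained)            # model aggregation stage
        return global_model                                 # shared model re-issued to the nodes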
The specific configurations of the feature extraction module, the feature fusion module, and the feature decision module are explained in detail below.
(I) Feature extraction module
The feature extraction module samples and extracts features from heterogeneous data of different modalities, so the model must be designed specifically around the characteristics of the heterogeneous data.
Assuming the heterogeneous data types at the edge nodes are audio data, visual data and text data, the feature extraction module adopts a different feature extraction sub-network for the audio, visual and text information according to the heterogeneous data characteristics of each modality.
As a preferred embodiment, the audio and visual feature sub-networks designed for the audio information and the visual information may respectively use the COVAREP acoustic analysis framework and the FACET facial expression analysis framework to sample and extract features from the data set. Considering that spoken text differs grammatically and expressively from written text, the key to dealing with spoken language, a language with variability, is to build models that can operate in unreliable situations and to characterize particular speech by focusing on important words. Therefore, the text feature sub-network of the present application first preprocesses spoken words with global word vectors (GloVe) in its coding part, then learns a time-dependent language representation with a long short-term memory (LSTM) network, uses this representation as the input of a CNN convolutional neural network, and extracts local features of the text information through the convolution kernels in the convolutional layer.
A long short-term memory (LSTM) network is a recurrent neural network designed specifically to overcome the long-term dependency problem of ordinary RNNs (recurrent neural networks); like all RNNs, it takes the form of a chain of repeated neural-network modules.
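By way of illustration, a PyTorch sketch of such a text branch is given below; it assumes that pretrained GloVe vectors have already been loaded into a weight matrix, and the class name, layer sizes and pooling choice are assumptions made for the example.

    import torch
    import torch.nn as nn

    class TextFeatureSubNet(nn.Module):
        """Sketch of the text branch: GloVe embeddings -> LSTM -> 1-D convolution."""
        def __init__(self, glove_weights, hidden_dim=128, out_channels=64, kernel_size=3):
            super().__init__()
            # glove_weights: (vocab_size, embed_dim) tensor of pretrained word vectors
            self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=False)
            self.lstm = nn.LSTM(glove_weights.size(1), hidden_dim, batch_first=True)
            self.conv = nn.Conv1d(hidden_dim, out_channels, kernel_size, padding=1)

        def forward(self, token_ids):                   # token_ids: (batch, seq_len)
            x = self.embed(token_ids)                   # (batch, seq_len, embed_dim)
            x, _ = self.lstm(x)                         # time-dependent language representation
            x = self.conv(x.transpose(1, 2))            # local features via convolution kernels
            return torch.amax(x, dim=2)                 # (batch, out_channels) text feature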
(II) Feature fusion module
In this embodiment, a higher-order tensor having a heterogeneous data feature space is introduced in the feature fusion module, with each mode of the tensor corresponding to the spatial mapping of one heterogeneous data feature. When each heterogeneous data feature is fused, the higher-order tensor can therefore not only introduce the features of the remaining heterogeneous data modalities for correction, but also memorize the heterogeneous data modality features already fused.
In this embodiment, the higher-order tensor is denoted as the memory unit W. When the heterogeneous data features are fused, a memory unit carrying one modality's heterogeneous data features is obtained by mode multiplication (the tensor mode product) of that modality's features with the corresponding feature space of the memory unit W; further fusion is performed by mode-multiplying the heterogeneous data features of the remaining modalities, in turn, with the corresponding feature space of the memory unit carrying the previous modality's features, so as to obtain the memory unit carrying the next modality's features.
For example, assume the heterogeneous data features to be processed are audio data features, visual data features and text data features. The fusion operation can then be divided into the following three stages:
First, the memory unit W is mode-multiplied along the first order with the heterogeneous data features of one modality (e.g. the audio data features) to obtain a memory unit W1 carrying that modality's heterogeneous data features;
Second, the memory unit W1 is mode-multiplied along the second order with the heterogeneous data features of another modality (e.g. the visual data features) to obtain a memory unit W2 carrying two modalities' heterogeneous data features;
Finally, the memory unit W2 is mode-multiplied along the third order with the heterogeneous data features of the remaining modality (e.g. the text data features) to obtain a memory unit W3 carrying three modalities' heterogeneous data features.
By constructing a higher-order tensor over the spatial dimensions of the heterogeneous data, this embodiment realizes both the fusion and the memorization of multi-modal heterogeneous data features.
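The following NumPy sketch illustrates these three mode multiplications. The memory tensor is given an extra output dimension, an assumption made here so that a fused feature vector remains after all three contractions; the specific sizes are arbitrary.

    import numpy as np

    def mode_product(tensor, vec, mode):
        """Contract `tensor` with `vec` along the given mode (0-indexed)."""
        return np.tensordot(tensor, vec, axes=([mode], [0]))

    d_a, d_v, d_t, d_out = 8, 8, 8, 16                # illustrative feature-space sizes
    W = np.random.randn(d_a, d_v, d_t, d_out)         # memory unit: one mode per modality plus output

    f_audio, f_visual, f_text = (np.random.randn(d) for d in (d_a, d_v, d_t))

    W1 = mode_product(W,  f_audio,  0)                # stage 1: fold in the audio features
    W2 = mode_product(W1, f_visual, 0)                # stage 2: the visual mode is now first
    W3 = mode_product(W2, f_text,   0)                # stage 3: fused representation, shape (d_out,)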
(III) Feature decision module
In the embodiment, in the feature decision module, for the fused heterogeneous data features, a decision is made on the basis of global features by using a full connection layer of a CNN convolutional neural network, including prediction of a regression model and probability prediction of a classification model. In the regression model module, the error between the target value and the predicted value can be measured by using an L1 norm loss function.
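A minimal PyTorch sketch of such a decision module is shown below; the head dimensions, the softmax for class probabilities and the module name are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class FeatureDecisionModule(nn.Module):
        """Sketch: fully connected heads over the fused global feature."""
        def __init__(self, fused_dim, num_classes):
            super().__init__()
            self.regressor = nn.Linear(fused_dim, 1)             # regression prediction
            self.classifier = nn.Linear(fused_dim, num_classes)  # classification logits

        def forward(self, fused):
            return self.regressor(fused), torch.softmax(self.classifier(fused), dim=-1)

    # L1-norm loss between the regression target and the predicted value
    regression_loss = nn.L1Loss()

Here nn.L1Loss measures the mean absolute error between the regression head's output and the target value, matching the L1-norm loss mentioned above.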
The following specifically explains the three stages involved in the shared model training process of the multi-source heterogeneous data fusion method of the embodiment by combining the above construction modes of the feature extraction module, the feature fusion module, and the feature decision module.
Suppose N edge devices participate in training the shared model and that, between them, the edge devices collect M kinds of heterogeneous data.
In the initialization stage, the cloud server designs a corresponding feature extraction module F, feature fusion module I and feature decision module C according to the M kinds of heterogeneous data to be acquired, and the shared model G can be represented as G = <F, I, C>. Specifically, in the feature extraction module F, corresponding feature extraction sub-networks F1, F2, …, FM are designed for the M kinds of heterogeneous data, which may be denoted as F = <F1, F2, …, FM>. In the feature fusion module I, a higher-order tensor with the spatial dimension characteristics of the heterogeneous data, namely the memory unit W, is constructed; the parameters of the memory unit W unfolded along the i-th order reflect the spatial dimension characteristics of the i-th kind of heterogeneous data, which makes the fusion of multi-modal data possible. In the feature decision module C, a regression model and a classification model of a CNN convolutional neural network are configured to mine the potential relationships among the fused heterogeneous data more deeply; by improving the model's feature representation of the multi-source heterogeneous data, this alleviates the difficulty of fusing heterogeneous data caused by their uncertainty in edge-device computing.
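For illustration, the initialization of G = <F, I, C> could be sketched as below; the dictionary layout matches the earlier edge-node sketch, and all names and shapes are assumptions rather than part of the embodiment.

    import numpy as np

    def init_shared_model(modalities, feat_dim=16, num_classes=2, seed=0):
        """Randomly initialise G = <F, I, C> as dictionaries of NumPy arrays."""
        rng = np.random.default_rng(seed)
        # F: one feature extraction sub-network (here a single weight matrix) per modality
        extractors = {m: {"weight": rng.standard_normal((dim, feat_dim))}
                      for m, dim in modalities.items()}
        # I: memory unit W with one mode per modality plus an output dimension
        fusion = {"W": rng.standard_normal((feat_dim,) * len(modalities) + (feat_dim,))}
        # C: regression and classification heads over the fused feature
        decision = {"regression": rng.standard_normal((feat_dim, 1)),
                    "classification": rng.standard_normal((feat_dim, num_classes))}
        return {"extractors": extractors, "fusion": fusion, "decision": decision}

    # e.g. init_shared_model({"audio": 64, "visual": 32, "text": 128})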
In the model training stage, the N edge devices participating in training (node 1, …, node N) adaptively select the corresponding feature extraction sub-networks from the feature extraction module for training according to their own heterogeneous data types. Fig. 2 shows the adaptive selection mechanism of the feature extraction module.
As shown in fig. 2, assume that the heterogeneous data types to be processed are Za, Zv and Zt, and that the three feature spaces of the memory unit W correspond to the three heterogeneous data types Za, Zv and Zt, respectively. Assume also that node 1 and node N collect different heterogeneous data types. In the model training stage, node 1 selects the corresponding feature extraction sub-networks Fa and Fv for training according to its own heterogeneous data types Za and Zv, and obtains feature maps fa and fv, respectively. According to the number of heterogeneous data types owned by node 1, its feature fusion stage is divided into two parts: first, the memory unit W is mode-multiplied along the first order with the fa feature to obtain a new memory unit W11 carrying the fa feature; next, the memory unit W11 is mode-multiplied along the second order with the fv feature to obtain a memory unit W12 carrying the fa and fv heterogeneous data features.
Similarly, node N selects the corresponding feature extraction sub-networks Ft and Fv for training according to its own heterogeneous data types Zt and Zv, and obtains feature maps ft and fv, respectively. According to the number of heterogeneous data types owned by node N, its feature fusion stage is divided into two parts: first, the memory unit W is mode-multiplied along the first order with the ft feature to obtain a new memory unit WN1 carrying the ft feature; next, the memory unit WN1 is mode-multiplied along the second order with the fv feature to obtain a memory unit WN2 carrying the ft and fv heterogeneous data features.
During model training, the memory unit not only learns the spatial dimension features of the various heterogeneous data but also captures the potential relationships among different heterogeneous data. For the fused data, a fully connected layer makes the decision on the basis of the global features, and the trained feature extraction module, feature fusion module and feature decision module are then returned to the cloud server for model aggregation.
In the model aggregation stage, because each edge device trains the feature extraction module through an adaptive selection mechanism, the cloud server first merges the feature extraction sub-networks selected and trained by the edge devices and then aggregates the trained feature extraction module, feature fusion module and feature decision module with an average aggregation algorithm to obtain the shared model with global heterogeneous data features, as sketched below.
The shared model may be re-issued to each edge device for a new round of training.
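By way of illustration, the merging-then-averaging step could look as follows in Python; each edge device is assumed to return only the sub-networks it trained, and the dictionary layout and function names are hypothetical, chosen to match the earlier sketches.

    import numpy as np

    def aggregate_shared_model(client_updates):
        """Average-aggregate a list of client models shaped like the earlier sketches."""
        def avg(param_dicts):
            return {k: np.mean([p[k] for p in param_dicts], axis=0) for k in param_dicts[0]}

        # Merge the feature extraction sub-networks: average each modality's sub-network
        # only over the edge devices that actually selected and trained it.
        modalities = {m for update in client_updates for m in update["extractors"]}
        extractors = {m: avg([u["extractors"][m] for u in client_updates if m in u["extractors"]])
                      for m in modalities}
        # Fusion and decision modules are averaged directly over all devices.
        fusion = avg([u["fusion"] for u in client_updates])
        decision = avg([u["decision"] for u in client_updates])
        return {"extractors": extractors, "fusion": fusion, "decision": decision}

A function of this shape could serve as the aggregate_fn of the run_federated_rounds sketch given earlier.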
Aimed at the data privacy problem of network edge devices, the multi-source heterogeneous data fusion method is designed on a federated learning model. By introducing tensor decomposition theory, a higher-order memory unit with the spatial dimension characteristics of the heterogeneous data is constructed, and this memory unit realizes effective fusion of multi-source heterogeneous data without any data interchange, breaking the data communication barrier raised by privacy and security concerns. At the same time, the method can adaptively process different types of heterogeneous data according to the heterogeneous data structure of each edge device without enlarging the model training scale, thereby using the communication bandwidth of distributed training more efficiently, reducing unnecessary data transmission and lowering the requirements on the computing and storage capacity of network edge devices.
Of course, the above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, many modifications and embellishments can be made without departing from the principle of the present invention, and these should also be regarded as the protection scope of the present invention.

Claims (7)

1. A multi-source heterogeneous data fusion method based on federated learning, characterized by comprising the following steps:
an initialization stage: the central control node randomly initializes the network parameters for the feature extraction module, the feature fusion module and the feature decision module and sends the initialized feature extraction module, the initialized feature fusion module and the initialized feature decision module to the edge node;
a model training stage: the edge node selects a corresponding feature extraction module according to a local data set structure, and trains the selected feature extraction module, the received feature fusion module and the received feature decision module by using the local data set; after training is finished, returning the trained feature extraction module, feature fusion module and feature decision module to the central control node for model aggregation;
a model aggregation stage: the central control node aggregates the trained models with an average aggregation algorithm, and then issues the aggregated feature extraction module, feature fusion module and feature decision module to the edge nodes again for a new round of training.
2. The multi-source heterogeneous data fusion method based on federated learning of claim 1, wherein in the model training phase, the condition for the edge node training to end is that the number of local node training rounds exceeds the number of training rounds given by a central control node.
3. The multi-source heterogeneous data fusion method based on federated learning of claim 1, wherein the feature extraction module comprises:
audio and visual feature sub-networks, which respectively adopt a COVAREP acoustic analysis framework and a FACET facial expression analysis framework to sample and extract features from the audio information and the visual information in the data set; and
a text feature sub-network, which preprocesses spoken words with global word vectors in its coding part, then learns a time-dependent language representation with a long short-term memory (LSTM) network, uses that representation as the input of a CNN convolutional neural network, and extracts local features of the text information through convolution kernels in the convolutional layer.
4. The multi-source heterogeneous data fusion method based on federated learning of claim 1, wherein the feature fusion module includes a memory unit W with a heterogeneous data feature space, each mode of the memory unit W corresponding to the spatial mapping of one heterogeneous data feature; when the heterogeneous data features are fused, the heterogeneous data features of one modality are mode-multiplied with the corresponding feature space of the memory unit W to obtain a memory unit carrying that modality's heterogeneous data features; and the heterogeneous data features of the remaining modalities are mode-multiplied, in turn, with the corresponding feature space of the memory unit carrying the previous modality's features to obtain the memory unit carrying the next modality's features.
5. The multi-source heterogeneous data fusion method based on federated learning of claim 4, wherein for tri-modal features, the fusion operation of the feature fusion module is divided into three phases:
the first stage: the memory unit W is mode-multiplied along the first order with the first-modality heterogeneous data features to obtain a new memory unit W1 carrying the first-modality heterogeneous data features;
the second stage: the memory unit W1 is mode-multiplied along the second order with the second-modality heterogeneous data features to obtain a memory unit W2 carrying two modalities' heterogeneous data features;
the third stage: the memory unit W2 is mode-multiplied along the third order with the third-modality heterogeneous data features to obtain a memory unit W3 carrying three modalities' heterogeneous data features.
6. The multi-source heterogeneous data fusion method based on federated learning of claim 1, wherein the feature decision module makes decisions on the basis of global features by applying a fully connected layer of a CNN convolutional neural network to the fused data, the decisions including the prediction of a regression model and the probability prediction of a classification model; and in the regression model module, the error between the target value and the predicted value is measured with an L1-norm loss function.
7. The multi-source heterogeneous data fusion method based on federated learning according to any one of claims 1 to 6, wherein, in the model aggregation stage, the central control node first merges the feature extraction sub-networks selected and trained by each edge node, so that features extracted from data of the same modality remain similar, and then aggregates the feature extraction module, the feature fusion module and the feature decision module with an average aggregation algorithm to obtain a shared model with global heterogeneous data features.
CN202210581519.8A 2022-05-26 2022-05-26 Multi-source heterogeneous data fusion method based on federal learning Pending CN114861817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210581519.8A CN114861817A (en) 2022-05-26 2022-05-26 Multi-source heterogeneous data fusion method based on federal learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210581519.8A CN114861817A (en) 2022-05-26 2022-05-26 Multi-source heterogeneous data fusion method based on federal learning

Publications (1)

Publication Number Publication Date
CN114861817A true CN114861817A (en) 2022-08-05

Family

ID=82640502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210581519.8A Pending CN114861817A (en) 2022-05-26 2022-05-26 Multi-source heterogeneous data fusion method based on federal learning

Country Status (1)

Country Link
CN (1) CN114861817A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115859367A (en) * 2023-02-16 2023-03-28 广州优刻谷科技有限公司 Multi-mode federal learning privacy protection method and system
CN116318465A (en) * 2023-05-25 2023-06-23 广州南方卫星导航仪器有限公司 Edge computing method and system in multi-source heterogeneous network environment
CN116318465B (en) * 2023-05-25 2023-08-29 广州南方卫星导航仪器有限公司 Edge computing method and system in multi-source heterogeneous network environment


Legal Events

Date Code Title Description
PB01 Publication