WO2021221242A1 - Federated learning system and method - Google Patents
Federated learning system and method
- Publication number
- WO2021221242A1 (PCT/KR2020/013548)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- learning
- model
- management unit
- server
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N10/00—Quantum computing, i.e. information processing based on quantum-mechanical phenomena
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
- G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
Definitions
- the present invention relates to a federated learning system and method.
- Artificial intelligence (AI) technology is being developed and applied across a wide range of fields.
- Training an artificial intelligence model requires substantial computing resources to perform large-scale calculations.
- Cloud computing services are a convenient solution because they can easily provide the computing infrastructure needed to train artificial intelligence models without complex hardware and software installation.
- However, because cloud computing is based on the centralization of resources, all data needed for model training must be stored in cloud storage. Although data centralization offers many advantages in terms of maximizing efficiency, it carries a risk of leaking users' personal data, which is becoming an increasingly important business issue as data transmission increases.
- Federated learning is a learning method in which models trained on user personal data at the user terminals are collected centrally, instead of collecting the user personal data itself at a central location as in the past. Because federated learning does not centrally collect user personal data, there is little possibility of invasion of privacy.
- However, federated learning systems must consider not only algorithmic aspects, such as how to update parameters and schedule learning, but also system aspects, such as independent data management for each device and efficient communication with heterogeneous systems.
- The network dependency between the server and the user terminals is another problem to be solved. To perform federated learning, the server and a plurality of user terminals must remain closely connected to each other, which makes it difficult to respond when the network is unstable or a connection problem occurs. In addition, even when its resources are insufficient and the network is unstable, a user terminal bears the additional burden of retaining the data to be transmitted to the server until the transmission is completed.
- Accordingly, an object of the present invention is to provide a federated learning system and method in which the server and the user terminals can perform learning tasks asynchronously.
- To this end, the present invention provides a federated learning system comprising: a plurality of user terminals that generate training data by training a global model on user data; a server that creates the global model, collects the training data, and uses it to improve the global model; and a data management unit that stores and manages model data and training data related to the global model, transmits the model data to the plurality of user terminals, and transmits the training data to the server.
- the model data includes global parameters of the global model, learning time of the user terminal, and type and size information of user data to be used for learning.
- the user terminal establishes a learning plan based on the model data and performs learning according to the learning plan.
- the training data is a local model or a local parameter of the local model.
- the data management unit generates metadata including the size of the training data, the creation date and time, and the distribution characteristics.
- the server determines the range and number of learning data, selects the learning data to be collected, or establishes or changes a collection plan of the learning data.
- the data management unit manages the model data and the training data for each version.
- The present invention also provides a federated learning method comprising: the server creating a global model and registering model data related to the global model with the data management unit; the data management unit transmitting the model data to a plurality of user terminals; the plurality of user terminals generating training data by training the global model on user data; the plurality of user terminals registering the training data with the data management unit; the data management unit transmitting the training data to the server; and the server aggregating the training data to improve the global model.
- the step of registering the model data to the data management unit includes the server requesting the data management unit to register the model data, and the data management unit registering the model data for each version.
- the step of registering the learning data to the data management unit includes the step of the user terminal requesting the registration of the learning data to the data management unit, and the data management unit registering the learning data for each version.
- the step of transmitting the learning data to the server includes the step of the server requesting the learning data from the data management unit, and the step of the data management unit transmitting to the server the latest version of the learning data or the version of the learning data requested by the server.
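- As a concrete, non-limiting illustration of the flow summarized above, the sketch below models the data management unit as a simple in-memory hub through which the server and the user terminals exchange data asynchronously. All class, method, and field names are assumptions made for illustration; the patent does not prescribe any particular implementation.

```python
# Minimal sketch of the claimed flow; every name here is an illustrative assumption.
class DataManagementUnit:
    """Hub that stores model data (from the server) and training data (from terminals)."""

    def __init__(self):
        self.model_data = {}      # version -> model data registered by the server
        self.training_data = {}   # version -> list of training data registered by terminals

    def register_model_data(self, version, data):
        self.model_data[version] = data

    def get_model_data(self, version=None):
        version = version if version is not None else max(self.model_data)
        return self.model_data[version]

    def register_training_data(self, version, data):
        self.training_data.setdefault(version, []).append(data)

    def get_training_data(self, version):
        return self.training_data.get(version, [])


hub = DataManagementUnit()
hub.register_model_data(1.0, {"w": 0.0})      # server registers model data
local_update = {"w": 0.5}                     # a terminal trains and produces training data
hub.register_training_data(1.0, local_update) # terminal registers it with the hub
updates = hub.get_training_data(1.0)          # server collects it later, independently
```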
- The user terminals and the server perform their tasks independently, without considering each other's working state, so that federated learning can be performed flexibly and the performance of the global model can be improved.
- The server can perform federated learning by retrieving only the data stored in the data management unit, regardless of the user terminals and the network connection state, thereby reducing the burden on the server.
- Furthermore, by changing the communication connections to run between the server and the data management unit and between the user terminals and the data management unit, the present invention can reduce bandwidth usage, increase network efficiency, and prepare for network failures.
- FIG. 1 is a block diagram of a conventional federated learning system.
- FIG. 2 is a flowchart of a conventional federated learning method.
- FIG. 3 is a block diagram of a federated learning system according to an embodiment of the present invention.
- FIG. 4 is a flowchart of a federated learning method according to an embodiment of the present invention.
- FIG. 5 is a detailed flowchart of the step of registering the model data of FIG. 4 .
- FIG. 6 is a detailed flowchart of a step of transmitting the model data of FIG. 4 .
- FIG. 7 is a detailed flowchart of the step of registering the learning data of FIG. 4.
- FIG. 8 is a detailed flowchart of a step of transmitting the learning data of FIG. 4 .
- 'first' and 'second' may be used to describe various elements, but the elements should not be limited by these terms. These terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a 'first component' may be referred to as a 'second component', and similarly a 'second component' may be referred to as a 'first component'. Also, the singular expression includes the plural expression unless the context clearly dictates otherwise. Unless otherwise defined, terms used in the embodiments of the present invention may be interpreted as having the meanings commonly known to those of ordinary skill in the art.
- FIG. 1 is a block diagram of a conventional federated learning system
- FIG. 2 is a flowchart of a conventional federated learning method.
- the conventional federated learning system may be configured to include a plurality of user terminals 10 , a server 20 and a storage 30 .
- the server 20 generates a global model and stores the generated global model in the storage 30. Then, the server 20 transmits the global model stored in the storage 30 to the plurality of user terminals 10.
- the plurality of user terminals 10 generate training data by learning the global model based on user data. In addition, the plurality of user terminals 10 transmit the training data to the server 20.
- the server 20 collects the training data and uses it to improve the global model. Then, the server 20 stores the improved global model in the storage 30, and transmits the improved global model to the plurality of user terminals 10 again. This process may be repeated until the global model performance reaches a certain level or higher.
- the conventional federated learning method consists of a selection (Selection) step, a configuration (Configuration) step and a reporting (Reporting) step.
- First, the server 20 stores the model data, including the global parameters of the global model, the learning plan, the data structure, and the work to be performed, in the storage 30.
- A plurality of user terminals 10a to 10e capable of performing federated learning notify the server 20 that they are ready to learn by sending it a message (1).
- The server 20 collects information from the plurality of user terminals 10a to 10e and, according to a rule such as the number of participating terminals, selects the user terminals most suitable for participating in learning, for example user terminals 10a to 10c (selection step).
- The server 20 reads the model data stored in the storage 30 (2) and transmits it to the selected user terminals 10a to 10c (3). Then, the user terminals 10a to 10c perform learning by applying the user data to the global model according to the model data (4) (configuration step).
- the user terminals 10a to 10c transmit training data, for example, a local model or a local parameter of the local model, to the server 20 when learning is completed.
- However, transmission from some user terminals, for example user terminal 10b, may fail due to an unstable network or a connection problem.
- When the server 20 receives the training data from user terminals 10a and 10c, it aggregates the training data and uses it to improve the model data of the global model (5). Then, the server 20 stores the model data of the improved global model in the storage 30 (reporting step).
- In this conventional system, the storage 30 is used only to store the model data of the global model generated by the server 20. The server 20 therefore takes on many roles: it checks the status of the plurality of user terminals 10, selects suitable user terminals 10, determines whether a sufficient amount of training data has been collected, and transmits the model data to the plurality of user terminals 10.
- The conventional federated learning method may be reasonable when the number of user terminals 10 managed by the server 20 is small. However, when the number of user terminals 10 participating in federated learning grows large, or when their number and characteristics change flexibly, managing all of them becomes a heavy burden on the server 20.
- Moreover, the server 20 cannot predict the exact number and timing of the individual responses of the user terminals 10, so it is inefficient for the server 20 to manage the responses of all the user terminals 10.
- In addition, the server 20 and the plurality of user terminals 10 are mutually dependent. The server 20 can proceed to update the global model only after all responses from the user terminals 10 have been collected, and federated learning may stop altogether when a failure occurs in a user terminal 10 or in the network. Accordingly, it is difficult to modify and optimize the learning plan.
- FIG. 3 is a block diagram of a federated learning system according to an embodiment of the present invention.
- the federated learning system may be configured to include a plurality of user terminals 110 , a server 120 , and a data management unit 130 .
- the user terminal 110 and the server 120 are computing devices capable of learning a neural network, and may be implemented in various electronic devices.
- the neural network may be designed to simulate a human brain structure on a computer, and may include a plurality of network nodes having parameters that simulate neurons of a human neural network.
- the plurality of network nodes may each transmit and receive data according to their connection relationships, so as to simulate the synaptic activity of neurons sending and receiving signals through synapses.
- the neural network may include a deep learning model developed from a neural network model. In a deep learning model, a plurality of network nodes may exchange data according to a convolutional connection relationship while being located in different layers.
- Neural network models include various deep learning techniques such as deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), restricted Boltzmann machines (RBM), deep belief networks (DBN), and deep Q-networks, and can be applied to fields such as computer vision, speech recognition, natural language processing, and speech signal processing.
- the plurality of user terminals 110 generates training data by learning the global model based on the user data.
- the training data may be a local model or a local parameter of the local model.
- In the conventional system, the server 20 selects user terminals 10, and only the selected user terminals 10 participate in learning. In the federated learning system according to the embodiment of the present invention, however, all user terminals 110 having sufficient resources to perform learning can participate without being selected. Accordingly, the server 120 is relieved of the burden of selecting user terminals 110.
- the plurality of user terminals 110 transmits the learning data to the data management unit 130 when learning is completed.
- the plurality of user terminals 110 may transmit the generated local model itself or transmit local parameters of the local model.
- the server 120 generates a global model, collects training data, and uses this to improve the global model.
- the server 120 transmits the model data of the global model to the data management unit 130, and receives training data from the data management unit 130.
- the data management unit 130 stores and manages model data and training data related to the global model, transmits the model data to the plurality of user terminals 110, and transmits the training data to the server 120.
- the model data may include global parameters of the global model, a learning time of the user terminal 110 and information on the type and size of user data to be used for learning.
- the plurality of user terminals 110 may establish a learning plan based on the model data and perform learning according to the learning plan.
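- For illustration only, the model data described above could be represented as a simple record from which a terminal derives its learning plan. The field and function names below are assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class ModelData:
    global_parameters: dict   # e.g. layer name -> weights of the global model
    learning_time: int        # assumed: how long (e.g. in seconds) the terminal may train
    user_data_type: str       # kind of user data to use for learning
    user_data_size: int       # amount of user data to use for learning

def build_learning_plan(md: ModelData) -> dict:
    """A user terminal derives its own learning plan from the received model data."""
    return {"duration": md.learning_time,
            "data_type": md.user_data_type,
            "num_samples": md.user_data_size}
```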
- the data management unit 130 may generate metadata including the size of the training data, the creation date and time, and the distribution characteristics, and manage the training data based on the generated metadata.
- Based on the metadata, the server 120 may determine the range and amount of the training data, select the training data to be collected, establish or change a collection plan for the training data, and collect the training data according to the collection plan. For example, the server 120 may select, based on the metadata, training data of at least a certain amount and at least a certain level of reliability.
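- A minimal sketch of such metadata-based selection, assuming each metadata record carries a size, a creation timestamp, and a reliability score; the field names and thresholds are illustrative assumptions rather than values from the patent.

```python
def select_training_data(metadata_records, min_size=1000, min_reliability=0.8):
    """Return the IDs of training data entries that meet assumed amount/reliability thresholds."""
    return [m["data_id"] for m in metadata_records
            if m["size"] >= min_size and m["reliability"] >= min_reliability]

# Hypothetical metadata generated by the data management unit:
records = [
    {"data_id": "terminal-a", "size": 5000, "created": "2020-10-06", "reliability": 0.92},
    {"data_id": "terminal-b", "size": 300,  "created": "2020-10-06", "reliability": 0.97},
]
print(select_training_data(records))  # -> ['terminal-a']
```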
- the data management unit 130 may manage the model data and the training data for each version, which will be described in detail later.
- The federated learning system performs tasks asynchronously between the user terminals 110 and the server 120. That is, the user terminals 110 and the server 120 perform their operations independently, without considering each other's working state. Accordingly, federated learning can be performed flexibly and the performance of the global model can be improved.
- The federated learning system stores the data generated by the user terminals 110 and the server 120 in the data management unit 130, and the data management unit 130 serves as a hub that transfers the stored data between the user terminals 110 and the server 120.
- the user terminal 110 and the server 120 do not communicate with each other.
- In addition, the server 120 can perform federated learning by retrieving only the data stored in the data management unit 130, regardless of the state of the user terminals 110 and the network connection, thereby reducing the burden on the server 120.
- In other words, the federated learning system replaces the conventional communication connections, between the server 20 and the storage 30 and between the server 20 and the user terminals 10, with connections between the server 120 and the data management unit 130 and between the user terminals 110 and the data management unit 130. This can reduce bandwidth usage, increase network efficiency, and prepare for network failures.
- FIG. 4 is a flowchart of a federated learning method according to an embodiment of the present invention
- FIG. 5 is a detailed flowchart of the step of registering the model data of FIG. 4
- FIG. 6 is a detailed flowchart of the step of transferring the model data of FIG. 4.
- FIG. 7 is a detailed flowchart of the step of registering the learning data of FIG. 4
- FIG. 8 is a detailed flowchart of the step of transferring the learning data of FIG. 4 .
- To register or request data, a task name (Task_name), a version (Version), a model location (Model location), and a device name (Device name) must be transmitted to the data management unit 130.
- the data management unit 130 may provide the user terminal 110 and the server 120 with conditions necessary to perform the learning task corresponding to the task name.
- the user terminal 110 and the server 120 may access the data management unit 130 through the task name to find a desired learning task.
- the version is a value used when the user terminal 110 and the server 120 update the model data and training data of the global model, and takes the form of a floating-point number (float).
- this version becomes a standard for managing the learning results.
- the model location is information about a location where model data or training data is generated.
- the location where the model data of the global model is generated is the server 120
- the location where the training data of the local model is generated is the user terminal 110 .
- the device name is a unique ID or name of the user terminal 110 and the server 120 .
- the data management unit 130 may help the server 120 to select the learning data generated by the user terminal 110 by providing the performance and characteristics of each device corresponding to the device name.
- Upon receiving this information, the data management unit 130 registers the corresponding model data or training data, or forwards it to the user terminal 110 or the server 120.
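- A minimal sketch of such a registration request, assuming a simple key-value message; the keys mirror the four fields listed above, and everything else (function name, example values) is an illustrative assumption.

```python
def make_registration_request(task_name, version, model_location, device_name):
    """Build the record a server or user terminal sends to the data management unit."""
    return {
        "Task_name": task_name,            # used to look up the learning task
        "Version": float(version),         # float used to version model/training data
        "Model_location": model_location,  # where the data was generated (server or terminal)
        "Device_name": device_name,        # unique ID or name of the sending device
    }

request = make_registration_request("next-word-prediction", 1.2, "server", "server-120")
```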
- the server 120 creates a global model and registers model data related to the global model in the data management unit 130 (S10).
- First, the server 120 requests the data management unit 130 to register the model data (S11). The data management unit 130 then registers the model data by version. That is, the data management unit 130 checks whether storage for the model data exists (S12). If there is no storage, storage is created (S13) and the model data is stored in the created storage (S14). If storage exists, the data management unit 130 checks the version of the model data (S15) and compares it with the latest version stored in the data management unit 130.
- If the version of the model data is higher than the latest version, the version of the model data is upgraded (S17); if the version of the model data is lower than or equal to the latest version, the model data of the corresponding version is updated (S18).
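- The registration logic of steps S11 to S18 can be sketched as below, with storage modeled as an in-memory dictionary purely for illustration; the class and attribute names are assumptions.

```python
class ModelDataStore:
    """Illustrative version-managed storage for model data (steps S12-S18)."""

    def __init__(self):
        self.storage = None          # no storage until the first registration
        self.latest_version = None

    def register(self, version, model_data):
        if self.storage is None:                 # S12: does storage exist?
            self.storage = {}                    # S13: create storage
            self.storage[version] = model_data   # S14: store the model data
            self.latest_version = version
        elif version > self.latest_version:      # S15: check and compare versions
            self.storage[version] = model_data   # S17: register the upgraded version
            self.latest_version = version
        else:
            self.storage[version] = model_data   # S18: update the existing version's data
```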
- the data management unit 130 transmits the model data to the plurality of user terminals 110 (S20).
- a plurality of user terminals 110 request the model data to the data management unit 130 (S21).
- Then, the data management unit 130 transmits to the plurality of user terminals 110 either the latest version of the model data or the specific version requested by the user terminals 110. That is, the data management unit 130 checks whether a specific version of the model data has been requested (S22). If a specific version has been requested, that version of the model data is searched for (S23); otherwise, the latest version of the model data is searched for (S24).
- Next, the data management unit 130 checks whether the specific version or the latest version of the model data has been found (S25).
- If the model data has been found, it is transmitted to the user terminal 110 (S26).
- If it has not been found, the latest version of the model data is requested from the server 120 (S27), and the model data received from the server 120 is transmitted to the user terminal 110 (S26).
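- Steps S21 to S27 amount to a version lookup with a fallback to the server. A minimal sketch under the same in-memory assumptions as the registration sketch above; fetch_from_server is a hypothetical callback standing in for step S27.

```python
def get_model_data(store, requested_version=None, fetch_from_server=None):
    """Return the requested or latest model data, falling back to the server if missing."""
    if store.storage and requested_version is not None:       # S22: specific version requested?
        found = store.storage.get(requested_version)           # S23: search that version
    elif store.storage:
        found = store.storage.get(store.latest_version)        # S24: search the latest version
    else:
        found = None
    if found is None and fetch_from_server is not None:        # S25: nothing found?
        found = fetch_from_server()                             # S27: request it from the server
    return found                                                # S26: transmit to the terminal
```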
- the plurality of user terminals 110 generates training data by learning the global model based on the user data (S30).
- the plurality of user terminals 110 register the learning data to the data management unit 130 (S40).
- First, the user terminal 110 requests the data management unit 130 to register the training data (S41). The data management unit 130 then registers the training data by version. That is, the data management unit 130 checks the version of the training data (S42) and checks whether storage exists in which that version of the training data is stored (S43). If storage exists, the training data is stored in it (S44); if not, storage is created (S45) and the training data is stored in the created storage (S44).
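- The training-data registration of steps S41 to S45 mirrors the model-data case; a sketch under the same in-memory assumptions, with illustrative names only.

```python
class TrainingDataStore:
    """Illustrative version-managed storage for training data (steps S42-S45)."""

    def __init__(self):
        self.storage = {}   # version -> list of training data entries

    def register(self, version, training_data):
        if version not in self.storage:              # S43: storage for this version exists?
            self.storage[version] = []               # S45: create it
        self.storage[version].append(training_data)  # S44: store the training data
```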
- Next, the data management unit 130 transmits the training data to the server 120.
- First, the server 120 requests the training data from the data management unit 130 (S51). The data management unit 130 then transmits to the server 120 either the latest version of the training data or the version requested by the server 120. That is, the data management unit 130 searches for the latest version of the training data (S52) and checks whether the training data satisfies the aggregation condition (S53), that is, whether the amount of training data exceeds a certain level and its reliability is at least a certain level.
- If the training data does not satisfy the aggregation condition, the data management unit 130 waits until the condition is satisfied (S54); when the training data satisfies the aggregation condition, the training data is transmitted to the server 120. Then, the server 120 aggregates the training data and improves the global model based on it (S60).
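- Steps S51 to S60 wait on an aggregation condition and then aggregate. The patent does not specify the aggregation rule, so the sketch below assumes a simple parameter average (in the spirit of federated averaging); the condition, polling loop, and names are illustrative assumptions built on the TrainingDataStore sketch above.

```python
import time

def collect_when_ready(store, version, min_count=10, poll_seconds=5):
    """Wait (S54) until enough training data has arrived, then return it (S52-S53)."""
    while len(store.storage.get(version, [])) < min_count:   # assumed aggregation condition
        time.sleep(poll_seconds)
    return store.storage[version]

def improve_global_model(local_parameter_sets):
    """Assumed aggregation: average local parameters to improve the global model (S60)."""
    keys = local_parameter_sets[0].keys()
    return {k: sum(p[k] for p in local_parameter_sets) / len(local_parameter_sets)
            for k in keys}
```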
- the server 120 registers the model data of the improved global model in the data management unit 130 .
- the data management unit 130 transmits the model data of the improved global model to the plurality of user terminals 110. This process may be repeated until the performance of the global model reaches a certain level or higher.
- the federated learning system according to the present invention can be used in various fields such as artificial intelligence technology.
Abstract
The present invention relates to a federated learning system comprising: a plurality of user terminals that generate training data by training a global model on user data; a server that creates the global model, collects the training data, and uses the training data to improve the global model; and a data management unit that stores and manages the training data and the model data related to the global model, provides the model data to the plurality of user terminals, and provides the training data to the server.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0050980 | 2020-04-27 | ||
KR1020200050980A KR102544531B1 (ko) | 2020-04-27 | 2020-04-27 | 연합 학습 시스템 및 방법 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021221242A1 true WO2021221242A1 (fr) | 2021-11-04 |
Family
ID=78332056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/013548 WO2021221242A1 (fr) | 2020-04-27 | 2020-10-06 | Système et procédé d'apprentissage fédéré |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102544531B1 (fr) |
WO (1) | WO2021221242A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024138398A1 (fr) * | 2022-12-27 | 2024-07-04 | 北京小米移动软件有限公司 | Procédé et appareil d'entraînement de modèle |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102646338B1 (ko) | 2021-11-03 | 2024-03-11 | 한국과학기술원 | 클라이언트의 개별 데이터 맞춤형 연합 학습 시스템, 방법, 컴퓨터 판독 가능한 기록 매체 및 컴퓨터 프로그램 |
KR20230064535A (ko) | 2021-11-03 | 2023-05-10 | 한국과학기술원 | 글로벌 모델의 방향성을 따르는 로컬 모델 연합 학습 시스템, 방법, 컴퓨터 판독 가능한 기록 매체 및 컴퓨터 프로그램 |
KR102413116B1 (ko) * | 2021-12-02 | 2022-06-23 | 세종대학교산학협력단 | 인공 신경망의 계층 특성에 기반한 연합 학습 방법 |
KR102485748B1 (ko) * | 2022-05-30 | 2023-01-09 | 주식회사 어니스트펀드 | 통계 모델을 위한 연합 학습 방법 및 장치 |
CN114707430B (zh) * | 2022-06-02 | 2022-08-26 | 青岛鑫晟汇科技有限公司 | 一种基于多用户加密的联邦学习可视化系统与方法 |
KR102517728B1 (ko) * | 2022-07-13 | 2023-04-04 | 주식회사 애자일소다 | 연합 학습에 기반한 상품 추천 장치 및 방법 |
KR102573880B1 (ko) * | 2022-07-21 | 2023-09-06 | 고려대학교 산학협력단 | 다중-너비 인공신경망에 기반한 연합 학습 시스템 및 연합 학습 방법 |
KR20240045837A (ko) | 2022-09-30 | 2024-04-08 | 한국과학기술원 | 향상된 표상을 위한 연합 학습 시스템, 클라이언트 장치 및 방법 |
KR20240087336A (ko) | 2022-12-12 | 2024-06-19 | 국립부경대학교 산학협력단 | 연합 학습을 위한 게더 스캐터 기반의 데이터 패턴 분석 공유를 위한 장치 및 방법 |
KR102585904B1 (ko) | 2022-12-14 | 2023-10-06 | 주식회사 딥노이드 | 자기 주도 중앙 제어 기반의 인공지능을 이용한 방사선 영상을 판독하기 위한 장치 및 이를 위한 방법 |
KR102684383B1 (ko) * | 2022-12-22 | 2024-07-12 | 서울과학기술대학교 산학협력단 | 블록체인 기반 부분 모델 동기화 방법 |
KR20240107806A (ko) | 2022-12-30 | 2024-07-09 | 명지대학교 산학협력단 | 공정성 기반 연합학습을 위한 시스템, 이를 위한 장치 및 이를 위한 방법 |
WO2024162728A1 (fr) * | 2023-01-30 | 2024-08-08 | 울산과학기술원 | Appareil et procédé de méta-apprentissage pour apprentissage fédéré personnalisé |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102414602B1 (ko) * | 2016-11-03 | 2022-06-30 | 삼성전자주식회사 | 데이터 인식 모델 구축 장치 및 이의 데이터 인식 모델 구축 방법과, 데이터 인식 장치 및 이의 데이터 인식 방법 |
WO2018150550A1 (fr) * | 2017-02-17 | 2018-08-23 | 株式会社日立製作所 | Dispositif et procédé de gestion de données d'apprentissage |
KR102369416B1 (ko) * | 2017-09-18 | 2022-03-03 | 삼성전자주식회사 | 복수의 사용자 각각에 대응하는 개인화 레이어를 이용하여 복수의 사용자 각각의 음성 신호를 인식하는 음성 신호 인식 시스템 |
KR20190081373A (ko) * | 2017-12-29 | 2019-07-09 | (주)제이엘케이인스펙션 | 인공 신경망에 기반한 단말 장치 및 데이터 처리 방법 |
KR20190103088A (ko) | 2019-08-15 | 2019-09-04 | 엘지전자 주식회사 | 연합학습을 통한 단말의 명함을 인식하는 방법 및 이를 위한 장치 |
2020
- 2020-04-27 KR KR1020200050980A patent/KR102544531B1/ko active IP Right Grant
- 2020-10-06 WO PCT/KR2020/013548 patent/WO2021221242A1/fr active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150324690A1 (en) * | 2014-05-08 | 2015-11-12 | Microsoft Corporation | Deep Learning Training System |
US20170220949A1 (en) * | 2016-01-29 | 2017-08-03 | Yahoo! Inc. | Method and system for distributed deep machine learning |
WO2018057302A1 (fr) * | 2016-09-26 | 2018-03-29 | Google Llc | Apprentissage fédéré à communication efficace |
WO2019219846A1 (fr) * | 2018-05-17 | 2019-11-21 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Concepts pour l'apprentissage distribué de réseaux neuronaux et/ou la transmission de mises à jour de paramétrage associées |
US20190385043A1 (en) * | 2018-06-19 | 2019-12-19 | Adobe Inc. | Asynchronously training machine learning models across client devices for adaptive intelligence |
Non-Patent Citations (1)
Title |
---|
Kang, Jiawen; Xiong, Zehui; Niyato, Dusit; Zou, Yuze; Zhang, Yang; Guizani, Mohsen: "Reliable Federated Learning for Mobile Networks", IEEE Wireless Communications, vol. 27, no. 2, April 2020, pages 72-80, XP011786131, ISSN: 1536-1284, DOI: 10.1109/MWC.001.1900119 *
Also Published As
Publication number | Publication date |
---|---|
KR20210132500A (ko) | 2021-11-04 |
KR102544531B1 (ko) | 2023-06-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20934220; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 20934220; Country of ref document: EP; Kind code of ref document: A1 |