WO2021221242A1 - Federated learning system and method - Google Patents

Federated learning system and method

Info

Publication number
WO2021221242A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
learning
model
management unit
server
Prior art date
Application number
PCT/KR2020/013548
Other languages
English (en)
Korean (ko)
Inventor
문재원
금승우
김영기
Original Assignee
한국전자기술연구원
Priority date
Filing date
Publication date
Application filed by 한국전자기술연구원
Publication of WO2021221242A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N10/00 Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/098 Distributed learning, e.g. federated learning

Definitions

  • the present invention relates to a federated learning system and method.
  • Training of artificial intelligence models requires numerous computer resources to perform large-scale calculations.
  • Cloud computing service is the best solution that can easily provide computing infrastructure to train artificial intelligence models without complex hardware and software installation.
  • Since cloud computing is based on the centralization of resources, all necessary data must be stored in cloud memory and utilized for model training. Although data centralization offers many advantages in terms of maximizing efficiency, there is a risk of leakage of user personal data, which is becoming an increasingly important business issue as data transmission increases.
  • Federated learning is a learning method in which models trained on user personal data in user terminals are collected centrally, rather than collecting the user personal data itself in a central location as in the past. Since federated learning does not centrally collect user personal data, there is little possibility of invasion of privacy.
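  • As an illustration of the central aggregation step just described, the following is a minimal sketch, not taken from the present disclosure, of federated-averaging-style aggregation in Python; the function name and the (parameters, sample count) data layout are assumptions.

```python
import numpy as np

def aggregate_local_parameters(local_updates):
    """local_updates: list of (parameters, num_samples) pairs, where parameters
    is a list of NumPy arrays, one per model layer."""
    total = sum(n for _, n in local_updates)
    num_layers = len(local_updates[0][0])
    aggregated = []
    for layer in range(num_layers):
        # Weight each terminal's contribution by how much user data it trained on.
        aggregated.append(sum(params[layer] * (n / total) for params, n in local_updates))
    return aggregated

# Example: two terminals report parameters for a two-layer model.
updates = [([np.ones((2, 2)), np.zeros(2)], 100),
           ([np.zeros((2, 2)), np.ones(2)], 300)]
global_params = aggregate_local_parameters(updates)
```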
  • However, federated learning systems require consideration not only of algorithmic aspects, such as how to update parameters and schedule learning, but also of system aspects, such as independent data management for each device and efficient communication with heterogeneous systems.
  • In addition, the network dependency between the server and the user terminals is another problem to be solved. That is, in order to perform federated learning, the server and a plurality of user terminals must be closely connected to each other, and it is difficult to respond when an unstable network or a connection problem occurs. There is also the problem that a user terminal bears the additional burden of retaining the data to be transmitted to the server until the transmission is completed, even when its resources are insufficient and the network state is unstable.
  • an object of the present invention is to provide a federated learning system and method in which a server and a user terminal can asynchronously perform a learning task.
  • To this end, the present invention provides a federated learning system including: a plurality of user terminals that generate training data by learning a global model based on user data; a server that creates the global model, collects the training data, and uses it to improve the global model; and a data management unit that stores and manages model data and training data related to the global model, transmits the model data to the plurality of user terminals, and transmits the training data to the server.
  • the model data includes global parameters of the global model, learning time of the user terminal, and type and size information of user data to be used for learning.
  • the user terminal establishes a learning plan based on the model data and performs learning according to the learning plan.
  • the training data is a local model or a local parameter of the local model.
  • the data management unit generates metadata including the size of the training data, the creation date and time, and the distribution characteristics.
  • The server determines the range and amount of the training data, selects the training data to be collected, or establishes or changes a collection plan for the training data.
  • the data management unit manages the model data and the training data for each version.
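  • As a concrete illustration of the model data and metadata described above, the following is a minimal sketch, assuming Python dataclasses; all field names are illustrative rather than taken from the present disclosure, and the float version field mirrors the per-version management just mentioned.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelData:
    global_parameters: list      # global parameters of the global model
    learning_time: float         # learning time expected of the user terminal
    user_data_type: str          # type of user data to be used for learning
    user_data_size: int          # size of user data to be used for learning
    version: float = 1.0         # the data management unit manages versions as floats

@dataclass
class TrainingDataMetadata:
    size: int                                                    # size of the training data
    created_at: datetime = field(default_factory=datetime.now)   # creation date and time
    distribution: dict = field(default_factory=dict)             # distribution characteristics
```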
  • The present invention also provides a federated learning method comprising: the server creating a global model and registering model data related to the global model with the data management unit; the data management unit transmitting the model data to a plurality of user terminals; the plurality of user terminals generating training data by learning the global model based on user data; the plurality of user terminals registering the training data with the data management unit; the data management unit transmitting the training data to the server; and the server aggregating the training data to improve the global model.
  • the step of registering the model data to the data management unit includes the server requesting the data management unit to register the model data, and the data management unit registering the model data for each version.
  • the step of registering the learning data to the data management unit includes the step of the user terminal requesting the registration of the learning data to the data management unit, and the data management unit registering the learning data for each version.
  • The step of transmitting the learning data to the server includes the server requesting the learning data from the data management unit, and the data management unit transmitting the latest version of the learning data, or the version of the learning data requested by the server, to the server.
  • the user terminal and the server independently perform tasks without considering each other's work state, so that federated learning can be flexibly performed and the performance of the global model can be improved.
  • the server can perform federated learning by bringing only the data stored in the data management unit regardless of the user terminal and the network connection state, thereby reducing the burden on the server.
  • In addition, according to the present invention, by changing the communication connections to ones between the server and the data management unit and between the user terminal and the data management unit, it is possible to reduce bandwidth, increase network efficiency, and prepare for network failures.
  • FIG. 1 is a block diagram of a conventional federated learning system.
  • FIG. 2 is a flowchart of a conventional federated learning method.
  • FIG. 3 is a block diagram of a federated learning system according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a federated learning method according to an embodiment of the present invention.
  • FIG. 5 is a detailed flowchart of the step of registering the model data of FIG. 4 .
  • FIG. 6 is a detailed flowchart of a step of transmitting the model data of FIG. 4 .
  • FIG. 7 is a detailed flowchart of the step of registering the learning data of FIG. 4 .
  • FIG. 8 is a detailed flowchart of a step of transmitting the learning data of FIG. 4 .
  • 'First' and 'second' may be used to describe various elements, but the elements should not be limited by these terms. These terms may be used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a 'first component' may be referred to as a 'second component', and similarly, a 'second component' may also be referred to as a 'first component'. Also, the singular expression includes the plural expression unless the context clearly dictates otherwise. Unless otherwise defined, terms used in the embodiments of the present invention may be interpreted as having the meanings commonly known to those of ordinary skill in the art.
  • FIG. 1 is a block diagram of a conventional federated learning system.
  • FIG. 2 is a flowchart of a conventional federated learning method.
  • the conventional federated learning system may be configured to include a plurality of user terminals 10 , a server 20 and a storage 30 .
  • the server 20 generates a global model and stores the generated global model in the storage 30 . Then, the server 20 transmits the global model stored in the storage 30 to the plurality of user terminals 10 .
  • The plurality of user terminals 10 generate training data by learning the global model based on user data. In addition, the plurality of user terminals 10 transmit the training data to the server 20.
  • the server 20 collects the training data and uses it to improve the global model. Then, the server 20 stores the improved global model in the storage 30 , and transmits the improved global model to the plurality of user terminals 10 again. This process may be repeated until the global model performance reaches a certain level or higher.
  • the conventional federated learning method consists of a selection (Selection) step, a configuration (Configuration) step and a reporting (Reporting) step.
  • the server 20 stores the model data including the global parameters of the global model, the learning plan, the data structure, and the work to be performed in the storage 30 .
  • A plurality of user terminals 10a to 10e capable of performing federated learning notify the server 20 that they are ready for learning by sending a message (1).
  • The server 20 collects the information of the plurality of user terminals 10a to 10e and, according to a rule such as the number of participating terminals, selects the user terminals 10a to 10c most suitable for participating in learning among the plurality of user terminals 10a to 10e (selection step).
  • The server 20 reads the model data stored in the storage 30 (2) and transmits it to the selected user terminals 10a to 10c (3). Then, the user terminals 10a to 10c perform learning by applying their user data to the global model according to the model data (4) (configuration step).
  • the user terminals 10a to 10c transmit training data, for example, a local model or a local parameter of the local model, to the server 20 when learning is completed.
  • transmission of some user terminals 10b may fail due to an unstable network or connection problem.
  • When the server 20 receives the training data from the user terminals 10a and 10c, it aggregates the training data and uses it to improve the model data of the global model (5). Then, the server 20 stores the model data of the improved global model in the storage 30 (reporting step).
  • In this conventional system, the storage 30 is used only for storing the model data of the global model generated by the server 20. The server 20, by contrast, performs many roles: it checks the status of the plurality of user terminals 10, selects suitable user terminals 10, determines whether a sufficient amount of learning data has been collected, and transmits the model data to the plurality of user terminals 10.
  • This conventional federated learning method may be reasonable when the number of user terminals 10 to be managed by the server 20 is small, but when the number of user terminals 10 participating in federated learning increases greatly, or when their number and characteristics fluctuate, it becomes a great burden on the server 20 to manage all of them.
  • Moreover, the server 20 cannot predict the exact number and timing of the individual responses of the user terminals 10. Since it cannot predict them, it is inefficient for the server 20 to manage the responses of all the user terminals 10.
  • In addition, the server 20 and the plurality of user terminals 10 depend on each other. That is, the server 20 can proceed to update the global model by aggregating the learning data only after all responses of the user terminals 10 have been collected, and when a failure occurs in a user terminal 10 or the network, federated learning may be stopped. Accordingly, it is difficult to modify and optimize the learning plan.
  • FIG. 3 is a block diagram of a federated learning system according to an embodiment of the present invention.
  • the federated learning system may be configured to include a plurality of user terminals 110 , a server 120 , and a data management unit 130 .
  • the user terminal 110 and the server 120 are computing devices capable of learning a neural network, and may be implemented in various electronic devices.
  • the neural network may be designed to simulate a human brain structure on a computer, and may include a plurality of network nodes having parameters that simulate neurons of a human neural network.
  • The plurality of network nodes may transmit and receive data according to their connection relationships, so as to simulate the synaptic activity of neurons sending and receiving signals through synapses.
  • the neural network may include a deep learning model developed from a neural network model. In a deep learning model, a plurality of network nodes may exchange data according to a convolutional connection relationship while being located in different layers.
  • Neural network models include various deep learning techniques such as deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), restricted Boltzmann machines (RBM), deep belief networks (DBN), and deep Q-networks, and can be applied to fields such as computer vision, speech recognition, natural language processing, and speech signal processing.
  • The plurality of user terminals 110 generate training data by learning the global model based on user data.
  • the training data may be a local model or a local parameter of the local model.
  • In the conventional system, the server 20 selects user terminals 10 and only the selected user terminals 10 participate in learning, whereas in the federated learning system according to the embodiment of the present invention, all user terminals 110 that have sufficient resources to perform learning can participate without being selected. Accordingly, the server 120 is relieved of the burden of selecting user terminals 110.
  • The plurality of user terminals 110 transmit the learning data to the data management unit 130 when learning is completed.
  • the plurality of user terminals 110 may transmit the generated local model itself or transmit local parameters of the local model.
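  • The following is a minimal sketch of the work a user terminal 110 performs: it starts from the global parameters, learns on its own user data, and produces local parameters to register with the data management unit 130. The simple linear model, gradient step, and function name are illustrative assumptions, not part of the present disclosure.

```python
import numpy as np

def train_local_model(global_params, user_x, user_y, epochs=5, lr=0.01):
    """Train a simple linear model starting from the global parameters."""
    w = np.array(global_params, dtype=float)
    for _ in range(epochs):
        pred = user_x @ w
        grad = user_x.T @ (pred - user_y) / len(user_y)  # least-squares gradient
        w -= lr * grad
    return w  # local parameters, to be registered with the data management unit
```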
  • the server 120 generates a global model, collects training data, and uses this to improve the global model.
  • the server 120 transmits the model data of the global model to the data management unit 130 , and receives training data from the data management unit 130 .
  • the data management unit 130 stores and manages model data and training data related to the global model, transmits the model data to the plurality of user terminals 110 , and transmits the training data to the server 120 .
  • the model data may include global parameters of the global model, a learning time of the user terminal 110 and information on the type and size of user data to be used for learning.
  • the plurality of user terminals 110 may establish a learning plan based on the model data and perform learning according to the learning plan.
  • the data management unit 130 may generate metadata including the size of the training data, the creation date and time, and the distribution characteristics, and manage the training data based on the generated metadata.
  • Based on the metadata, the server 120 may determine the range and amount of the learning data, select the learning data to be collected, establish or change a collection plan for the learning data, and collect the learning data according to the collection plan. For example, the server 120 may select training data of at least a certain amount and a certain level of reliability based on the metadata.
  • the data management unit 130 may manage the model data and the training data for each version, which will be described in detail later.
  • In the federated learning system, the user terminal 110 and the server 120 perform their tasks asynchronously. That is, the user terminal 110 and the server 120 perform their operations independently, without considering each other's operation state. Accordingly, federated learning can be performed flexibly and the performance of the global model can be improved.
  • The federated learning system stores the data generated by the user terminal 110 and by the server 120 in the data management unit 130, and the data management unit 130 serves as a hub that transfers the stored data between the user terminal 110 and the server 120.
  • the user terminal 110 and the server 120 do not communicate with each other.
  • The server 120 can perform federated learning by importing only the data stored in the data management unit 130, regardless of the state of the user terminal 110 and the network connection state, thereby reducing the burden on the server 120.
  • In addition, the federated learning system replaces the conventional communication connections between the server 20 and the storage 30 and between the server 20 and the user terminal 10 with connections between the server 120 and the data management unit 130 and between the user terminal 110 and the data management unit 130, which reduces bandwidth, increases network efficiency, and prepares for network failures.
  • FIG. 4 is a flowchart of a federated learning method according to an embodiment of the present invention
  • FIG. 5 is a detailed flowchart of the step of registering the model data of FIG. 4
  • FIG. 6 is a detailed flowchart of the step of transferring the model data of FIG. 4 .
  • FIG. 7 is a detailed flowchart of the step of registering the learning data of FIG. 4
  • FIG. 8 is a detailed flowchart of the step of transferring the learning data of FIG. 4 .
  • When registering data with or requesting data from the data management unit 130, a task name (Task_name), a version (Version), a model location (Model location), and a device name (Device name) must be transmitted to the data management unit 130.
  • the data management unit 130 may provide the user terminal 110 and the server 120 with conditions necessary to perform the learning task corresponding to the task name.
  • the user terminal 110 and the server 120 may access the data management unit 130 through the task name to find a desired learning task.
  • The version is a value used when the user terminal 110 and the server 120 update the model data and the training data of the global model, and takes the form of a float.
  • this version becomes a standard for managing the learning results.
  • the model location is information about a location where model data or training data is generated.
  • the location where the model data of the global model is generated is the server 120
  • the location where the training data of the local model is generated is the user terminal 110 .
  • the device name is a unique ID or name of the user terminal 110 and the server 120 .
  • the data management unit 130 may help the server 120 to select the learning data generated by the user terminal 110 by providing the performance and characteristics of each device corresponding to the device name.
  • Upon receiving this information, the data management unit 130 registers the model data or training data corresponding to the received information, or forwards it to the user terminal 110 or the server 120.
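  • For illustration, the information listed above might be carried in a request such as the following sketch; the field values and the transport (HTTP, message queue, or otherwise) are assumptions, since the present disclosure does not fix them.

```python
# Hypothetical payload sent to the data management unit when registering or
# requesting data; the keys follow the fields named above.
registration_request = {
    "Task_name": "image_classification",  # identifies the learning task
    "Version": 1.2,                       # float version of the data being registered
    "Model location": "user_terminal",    # where the data was generated ("server" for global model data)
    "Device name": "terminal-042",        # unique ID or name of the device
}
```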
  • the server 120 creates a global model and registers model data related to the global model in the data management unit 130 (S10).
  • First, the server 120 requests the data management unit 130 to register the model data (S11). Then, the data management unit 130 registers the model data for each version. That is, the data management unit 130 checks whether a storage for the model data exists (S12). If there is no storage, a storage is created (S13) and the model data is stored in the created storage (S14). If there is a storage, the data management unit 130 checks the version of the model data (S15) and compares it with the latest version stored in the data management unit 130.
  • If the version of the model data is higher than the latest version, the version of the model data is upgraded (S17); if the version of the model data is lower than or equal to the latest version, the model data of the corresponding version is updated (S18).
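  • A minimal sketch of this version handling (steps S11 to S18) follows, using an in-memory dictionary in place of the data management unit's storage; the actual storage backend and class name are assumptions not specified in the present disclosure.

```python
class ModelStore:
    def __init__(self):
        self.storage = {}  # version -> model data

    def register_model_data(self, model_data, version):
        if not self.storage:                   # S12/S13: create storage if absent
            self.storage[version] = model_data  # S14: store the model data
            return
        latest = max(self.storage)              # S15: compare with the latest stored version
        if version > latest:
            self.storage[version] = model_data  # S17: register as an upgraded version
        else:
            self.storage[version] = model_data  # S18: update the model data of that version
```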
  • the data management unit 130 transmits the model data to the plurality of user terminals 110 (S20).
  • The plurality of user terminals 110 request the model data from the data management unit 130 (S21).
  • The data management unit 130 transmits the latest version of the model data, or the model data of the version requested by the plurality of user terminals 110, to the plurality of user terminals 110. That is, the data management unit 130 checks whether a specific version of the model data has been requested (S22). If a specific version has been requested, that version of the model data is searched for (S23); otherwise, the latest version of the model data is searched for (S24).
  • The data management unit 130 then checks whether the specific version or the latest version of the model data has been found (S25). If it has been found, the found model data is transmitted to the user terminal 110 (S26). If it has not been found, the latest version of the model data is requested from the server 120 (S27), and the received model data is transmitted to the user terminal 110 (S26).
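  • A minimal sketch of steps S21 to S27, continuing the ModelStore sketch above; the fallback to the server is modeled as a callable, which is an assumption about the interface rather than something fixed by the present disclosure.

```python
def get_model_data(store, requested_version=None, fetch_from_server=None):
    """store is the ModelStore sketched above; fetch_from_server is a callable."""
    if requested_version is not None:                    # S22/S23: a specific version was requested
        found = store.storage.get(requested_version)
    else:                                                # S24: otherwise search for the latest version
        found = store.storage[max(store.storage)] if store.storage else None
    if found is None and fetch_from_server is not None:  # S25 failed: S27, request it from the server
        found = fetch_from_server()
    return found                                         # S26: transmit to the user terminal
```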
  • The plurality of user terminals 110 generate training data by learning the global model based on the user data (S30).
  • the plurality of user terminals 110 register the learning data to the data management unit 130 (S40).
  • the user terminal 110 requests the data management unit 130 to register the learning data (S41). Then, the data management unit 130 registers the learning data for each version. That is, the data management unit 130 checks the version of the training data (S42), and checks whether there is a storage in which the training data of the corresponding version is stored (S43). At this time, if there is storage, the learning data is stored in the corresponding storage (S44), and if there is no storage, the storage is created (S45) and the training data is stored in the created storage (S44).
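  • A minimal sketch of steps S41 to S45, grouping each terminal's training data under the version it was trained against; the class name and record layout are illustrative assumptions.

```python
class TrainingStore:
    def __init__(self):
        self.storage = {}  # version -> list of training data entries

    def register_training_data(self, training_data, version, device_name):
        if version not in self.storage:        # S43/S45: create storage for this version
            self.storage[version] = []
        self.storage[version].append(          # S44: store the training data
            {"device": device_name, "data": training_data}
        )
```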
  • the data management unit 130 transmits the learning data to the server 120 .
  • First, the server 120 requests the learning data from the data management unit 130 (S51). Then, the data management unit 130 transmits the latest version of the training data, or the training data of the version requested by the server 120, to the server 120. That is, the data management unit 130 searches for the latest version of the training data (S52) and checks whether the training data satisfies the aggregation condition (S53), that is, whether the amount of training data is greater than a certain amount and the reliability of the training data is greater than or equal to a certain level.
  • When the learning data does not satisfy the aggregation condition, the data management unit 130 waits until the aggregation condition is satisfied (S54); when the learning data satisfies the aggregation condition, the learning data is transmitted to the server 120. Then, the server 120 aggregates the training data and improves the global model based on it (S60).
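  • A minimal sketch of steps S51 to S54 and the hand-off to aggregation (S60), continuing the TrainingStore sketch above; the reliability field and the thresholds are illustrative assumptions, since the present disclosure only states that amount and reliability conditions exist.

```python
def collect_for_aggregation(training_store, min_count=3, min_reliability=0.8):
    if not training_store.storage:
        return None
    latest = max(training_store.storage)                     # S52: search the latest version
    entries = training_store.storage[latest]
    reliable = [e for e in entries if e.get("reliability", 1.0) >= min_reliability]
    if len(reliable) < min_count:                            # S53: aggregation condition not met
        return None                                          # S54: keep waiting
    return [e["data"] for e in reliable]                     # transmit to the server for S60
```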
  • the server 120 registers the model data of the improved global model in the data management unit 130 .
  • The data management unit 130 then transmits the model data of the improved global model to the plurality of user terminals 110. This process may be repeated until the performance of the global model reaches a certain level or higher.
  • the federated learning system according to the present invention can be used in various fields such as artificial intelligence technology.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention relates to a federated learning system comprising: a plurality of user terminals that generate training data by learning a global model based on user data; a server that creates the global model, collects the training data, and uses the training data to improve the global model; and a data management unit that stores and manages the training data and the model data related to the global model, provides the model data to the plurality of user terminals, and provides the training data to the server.
PCT/KR2020/013548 2020-04-27 2020-10-06 Federated learning system and method WO2021221242A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200050980A KR102544531B1 (ko) 2020-04-27 2020-04-27 연합 학습 시스템 및 방법
KR10-2020-0050980 2020-04-27

Publications (1)

Publication Number Publication Date
WO2021221242A1 true WO2021221242A1 (fr) 2021-11-04

Family

ID=78332056

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/013548 WO2021221242A1 (fr) 2020-04-27 2020-10-06 Federated learning system and method

Country Status (2)

Country Link
KR (1) KR102544531B1 (fr)
WO (1) WO2021221242A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230064535A (ko) 2021-11-03 2023-05-10 한국과학기술원 글로벌 모델의 방향성을 따르는 로컬 모델 연합 학습 시스템, 방법, 컴퓨터 판독 가능한 기록 매체 및 컴퓨터 프로그램
KR102646338B1 (ko) 2021-11-03 2024-03-11 한국과학기술원 클라이언트의 개별 데이터 맞춤형 연합 학습 시스템, 방법, 컴퓨터 판독 가능한 기록 매체 및 컴퓨터 프로그램
KR102413116B1 (ko) * 2021-12-02 2022-06-23 세종대학교산학협력단 인공 신경망의 계층 특성에 기반한 연합 학습 방법
KR102485748B1 (ko) * 2022-05-30 2023-01-09 주식회사 어니스트펀드 통계 모델을 위한 연합 학습 방법 및 장치
CN114707430B (zh) * 2022-06-02 2022-08-26 青岛鑫晟汇科技有限公司 一种基于多用户加密的联邦学习可视化系统与方法
KR102517728B1 (ko) * 2022-07-13 2023-04-04 주식회사 애자일소다 연합 학습에 기반한 상품 추천 장치 및 방법
KR102573880B1 (ko) * 2022-07-21 2023-09-06 고려대학교 산학협력단 다중-너비 인공신경망에 기반한 연합 학습 시스템 및 연합 학습 방법
KR20240045837A (ko) 2022-09-30 2024-04-08 한국과학기술원 향상된 표상을 위한 연합 학습 시스템, 클라이언트 장치 및 방법
KR102585904B1 (ko) 2022-12-14 2023-10-06 주식회사 딥노이드 자기 주도 중앙 제어 기반의 인공지능을 이용한 방사선 영상을 판독하기 위한 장치 및 이를 위한 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150324690A1 (en) * 2014-05-08 2015-11-12 Microsoft Corporation Deep Learning Training System
US20170220949A1 (en) * 2016-01-29 2017-08-03 Yahoo! Inc. Method and system for distributed deep machine learning
WO2018057302A1 (fr) * 2016-09-26 2018-03-29 Google Llc Apprentissage fédéré à communication efficace
WO2019219846A1 (fr) * 2018-05-17 2019-11-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Concepts pour l'apprentissage distribué de réseaux neuronaux et/ou la transmission de mises à jour de paramétrage associées
US20190385043A1 (en) * 2018-06-19 2019-12-19 Adobe Inc. Asynchronously training machine learning models across client devices for adaptive intelligence

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102414602B1 (ko) * 2016-11-03 2022-06-30 삼성전자주식회사 데이터 인식 모델 구축 장치 및 이의 데이터 인식 모델 구축 방법과, 데이터 인식 장치 및 이의 데이터 인식 방법
WO2018150550A1 (fr) * 2017-02-17 2018-08-23 株式会社日立製作所 Dispositif et procédé de gestion de données d'apprentissage
KR102369416B1 (ko) * 2017-09-18 2022-03-03 삼성전자주식회사 복수의 사용자 각각에 대응하는 개인화 레이어를 이용하여 복수의 사용자 각각의 음성 신호를 인식하는 음성 신호 인식 시스템
KR20190081373A (ko) * 2017-12-29 2019-07-09 (주)제이엘케이인스펙션 인공 신경망에 기반한 단말 장치 및 데이터 처리 방법
KR20190103088A (ko) 2019-08-15 2019-09-04 엘지전자 주식회사 연합학습을 통한 단말의 명함을 인식하는 방법 및 이를 위한 장치

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150324690A1 (en) * 2014-05-08 2015-11-12 Microsoft Corporation Deep Learning Training System
US20170220949A1 (en) * 2016-01-29 2017-08-03 Yahoo! Inc. Method and system for distributed deep machine learning
WO2018057302A1 (fr) * 2016-09-26 2018-03-29 Google Llc Apprentissage fédéré à communication efficace
WO2019219846A1 (fr) * 2018-05-17 2019-11-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Concepts pour l'apprentissage distribué de réseaux neuronaux et/ou la transmission de mises à jour de paramétrage associées
US20190385043A1 (en) * 2018-06-19 2019-12-19 Adobe Inc. Asynchronously training machine learning models across client devices for adaptive intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Kang, Jiawen; Xiong, Zehui; Niyato, Dusit; Zou, Yuze; Zhang, Yang; Guizani, Mohsen: "Reliable Federated Learning for Mobile Networks", IEEE Wireless Communications, vol. 27, no. 2, 1 April 2020, pages 72-80, XP011786131, ISSN: 1536-1284, DOI: 10.1109/MWC.001.1900119 *

Also Published As

Publication number Publication date
KR102544531B1 (ko) 2023-06-16
KR20210132500A (ko) 2021-11-04

Similar Documents

Publication Publication Date Title
WO2021221242A1 (fr) Système et procédé d'apprentissage fédéré
CN110191148B (zh) 一种面向边缘计算的统计函数分布式执行方法及系统
WO2021054514A1 (fr) Système de questions-réponses personnalisées par l'utilisateur basé sur un graphe de connaissances
WO2019103199A1 (fr) Système intelligent personnalisé et procédé de fonctionnement associé
WO2021201370A1 (fr) Appareil et système de gestion de ressources d'apprentissage fédéré et procédé d'efficacité de ressource associé
WO2021169294A1 (fr) Procédé et appareil de mise à jour de modèle de reconnaissance d'application, et support de stockage
WO2019095448A1 (fr) Système de surveillance pour un parc de serveurs de système d'enseignement à distance
Li Retracted: Design and implementation of music teaching assistant platform based on Internet of Things
WO2024019474A1 (fr) Onduleur bidirectionnel à fonction d'onduleur solaire
Huang et al. Enabling dnn acceleration with data and model parallelization over ubiquitous end devices
Yao et al. Forecasting assisted VNF scaling in NFV-enabled networks
Chen et al. Heterogeneous semi-asynchronous federated learning in Internet of Things: A multi-armed bandit approach
CN110233870A (zh) 一种班牌系统客户端长连接处理方法及装置
Chen Design of computer big data processing system based on genetic algorithm
CN116095007A (zh) 负载调度方法、装置、计算机设备及存储介质
CN110213778B (zh) 一种网元主备智能配对的方法及装置
CN116089079A (zh) 一种基于大数据的计算机资源分配管理系统及方法
US6925491B2 (en) Facilitator having a distributed configuration, a dual cell apparatus used for the same, and an integrated cell apparatus used for the same
WO2013085089A1 (fr) Procédé d'utilisation de ressource de réseau de communication dans un environnement de nuage m2m et système correspondant
CN109510877B (zh) 一种动态资源群的维护方法、装置及存储介质
WO2020075907A1 (fr) Procédé de compensation d'agrégateur permettant la sécurisation de ressources énergétiques distribuées
WO2021101055A1 (fr) Procédé de fourniture de service dans un réseau périphérique comprenant de multiples points d'accès, et système associé
WO2018216828A1 (fr) Système de gestion de mégadonnées énergétiques, et procédé associé
WO2023224205A1 (fr) Procédé de génération de modèle commun par synthèse de résultat d'apprentissage de modèle de réseau neuronal artificiel
CN111782322A (zh) 基于云桌面服务器的内外网消息通讯服务器及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20934220

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20934220

Country of ref document: EP

Kind code of ref document: A1